Acoustic Velocity Correcting Method for the Tilted Acoustic Tube in Testing of Pile by Ultrasonic Transmission
The finite element software ABAQUS was used to simulate the detection of piles by ultrasonic transmission, and the influence of a tilted acoustic tube on the testing results was analyzed. The results showed that, when the pile was complete, the velocity of the sound-depth curve of the received signal was inclined to one side due to the inclination of the acoustic tube, and the velocity of the sound deviated seriously from the normal value; when there was a defect in the pile, the signal of the defect was not obvious due to the tilt of the acoustic tube, which made it easy to miss or misjudge defects in the pile. To solve the problem of the inclined acoustic tube, a mathematical model of the position relation of the acoustic tubes was established, and a method for correcting the velocity of the sound based on the angle of the acoustic tube was derived. Numerical simulation and an engineering example were used to verify the corrected method; the verification showed that the corrected acoustic signal could accurately determine the defects and their positions in the pile, and that the method effectively reduced the influence of the tilted acoustic tube on the detection signal, which helps improve the accuracy of pile testing.
Introduction
Ultrasonic transmission is widely used to test the integrity of piles because its accuracy is very good, it is not limited by the length and diameter of the pile, and it is easy to collect and analyze the detected signals [1]. However, the specification [2] for ultrasonic transmission clearly requires that the acoustic tubes be parallel. In fact, this is often difficult to achieve in the construction of the pile. Especially in bored piles without a reinforcing cage, the acoustic tube often tilts, which affects the accuracy of the testing and causes misjudgment of defects in the pile, bringing hidden dangers to the quality of the project [3]. The problem of the inclination of the pre-buried acoustic tube has long plagued pile testers. In view of this problem, many scholars have carried out relevant research. In [4], the abnormal characteristics of the detected data are analyzed when the acoustic tube is tilted, and a method in which abnormal features are eliminated by reasoning is proposed. In [5], a comprehensive method to evaluate the detected data of acoustic transmission is proposed, which effectively improves the objectivity and scientific rigor of the analysis and processing of the detected data. In [6], a wavelet function constructed from a one-dimensional wavelet is used to correct for the acoustic tube. In [7], the principle of polynomial fitting is used to eliminate the influence of the inclined acoustic tube. In [8], the least squares method with a power series is used to fit the time of the sound-depth curve and then the acoustic tube is corrected. The above studies essentially use fitting-based elimination. This approach requires a great deal of segmentation and fitting, which entails a large workload and considerable operator error. Based on the ABAQUS numerical simulation software, this paper analyzes the influence of the inclined acoustic tube on the results of pile testing, proposes a method for correcting the velocity of the sound based on the angle of the acoustic tube, and verifies the corrected results through numerical simulation. The results show that the corrected velocity of the sound has good consistency with the velocity of the sound when the acoustic tube is parallel, and the position of the defect in the pile can be determined accurately.
Numerical Simulation
The influence of the inclination of the acoustic tube on the detected signal was studied with the ABAQUS numerical simulation software. Since the simulation focuses on the changes of the detected signals in the testing of the pile, the pile and soil model adopted a linear elastic constitutive law [9]. The physical and mechanical parameters of the pile and soil are shown in Table 1. Figure 1 shows the complete pile with the acoustic tubes. The pile was a square pile; its length was 1 m, its width was 1 m, and its height was 8 m. Two acoustic tubes were arranged in the pile. The diameter of the acoustic tube was 5 cm, and its length was 8 m. The acoustic tube was 5 cm from the edge of the pile. Figure 1(a) shows the model of the complete pile when the acoustic tubes were parallel. Figure 1(b) shows the model of the complete pile when the right acoustic tube was tilted inward by 3.2 degrees. The contact between the pile and the soil was set as follows: surface-to-surface contact was adopted between the side of the pile and the soil around the pile, the tangential behavior was rough, and the normal behavior was hard contact; tie constraints were adopted between the bottom of the pile and the soil at the bottom of the pile [10]. The influence of the artificial boundary on the detected signal was considered; the model used infinite elements to simulate the infinite boundary and so eliminate reflection from the artificial boundary [11]. Both the pile and the soil used the C3D8R element, which is three-dimensional and eight-node. The mesh of the model is shown in Figure 2. It was generated by sweeping from top to bottom. The mesh of the soil near the pile was dense, and the mesh of the soil away from the pile was sparse. From the top of the pile, loading nodes were established every 20 cm in one acoustic tube, and the nodes in the other acoustic tube were correspondingly set as receiving nodes. Moreover, each loading node was guaranteed to be in the same horizontal plane as the corresponding receiving node.
Duration of the Step.
The ultrasonic testing of the piles was performed with P-waves. According to the formula for the velocity of longitudinal waves [12], the velocity of the three-dimensional P-wave in the testing of the pile can be calculated as follows. The horizontal testing of the piles with the parallel acoustic tube was carried out, and the time of propagation of the elastic wave in the pile was obtained as follows, where l′ is the clear distance between the inner walls of the two acoustic tubes in each testing profile. In order to ensure that the elastic wave can propagate from the loading point to the signal receiving point, the time of the calculation was set slightly greater than the time the elastic wave propagates in the pile. In the calculation of the model, the total time of the step was 0.0003 s, and the increment of the time was 10⁻⁶ s.
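The two displays referenced in this subsection did not survive extraction. A standard form for the longitudinal (P-) wave velocity of an isotropic elastic solid and the corresponding travel time, offered here only as a plausible reading of the missing formulas, is

\[ v_P = \sqrt{\frac{E(1-\nu)}{\rho(1+\nu)(1-2\nu)}}, \qquad t = \frac{l'}{v_P}, \]

where E, ν, and ρ denote the elastic modulus, Poisson's ratio, and density of the concrete.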
Condition of Load.
The most basic form of vibration of a particle in an elastic solid is simple harmonic vibration. In the actual detection, the sound wave emitted by the ultrasonic transmitter has simple harmonic characteristics, so a simple harmonic force was adopted to simulate the sound wave [13]. In order to be closer to the harmonic signal emitted by the ultrasonic transmitter, the convex cosine function [14] was used as the simple harmonic force, as shown in the following equations, where ω₀ = 2πf₀, f₀ is the center frequency, and n is a positive integer (set to 2 here) that controls the sharpness of the signal. The pattern of the load is shown in Figure 3. The horizontal load P₀ was 10 N. The loading time for a complete cycle was 10 μs.
Results of the Complete Pile.
Forty-one detections were carried out in the numerical simulation for the models of the complete pile with the parallel acoustic tube and with the inclined acoustic tube, and the horizontal velocity of each receiving point was extracted. Figure 4 shows the velocity-time curve of the receiving point at a depth of 4 m when the acoustic tube was parallel. The time of the first positive peak of the velocity-time curve at each receiving point was extracted separately. According to formula (5), the velocity of the sound at each receiving point was calculated, and then the velocity of the sound-depth curve was drawn: where v is the velocity of the sound, l0 is the distance between the two acoustic tubes at the top of the pile, tp is the time of the first positive peak in the velocity-time curve of each receiving point, and t′ is the time of the first positive peak in the curve of the load. Figure 5 shows the results of ultrasonic inspection of the complete pile when the acoustic tube was parallel and when the acoustic tube was tilted. It could be seen that the velocity of the sound when the acoustic tubes were parallel was unchanged from the top of the pile to the bottom of the pile, and when the acoustic tube was tilted inward, the speed of the sound gradually increased with depth. Figure 6 shows a model of the pile with a cavity. The length of the cavity was 0.1 m, the width was 0.1 m, and the height was 0.4 m, and it was located in the middle of the pile; its top surface was therefore at a depth of about 3.8 m. Figure 7 shows the results of ultrasonic inspection of the pile with the cavity when the acoustic tube was parallel and when the acoustic tube was tilted. Figure 7(a) shows the velocity of the sound-depth curve of the detection of the defective pile when the acoustic tubes were parallel. It could be found that the velocity of the sound remained unchanged in the nondefective section of the pile, while it began to decrease sharply near the defect from 3.8 m to 4.2 m in depth, and it reached a minimum of 3333 m/s. The signal of the defect was very obvious. Figure 7(b) shows the testing results of the defective pile when the acoustic tube was tilted. It could be seen that, due to the inclination of the acoustic tube, the speed of the sound gradually increased over the length of the pile. The speed of the sound was only slightly reduced at the position of the cavity. The signal of the defect was not obvious, and it was easy to miss the defect. Therefore, it was necessary to correct the velocity of the sound-depth curve when the acoustic tube was tilted. This paper proposed a method based on the angle of the acoustic tube to correct the velocity of the sound.
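The display for formula (5) did not survive extraction; from the variable definitions given above it is presumably the ratio of the tube spacing to the first-arrival travel time,

\[ v = \frac{l_0}{t_p - t'}. \]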
Principle of Correction.
The inclined section of the acoustic tube was delimited by taking the turning point of the velocity of the sound-depth curve as the cutoff point. As shown in Figure 8, the distance between the two acoustic tubes at the top of the pile was set to be l0, the measured time of the sound was t0, and the speed of the sound was v0. The acoustic tube A was vertical, and the BC section of the acoustic tube B was inclined. Points A and B formed a horizontal testing path. The distance between the acoustic tubes A and B was set to be l1, the measured time of the sound was t1, and the speed of the sound was v1. Keeping point A still, point B was moved down by ΔH to point C, so that points A and C formed an oblique testing path. The distance between A and C was l2, the measured time of the sound was t2, and the velocity of the sound was v2. The vertical distance between the points B and C was ΔH. According to the geometric relationship, the following equations were established: Combining formulas (6) and (7) gives the inclination angle of the acoustic tube. For the other receiving points of the inclined section, the real distance between the acoustic tubes can be obtained as in formula (9), and, according to the measured sound time at each measuring point, the true velocity of the sound can be obtained as in formula (10), where li is the corrected distance between the two acoustic tubes at measuring point i, Hi is the vertical distance between measuring point i and the turning point, ti is the measured time of the sound at measuring point i, l0 is the distance between the two acoustic tubes at the top of the pile, and vi is the corrected speed of the sound at measuring point i.
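The displays for formulas (6)-(10) were lost in extraction. From the variable definitions above, formulas (9) and (10) are presumably

\[ l_i = l_0 - H_i \tan\alpha, \qquad v_i = \frac{l_i}{t_i}, \]

offered here as a plausible reconstruction rather than the authors' exact expressions.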
Validation of Corrected Result.
The velocity of the sound in the case where the acoustic tube was tilted was corrected by the above method. According to the velocity of the sound-depth curve in Figure 7(b), it could be found that the velocity of the sound gradually increased with depth, and the velocity of the sound at the last point reached 7045 m/s, indicating an obvious inclination of the acoustic tube. The above method of correction was applied to determine the inclined angle of the acoustic tube at depths of 3 m, 6 m, and 7 m, respectively, and the average value α = 3.21° was taken. The tangent of α was equal to 0.0561. The speed of the sound was corrected according to equation (10). The corrected speed of the sound-depth curve is shown in Figure 9. It could be seen that the corrected velocity of the sound-depth curve had only a slight fluctuation within the nondefective section of the pile and was approximately unchanged there. The velocity of the sound decreased sharply at the position of the defect at a depth of 3.8 m-4.2 m, so the signal of the defect stood out clearly. The corrected velocity of the sound-depth curve was very similar to the detected result when the acoustic tube was parallel. The results showed that the defect of the pile and its position can be accurately judged from the velocity of the sound-depth curve after correction by the inclined angle of the acoustic tube. This can solve the problem of misjudgment and missed judgment of defects caused by the inclination of the acoustic tube.
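As an illustration only (not the authors' code), the sketch below applies the spacing correction of equations (9) and (10) to a synthetic sound-time profile; every value and name in it is hypothetical.

```python
import numpy as np

v_true = 4000.0                  # assumed concrete sound velocity, m/s
l0 = 0.90                        # tube spacing at the pile top, m (hypothetical)
tan_alpha = 0.0561               # tan(3.21 deg), as in the worked example
turn_depth = 0.0                 # depth at which the inclined section begins, m

depth = np.arange(0.0, 8.2, 0.2)               # 41 measuring points every 20 cm
H = np.clip(depth - turn_depth, 0.0, None)     # vertical distance below the turning point
t_meas = (l0 - H * tan_alpha) / v_true         # synthetic measured sound times, s

# Uncorrected velocity: dividing the top spacing l0 by the measured time drifts
# upward with depth when the tube tilts inward.
v_uncorrected = l0 / t_meas

# Equations (9)-(10): correct the spacing for the tube angle, then recompute velocity.
l_corr = l0 - H * tan_alpha
v_corrected = l_corr / t_meas

print(v_uncorrected[-1], v_corrected[-1])      # inflated value vs. recovered true velocity
```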
Engineering Profile.
Pile No. 13 at pier No. 45 of a construction site was tested; the strength of the concrete was C20, the diameter of the pile was 1 m, the effective length of the pile was 5.25 m, and the two acoustic tubes (Figure 10) were symmetrically distributed. The RS-ST01C ultrasonic detector (Figure 11) produced by Wuhan Haiyan Company was used; the transducer was of ZDBS-YGD type, for which the transmitter and receiver are interchangeable, and its frequency was 45 kHz. Horizontal testing was adopted, and the distance between measuring points was 25 cm. The speed of the sound-depth curve gradually decreased with depth, which indicated that the acoustic tube was inclined, so the integrity of the pile could not be determined directly. The method proposed in this paper was used to determine the inclination angle of the acoustic tube at 1 m, 3 m, and 4 m, respectively, and the average value was calculated. The tangent of α was equal to 0.0021. The speed of the sound was corrected according to equation (10). The corrected speed of the sound-depth curve is shown in Figure 12. It can be seen that the corrected speed of the sound-depth curve tended to be horizontal, with only a slight fluctuation from the top of the pile to the bottom of the pile. This showed that the pile was relatively complete.
In order to verify the accuracy of this correction method, the integrity of the pile was checked by the core drilling method on site. The diameter of the concrete core sample was 86 mm. As shown in Figure 13, the concrete core of the pile was continuous, complete, and long columnar, with a smooth surface, good cementation, uniform distribution of aggregates, and basically consistent fractures, with only a few pores on the side of the core.
Through comprehensive examination of the drilled core samples, it can be seen that the pile was relatively complete. The results were consistent with the corrected speed of the sound-depth curve, which showed that the method for correcting the velocity of the sound based on the angle of the acoustic tube can reduce the influence of the inclined acoustic tube on the detection results.
Conclusions
The detection of the pile by ultrasonic transmission is greatly affected by the inclination of the acoustic tube [15]. According to the analysis of the simulation and experimental results, the following conclusions can be drawn: (1) When the acoustic tube tilts, the velocity of the sound-depth curve of the received signal will tilt to one side, which can cause wrong judgment of internal defects in the pile. (2) A method for correcting the velocity of the sound based on the angle of the acoustic tube was proposed in this paper. Through numerical simulation verification, it is found that this method effectively reduces the influence of the tilted acoustic tube on the detection signal. It is helpful to improve the accuracy of the inspectors' judgment on the integrity of piles. (3) For the pile tested in engineering practice, it is found that the corrected ultrasonic testing results are in good agreement with the core drilling method, which further indicates that the correction method has a high accuracy.
Data Availability
The processed data required to reproduce these findings cannot be shared at this time as the data also form a part of an ongoing study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Materials Science",
"Physics"
] |
A Functional Estimate of Covariation
ABSTRACT The analysis of functional data calls for a bivariate functional covariance function σ(s, t) that may be evaluated at any discrete set of points to define a variance-covariance matrix Σ. This article uses finite element methodology to construct a representation of a functional Choleski factor λ(w, s) to define σ(s, t) = ∫λ(w, s)λ(w, t) dw. An estimate of Σ⁻¹ is especially important for applications and, where the eigenstructure of the covariance permits, this is readily available since the resulting Σ̂ is almost always positive definite. A simulation study compares the performance of estimates of Σ and Σ⁻¹ to those from the classic covariance matrix estimate and to an estimate using the glasso package in R. The method's capability of constraining estimates of Σ⁻¹ to be strongly band-structured resulted in superior estimates. The real data application is to the smoothing of the Fels female growth data, where σ(s, t) estimates the residual covariance structure in the presence of sampling points varying from one case to another. Supplementary materials are available online.
Introduction
This article presents a methodology for functional covariance estimation that uses a finite element approximation of the bivariate covariance function with values σ(s, t). Finite element methods are widely used in the modelling of variation over spatial regions, especially where the models are defined by partial differential equations. A recent application of finite element smoothing is Sangalli, Ramsay, and Ramsay (2013).
An important application of the finite element approximation of σ(s, t) is the evaluation of σ(s, t) at arbitrary discrete points to produce a matrix Σ(s, t), where the vectors s and t contain strictly increasing series of points. In the special case where s = t, the series is equally spaced, contains the boundaries of the functional observations, and is of length n, we will use the notation Σ(n), along with the notations Σ⁻¹(s, t) and Σ⁻¹(n) for their respective inverses, which will be referred to as the precision matrices.
This estimation strategy, denoted here as the FE-Σ model, has many applications. An estimate of σ(s, t) can fill gaps in covariance values due to coarse sampling, missing data, or uneven spacing of observation times as a preliminary to further analyses or graphical display. The representation almost always provides a nonsingular estimate of Σ(s, t) and, therefore, easy access to an estimate of Σ⁻¹(s, t). The computational efficiency of the estimation permits its use in simulation studies, where it offers a new approach to the generation of random symmetric positive definite matrices. The number of parameters used in the estimation does not depend on the size of the functional data sample or on the size of either s or t, and as a consequence the amount of detail in the estimate can be controlled so as to avoid over-fitting the data while preserving nonsingularity, which is important in the mixed effect modeling of longitudinal or functional data.
The estimation itself proceeds directly from the raw data available and, therefore, does not depend on a preliminary smoothing of the data, as suggested in Ramsay and Silverman (2005) or Yao, Müller, and Wang (2005).
In this introductory section the properties of the population σ are reviewed, with a particular focus on the properties and estimation of its functional inverse σ⁻¹ and the corresponding matrix inverses of its evaluations. The finite element approximation is briefly introduced. Some relationships between an evaluation Σ(s, t) of a smooth σ and its inverse are indicated and illustrated. Some historical notes conclude the section.
The Covariance Surface σ(s, t )
When an observation x is a random function with mean μ, the covariance kernel is a symmetric nonnegative definite bivariate function with values σ(s, t) = E{[x(s) − μ(s)][x(t) − μ(t)]}. In most applications, x and σ will be smooth in the sense of possessing a certain number of derivatives, and an example of σ drawn from spatial data analysis is in the upper left panel of Figure 1, which displays Σ(21) for the Matérn covariance (Matérn 1960), σ(s, t) = α(β|s − t|)^ν K_ν(β|s − t|), over the interval [0, 20] for parameter values α = 1, β = 0.1, and ν = 4, where K_ν is the modified Bessel function of the second kind. The lower left panel shows the profile of the surface in the cross-diagonal direction, which is a plot of n points (t_i, σ_{i,n−i+1}) along with the n − 2 averaged values. Although the Matérn σ is atypical in having a constant diagonal height, it will serve in this article to illustrate a number of aspects of functional covariance estimation, including the need for caution in estimating the inverse evaluation.
The Inverse Covariance or Precision Surface σ⁻¹(s, t)
The inverse covariance matrix Σ⁻¹ is in many ways more important than the covariance matrix itself. The linear model requires it; the Gaussian, Wishart, and inverse Wishart distributions are defined in terms of Σ⁻¹; the Mahalanobis distance D² = (x − μ)′Σ⁻¹(x − μ) defines the metric for the analysis of covarying observations; and the precision matrix offers a conditional dependency perspective on the data, since (Σ⁻¹)_ij = 0 if and only if x_i and x_j are independent given the remaining Gaussian observations, contrasting with the marginal dependency framework provided by Σ. Recent applications of Gaussian Markov random fields have capitalized on sparse models for Σ⁻¹ to develop fast methods for spatial data analysis (Held and Rue 2005; Lindgren, Rue, and Lindström 2011).
The functional analogue of the matrix equation Σ⁻¹Σx = x is ∫ σ⁻¹(s, w)[cov(f)](w) dw = f(s), where σ⁻¹ denotes the functional inverse of σ rather than its reciprocal, and cov(f) is the linear covariance operator for which σ is the kernel.
The compact operator cov is usually smooth in the sense of having a differentiable kernel, but this unfortunately implies that σ⁻¹ is a kernel function so rough that it inhabits an overspace of L², and it is referred to in functional analysis as a generalized function or a distribution (Aubin 2000). An image of the effect of inverting a smooth matrix can be seen in the right panels of Figure 1, where we see that the Matérn inverse surface exhibits rapidly decaying symmetric oscillations around the diagonal, with amplitudes that are about two orders of magnitude greater than the cross-diagonal variation of Σ. The rate of decay as a function of the number of points on either side of the diagonal remains virtually unchanged as n increases, as does the overall shape. Thus, going to infinite n, we can imagine the Matérn σ⁻¹ as displaying five substantial oscillations infinitely close to the diagonal and infinitely large in magnitude.
This smooth/rough relation between a smooth linear operator and its inverse is familiar in the theory of linear differential equations and their associated Green's functions, and also for other common integral transforms such as the Laplace transform, the Fourier transform, and convolutions. The linear inverse problem is that of finding a useful approximation to the inverse of such transforms, and successful strategies almost inevitably involve some degree of regularization.
Figure 2 is a stylized version of what will be estimated in Section 4 as the covariance structure of residuals from the smoothing of human growth data. We see in the cross-diagonal of the covariance an oscillation decaying to zero in about five units. The lower right panel shows the precision matrix oscillation, and we see that this decays much more rapidly than its counterpart in Figure 1. This illustrates that rougher covariance surfaces tend to correspond to more gentle cross-diagonal variation in their inverses.
The Inversion of Perturbed Covariance Surfaces
The smooth/rough relation between a smooth covariance surface and its inverse can create a second difficulty of great concern to statisticians whose data framework involves measurement error. We already know from numerical analysis that, if the condition number of a matrix is large, the computed inverse of the covariance matrix will have substantial rounding error. The condition numbers of the Matérn surface in Figure 1 for 11, 21, and 41 equally spaced observations are 1.3 × 10⁴, 6.0 × 10⁶, and 3.1 × 10⁹, respectively. These already dangerous levels illustrate that the higher the sampling rate, the more severe the conditioning problem becomes. On the other hand, the condition number of the growth residual surface in Figure 2 is a safe 7.7. In this article we consider a method for approximating a smooth covariance surface σ by a bivariate function σ̂ which we expect will be computed from data that are noisy. This implies two sources of error: that arising from the approximation procedure, and that due to the variability of the data. Figure 3 displays the consequences of these two sources of error for the respective inverses of the perturbed covariance matrices. In Figure 3 the top panel shows the approximations of the Matérn covariance and the direct approximation of its inverse, both resulting from evaluations at 21 equally spaced points. A visual comparison of these approximations with their exact counterparts in Figure 1 shows that the approximations to both Σ(21) and Σ⁻¹(21) are useful. For the covariance surface the square root of squared errors summed over 441 values is 0.026, and the corresponding value for the inverse matrix is 0.1, an impressive value when viewed against matrix values reaching about ±2000 for Σ⁻¹. But the bad news is seen in the lower two panels, which show what happens when each approximation is inverted. Inverting the approximation wipes out most of the interesting detail seen in the exact inverse. The nature of the approximation is not the problem. For example, when the error arises by perturbing each surface by a random Wishart-distributed matrix having a small number of degrees of freedom and rescaled so as to have amplitudes that are only 0.1% of that of the corresponding matrix diagonal, we can barely see the perturbations in a plot of Σ(21), but there is still severe degradation of shape from inverting the corresponding perturbed surfaces. In spite of the popularity of the Matérn covariance model in spatial data analysis, its high condition number makes it an extreme case that we should rather hope to avoid in practice. A good account of the impact of perturbation on the inversion of a matrix can be found in Stoer and Bulirsch (2010).
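The conditioning argument is easy to reproduce numerically. The sketch below (not the authors' code) builds a Matérn covariance matrix on an equally spaced grid and reports its condition number; the parameterization of the Matérn argument is an assumption made to match the garbled formula above, so the numbers need not reproduce those quoted exactly.

```python
import numpy as np
from scipy.special import kv, gamma

def matern_cov(tvals, alpha=1.0, beta=0.1, nu=4.0):
    """Matern covariance sigma(s,t) = alpha*(beta*|s-t|)**nu * K_nu(beta*|s-t|) (assumed form)."""
    x = beta * np.abs(np.subtract.outer(tvals, tvals))
    # x**nu * K_nu(x) tends to gamma(nu) * 2**(nu-1) as x -> 0, which fixes the constant diagonal
    sigma = np.full_like(x, alpha * gamma(nu) * 2.0 ** (nu - 1.0))
    off = x > 0
    sigma[off] = alpha * x[off] ** nu * kv(nu, x[off])
    return sigma

for n in (11, 21, 41):
    t = np.linspace(0.0, 20.0, n)
    print(n, np.linalg.cond(matern_cov(t)))   # condition number grows rapidly with n
```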
On the other hand, the inverse growth residual surface in Figure 2 is hardly affected at all by perturbations from these sources, and the inverse of the approximation to the surface is an excellent representation of the exact inverse. The conclusion is that one should estimate an inverse by inverting an estimate only after careful consideration of the condition number of the matrix to be estimated. To do so, however, is not a simple matter since we usually don't have the real surfaces at our disposal. A comparison of the cross-diagonals of the Wishart and growth residual surfaces suggests that those with most of their variation concentrated around the diagonal, combined with healthy condition numbers for their approximations, are apt to have useful inverse-approximations. These two difficulties, the smoothness/roughness tradeoff and the sensitivity of the inverse matrix to perturbation, are further compounded by the problems caused by the use of covariance estimation methods appropriate to multivariate statistical data in the functional situation. When a sample of N observations y_ij, j = 1, . . . , n_i; i = 1, . . . , N is available, the usual estimate of σ involves an initial approximation of the discrete data by a set of functions x_i, and then the application of the functional version of the usual multivariate estimate, namely σ̃(s, t) = (N − 1)⁻¹ Σ_i [x̂_i(s) − x̄(s)][x̂_i(t) − x̄(t)], where the x̂_i's and x̄ are estimates of the x_i's and their pointwise mean, respectively. However, this estimation procedure has a number of undesirable features. It ignores the nature of the dependency of each x_i on the corresponding data due to measurement errors, variation in the distribution and number of the y_ij's, the number K of basis functions used to represent the x_i's, and other aspects of the smoothing process. Moreover, positive definiteness of σ̃ will be completely lost if N is smaller than the n_i's or K. The methodology presented in this article works directly with the discrete data rather than requiring an intermediate smoothing step and is virtually always positive definite.
An Overview of FE-Smoothed Covariance Surfaces
The FE-Σ estimation strategy uses a functional version of the Choleski decomposition S = LL′ of an estimate S of a covariance matrix derived from functional data. This is σ̂(s, t) = ∫ λ(w, s)λ(w, t) dw, where the linear integral operator ∫ λ(w, s) f(s) ds on a function f is the functional analogue of the matrix operation Lf on a vector f. The domain of λ is a parallelogram, a natural modification in the functional context of the lower triangular structure of L, and is illustrated for a simple example in the left panel of Figure 4. The width of the parallelogram can be chosen so as to construct band-structured covariance surfaces, and the right panel of the figure displays the corresponding positive support of the estimate of σ. Finite element approximations of surfaces over spatial domains are essentially the generalization of spline approximations over line segments. A regular triangular mesh is constructed over the parallelogram domain, where "regular" means that no vertex of a triangle is in the interior of another triangle's edge, just as the design of a spline representation over a line begins with the subdivision of an interval into interior line segments. In Figure 4 this is achieved by subdividing the ordinate into I = 4 equal intervals, subdividing the abscissa into J = 2 intervals of the same width, and dividing each square within the parallelogram into two right triangles.
The second step is the construction of basis functions having local support, analogous to the support of a spline basis function over a small number of adjoining intervals. As with polygonal or order-two splines constructed from linear combinations of tent functions centered on knots, a simple but effective basis function is centered on a single vertex, is piecewise linear, and is nonzero only over the interior of the hexagonal region covered by the triangles sharing that vertex. For example, the basis function centered on the point (1, 2) in the left panel of Figure 4 has value one at that point and declines linearly to a value of zero on the boundary of the hexagonal region surrounding that point. Further details are taken up in Section 2.1.
We designate such a basis function for building λ̂ as φ_kℓ(w, s), the use of index pairs being natural for this application, so that the representing function is λ̂(w, s) = Σ_k Σ_ℓ c_kℓ φ_kℓ(w, s). The number (I + 1)(J + 1) of basis functions does not depend on the order of any Σ or Σ⁻¹ to be estimated, so that σ̂(s, t|C) is defined by a coefficient matrix C whose size can be adapted to the amount and accuracy of available data and the desired complexity of σ̂. Figure 5 displays the λ surface corresponding to the Matérn covariance surface in Figure 1. The ridge, starting at a sharp peak near (T, 0) and progressively broadening toward (−10, T), is fairly typical of the topography of λ when the covariance surface is strongly diagonal.
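To make the Choleski-factor construction concrete, the sketch below (not the authors' code) discretizes a toy λ(w, s) on a grid and approximates σ̂(s, t) = ∫ λ(w, s)λ(w, t) dw by the trapezoid rule; the resulting evaluation matrix is nonnegative definite by construction, up to rounding. The particular λ, grid sizes, and domain are hypothetical.

```python
import numpy as np

# Toy Choleski factor lambda(w, s) on a w-by-s grid, set to zero for w > s.
T, B = 4.0, 2.0
w = np.linspace(-B, T, 121)
s = np.linspace(0.0, T, 81)
W, S = np.meshgrid(w, s, indexing="ij")
lam = np.exp(-(S - W) ** 2) * (W <= S)          # hypothetical lambda surface

# sigma(s, t) = integral over w of lambda(w, s) * lambda(w, t) dw  (trapezoid rule)
dw = w[1] - w[0]
weights = np.full(len(w), dw)
weights[[0, -1]] = dw / 2.0
sigma = lam.T @ (weights[:, None] * lam)        # an 81 x 81 evaluation of sigma

print(np.linalg.eigvalsh(sigma).min())          # >= 0 up to rounding error
```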
Historical Notes and Overview
There is already a substantial literature in multivariate statistics on parameterizations Σ(θ) of the covariance matrix that circumvent the greediness of the estimate S with respect to degrees of freedom. Pinheiro and Bates (2000) described several patterned covariance structures, and Smith and Kohn (2002) added to these by using the normalized Choleski decomposition Σ = L′DL, where the diagonal of the Choleski factor L contains ones. Band-structured matrices, however, do have a design dependence on the order n of the covariance matrix that is avoided by the approach in this article. Most structured covariance proposals also presume equally spaced times of observation.
FE-Σ smoothing shares some features with the graphical lasso that Friedman, Hastie, and Tibshirani (2008) developed to estimate large Σ⁻¹ when n >> N. Their approach uses the ℓ₁ norm to impose sparsity on the estimate. Their R package glasso, moreover, permits the constraint that values in Σ̂⁻¹ beyond a prespecified distance from the diagonal are zero. However, a functional version of σ is not produced that can be used to compute covariance matrices for any vector t of time values; and, since the method does not use the spacings between t-values, glasso would only be appropriate for equally spaced observation points. Moreover, the aim in this article is not sparsity but rather smoothness, although admittedly sparsity also acts as a regularization principle. Simulation-based comparisons of the performance of the glasso package and this article's methods are provided in Section 3. Yao, Müller, and Wang (2005) developed a technique for the principal components analysis of sparsely and irregularly sampled functional observations that has been successfully adapted to a considerable number of other functional estimation problems. They use conditional expectation to compute principal component scores, and this requires an estimate of Σ_i⁻¹ for each functional observation. In their software package PACE, a relatively involved and ad hoc bivariate kernel smoothing procedure is used to convert N one-dimensional smoothing matrices S_i into an estimate of σ(s, t) whose evaluation matrices can be inverted. Unfortunately, the covariance surfaces produced in this fashion are not guaranteed to be positive definite. In fact, using Matlab code supplied by one of the authors for estimating surfaces from data simulating those in Figures 1 and 2, it was observed that not one of 1024 estimated surfaces was positive definite. Section 3 provides further details.
Finite element methods were originally developed to approximate solutions to partial differential equations, and consequently there is a large literature on the topic in both numerical analysis and engineering. A readable introduction that focuses on both spline approximation and computation is Larson and Bengzon (2013), and a more comprehensive treatment of higher dimensional spline approximation is Lai and Schumaker (2007).
The following section provides details on how the functional Choleski factor λ(w, s) is constructed using a finite element representation and the properties of the corresponding functional covariance estimate, as well as the estimation of Σ and Σ⁻¹ from a collection of sample covariance matrices. Section 3 contains the results of some simulation experiments, and this is followed by Section 4 describing a real-data application. The supplemental materials associated with this article provide further detail on computational aspects of FE-Σ.
The FE-Estimate σ̂(s, t)
A method for fast computation of the corresponding representation of σ̂(s, t) is presented, and extended to include the superposition of two or more covariance surfaces to permit varying levels of resolution over the domain of σ̂. A roughness penalty is also specified to impose smoothness on the result.
Linear Finite Element Basis Functions φ k (w, s)
We assume without loss of generality that the functional data are defined over the interval [0, T] shown in Figure 4. The interval of integration over w in (2) depends on (s, t) and on the lag bound B. When B > T, the capacity of λ to capture covariance as far out as σ̂(0, T) improves, and B = 2T corresponds to an effectively unconstrained covariance kernel. That is, variation in λ over s tends to capture variation in σ along its diagonal, and variation over w does the same for its cross-diagonal shape. The domain for the nonzero portion of σ is plotted in the right panel of Figure 4. The basis function expansion (4) is defined by subdividing the domain into triangular subregions, and the regular triangulation shown in Figure 4 is natural and convenient for computation, although by no means the only possible triangulation. Let I > 0 be a number of equal intervals, each of length δ = T/I, spanning [0, T], and define B = Jδ, so that J > 0 is the number of intervals of the same size spanning [0, B]. That is, B/T is a simple fraction, equal to 1/2 in Figure 4. The resolution of the triangulation, therefore, is completely defined by the number of intervals I and the lag number J. There are 2IJ right triangles covering the domain, divided equally into those with left and right orientations, and there are M = (I + 1)(J + 1) vertices. Each triangle is a finite element in the terminology of finite element analysis used in the approximation of solutions of partial differential equations (Grossmann, Roos, and Stynes 2007).
Each basis function φ_kℓ is associated with one of the M vertices, so that the coefficient indices k = 0, . . . , I and ℓ = 0, . . . , J are associated with the vertical and horizontal coordinates of the nodes defining λ̂, respectively. Let the I + 1 by J + 1 matrix C contain these coefficients, and let c = vec(C) denote this matrix stored column-wise as a vector. The basis function φ_kℓ is defined to be piecewise linear, to have value one at node (k, ℓ), to decline to value 0 at the opposite edges of the six triangles that share this node, and to be zero beyond the hexagonal region covered by these triangles. However, when a vertex is on the boundary of the domain, it can be assumed without loss of generality that the basis function height outside of the domain is 0, which permits the notational simplification of defining all integrations over w as being over [−B, T]. Defining the basis functions in this way implies that c_kℓ is the height of the λ-surface at node (k, ℓ).
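For concreteness, a piecewise-linear nodal ("hat") basis function on a regular three-direction triangulation has a simple closed form. The sketch below assumes the squares are split along diagonals in the (1, 1) direction, which may differ in detail from the orientation used in Figure 4.

```python
import numpy as np

def hat_basis(w, s, node_w, node_s, delta):
    """Value 1 at the node, declining linearly to 0 on the hexagon of six triangles
    sharing the node (squares of side delta split along the (1, 1) diagonal)."""
    u = (np.asarray(w, dtype=float) - node_w) / delta
    v = (np.asarray(s, dtype=float) - node_s) / delta
    return np.maximum(0.0, 1.0 - np.maximum(np.abs(u), np.maximum(np.abs(v), np.abs(u - v))))

print(hat_basis(1.0, 2.0, 1.0, 2.0, 1.0))   # 1.0 at the node itself
print(hat_basis(2.0, 2.0, 1.0, 2.0, 1.0))   # 0.0 at a neighbouring node
```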
The corresponding covariance surface (5) is also piecewise linear and is defined over the triangular mesh shown in the right panel of Figure 4. The covariance σ̂ = 0 outside of the triangulated region, and we see, therefore, that σ̂(0.5, 3.5) = σ̂(3.5, 0.5) = 0.
The choice of I and J very much depends on whether one is approximating Σ(t) or Σ⁻¹(t), as well as on the number of sampling points and their spacing in t. The number I of intervals over [0, T] controls the resolution or amount of detail in the estimated surface. The quality of the resolution is of course also limited by the number and spacing of the sampling points. When σ is smooth, as will usually be the case, the shape of the estimated surface will not depend critically on I, and therefore one can, as we do in spline smoothing, control the smoothness of the estimated surface by the choice of I. That is, the smaller I, the smoother the estimated surface, but at the cost of missing fine detail. On the other hand, we see in both the Matérn and growth surfaces the general principle that the shape of Σ⁻¹(t) will depend critically on the number and spacing of points in t. Consequently, in the equal spacing case, it can be useful to set I to n − 1.
The number J plays the role of eliminating the estimation of the surface at lags |s − t| larger than those where the covariance has any appreciable deviation from zero. For the estimation of smooth covariance surfaces, this will have to be chosen on the basis of some preliminary exploration of the surface. As noted, J = I implies a surface equal to zero for lags of size T; but this will not generally make sense for periodic records, where J = 2I may be required to permit an unrestricted covariance surface. On the other hand, the estimation of Σ⁻¹(t) will benefit from making J only large enough to catch the oscillations of this surface around the diagonal, which, for smooth covariance structure, will decay rapidly and at a rate that depends on n. In other words, the need for I to be large in this inverse covariance case can be offset by the possibility of reducing J.
Computing Inner Product Matrices R(s, t )
Equation (5) implies that the covariance surface σ(s, t) will be piecewise linear and differentiable to the first order. Defining the order M symmetric matrix R(s, t), called the mass matrix in the finite element literature, as containing the inner products R_{kℓ,k′ℓ′}(s, t) = ∫ φ_kℓ(w, s) φ_k′ℓ′(w, t) dw, leads to an expression that is quadratic in c, σ̂(s, t) = c′R(s, t)c. The computation and storage of the R(s, t) matrices will be a first step in an application prior to optimizing some fitting criterion with respect to c. Fortunately, most of the elements in each matrix will correspond to nonoverlapping basis functions and, therefore, have the value 0, so that storing them in a sparse storage mode will greatly economize on disk space. For example, an application involving daily annual weather data (n = 365) and M = 72 would require about 8 × 10¹⁰ bytes to store all the matrices in full storage mode, but only a tolerable 6 × 10⁵ in sparse mode. A typical application will involve both a computation of R-matrices over a set of t-values at data points and over a fine mesh for display purposes. If the sampling points vary in spacing and/or number from one record to another, a set of n_i(n_i + 1)/2 R-matrices will be required for each record. The computation of these matrices requires considerable discussion and is, therefore, detailed in the supplementary materials.
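A small numerical sketch of the quadratic-form evaluation (not the authors' code): it places hat basis functions on a simplified rectangular lattice rather than the parallelogram of Figure 4 and uses brute-force quadrature in place of exact element integrals, so it only illustrates the structure of R(s, t) and σ̂(s, t) = c′R(s, t)c.

```python
import numpy as np

def hat(w, s, node_w, node_s, delta):
    # Piecewise-linear nodal basis on a three-direction mesh (orientation assumed).
    u, v = (w - node_w) / delta, (s - node_s) / delta
    return np.maximum(0.0, 1.0 - np.maximum(np.abs(u), np.maximum(np.abs(v), np.abs(u - v))))

T, delta = 4.0, 1.0
nodes = [(nw, ns) for ns in np.arange(0.0, T + delta, delta)
                  for nw in np.arange(0.0, T + delta, delta)]
c = np.random.default_rng(0).uniform(0.0, 1.0, len(nodes))   # hypothetical coefficients

wgrid = np.linspace(0.0, T, 401)          # quadrature grid over w
dw = wgrid[1] - wgrid[0]

def R(s, t):
    """Mass matrix with entries  integral over w of phi_m(w, s) * phi_m'(w, t) dw."""
    Phi_s = np.array([hat(wgrid, s, nw, ns, delta) for (nw, ns) in nodes])
    Phi_t = np.array([hat(wgrid, t, nw, ns, delta) for (nw, ns) in nodes])
    return Phi_s @ Phi_t.T * dw

def sigma_hat(s, t):
    return c @ R(s, t) @ c                # quadratic form c' R(s, t) c

print(sigma_hat(1.5, 2.0), sigma_hat(2.0, 1.5))   # symmetric in (s, t)
```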
Combining Multiple Covariance Surfaces
The phenomenon of covariance can arise from multiple sources, and in particular it is not uncommon to observe strong local covariances associated with small lags |s − t| combined with a longer-range, smoother covariance structure over larger lags. This is illustrated in the analysis of the Fels growth data in Section 4 and is also the principal rationale for the factor analysis model in multivariate statistics. Covariance matrices with dominant diagonals in general have much lower condition numbers, so that enhancing the diagonal regions of an approximation can greatly improve the quality of the inverse of the approximation as an estimate of its true inverse.
Superimposing two or more covariance structures can be accommodated within this framework by combining two or more covariance surfaces within the additive structure σ̂(s, t|I, J) = Σ_{r=1}^{p} c_r′ R(s, t|I_r, J_r) c_r.
The short-range covariance would involve a term in this expansion with a fairly large I combined with a small J, while the long-range covariance detail would be modeled with smaller values of I and J, with J being equal to or even greater than I.
A Roughness Penalty γP(c) for λ(w, s)
The smoothness of σ is determined by that of λ, which in turn can be quantified by the roughness penalty P(c) = c′Kc, where the highly sparse order (I + 1)(J + 1) matrix K contains the inner products of the first partial derivatives of the basis functions. By augmenting a fitting criterion F(c) to be minimized to G(c) = F(c) + γP(c), finer control over the smoothness of σ can be achieved by manipulating the smoothing parameter γ than is possible by keeping I small.
Estimation from N Covariance Matrices
Let the observed data be in the form of N sample covariance matrices S_i of order n_i, respectively, possibly involving unequally spaced t-values; let Σ_i(c) denote the estimated covariance matrix for case i; and let G(Σ, S) be a loss function. Then the fitting criterion measures the total fit to the data with an added penalty for roughness, where the stiffness matrix K is defined in (8) and γ controls the size of the penalty on roughness. In the balanced design case where all records have the same observation points, N can be taken as 1. The negative log-likelihood of the Wishart distribution used as the loss function is equivalent to the expression sketched below, and its counterpart for the inverse Wishart distribution has a negative sign on the second term. Note that this loss function is defined in terms of Σ⁻¹ rather than Σ and, therefore, its natural application is to the estimation of Σ⁻¹.
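The displayed fitting criterion and loss did not survive extraction. A standard penalized form consistent with the surrounding description, together with a conventional writing of the Wishart negative log-likelihood in terms of Σ⁻¹, is offered here only as a plausible reading of those displays:

\[
F(\mathbf{c}) = \sum_{i=1}^{N} G\big(\Sigma_i(\mathbf{c}),\, S_i\big) + \gamma\, \mathbf{c}'K\mathbf{c},
\qquad
G(\Sigma, S) = -\log\big|\Sigma^{-1}\big| + \operatorname{trace}\big(\Sigma^{-1} S\big),
\]

with the inverse-Wishart counterpart carrying a negative sign on the trace term, as noted in the text.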
A Simulation Experiment
This simulation experiment is designed to show how well S⁻¹, FE-Σ, and glasso perform for the more difficult problem of estimating Σ⁻¹ for data arising from two functional covariance structures representative of what might be seen in practice. Figure 6 displays the covariance surface for the logarithm of average annual precipitation for 35 Canadian weather stations, described by Ramsay and Silverman (2005). This surface is periodic with strong covariance patterns widely dispersed over the year, and has condition number 581.4. The second example is the stylized growth residual surface shown in Figure 2, which has covariance more concentrated about the diagonal. Both matrices are the result of evaluating the surfaces over 21 equally spaced points.
The results in Table 1 are root-mean-squared errors (RMSE) and squared multiple correlations or variance ratios R² computed from ||Σ̂⁻¹ − Σ⁻¹||²/||Σ⁻¹||². The accuracy of the estimate S⁻¹ provides a useful baseline for assessing performance. All the FE-Σ analyses used the inverse Wishart criterion since the objective was to estimate Σ⁻¹.
We see in the top panel of the table that the FE-Σ analysis with the largest basis, defined by I = J = 20, performs about as well as S⁻¹ for both surfaces, which is to be expected since it has the capacity to nearly interpolate each data matrix. Good recovery is obtained for N = 200 and 400, there is some value in the N = 100 estimates, but N = 50 is not enough data to support a useful estimate.
There are three ways to impose some smoothness on the estimated surfaces: (1) use a coarser grid such as I = J = 10, shown in the second panel; (2) take advantage of the known strongly diagonal structure in Σ⁻¹ by reducing J while keeping I = 20, as shown in the third and fourth panels where J = 6; and (3) use the roughness penalty in (9). Results with the roughness penalty failed to achieve any interesting improvement and are not shown. We see that a compact band-structured estimate using FE-Σ(20,6) was able to recover useful information about Σ⁻¹ for N = 50 and also to improve the estimation for the growth surface right up to N = 400. The coarse grid approach did not work, however, because too much detail in the surface was sacrificed. The band-structured fit by glasso failed to improve the estimation of the inverse relative to S⁻¹, primarily because it essentially interpolated the surface.
The recoveries of the surface Σ itself using the Wishart criterion are not shown, since both FE-Σ and glasso performed approximately as well as S itself, with the exception that the coarse grid FE-Σ(10,10) was substantially better for the growth surface. Analyses of simulated samples using the PACE code supplied by the authors were also carried out, but the quality of the estimates was found to be too poor to be competitive for recovery of Σ, and the estimates were virtually always nonpositive definite. The PACE code compensates for this by adding a judicious constant to the diagonal of the surface. However, this seems undesirable given that it is an estimate of Σ⁻¹ that is used for estimation of principal component and other structures by that approach.
In summary, the simulation suggests that FE-Σ will do at least as well as S⁻¹, but that building a strong band structure into the estimate will be profitable, especially for noisier data and smaller sample sizes. The use of coarser bases can also be helpful for smoothing small sample covariance matrices. The R function glasso seems not to have much to offer to the estimation of Σ⁻¹.
A Factor Analysis of the FELS Female Growth Residuals
The Fels growth data bank (Roche 1992) is one of the world's largest collections of systematic measurements of the height of children, and now contains third generation records. Heights were measured at roughly six-month intervals after age 3 and up to a maximum of age 22; but the years from birth to three were more densely sampled. A variable indicating when the measurement was of supine body length was included. Ramsay, Altman, and Bock (1994) compared results from the Fels data base with those from the Zurich data of similar size, and noted that the only significant difference was that the average age of puberty in the Fels data was advanced by about six months. In the analyses described here, the focus is on detecting any residual covariance structure that may be present in a child's height measurements after its growth curve has been estimated using the monotone smoothing process described in Ramsay (1998), Ramsay and Silverman (2005), and Ramsay, Hooker, and Graves (2009). The variation from child to child in the age of puberty is the largest source of variation in growth data, and inevitably affects the size of the estimated residual variation. Consequently, the fits ŷ_i(t), i = 1, . . . , 161, to the data between ages 3 and 18 were registered by the continuous registration method of Ramsay and Li (1998). The registration process nonlinearly transforms the physiological or growth times t_ij, j = 1, . . . , n_i, to the clock times h_i(t_ij) so that, even if the original measurement times were common across records (they were not for the Fels data), the spacings between observation times after registration inevitably varied within a girl's record and from one girl to another.
After monotone smoothing and registration, the residual values for each girl, ê_ij = y_ij − ŷ_i[h_i(t_ij)], were calculated and a rank 1 covariance matrix S_i = ê_i ê_i′ computed. The FE-Σ estimation used the full objective function for unbalanced data described in Section 2.5. The model combined two FE-covariance structures as described in Section 2.3, the first defined by a coarse mesh with numbers of intervals I = 15, J = 5, and lag B = 5 years, and the second with a fine but strongly diagonal mesh with numbers of intervals I = 30, J = 4, and lag B = 2 years. In this way, short-range strongly diagonal variation could be combined with longer-range covariation while still using only a modest number of basis functions for the representation of the Choleski factor. Cross-validation was used to select a smoothing parameter γ by using a random 2/3 of the cases as a training sample and the remainder as a validation sample. This selected γ = 0 as the optimum smoothing parameter, as was expected in view of the limited dimensionality of the basis system. Figure 7 shows the composite residual covariance surface estimated at 31 equally spaced values in the upper left panel. Along the diagonal, the residual standard deviation is about 0.4 cm at age 3, declines to about 0.3 cm by age 8, moves back to 0.4 cm over the adolescent years, and then declines to 0.3 cm at 18 years. The fine diagonal mesh contributes almost all of the total diagonal standard deviation at age 3, a substantial proportion in later childhood and again at age 18, but relatively little during the adolescent years. The covariance contributed by the fine mesh at a lag of one year is −0.05 cm at almost all ages. The cross-diagonal plot in the lower left panel, which is positioned at 13 years on the diagonal, indicates that covariance drops from its diagonal value of 0.2 to −0.05 over a lag of two years, returns to positivity at 3 years, and then drops to zero just beyond 4 years. The negative trench on either side of the diagonal persists over all but the last years of growth, which suggests a pulse-like character to growth. This is consistent with the oscillation in the first derivative of height noted in Ramsay and Silverman (2005) in the analysis of data on the growth of a ten-year-old boy (Thalange et al. 1996).
The condition number of the covariance matrix is 525.7, and the simulation results encourage the inspection of the inverse of this matrix. The right panels display the residual precision matrix and its cross-diagonal, also positioned at 13 years, respectively. If used as a weight matrix, Σ̂⁻¹ would weight the nearest neighbors within a wide neighborhood during the childhood years; but we recall that the data were registered before the analysis, and this probably had some effect on these results, especially at the age of puberty.
Discussion and Conclusions
The primary benefit of the FE-Σ methodology is to represent covariation in truly functional terms as a bivariate function σ(s, t) constructed from a finite element approximation of its functional Choleski factor. That this σ-surface can be evaluated anywhere over the interval of observation is an aspect that is convenient for graphical display as well as for filling in or gridding unevenly distributed or sparse data. The FE-Σ procedure also permits the estimation of composite surfaces composed of fits with different horizontal and vertical resolutions of λ(w, s), as illustrated by the analysis of the Fels residual data. This aspect provides a promising alternative to the PACE method for covariation estimation of Yao, Müller, and Wang (2005).
A key insight during the development of this method was that a satisfactory approximation of a covariance matrix does not necessarily imply a useful approximation of its inverse. A large condition number can mean that a small perturbation of Σ results in an arbitrarily large perturbation of Σ⁻¹. This is important for the many statistical methods using matrix inverses, including regression analysis and regression smoothing, where the least squares estimate depends on (X′X)⁻¹, and where recent research has highlighted the surprisingly large impact of even fairly small errors in covariates on estimates of regression coefficients and the predictions that they define (Carroll et al. 2006; Ruppert, Wand, and Carroll 2003).
Nevertheless, where the eigenstructure of the matrix is reasonably well conditioned, as for the log precipitation and growth residual structures, the simulation results confirmed that FE-Σ can provide a more accurate estimate of Σ⁻¹ than either the inverse of S or the glasso estimate, which in any case is inapplicable for unevenly distributed sampling times such as those in the Fels data.
FE-Σ methodology can be generalized in many ways. Although the use of a lattice structure for λ(w, s) with w and s increments of the same size brings computational and programming advantages, lattices constructed to allow J to vary over s are feasible and under development by the author. Extensions to quadratic and higher order finite elements are described in many texts on finite element methodology.
Supplementary Materials
An appendix is provided with additional details on computational issues.
Photo-Induced Force Microscopy by Using Quartz Tuning-Fork Sensor
We present the photo-induced force microscopy (PiFM) studies of various nano-materials by implementing a quartz tuning fork (QTF), a self-sensing sensor that does not require complex optics to detect the motion of a force probe and thus helps to compactly configure the nanoscale optical mapping tool. The bimodal atomic force microscopy technique combined with a sideband coupling scheme is exploited for the high-sensitivity imaging of the QTF-PiFM. We measured the photo-induced force images of nano-clusters of Silicon 2,3-naphthalocyanine bis dye and thin graphene film and found that the QTF-PiFM is capable of high-spatial-resolution nano-optical imaging with a good signal-to-noise ratio. Applying the QTF-PiFM to various experimental conditions will open new opportunities for the spectroscopic visualization and substructure characterization of a vast variety of nano-materials from semiconducting devices to polymer thin films to sensitive measurements of single molecules.
Introduction
Combining scan probe techniques with optical illumination is a powerful approach in nano-imaging technology to add spectroscopic sensitivity to the nanoscopic resolution provided by a sharp tip. A new and emerging technique in this area is photo-induced force microscopy (PiFM) [1,2], which enables the spectroscopic probing of materials with a spatial resolution well under 10 nm. The contrast in PiFM is directly related to the optical properties of the sample underneath the tip and can, in principle, be conducted in noncontact/tapping mode atomic force microscopy (AFM). One of the advantages of this approach is a far-field noise-free detection so that it shows a superior sensitivity to the light detection-based nano-optic measurement techniques such as scanning nearfield optical microscopy (SNOM). The PiFM approach can be used for probing optical fields near surfaces [3] and nanostructures [4,5], as well as for detecting spectroscopic transitions in the sample [6][7][8]. The photo-induced forces measured in PiFM act in a spatially confined region on a nanometer scale, which translates into a very high spatial resolution even under ambient conditions. Moreover, the PiFM approach is compatible with a wide range of optical excitation frequencies from the visible [9] to the mid-infrared [10,11], enabling the nanoscale imaging of various contrast mechanisms based on either electronic or vibrational transitions in the sample.
Despite this promising feature making PiFM an attractive method for the characterization of nanomaterials, it is still a challenge to install the system in various environmental conditions such as the vacuum chamber or the liquid system because of the complex beam bouncing optics to monitor the probe motion. Replacing the conventional cantilever probe with a quartz tuning fork (QTF) can be a solution because the QTF is a self-sensing sensor, which does not require the complex beam bouncing optics [12]. In this article, we demonstrate the nano-optical imaging performance of the quartz tuning fork-based photo-induced force microscopy by compactly configuring the system with the versatile light coupling modes for the transparent (bottom illumination) and the opaque (side illumination) samples.
Photo-Induced Forces
When the light illuminates the tip-sample junction, two kinds of photo-induced forces are manifested. One is the optical field gradient and scattering force [2], and the other is the thermal expansion force mediated by the intermolecular interaction between the tip and the sample [7]. Schematic diagrams of the two kinds of force interactions are sketched in Figure 1a,b, respectively.
Figure 1. Diagrams of (a) the field gradient and scattering force and of (b) the thermal expansion force: µt and µs are the induced dipoles on the tip and sample, respectively. Vt is the absorption volume due to the tip-enhanced field, and Vd is the absorption volume due to the transmitted light.
Optical Field Gradient and Scattering Force
The incident light induces a dipole on the tip, which then mutually interacts with the reflected field from the sample. The mechanisms are illustrated in a simplified picture in Figure 1a. When the field near the tip is E, the optical field gradient and scattering force on the tip is given in Refs. [2,13] as Equation (1), where αt and αs are the complex effective polarizabilities of the tip and sample, respectively, within the point dipole approximation; α′t and α″t are the real and imaginary parts of the tip's polarizability; and z is the distance from the tip end to the surface. The field E consists of the incident field and the nearfield at the tip-sample junction. In a tightly focused beam case, the long-range gradient force from the first term in Equation (1) is generated due to the strong z-dependence of the incident field. However, it becomes negligible at the center of the focal spot, where the incident field shows an approximately uniform field distribution. Then the total force at the tip end becomes Equation (2), where E0 is the incident field. The first term in Equation (2) is the short-range attractive force due to the multi-reflection between the tip and the sample, which can be analyzed by the image dipole method with the sample information. The second term is the long-range repulsive scattering force, which is based on the direct light-tip interaction. This force formula is also applicable to the side-illumination geometry by replacing E0 with E0 cosθ, where θ is the incident angle. The short-range force strongly increases with the z−4 dependence as the tip approaches the sample. In classical nearfield scattering theory, αs can be successfully explained with the help of an image dipole model, which yields αs = β αt, where β is the complex electrostatic reflection coefficient, given as β = (ε − 1)/(ε + 1), where ε is the dielectric constant of the sample. This short-range-induced dipole force typically shows a dispersive spectral line shape and strongly increases in metal/plasmonic materials where ε is negative, whereas the force is typically small, under a few pN, in organic and inorganic samples where ε is positive, even at the molecular resonance [14][15][16][17].
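As a rough numerical illustration (not the paper's own calculation), the sketch below evaluates the quasi-static reflection coefficient β = (ε − 1)/(ε + 1) for two hypothetical sample permittivities and the z−4 distance scaling of the short-range induced-dipole force; the prefactor is left out, so only relative values are meaningful.

import numpy as np

def beta(eps_sample):
    # Quasi-static image-dipole reflection coefficient beta = (eps - 1) / (eps + 1)
    return (eps_sample - 1.0) / (eps_sample + 1.0)

# Hypothetical permittivities: a dielectric (positive) and a metal-like (negative) sample
print(abs(beta(2.5 + 0.1j)))    # modest |beta| for a dielectric
print(abs(beta(-10.0 + 1.0j)))  # |beta| > 1 for a metallic/plasmonic response

# Relative strength of the short-range force, F ~ z^-4 (arbitrary prefactor)
z = np.array([1.0, 2.0, 5.0, 10.0])   # tip-sample distances in nm (illustrative)
print((z / z[0]) ** -4)               # falls off 16x at 2 nm and 1e4 x at 10 nm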
Thermal Expansion Force Mediated by Intermolecular Interaction between Tip and Sample
Near molecular resonances, there is strong light absorption in the sample followed by a temperature rise, which results in a strain deformation of the sample that eventually gives rise to thermal expansion. The thermal expansion generates a force, which is the result of several causal processes: First, there is an energy exchange between the sample and the light field, which scales with the optical absorption coefficient and results in a temperature rise (∆T ~ Pabs). The strong field-enhancement near a sharp metal tip can further increase the absorption of the sample under the tip. Second, the accumulated heat diffuses and deforms the sample to induce a thermal expansion (∆L ~ ∆T). Third, the thermal expansion changes the tip-sample distance, which introduces a modulation of the intermolecular force between the tip and the sample (∆F ~ ∆L). The force is given in Ref. [7], where ∆Ltot is the total thermal expansion of the sample and Fts is the intermolecular force, which is typically made up of the attractive force in the noncontact region and the repulsive force in the contact region [18]. Because the force is based on absorption, the force spectrum follows the dissipative line shape. There are two kinds of thermal expansion of the sample underneath a sharp metal tip. These mechanisms rely on the fact that the sample experiences two kinds of optical fields; one is the incident laser field, and the other is the tip-enhanced nearfield. The direct thermal expansion (∆Ld), which results from absorption of the incident light, is independent of the presence of the tip and thus exhibits a continuous increase with the sample thickness. On the other hand, because the field-enhancement at the tip-sample junction is limited by the extent of the nearfield underneath the tip and decreases with increasing sample thickness on high-index substrates such as Si, the tip-enhanced thermal expansion (∆Lt) increases up to a certain sample thickness and eventually decreases, i.e., it shows a maximum [7]. The expansion mechanisms are illustrated in a simplified picture in Figure 1b. This force is typically in the range between a few tens of pN and a few hundreds of pN. Thus, on the molecular resonance, the thermal expansion force overwhelms the short-range-induced dipole force.
PiFM Sideband Coupling Dynamics
The PiFM sideband coupling dynamics can be understood as a bimodal AFM operation, where the probe is driven by two external driving sources, a piezo ditherer (f2) and a light modulation. In the resulting equations of motion, i denotes the ith eigenmode of the probe, zi is the ith sinusoidal motion with amplitude Ai and phase θi, z ≈ z1 + z2, F2 is the force from the external piezoelectric actuator that mechanically dithers the quartz tuning fork, and Fts is the intermolecular force between the tip and the sample. The PiFM sideband-coupled force acting on the probe at the fundamental eigenmode (f1) is given in Ref. [19]. The tapping amplitude, which is demodulated at the second mechanical resonance (i = 2), is sideband-coupled with the photo-induced force gradient and carries it to the fundamental mechanical resonance (i = 1). The PiFM amplitude with a sideband-coupled mode, which is demodulated at f1, depends on the derivative of the photo-induced force rather than the force Fpif itself. Thus, constant forces such as the scattering force in Equation (1) and the thermal expansion force based on direct thermal expansion in Equation (2) are excluded with the PiFM sideband mode.
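A minimal sketch (with made-up amplitudes and resonance values close to those quoted later) of why modulating the photo-induced force gradient at fm = f2 − f1 while the tip taps at f2 produces a component at the fundamental f1: the product of the two oscillations contains the difference frequency, which is what the lock-in demodulates.

import numpy as np

f1, f2 = 32.3e3, 190.3e3           # eigenmode frequencies in Hz (roughly as quoted)
fm = f2 - f1                       # light modulation frequency for sideband coupling
fs, T = 2.0e6, 0.02                # sample rate and duration of the toy signal
t = np.arange(0, T, 1.0 / fs)

tapping  = np.cos(2 * np.pi * f2 * t)          # second-eigenmode motion (arbitrary amplitude)
gradient = 0.1 * np.cos(2 * np.pi * fm * t)    # force gradient modulated at fm

mixed = tapping * gradient                     # product term of the sideband coupling
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1.0 / fs)

low = freqs < 100e3                            # look only below 100 kHz
print(freqs[low][np.argmax(spectrum[low])])    # ~32.3 kHz: the difference component f2 - fm = f1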
Instruments
The MultiView 2000 and MultiView 4000 tuning fork-based AFM systems from Nanonics Imaging Ltd. (Jerusalem, Israel) are customized for the PiFM experiment. The MultiView 2000 is coupled to an 800-nm CW laser for the transmission system. The laser beam illuminated the sample in an inverted microscope equipped with a high-numerical-aperture (NA = 1.25) objective lens, as sketched in Figure 2a. The MultiView 4000 is coupled to a 660-nm CW laser with the side illumination geometry. The laser beam illuminated the sample at around 40 degrees with a long working distance objective (NA = 0.6), as sketched in Figure 2b. In the rest of the experiments, the average illumination power for the two lasers is around 400-800 µW. The microscope is operated in tapping mode with a commercial gold-coated tuning fork from Nanonics Imaging Ltd. Typically the fundamental resonance is around 32 kHz, and the second mechanical resonance is around 190 kHz. To gain a higher PiFM signal-to-noise ratio, we use the second resonance for the topography feedback and the fundamental resonance for the PiFM demodulation with an external lock-in (7280 DSP lock-in, Signal Recovery). The laser intensity is modulated by using an acousto-optical modulator at the frequency fm, which is set to the difference frequency between the fundamental and the second resonance (fm = f2 − f1 = 158 kHz) for the sideband operation. For the PiFM demodulation in the lock-in amplifier at the fundamental resonance, the frequency mixer couples the driving frequency (f2) with the light modulation frequency (fm) to generate the reference signal (f1). Both MultiView AFM systems have two kinds of scanners: one is the tip scanner, in which the sample stage is fixed and the tip scans over the sample. The other is the sample scanner, in which the tip is fixed and the sample moves. In the PiFM operation, the tip scan finds the focal spot and parks the tip at the center of it. Then, the sample scan is used for the PiFM imaging of the sample.
Sensitivity of a Probe in PiFM
The minimum detectable force of a QTF is derived from the thermal noise of the ith eigenmode of the QTF, which is given as Ni = √(4KBTBQi/ωiki), where ki, Qi, and ωi are the stiffness, quality factor, and angular resonance frequency of the ith eigenmode of the QTF; KB is the Boltzmann constant; B is the system bandwidth; and T is the absolute temperature. For our tuning-fork parameters given as ω1 = 2π × 32.3 kHz, Q1 = 1125, k1 = 2000 N/m, ω2 = 2π × 188.5 kHz, Q2 = 613, and k2 = 7.8 × 10^5 N/m, the thermal noises are N1 ≈ 0.21 pm and N2 ≈ 0.01 pm for the fundamental and the second eigenmodes, respectively, where T = 300 K and B = 1 Hz. The minimum detectable forces, Fmin,i = kiNi, are 0.38 pN and 1.33 pN for the fundamental and the second eigenmodes, respectively. Because the second eigenmode is less sensitive than the fundamental eigenmode, we use the fundamental resonance for the PiFM demodulation and the second resonance for the feedback.
Sample Preparation
The diluted Silicon 2,3-naphthalocyanine bis(trihexylsilyloxide) (SiNc) in toluene from Sigma Aldrich Inc. (St. Louis, MO, USA) is spin-coated onto the glass substrate. This resulted in molecular clusters of various sizes, from a few nm to a few hundred nm. The graphene layered sample is prepared by mechanically exfoliating graphite and transferring it to a Si substrate with the 3M tape method, which resulted in various layers of graphene from a monolayer to a few tens of layers.
Results
By using the mechanical piezo actuator, the fundamental and the second resonance curves of the quartz tuning fork are measured in Figure 3a,b, respectively. As reported in Ref. [20], the mechanical piezo actuator is better suited to excite the higher eigenmodes of the QTF than the electrical self-oscillation. The measured results (black dots) are well fitted by the Lorentzian curve (red solid line), which gives a resonance and quality factor of 32.3 kHz and 1125 for the fundamental eigenmode and of 188.5 kHz and 613 for the second eigenmode, respectively. Because the quality factor of the second eigenmode is smaller than that of the fundamental, to gain a higher PiFM signal-to-noise ratio, the second resonance is used for the topography probing and the fundamental resonance demodulates the PiFM amplitude with the sideband mode.
The SiNc molecular clusters are mapped in Figure 4. The tip scanner visualizes the focal spot by crossing over the focused beam on the clean glass substrate area in Figure 4b, with the simultaneously measured topography in Figure 4a. The full-width half maximum (FWHM) of the focal spot is around 350 nm. After parking the tip at the center of the spot, the SiNc clusters are successfully visualized by using the sample scanner in Figure 4d. Compared to the simultaneously measured topography in Figure 4c, the PiFM shows a better signal-to-noise ratio because it follows the derivative of the photo-induced force.
For the opaque sample, the side-illumination geometry is applicable rather than the transmission geometry. When the light illuminates the sample from the side, the beam shape is elongated at the surface. The tip scan image on the clean Si substrate visualizes the elongated focal spot well in Figure 5b. Then, the tip parks at the center of the spot, and the sample scan visualizes the graphene layers in Figure 5c,d. The spatial resolution of our QTF-based PiFM measurement is estimated to be under 80 nm from the gray dashed line cut of the graphene in Figure 5d, which is replotted in Figure 5e. An interesting observation is found in Figure 5d, where the PiFM image shows totally different contrast to the topography measurement in Figure 5c. In particular, the circled region in Figure 5d is composed of several irregularly patterned patches, while it appears uniform in thickness in the topography. These dramatically different images might possibly be explained by self-assembled environmental adsorbates of 0.3 nm thickness on the graphene surface [21]. According to Ref. [21], the bright and dark regions have different orientations of chain-like organic molecules whose polarization responses are different.
Discussion
The focal spots in Figures 4b and 5b are measured by using the far-field gradient force in Equation (1), which is mostly the direct light-tip interaction that depends on the shape of the focal volume. In the tightly focused beam case in Figure 4b, the focal spot of the 800-nm wavelength beam has a Gaussian distribution, which measures around 350 nm in diameter (FWHM) by the tip scanning. This corresponds well to the theoretical diffraction limit, given as λ/2NA ≈ 320 nm, where NA is 1.25. In the side illumination geometry, the focal spot is an elongated ellipsoid on the micrometer scale. After parking the tip at the center of the spots, the short-range tip-sample interaction chemically characterizes the samples. Because the short-range-induced dipole force is close to our noise level (approximately under a few pN) for positive permittivity, the tip-enhanced thermal expansion force dominates in the PiFM chemical characterization of the SiNc at the 800-nm absorption as well as the graphene at the 660-nm absorption. The PiFM sideband mode, which is the force gradient measurement, shows a better contrast than the topography because it extracts the localized force by reducing the constant background forces.
Conclusions
In sum, we present the quartz tuning fork as a force sensor in the photo-induced force microscopy. The self-sensing ability with the high stiffness of the QTF helps to compactly configure the PiFM system without the jump-to-contact issue. We demonstrate that the system can be utilized with respect to the angle of the coupled light, which applies to the transparent and opaque sample respectively. Applying PiFM to various environmental systems will open new opportunities for the spectroscopic visualization and substructure characterization of a vast variety of nano-materials with a functionalized tip [22], from semiconducting nanoparticles to polymer thin films to sensitive measurements of single molecules.
"Physics"
] |
Magnetic upconverting fluorescent NaGdF4:Ln3+ and iron-oxide@NaGdF4:Ln3+ nanoparticles
A microwave assisted solvothermal method has been employed to synthesize multifunctional upconverting β-NaGdF4:Ln3+ and magnetic-upconverting Fe3O4/γ-Fe2O3@NaGdF4:Ln3+ (Ln = Yb and Er) nanoparticles. The powder x-ray diffraction data confirm the hexagonal structure of NaGdF4:Ln3+, and high resolution transmission electron microscopy shows the formation of rod shaped NaGdF4:Ln3+ (∼20 nm) and ovoid shaped Fe3O4/γ-Fe2O3@NaGdF4:Ln3+ (∼15 nm) nanoparticles. The magnetic hysteresis at 300 K for β-NaGdF4:Ln3+ demonstrates paramagnetic features, whereas iron-oxide@β-NaGdF4:Ln3+ exhibits superparamagnetic behavior along with a linear component at large applied field due to the paramagnetic NaGdF4 matrix. Both nanoparticle samples provide an excellent green emitting [(2H11/2, 4S3/2)→4I15/2 (∼540 nm)] upconversion luminescence emission under excitation at 980 nm. The energy migration between Yb and Er in the NaGdF4 matrix has been explored from 300-800 nm. The intensity variation of the blue, green and red lines and the observed luminescence quenching due to the presence of Fe3O4/γ-Fe2O3 in the composite have been discussed. These kinds of materials, containing magnetic and luminescence characteristics in a single nanoparticle, open new possibilities for bioimaging applications.
INTRODUCTION
For the last two decades, scientific and industrial interest in developing materials has grown markedly, especially in the preparation of quality materials with enhanced multifunctionality at the nanoscale. [1][2][3][4] For instance, nanocomposites containing luminescent and magnetic characteristics are trending in a wide range of applications, such as bioimaging, diagnostics, and therapeutics. Meanwhile, magnetic iron oxide nanoparticles have also proved promising and are hence used in a wide range of biomedical applications. 2,4 However, little application-oriented work has been performed combining iron oxide with luminescent materials, due to the so-called quenching effect induced by semimetallic Fe3O4 during simultaneous optical excitation/emission with an applied external magnetic field. [5][6][7] These multifunctional nanoparticles can serve as luminescent markers; they can also be controlled by an external magnetic field. 8 Several parameters such as magnetic, electrostatic, hydrophobic and many chemical interactions create boundaries and limitations for the real application of multifunctional nanoparticles, as these issues generate low chemical stability and aggregation at the nanoscale. 2,9,10 To resolve the above mentioned quenching issue, aggregation, and agglomeration, a few approaches are in use, such as: a magnetic core coated with silica, polymer or lipid containing fluorescent components; covalent bonding to a fluorophore via a spacer; a fluorescent shell; and/or quantum dots encapsulated in a polymer or silica matrix. Lanthanide (Ln3+) doped rare earth fluoride upconversion (UC) nanoparticles absorb near-infrared (NIR) photons and emit higher energy photons in the ultraviolet-visible-NIR regions, and are being potentially used in industry and biomedical applications. 11,12 They are preferred nanoluminescent materials because of their high penetration depth and signal-to-noise ratio, large anti-Stokes shift, long luminescence lifetime, good chemical and photochemical stability, resistance to photobleaching, low auto-fluorescence and excellent detection sensitivity. Additionally, Gd3+ containing UC nanoparticles exhibit paramagnetism at room temperature and can efficiently alter the spin-spin relaxation time of surrounding water protons because Gd possesses seven unpaired electrons. 13 Therefore, Gd3+ containing rare earth fluoride UC luminescence nanomaterials have been developed as potential T1-weighted MR imaging contrast agents in biomedical applications and are being explored. 5,14 In this article, we present a facile microwave solvothermal method for the synthesis of bifunctional β-NaGdF4:Ln3+ nanoparticles and the direct coating of the same over the surface of iron-oxide core nanoparticles. We focus on the changes in the intensity of the emission peaks with lifetime variation in these two bifunctional nanomaterials and provide direct evidence of quenching induced by the magnetite/maghemite phase. The nanocomposite particles have been thoroughly characterized using XRD, HRTEM, photoluminescence and dc magnetization measurements. These investigating techniques suggest a comparative approach to understand two different templates of magnetic-luminescent nanoparticles.
Synthesis of iron-oxide@NaGdF4:Ln3+
The iron-oxide nanoparticles were prepared using the method described in Ref. 15. A facile microwave assisted solvothermal reaction is proposed for the synthesis of NaGdF4:Ln3+ and iron-oxide@NaGdF4:Ln3+ (Ln3+ = 20% Yb, 2% Er) nanoparticles. The NaGdF4:Ln3+ shell was prepared in the following way: firstly, 0.78 mmol GdCl3·6H2O, 0.20 mmol YbCl3·6H2O and 0.02 mmol ErCl3·6H2O were added into a solution containing 16 mL oleic acid, to form the rare earth oleate, and 8 mL of 1-octadecene under continuous stirring at ambient temperature for 20 minutes. Meanwhile, 2.5 mmol NaOH was dissolved in 10 mL methanol and then 8 mL oleic acid was added under stirring to prepare a translucent Na-oleate solution. The above two solutions were mixed quickly and stirred for an additional 5 minutes. A separate solution of iron oxide nanoseeds prepared in 1-octadecene was poured slowly into the above solution and stirred for 10 minutes at room temperature. Another solution of 4 mmol fluoride source (NH4F) in 15 mL methanol was mixed properly into the above solution. The mixture was transferred into the reaction vessel of a commercial microwave reactor working at 1000 W. The working cycle of the microwave reactor was set as (i) rapid heating at 20 °C/minute from room temperature to 150 °C; and (ii) 40 minutes at 150 °C. The system was allowed to cool quickly to room temperature, and the obtained samples were washed sequentially with methanol and ethanol, and then dried in an air oven at 50 °C for 6 hours.
Instrumentation
The phase purities of all the samples were checked by X-ray diffraction (XRD) measurements on a Bruker D8 Advance diffractometer using Cu Kα radiation (λ = 1.54 Å). The size and shape morphology were characterized using transmission electron microscopy (TEM, FEI Tecnai, 200 keV). An MPMS3 Quantum Design SQUID magnetometer was used for the dc magnetic measurements, whereas a spectrophotometer (Horiba NanoLog spectrofluorimeter) equipped with an external 980 nm excitation laser source was used for the upconversion experiments.
RESULTS AND DISCUSSION
The phase structures of the synthesized iron-oxide, NaGdF4:Ln3+, and iron-oxide@NaGdF4:Ln3+ nanoparticles were determined from the XRD patterns (Figure 1). The XRD data indicated that NaGdF4:Ln3+ was formed successfully, although the lattice energy mismatch between iron oxide and NaGdF4:Yb,Er suggests the formation of a composite-type structure. 8 The crystallite size was calculated using the Debye-Scherrer equation 16,17 and the mean particle size of the Fe3O4/γ-Fe2O3 nanoparticles was ∼5 nm, whereas for NaGdF4:Yb,Er it was ∼20 nm. After coating NaGdF4:Ln3+ onto Fe3O4/γ-Fe2O3, the calculated size was ∼15 nm, which is subsequently confirmed by the particle size distribution obtained through TEM imaging (Figure 2). Figure 2 shows the TEM images for Fe3O4/γ-Fe2O3, NaGdF4:Ln3+, and Fe3O4/γ-Fe2O3@NaGdF4:Ln3+ nanoparticles. Spherical iron oxide nanoparticles of 5-6 nm were observed (Figure 2a). In Figure 2b, most of the particles have a rod-like morphology and a few particles appear close to spherical in shape. During the coating of iron oxide with NaGdF4:Yb,Er, the overall heterogeneous nucleation was slowed down and an ovoid-shaped morphology was observed, as shown in Figure 2c.
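For reference, a small sketch of the Debye-Scherrer estimate D = Kλ/(β cos θ) used for the crystallite size; the shape factor and the peak width/position below are illustrative values, not the measured ones from Figure 1.

import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.154, K=0.9):
    # Crystallite size D = K * lambda / (beta * cos(theta)), with beta in radians
    beta = np.deg2rad(fwhm_deg)               # peak FWHM converted to radians
    theta = np.deg2rad(two_theta_deg / 2.0)   # Bragg angle
    return K * wavelength_nm / (beta * np.cos(theta))

# Hypothetical peak: FWHM of 1.6 deg at 2-theta = 35 deg with Cu K-alpha gives a size of a few nm
print(scherrer_size(1.6, 35.0))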
The green emitting visible UC luminescence spectra of NaGdF4:Yb,Er and Fe3O4/γ-Fe2O3@NaGdF4:Yb,Er nanoparticles are shown in Figure 3a. On the basis of energy matching conditions, the synthesized nanoparticles are discussed by observing the emission spectra (Figure 3a,b). Under excitation at 980 nm, the green luminescence assigned to the (2H11/2, 4S3/2)→4I15/2 (∼540 nm) transition, red emission originating from the 4F9/2→4I15/2 (∼660 nm) transition, and blue emissions attributed to the 4G11/2→4I15/2 (∼378 nm) and 2H9/2→4I15/2 (∼411 nm) transitions of Er3+ ions were observed. The overall green emission line intensity of Fe3O4/γ-Fe2O3@NaGdF4:Yb,Er is approximately 30% of the NaGdF4:Yb,Er green line. Furthermore, the surface properties and crystallinity of the NaGdF4:Yb,Er nanoparticles can affect the different emitting levels of the doped ions and hence result in different emission intensities of red, green and blue light, and the ratios of these are also different in the two samples. Hence, the reduction of intensities (green, blue and red) can be ascribed to the direct energy mismatch of the cubic Fe3O4/γ-Fe2O3 and hexagonal NaGdF4 matrix; the reduction of radiative luminescence centers over the surface of the luminescent NaGdF4:Ln3+ shell when iron oxide provides a leakage path for the radiative centers (Er3+); and the induced arrangement of magnetic domain ordering at the surface of the iron oxide. The emitting levels are populated by the energy transfers (ETs) between Yb3+ and Er3+ (2F5/2→2F7/2 (Yb); 4I15/2→4I11/2 (Er), 4I13/2→4F9/2 (Er), and 4I11/2→4F7/2 (Er)), the excited state absorption of the pump radiation from the 4I11/2 and 4I13/2 levels, and the cross relaxation between Er3+ ions, as depicted in Figure 3b. The magnetic hysteresis loops recorded at 300 K for both samples are shown in Figure 4.
The paramagnetic contributions in Fe3O4/γ-Fe2O3@NaGdF4:Yb,Er and Ln3+-doped NaGdF4 nanocrystals arise mainly from the Gd3+ ions. The seven unpaired electrons in the inner 4f subshell of the Gd3+ ions are tightly bound to the nucleus, shielded from the crystal fields by the outer closed-shell 5d1 6s2 electrons, and are responsible for the magnetic characteristics of the Gd3+ ions. The magnetic moments related to the Gd3+ ions are all localized and non-interacting, which leads to the paramagnetism of the Gd3+ ions. Hence, under a magnetic field, samples containing the NaGdF4 nanoparticle matrix can exhibit an induced magnetization, and such samples recover their initial state when the magnetic field is removed. This kind of effect is called magneto-optical interaction in NaGdF4:Yb,Er nanoparticles; it is obtained under certain magnetic fields, and the associated luminescence modulation may be accessible with a magnetic field. Generally, modulating the luminescence of materials via magnetic fields requires the coupling of optical and magnetic interactions, and it is believed that the simultaneous realization of optical and magnetic properties in a single phase will enable optical-magnetic interaction within the crystal lattice and hence promising luminescence modulation via magnetic fields.
CONCLUSION
To summarize, we have reported a microwave assisted solvothermal synthesis of multifunctional NaGdF4:Ln3+ and Fe3O4/γ-Fe2O3@NaGdF4:Ln3+ (Ln = Yb, Er) nanoparticles. This synthesis produced a morphological difference between the two samples, as observed through XRD and HRTEM. The magnetic and luminescent properties of the Fe3O4/γ-Fe2O3@NaGdF4:Ln3+ nanocomposites were compared with those of paramagnetic NaGdF4:Ln3+ and discussed in the context of the possible luminescence quenching due to Fe3O4/γ-Fe2O3. The magnetic hysteresis at 300 K for β-NaGdF4:Ln3+ showed paramagnetic features, whereas Fe3O4/γ-Fe2O3@β-NaGdF4:Ln3+ exhibited superparamagnetic behavior with a negligible coercive field. The luminescent properties, together with the strong magnetic response in an external magnetic field and the better stability of the nanocomposites, can be used for the development of luminescence-magnetic composites for the biomedical industry and may open up new opportunities in this field.
"Materials Science"
] |
Application of an Improved Link Prediction Algorithm Based on Complex Network in Industrial Structure Adjustment
For a healthy industrial structure (IS) and stable economic development in China, this study proposes an improved link prediction algorithm (LP) based on complex networks. The algorithm calculates the similarity by constructing a mixed similarity index. A regional IS network model is built in the study, and the direction of IS adjustment is calculated with the mixed similarity indicators. In this study, the prediction accuracy of the proposed improved LP algorithm in the real network dataset is up to 0.944, which is significantly higher than that of the other algorithms. In the reality of IS optimization, industries of high similarity could be obtained through similarity algorithms, and reasonable coordinated development strategies are proposed. In addition, the simulated IS adjustment strategy in this study shows that it is highly sustainable in development, which is reflected in its lower carbon emissions. The optimization of IS adjustment could be achieved through the IS network model and the improved LP algorithm. This study provides valuable suggestions for China's regional industrial structure adjustment.
Introduction
In recent years, China has been strategically adjusting its economic development path. Economic growth has slowed with the development of a model characterized by stable growth, structural adjustment, the transformation of the growth model, and risk prevention. The focus of the structural adjustment has become the unified adjustment of industrial structure and promoting the coordinated, unified, and sustainable development of all departments. With the broadening cooperation between sectors and industries and the freer circulation, the inter-industry relationship has developed from a chain structure into a network structure. The analysis of industrial structure optimization based on complex network theory could better reflect the coordination, unity, and sustainability of the optimization scheme. Moreover, with the development of Internet technology, many more network structures appear in people's daily life. Analyzing entities with network features has also become a key technology for people to explore in real life [1]. In the context of the normalization of the development of the economy, the IS adjustment has become a new direction of economic stability and innovation in China. However, the complexity of IS has led to a large number of research errors in discussing its adjustment and optimization direction [2,3]. Some studies believe that IS adjustment is a complicated network of industrial transfers with complex and dynamic characteristics. Therefore, a link prediction algorithm (LP) could be applied to assist network analysis by mining out the relationships in IS network and analyzing the value of each industry through prediction [4]. An LP is an extremely effective algorithm focusing on network feature analysis, where similarity indicators are used to evaluate the relevance and similarity and predict the direction of the network nodes [5]. The study believes that adopting an LP in the IS adjustment could help obtain quicker decisions in the industrial optimization path. In addition, to improve the accuracy, the study proposes an LP that improves complex networks. It aims to clarify the adjustment direction when optimizing the IS by analyzing the similarity of various industries to develop China's economy.
Related Works
IS adjustment is an important guarantee for economic development. A number of studies have proposed optimization strategies for IS adjustment to promote regional economic development in recent years. Ma et al. believed that regional environmental pollution could be significantly reduced through industrial restructuring. They analyzed the current situation of regional pollution in China and put forward strategies for regional IS adjustment. The study required the ability to optimize the integration and recycling of regional resources and the need to improve the innovation capacity of industries [6]. Zhou et al. analyzed the current IS and the optimization path of the Yangtze River Economic Zone in China. They put forward that environmental performance should be considered in the optimization of IS to formulate a better strategy. In addition, the development strategy proposed by the study demonstrated the ability to build a clean and efficient IS [7]. Zheng et al. believed that IS was the link between economic development and environmental quality; therefore, paying close attention to current environmental pollution is significant in the IS adjustment. In view of this, Zheng et al. took the air quality trend as the research direction, explored the impact of energy consumption in IS adjustment, and proposed to formulate reasonable IS adjustment policies [8]. Huang proposed that cloud computing could be used to analyze China's IS adjustment strategy. In the study, cloud computing was used to build a model of Grey Relation Analysis for IS optimization. The empirical analysis showed that the proposed adjustment was feasible and could effectively upgrade the IS [9]. Zang et al. used the PSM-DID method to analyze the influencing factors of IS upgrading. They carried out the study based on the current development status of the EU and deeply explored the coordination between environmental pollution and the industrial economy. Pollution emissions severely impacted the economy, and IS upgrading could also benefit from emission trading [10]. The above research shows that industrial structure adjustment is beneficial in reducing regional environmental pollution.
Industrial structure adjustment could be realized through the LP algorithm. Research results are abundant on LP algorithms in various industries. Shabaz et al. used an LP to predict the probability of future disease occurrence based on the current health status of patients and verified the prediction accuracy of the algorithm in the disease-disease network dataset through MATLAB. The prediction network proposed by the research institute could predict disease effectively and be applied to a variety of network models [11]. Ghasemian et al. proposed an LP stacking model for complex networks. The model combines multiple predictors. To verify the validity of the stacking model, a test of real network datasets was passed, and the results showed that the stacking model obtained higher accuracy than those with a single algorithm [12]. Coşkun et al. analyzed biological networks with an optimized LP based on a graph convolution algorithm. Considering the similarity index of multiple nodes, the prediction algorithm proposed in the study and other similarity algorithms were compared in the experiment. The graph convolution LP adopted in the research could improve the LP performance in biological networks [13]. Chen et al. proposed a new IGA strategy to solve the adversarial attacks in the network and implemented the network LP through this strategy. The strategy proposed in the study could be used to predict adversarial attacks and as an evaluation measure of GAE robustness [14]. Kumar et al. proposed an LP method that could be applied to different classifiers. In the study, Kumar et al. obtained the global structure of the network through node centrality. The LP method was used in predicting the real network. The results showed that the method proposed in the study was significantly superior to the other LP methods [15]. The above research shows that the LP algorithm has excellent performance in complex network prediction.
To sum up, many studies have put forward their opinions regarding optimization strategies for IS adjustment. A huge number of studies have proposed different LPs for complex networks; however, in the existing research, few people use an intelligent LP algorithm to study IS, which is innovatively regarded as a complex network. The LP is used to predict the dynamic changes of the complex networks to adjust and optimize the industrial structure. Though studies have proposed an LP for different complex networks and their corresponding improvement schemes, the prediction accuracy of the LP is still not improved by those schemes. The accuracy of the LP algorithm is greatly affected by the similarity index, so this study builds a better method for weight determination of the LP similarity index. In conclusion, this study proposes to optimize IS adjustment by improving the LP of complex networks. Through similarity measurement, the paper provides strategies and ideas for optimizing China's industrial structure.
LP Improvement of Complex Network for IS
A complex network is an analysis model to study the changes in natural and social behaviors. It explores the basic characteristics of various fields through its models. As network technology develops, social networks appear. A social network is a network system formed among individual members of society with social relations. An individual, also called a node, can be an entity or a virtual individual with different meanings, such as an organization, an individual person, a network ID, etc. Social network analysis (SNA) is a method that integrates disciplines such as informatics, mathematics, sociology, etc., to calculate and analyze the rules of social network relationships. A complex network, as a mathematical tool, is a way of analyzing problems. Social network analysis is the application of related knowledge of complex networks in social relation systems. Both are analyzing methods in social networks. However, social networks pay too much attention to the structural model, which is mostly used for multi-static analysis rather than dynamic analysis. In this study, the improved LP algorithm will be used to predict the network relationship and the trend of changes in the complex network framework.
Previous studies have proposed an LP focusing on complex social networks [16][17][18][19][20][21]. An LP is an algorithm that predicts the connection possibility of two nodes by building a similarity index. In the prediction process, when the connection possibility between the two nodes is considered to be high, the connection will be promoted between the nodes [22][23][24][25]. In view of the complex IS network, a mixed similarity index is proposed to determine the weight combination through uniform distribution. Then LP and IS adjustment are realized. The workflow is shown in Figure 1. Figure 1 shows that the research takes the network features of IS as the input. The mixed similarity algorithm is constructed by combining local information with path analysis, and the IS network model is analyzed. Secondly, the similarity value of the IS network model is calculated, and the correlation between industries is evaluated. Then, the IS adjustment strategy is obtained. In the calculation of the mixed similarity index, two similarity algorithms, local information and path analysis, are introduced. Compared with other similarity techniques, the mixed similarity index could effectively determine the optimal weight combination of each single similarity index, which could reduce the complexity of obtaining the optimal weight and improve the prediction accuracy of the LP algorithm.
The similarity measurement based on local information evaluates the similarity of two nodes by counting the number of neighbor nodes they have in common, and the calculation method is shown in Formula (1) [3].
τ represents the set of neighbor nodes, and x and y are the nodes under consideration in Formula (1). Building on the degree information of the neighbor nodes, some studies have proposed the AA index, which assumes that node importance is significantly related to the degree value; see Formula (2) [26].
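As a hedged illustration of these two local-information indices in their standard forms (the common-neighbor count of Formula (1) and the degree-weighted AA index of Formula (2)), a short networkx-based sketch follows; the toy graph is arbitrary.

import math
import networkx as nx

def common_neighbors_score(G, x, y):
    # Formula (1)-style index: number of neighbors shared by x and y
    return len(set(G[x]) & set(G[y]))

def adamic_adar_score(G, x, y):
    # Formula (2)-style AA index: shared neighbors weighted by 1/log(degree)
    return sum(1.0 / math.log(G.degree(z)) for z in set(G[x]) & set(G[y]))

# Toy graph for illustration only
G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5)])
print(common_neighbors_score(G, 1, 4))   # nodes 1 and 4 share neighbors 2 and 3
print(adamic_adar_score(G, 1, 4))        # high-degree shared neighbors are down-weighted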
k(z) represents the degree value of the common neighbor z in Formula (2). The path analysis of the similarity algorithm is divided into global path and local path approaches. The global path considers the impact of the overall network on the path, and the local path considers only the influencing factors between nodes in the path [27][28][29][30]. At present, the local path index is often used for similarity measurement, as shown in Formula (3) [3].
A represents the adjacency matrix in Formula (3). Considering the influences from other nodes, in Formula (4), the Katz index is used to weight the path length [5].
In Formula (4), β represents the weight attenuation factor and I represents the identity matrix. The mixed similarity index is designed to select the similarity indices appropriate to the target network structure and assign weights to them. The uniform distribution (UD) method is used to search for the optimal weight value of each index. UD is a test design method that takes into account the uniform distribution of test points over the test area. The tests are designed through a set of uniform design tables, which are applied by the experimenter to lay out each trial.
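Returning to Formulas (3) and (4), a minimal sketch of the path-based indices in their commonly used matrix forms is given below: the local path index S = A² + αA³ and the Katz index S = (I − βA)⁻¹ − I, where α and β are small, arbitrarily chosen attenuation factors (β must stay below the reciprocal of the largest eigenvalue of A for the Katz series to converge).

import numpy as np

def local_path_index(A, alpha=0.01):
    # Local path similarity matrix S = A^2 + alpha * A^3 (Formula (3)-style)
    A2 = A @ A
    return A2 + alpha * (A2 @ A)

def katz_index(A, beta=0.05):
    # Katz similarity matrix S = (I - beta*A)^-1 - I (Formula (4)-style)
    I = np.eye(A.shape[0])
    return np.linalg.inv(I - beta * A) - I

# Adjacency matrix of the same toy graph used above (nodes 1..5 -> indices 0..4)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(local_path_index(A)[0, 3])   # paths of length 2 and 3 between nodes 1 and 4
print(katz_index(A)[0, 3])         # all path lengths, geometrically damped by beta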
The specific procedures of UD are as follows. First, the number of tests and independent variables are given, and their generation vectors can be found according to the table. The uniform design table is generated from this generation vector. Then, the independent variables are formulated according to the uniform design of the formula. Then, the test is carried out according to the purpose. After the test, the value of the response variable is obtained. Then, further analysis is made according to the concrete example. In this study, a table is used to carry out 19 tests of uniform formula design. The selected similarity index is given the weight, the effect of each test is evaluated, and the optimal weight value is selected.
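A highly simplified sketch of the weight-selection step just described, under the assumption that the mixed score is a weighted sum of the individual similarity matrices: each row of a small design table assigns trial weights to the indices, every weighted combination is scored by some evaluation criterion (for example, the prediction accuracy on held-out links), and the best-scoring weights are kept. The 19-run uniform design table itself is not reproduced here; a placeholder table and evaluation function are used.

import numpy as np

def mixed_score(weights, index_matrices):
    # Weighted combination of similarity matrices (one matrix per single index)
    return sum(w * S for w, S in zip(weights, index_matrices))

def pick_best_weights(design_table, index_matrices, evaluate):
    # Try each weight combination in the design table and keep the best one
    best_w, best_acc = None, -np.inf
    for w in design_table:
        acc = evaluate(mixed_score(w, index_matrices))
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Illustrative stand-ins: two 5x5 similarity matrices and a 4-row "design table"
rng = np.random.default_rng(0)
S_cn, S_katz = rng.random((5, 5)), rng.random((5, 5))
design_table = [(0.2, 0.8), (0.4, 0.6), (0.6, 0.4), (0.8, 0.2)]
dummy_evaluate = lambda S: S.mean()     # placeholder for the real accuracy evaluation
print(pick_best_weights(design_table, [S_cn, S_katz], dummy_evaluate))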
In mixed LP training, Python is used to simulate the BA scale-free network with 20 nodes and 100 edges. The number of users in the network is defined as 20 as the network nodes, and the scale-free network structure is built, as shown in Figure 2.
The scale-free network structure in Figure 2 is used to analyze the performance of the LP. It mainly evaluates the accuracy of the prediction algorithm in the network feature analysis. Formula (5) is the accuracy calculation method.
In Formula (5), n represents the number of repeated experiments, n′ represents the number of times the similarity score of the first node pair is greater than that of the second, and n″ represents the number of times the two scores are equal. Therefore, the flow of analyzing the IS network through the LP could be constructed, as shown in Figure 3. In Figure 3, firstly, the regional IS is divided into sectors, and the industrial features in the network are extracted. Secondly, the correlation index values of different industries are calculated, and the correlation between different industries is analyzed. Finally, the main LP model is constructed to calculate the probability of the existence of real paths in the IS network, and the similarity between industries is further determined through the weights.
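To make the accuracy measure of Formula (5) concrete, the following sketch (assuming the standard AUC-style form (n′ + 0.5 n″)/n) generates a BA scale-free graph, holds out part of its edges as a probe set, scores candidate links with the common-neighbor index, and estimates the accuracy; the graph parameters and probe fraction are illustrative choices that only roughly match the 20-node, 100-edge setting of Figure 2.

import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(20, 6, seed=1)      # BA graph with 20 nodes (~84 edges)

# Hold out 10% of the edges as the probe (test) set; the rest form the training graph
edges = list(G.edges())
random.shuffle(edges)
probe = edges[: max(1, len(edges) // 10)]
train = nx.Graph(edges[len(probe):])
train.add_nodes_from(G.nodes())
non_edges = list(nx.non_edges(G))

def cn(Gt, x, y):
    return len(set(Gt[x]) & set(Gt[y]))          # common-neighbor score on the training graph

# Formula (5)-style accuracy: n comparisons of a probe edge against a non-existent edge
n, n1, n2 = 2000, 0, 0
for _ in range(n):
    s_probe = cn(train, *random.choice(probe))
    s_non = cn(train, *random.choice(non_edges))
    if s_probe > s_non:
        n1 += 1                                  # n' : probe edge scores higher
    elif s_probe == s_non:
        n2 += 1                                  # n'': the two scores are equal
print((n1 + 0.5 * n2) / n)                       # values well above 0.5 indicate useful predictions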
IS Adjustment and Optimization Based on the Improved Algorithm
In production and manufacturing, the input and output can be regarded as a complex industrial network. In order to promote regional economic development, it is significant to optimize the IS adjustment. In recent years, the adjustment of IS has become the key to China's economic development [31][32][33]. The paper analyzed the static structure of the industrial network of industrial function zones, and the hot spot analysis tool in Python was used to calculate the spatial distribution characteristics of China's GDP. The distribution of China's GDP growth from 2019 to 2022 is shown in Figure 4.
Figure 4 shows that China's top GDP growth is distributed in the coastal area; thanks to the reform and opening up policy, provinces in the coastal area have a direct trade relationship with foreign economies. In this study, Shanghai, Jiangsu, and Zhejiang are selected for the IS adjustment and optimization analysis. From 2019 to 2021, the service industry, including finance and insurance, dominated in Shanghai. Jiangsu's industrial structure was dominated by industry and service, with agriculture accounting for a small proportion. Similarly, Zhejiang's industrial structure was also dominated by industry and service. As a result, the service industry accounts for a larger share than other industries in the three regions. There are three reasons for this result. First, industrial development in these coastal cities is mature, with fast internal adjustment and transformation ability. Second, the coastal area has rich local service resources and human resources.
The GDP growth of the coastal cities is greatly affected by international trade. Since the beginning of the 21st century, Shanghai, Jiangsu, and Zhejiang have expanded their foreign trade, which has made major contributions to their GDP growth. Therefore, the scale of international investment is an important factor that effectively drives the adjustment of industrial structure in the coastal area. Local investment and market demand are also important factors in promoting the adjustment. Third, the local government's low-carbon planning and policy directly promoted the adjustment of the industrial structure in the area. Therefore, this study selected regional input and output, consumption, and government policy as the independent variables and industrial structure adjustment as the dependent variable for regression analysis. The results show that the independent variables have a significant influence on the industrial structure adjustment.
In the adjustment and optimization of regional IS in Shanghai, Jiangsu, and Zhejiang, the regional input-output consumption coefficient is calculated, as shown in Formula (6).
where j represents the product department, X_j represents the total input of the product department, and X_ij represents the amount of products of department i directly consumed by department j. The consumption coefficient is used to build the network model of IS, and the calculation of the consumption coefficient matrix is shown in Formula (7), where a_ij represents the direct consumption coefficient, whose value lies in [0, 1]. See Formula (8) for the output correlation of the quantitative department.
In Formula (8), the superscript T denotes the transpose. Thus, the industrial network relationship matrix of Shanghai, Jiangsu, and Zhejiang is constructed in Formula (9).
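A small numpy sketch of Formulas (6) and (7), using a toy 3-sector flow matrix in place of the real regional input-output tables, is shown below; the transpose-based product used here for Formula (8) and the threshold used to binarize the result are illustrative assumptions, since the exact expressions are not reproduced in the text.

```python
import numpy as np

# Toy flow matrix X[i, j]: products of department i directly consumed by department j
# (illustrative numbers, not the real Shanghai/Jiangsu/Zhejiang tables).
X = np.array([[20.0,  5.0, 10.0],
              [ 8.0, 30.0,  6.0],
              [ 4.0, 12.0, 25.0]])

X_j = X.sum(axis=0)        # total input of each product department
A = X / X_j                # Formulas (6)-(7): direct consumption coefficients a_ij in [0, 1]

# Formula (8) involves a transpose; a correlation-type product of A with its transpose is
# one plausible reading, and thresholding such a matrix gives a 0/1 adjacency matrix of
# the kind used in Formulas (9)-(10) below to build the industrial network.
output_correlation = A.T @ A
C = (output_correlation > 0.1).astype(int)

print(A.round(3))
print(C)
```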
The adjacency matrix of Shanghai, Jiangsu, and Zhejiang is composed, as shown in Formula (10).
where C_S, C_J, and C_Z represent the adjacency matrices of Shanghai, Jiangsu, and Zhejiang, respectively. Python simulation is used to build the industrial network structure model of each region, as shown in Figure 5. By constructing the industrial network structures of the three regions, the paper puts forward an IS adjustment strategy based on the improved LP. In the structural adjustment, the direction of industrial path transfer is proposed, and the improved LP is used to construct the optimal mixed index of the IS adjustment, as shown in Formula (11) [9].
where S_j and S_l represent the local information similarity index and the path analysis similarity index, respectively. The two indexes were mixed in the proportions of 0.513 and 0.487, and the mixed similarity indicators of the IS adjustment in Shanghai, Jiangsu, and Zhejiang were set up. Finally, the calculation strategy of carbon dioxide emissions in the process of IS adjustment is proposed to promote the sustainable and low-carbon development of the industry. Formula (12) shows the calculation of carbon dioxide emissions [9].
where E represents carbon dioxide emissions (Bt), N represents energy consumption (m³), and F is the carbon dioxide emission factor (kg/GJ). After the calculation, the carbon emissions are minimized to achieve the lowest-emission production scheduling, as shown in Formula (13).
where U_k represents the operating power of equipment k in production, m represents the total number of pieces of equipment in production and manufacturing, α_U is the carbon emission conversion factor when the equipment is running, F_k represents the coolant flow required for the work, and α_F is the carbon emission conversion factor of the coolant. In order to realize optimized industrial adjustment, it is also necessary to minimize the IS adjustment time, as shown in Formula (14): min f_2 = min(max(C_i)), where C_i represents the adjustment time of industry i. Formula (15) shows the optimization model of the IS adjustment considering carbon emissions.
Formula (15) is used to calculate the carbon emissions in industrial production, and the minimum carbon emissions produced in equipment processing are also analyzed.
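To make the roles of the quantities in Formulas (11)-(14) concrete, a short sketch follows; the two conversion factors, the placeholder similarity matrices, and all numeric inputs are illustrative assumptions rather than values from the study, and the summation form used for Formula (13) is one reading of the text.

```python
import numpy as np

W_LOCAL, W_PATH = 0.513, 0.487   # reported weights of the mixed index, Formula (11)
ALPHA_U = 0.68                   # assumed carbon conversion factor for operating power
ALPHA_F = 0.12                   # assumed carbon conversion factor for coolant flow

def mixed_similarity(s_local, s_path):
    """Formula (11): weighted mix of the local-information index S_j and the
    path-analysis index S_l."""
    return W_LOCAL * np.asarray(s_local) + W_PATH * np.asarray(s_path)

def co2_from_energy(n_energy, f_factor):
    """Formula (12): E = N * F, energy consumption times the emission factor."""
    return n_energy * f_factor

def scheduling_emissions(power, coolant):
    """Formula (13), as read here: f1 = sum_k (U_k * alpha_U + F_k * alpha_F)."""
    return sum(u * ALPHA_U + f * ALPHA_F for u, f in zip(power, coolant))

def adjustment_objective(times):
    """Formula (14): f2 = max_i C_i, the longest industry adjustment time."""
    return max(times)

# Toy inputs (illustrative only).
print(mixed_similarity([[1.0, 0.4], [0.4, 1.0]], [[1.0, 0.6], [0.6, 1.0]]))
print(co2_from_energy(350.0, 0.056))
print(scheduling_emissions(power=[5.0, 3.2, 4.1], coolant=[1.1, 0.8, 1.5]))
print(adjustment_objective([12.0, 9.5, 14.2]))
```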
LP Test
To verify the effectiveness of the improved LP proposed for complex networks, its prediction performance is evaluated through the prediction accuracy index. The number of nodes is set to 100, the initial value of the weight attenuation factor is set to 0, and the algorithm prediction accuracy changes are evaluated under different weight attenuation factor values, as shown in Figure 6. Figure 6a shows the prediction accuracy changes of the local information similarity index under different weight attenuation factor values. Figure 6b shows the prediction accuracy changes of the path analysis similarity index under different weight attenuation factor values. Figure 6c shows the prediction accuracy changes of the mixed similarity index under different weight attenuation factor values. With the gradual increase of the proportion of observation edges, the prediction accuracy of each algorithm shows a trend of increasing first and then decreasing. However, the prediction accuracy of the mixed similarity index is significantly higher than that of the other two. The maximum accuracy is 0.93, and the minimum accuracy is 0.85. The accuracy of the LP with the mixed similarity index is therefore high and significantly improved compared with the single similarity indexes. Moreover, the repeatability prediction accuracy of each algorithm is analyzed to evaluate the repeatability error of the LP. The results are shown in Figure 7.
In Figure 7a, the prediction accuracy of the local information similarity index decreases as the number of samples increases. Moreover, it decreases to 0.79 when the number of samples reaches 5000. In Figure 7b, the prediction accuracy of the path analysis similarity index shows a decreasing trend as the number of samples increases, and when the number of samples reaches 5000, the prediction accuracy decreases to 0.75. In Figure 7c, with the increase in sample numbers, the prediction accuracy shows a declining trend. However, when the number of samples is 3000, the prediction accuracy of the algorithm begins to stabilize at around 0.87. By comparing the difference in the algorithm prediction accuracy under the three indicators, the consistency of multiple experimental results of the local information similarity index and the path analysis similarity index is significantly lower than that of the mixed similarity index. That is to say, the repeatability accuracy of the proposed mixed similarity index is much higher. In addition, true and false statistics are used to further evaluate the independent prediction ability of similarity indicators under different mixing levels. The results are shown in Table 1. Table 1 shows that the mixing degree of the mixed similarity indicators is set at 0.1, 0.2, 0.3, 0.4, and 0.5, respectively, and the prediction results of mixed similarity indicators on true and false network nodes are evaluated.
With the increase in the mixing degree, the node detection accuracy of the mixed similarity index continues to improve. When the mixing degree reaches 0.5, the detection accuracy reaches 0.953. In addition, as the degree of mixing increases, the proportion of real nodes detected by the mixed similarity index increases, and the proportion of false nodes decreases. The prediction accuracy of the mixed similarity index shows a positive correlation with the degree of mixing. As the degree of mixing increases, the detection ability of the algorithm for real nodes is also significantly improved. Finally, real data are used to further evaluate the different prediction performances between the mixed similarity index and other LPs. The Powergrid, Protein, and PGP networks were selected as the experimental network datasets, and the prediction accuracy differences of each algorithm in the three network datasets were compared. The results are shown in Figure 8. Figure 8 compares the prediction accuracy of the mixed similarity index with that of the PA, LHN-I, and CR-LHN2 indexes in the different network datasets. In Figure 8, with the increasing proportion of observation edges, the prediction accuracy of the mixed similarity index, PA index, and CR-LHN2 index algorithms shows an increasing trend. The prediction accuracy of the LHN-I index algorithm shows a trend of rising first and then falling. Comparing all indicators, the prediction accuracy of the mixed similarity index algorithm is significantly higher than that of the other algorithms, with the highest value reaching 0.944. To sum up, the mixed similarity algorithm shows high accuracy in practical applications and is superior to the single similarity algorithms.
The possible reason is that similarity indexes usually use single indexes in other link prediction algorithms, which are only suitable for specific network structures. The prediction algorithm based on the mixed prediction similarity index has optimized weight and improved ability to search parameters and predict. According to the experimental results, the mixed similarity index algorithm has high prediction accuracy, and it is feasible to apply the improved LP prediction model to the IS network accurately.
IS Adjustment Test
With the constructed IS networks for Shanghai, Jiangsu, and Zhejiang, the mixed similarity index algorithm was used to optimize the path of IS adjustment. To evaluate the application result of the mixed similarity index algorithm in the IS adjustment, the GDP growth data of the three regions from 2010 to 2020 were collected.
All the data used in this study are from the China Regional Input-Output Table released by the National Bureau of Statistics of China in 2020, and the annual direct input-output consumption coefficient matrix of the three samples is calculated based on the China 2020 Input-Output Table Compilation Method. The fitting degree between the mixed similarity index algorithm and the actual situation in the adjustment of the IS is evaluated for feasibility, as shown in Figure 9. Figure 9a shows the change of fitting degree between Shanghai's mixed similarity algorithm and the actual situation. Figure 9b shows the change of fitting degree between Jiangsu's mixed similarity algorithm and the actual situation. Figure 9c shows the change of fitting degree between Zhejiang's mixed similarity algorithm and the actual situation. In Figure 9a, the maximum fitting value between the calculated results of the mixed similarity algorithm and the actual GDP growth of Shanghai reached 0.952, and the minimum reached 0.926. In Figure 9b, the calculated results of the mixed similarity algorithm and the actual GDP growth in Jiangsu also have a high degree of fitting, with a maximum fitting value of 0.950. In Figure 9c, the maximum fitting value between the calculated results of the mixed similarity algorithm and the actual GDP growth in Zhejiang reached 0.953, and the minimum value also exceeded 0.940. As a result, the results calculated by the mixed similarity algorithm are similar to the development trend of the three regions. That is, the mixed similarity algorithm is feasible in the adjustment and optimization of the IS in Shanghai, Jiangsu, and Zhejiang. Meanwhile, the mixed similarity algorithm is used to calculate the IS similarity among Shanghai, Jiangsu, and Zhejiang, and the top ten nodes are shown in Table 2.
Table 2 shows that the similarity between the chemical products and the wholesale and retail industries in Shanghai, Jiangsu, and Zhejiang is the largest, with the similarity value reaching 1.23, indicating that the development of the retail industry is highly coordinated. In addition, the similarity between finance, leasing, and business services is the lowest in Table 2, but its similarity value still reaches 0.92. The mixed similarity algorithm can therefore be used to analyze the similarity of the industrial network structure of Shanghai, Jiangsu, and Zhejiang. To sum up, the mixed similarity algorithm can be used for the coordinated development of industries by calculating the similarity between industries in the adjustment of IS. The results also show that the three regions should focus on the integrated development of multiple industries. Finally, the sustainability of the IS adjustment strategy and the carbon dioxide emission control in the IS adjustment of each region is analyzed through simulation. The results are shown in Figure 10.
In Figure 10, with the IS adjustment, the carbon dioxide emissions of each region showed continuous growth in the early period. In addition, the simulation results show that, with the continuous adjustment of IS, the regional carbon dioxide emissions gradually began to stabilize and decrease. The IS adjustment strategy can reduce carbon emissions while achieving structural optimization, thus realizing green and sustainable development.
Conclusions
The adjustment of IS is a main strategy for economic development in China. For this reason, the paper took the IS network as the research object and proposed an LP based on the complex network for the adjustment and optimization of the IS. The accuracy test result of the mixed similarity index reached 0.93, which is significantly higher than that of the local information and path analysis similarity indexes. In addition, the repeatability results show that the accuracy of the mixed similarity index is higher. Furthermore, in the test of real network datasets, the prediction accuracy of the mixed similarity index reached 0.944, which is higher than that of the other commonly used indexes on these datasets. Moreover, in the analysis of regional IS adjustment, the fitting value between the calculation results of the mixed similarity algorithm and the regional GDP growth reached 0.991, which means that the calculation results of the mixed similarity algorithm meet the requirements for regional development. With the LP, industries with high similarity in the regional IS can be obtained. Therefore, in the development of the economy, it is necessary to improve the integration and development ability of industries. Finally, in the analysis of sustainability, the IS adjustment strategy reduced carbon emissions and is in line with the green development philosophy. To conclude, with IS as a complex network model, the LP can be used to formulate IS adjustment strategies more accurately. This paper provides an intelligent method for predicting the change in industrial structure and a sustainable adjustment strategy for optimizing and adjusting the regional IS. However, in regional IS data analysis, data processing algorithms are not introduced to verify the accuracy of the data used. Therefore, in subsequent studies, a variety of algorithms are needed to further optimize the IS adjustment strategy.
| 9,048.2 | 2023-06-01T00:00:00.000 | [
"Computer Science"
] |
Immunomodulation via MyD88-NFκB Signaling Pathway from Human Umbilical Cord-Derived Mesenchymal Stem Cells in Acute Lung Injury
Excess inflammatory processes play a key detrimental role in the pathophysiology of acute lung injury (ALI). Mesenchymal stem cells (MSCs) were reported to be beneficial to ALI, but the underlying mechanisms have not been completely understood. The present study aimed to examine the involvement of MyD88–NFκB signaling in the immunomodulation of MSCs in mice with lipopolysaccharides (LPS)-induced ALI. We found that serum concentrations of IL-6, TNF-α, MCP-1, IL-1β, and IL-8 were significantly decreased at 6 h after LPS-induced ALI in the MSC group (p < 0.05). For each of the five cytokines, the serum concentration of each individual mouse in either group declined to a similar level at 48 h. The intensity of lung injury lessened in the MSC group, as shown by histopathology and lung injury scores (p < 0.001). The expressions of MyD88 and phospho-NFκB in the lung tissue were significantly decreased in mice receiving MSCs as measured by Western blotting and immunohistochemistry. Our data demonstrated that human umbilical cord-derived MSCs could effectively alleviate the cytokine storm in mice after LPS-induced ALI and attenuate lung injury. For the first time, we documented the correlation between the down-regulation of MyD88–NFκB signaling and immunomodulatory effects of MSCs in the situation of ALI.
Introduction
Diseases of the respiratory tract are among the leading causes of death in the world population. Acute lung injury (ALI) is characterized by rapid onset of inflammatory infiltrates in the lungs in response to various insults and can progress into acute respiratory distress syndrome, respiratory failure, and even death. Excess inflammatory reactions play a key detrimental role in the pathophysiology of ALI [1]. Despite significant advances in the management of the disease, morbidity and mortality remain high [2,3]. Novel and effective therapeutic strategies focusing on the interruption of early events to prevent disease progress are warranted.
Mesenchymal stem cells (MSCs) are considered a promising platform for cell-based therapy. Given their immunomodulatory properties, MSCs are attractive candidates to manage a variety of clinical diseases with aberrant immune responses [4,5]. Previously published data have shown the benefits of MSC administration in animal models of ALI, and the anti-inflammatory properties of MSCs may contribute to their protective role [6]. However, it remains unclear by which mechanisms these cells influence immune cells and ameliorate lung injury.
Toll-like receptors (TLRs) are crucial to immediate host responses in the early phase of infection and the linkage of innate and adaptive immunity [7][8][9]. There are eleven members of the TLR family identified in mammals. Ligand recognition by TLRs results in the recruitment of myeloid differentiation factor 88 (MyD88) and subsequently the activation of nuclear factor-κB (NFκB) [10,11]. Through MyD88-NFκB signaling, sensing the presence of infections can trigger the production and secretion of various inflammatory cytokines. The association between deficient NFκB activation and subsequent immunosuppression was found [12,13], suggesting that inhibition of massive NFκB could reduce inflammatory reactions. To date, there are no reports regarding the effects of MSCs on this signaling in the context of ALI. In the present study, we aimed to investigate the immunomodulatory effects of MSCs on mice with lipopolysaccharides (LPS)-induced ALI as well as to evaluate the role of MyD88-NFκB signaling in this circumstance.
Characterization of MSCs
In vitro culture, umbilical cord-derived MSCs (UCMSCs) adhered to plates and showed a spindle-shaped morphology ( Figure 1A). They expressed CD73, CD90, and CD105, and they were negative for CD34, CD45, and CD14 ( Figure 1B). Under induction conditions, they achieved adipogenesis and osteogenesis ( Figure 1C,D). These findings fulfilled the criteria of the International Society for Cellular Therapy to define human MSCs.
MSCs Alleviating Cytokine Storm after LPS-Induced ALI
To assess the initial changes of circulating inflammatory cytokine profiles after LPS-induced ALI, serum concentrations of interleukin (IL)-6, tumor necrosis factor-α (TNF-α), monocyte chemoattractant protein-1 (MCP-1), IL-1β, and IL-8 were measured at 6 h after LPS challenge (Figure 2). Compared to mice of the control group receiving phosphate-buffered saline (PBS) only after LPS-induced ALI, all of these cytokine levels were significantly lower in mice of the MSC group (p < 0.05). In addition, we measured IL-6, TNF-α, MCP-1, IL-1β, and IL-8 in blood serum obtained after sacrifice. For each of the five cytokines, the serum concentration of each individual mouse in either group declined to a similar level at 48 h after LPS challenge (Figure 3). We speculated that MSC administration could alleviate the cytokine storm that occurred around 6 h after LPS-induced ALI. The storm gradually subsided within 48 h regardless of whether MSCs were administered.
MSCs Attenuating Lung Injury after LPS-Induced ALI
Grossly, multiple focal hemorrhagic spots with edematous change can be found in the lungs of mice in the control group. In contrast, the lungs of mice in the MSC group appeared grossly normal. Microscopically, the lung tissue was significantly injured with the presence of septal edema and massive inflammatory cell infiltration in the control group compared with that in the MSC group ( Figure 4A). The mean lung injury score was 7.25 and 3.50 in mice of the control group and the MSC group, respectively ( Figure 4B). Most evidently, MSC-treated mice demonstrated less intra-alveolar macrophage and neutrophil infiltration (Table 1). These findings provided beneficial evidence for lung protection from MSC administration in mice with LPS-induced ALI.
MSCs Restraining Elevation of Inflammatory Cytokine Levels in the BALF after LPS-Induced ALI
Being a marker of endothelial and epithelial permeability, protein levels in the bronchoalveolar lavage fluid (BALF) are linked to the status of inflammation in lung tissue and thought to be able to exacerbate lung injury. As shown in Figure 5, concentrations of BALF protein were significantly reduced in mice receiving MSCs after LPS challenge compared to those receiving PBS only (p = 0.030). Furthermore, we measured inflammatory cytokine levels, including IL-6, TNF-α, MCP-1, IL-1β, and IL-8, in the BALF. Compared to mice of the control group, concentrations of IL-6 and IL-8 were significantly decreased in mice receiving MSCs after LPS challenge (Figure 6). Although not reaching statistical significance, levels of TNF-α, MCP-1, and IL-1β trended lower in mice of the MSC group. Our results suggested that MSC administration could restrain the elevation of these cytokine concentrations in the BALF and thus ameliorate acute inflammation in mice with LPS-induced ALI.
MyD88-NFκB Signaling Associated with Immunomodulation from MSCs
TLR-mediated reactions are important for innate immunity. To evaluate the involvement of their activation in mice with LPS-induced ALI receiving MSCs, we detected expressions of MyD88 and phospho-NFκB in the lung tissue obtained after sacrifice by Western blot analysis and immunohistochemistry. As illustrated in Figure 7, the expression of MyD88 protein was significantly decreased in mice receiving MSCs compared to those receiving PBS only (p < 0.001). Consistently, the downstream-activated transcription factor, phospho-NFκB, expressed lower as measured by Western blotting in mice of the MSC group (p < 0.001). As shown by immunohistochemical analysis in Figure 8, the percentage of MyD88-positive inflammatory cells in the lung tissue as well as the intensity of staining for MyD88 within the inflammatory cells was decreased in mice receiving MSCs after LPS challenge compared to mice receiving PBS only. As expected, the expression of phospho-NFκB was consistently lower in mice of the MSC group. These results implicated that MSCs may exert their immunomodulatory influence on mice with LPS-induced ALI via downregulation of the MyD88-NFκB signaling pathway.
Discussion
ALI is characterized by inflammation, cytokine production, neutrophil accumulation, and rapid alveolar damage. Given that excessive inflammatory response plays a key detrimental role in the development of ALI [1], the important work needs to focus on the interruption of early events in disease pathogenesis. With profound immunomodulatory properties, MSCs have emerged as a promising therapeutic strategy for various inflammatory diseases [4,5]. In the present study, a well-characterized animal model of LPS-induced ALI was used to mimic human ALI and to stimulate host inflammatory responses. We found that the intraperitoneal administration of human UCMSCs effectively ameliorated the surge of inflammatory cytokines after LPS challenge and attenuated lung injury in mice with ALI. For the first time, we demonstrated that down-regulation of the MyD88-NFκB signaling pathway contributed to the immunomodulatory effects of MSCs in the situation of ALI.
Adequate inflammatory reactions are essential for protecting the body from invasion of infectious pathogens. However, an uncontrollable pulmonary inflammation caused by large amounts of inflammatory cells and cytokines leads to the development of ALI, and the degree of acute inflammation is highly associated with the outcome of human ALI [14]. In the present study, serum levels of inflammatory cytokines, including IL-6, TNF-α, MCP-1, IL-1β, and IL-8, were measured at 6 h and 48 h after LPS-induced ALI. All these cytokine levels were significantly lower at 6 h in mice receiving MSCs, implicating that MSCs effectively ameliorated the cytokine storm that occurred around 6 h after LPS challenge. Although the cytokine concentration in each individual mouse of either group declined to a similar level at 48 h for each of the five cytokines, as shown in Figure 3, the cytokine storm had already caused harm in the lungs. The intensity of pathological changes observed grossly and microscopically was increased in mice of the control group, and there were higher lung injury scores as well. Notably, the infiltration of macrophages and neutrophils in the intra-alveolar, peribronchial, and perivascular space was significantly increased. It is known that inflammatory cell migration and infiltration into the site of inflammation is extremely important in the tissue damage of lung injury [15,16]. Consistently, protein levels in the BALF along with the inflammatory cytokine concentrations were higher in mice receiving PBS only after LPS-induced ALI, suggesting the increase in the leakage of fluid and macromolecules due to alveolar capillary endothelial injury. As is known, vascular permeability is the most important initial cause of ALI and related to the outcome [17]. Taken together, our data suggested that MSCs blocked the recruitment of excess inflammatory cells into the lungs and attenuated the surge of inflammatory cytokines in mice with ALI. With the effective modulation of early inflammatory conditions, MSCs lessened the deterioration of lung injury.
MSCs possess profound immunomodulatory effects [18][19][20]. Various studies have shown beneficial effects of MSCs in animal models of ALI, and the anti-inflammatory properties of MSCs may contribute to their protective role [6]. Our present study and previous investigations demonstrated that the vast majority of infiltrating cells found in ALI were neutrophils, and the degree of infiltration was significantly decreased in MSC-treated animals [21][22][23][24]. The myeloperoxidase activity, which is an enzyme marker of neutrophil accumulation and activity, was found to be decreased in animals with ALI receiving MSC transplantation [21][22][23][24]. Danchuk et al. reported that the expression of TNF-α-induced protein 6 was highly up-regulated in MSCs in response to lung injury and resulted in a decrease in neutrophil accumulation and lung damage [23]. These data implicated that MSCs protected the lungs from acute inflammation primarily by preventing the recruitment and activation of neutrophils. However, the mechanisms involved are still being elucidated.
TLRs are the first-line effector molecule in response to infections, and they also act as an important bridge between innate and adaptive immune response [7][8][9]. Ligand recognition by TLRs triggers the recruitment of MyD88 and subsequently the nuclear translocation of the transcription factor NFκB [10,11]. The activation of MyD88-NFκB signaling leads to the production of a variety of inflammation-associated cytokines, and therefore, it is thought to be important in regulating inflammatory reactions. Because MSCs are potent immunomodulators to affect nearly all cell types of the immune system [18][19][20], the MyD88-NFκB signaling pathway may participate in MSC-mediated immunomodulation to a certain degree. Our previous studies reported the involvement of MyD88-NFκB signaling in immunomodulatory effects from MSCs in animal models of sepsis and systemic lupus erythematosus [25,26]. We demonstrated great benefits of MSC administration to survival in septic mice and disease control in mice with systemic lupus erythematosus. The beneficial effects from MSCs may be because the MSC administration brought the chaotic and hyper-inflammatory immune responses back into balance and consequently ameliorated self-tissue damage.
TLRs are common immune molecules to recognize bacterial pathogens during lower respiratory tract infections and play a crucial role in neutrophil sequestration in the lungs [27]. In addition to a variety of invading microbes, TLRs are capable of sensing endogenous molecules released after cell damage as well. Being a member of pattern recognition receptors in both infectious and noninfectious lung diseases, the engagement of TLRs is the prerequisite for the initiation of immune responses, which can be beneficial or detrimental to the host [28]. In the present study, we found that expression levels of MyD88 and phospho-NFκB in the lung tissue were significantly decreased in mice receiving MSCs after LPS-induced ALI. Along with the altered cytokine profiles in the serum and BALF, we speculated that down-regulated MyD88-NFκB signaling may contribute to the immunomodulatory reactions and lung protection from MSCs in mice with ALI. To the best of our knowledge, this is the first report regarding the involvement of MyD88-NFκB signaling in the immunomodulation of MSCs in an animal model of ALI. As pattern recognition receptors and their signaling pathways represent promising targets for therapeutic interventions in various lung diseases [28], MSC transplantation can be a potential treatment for ALI in humans.
MSCs have been demonstrated as an effective strategy to modulate inflammation while enhancing bacterial clearance, reducing organ damage, and improving survival in various animal models of ALI. Efficacy was maintained across different types of MSC sources and delivery approaches [6]. A variety of insults, infectious and noninfectious, can trigger pulmonary inflammation and result in the development of ALI. Among them, infection is the most common cause and leads to worse outcomes compared to noninfectious etiologies [29]. In clinical practice, it is extremely important to prevent the spread of pathogens in the management of patients with infectious diseases. Compared with other delivery routes such as intratracheal, the systemic administration of MSCs via an intravenous route is much safer for the environment and care providers. For clinical use, another important issue to note is the origin of MSCs. A broad spectrum of tissues has been identified as resources for MSCs, but autologous MSCs may not be a suitable source for cell therapy in many clinical situations. Umbilical cords are rich in MSCs, which can be easily collected and cultured without ethical problems [30]. Additionally, UCMSCs were found to have higher proliferative potential in vitro [31], indicating their advantages of rapid expansion and consequent clinical application. In humans, we found that UCMSCs could promote hematopoietic engraftment after hematopoietic stem cell transplantation and treat refractory graft-versus-host disease effectively and safely [32][33][34]. To mimic clinical use, we examined the effects of intraperitoneal delivery of UCMSCs on mice with LPS-induced ALI in the present study.
There are several limitations in the present study. First, there was no control group of untreated animals. Although important, the number of control groups and the intervention of animals in the control groups were quite different in animal models of ALI. The more control groups, the more solid information can be drawn in the study. However, there are a variety of considerations. In the literature, data of untreated group were not always included in animal models of ALI. Considering animal welfare and the 3Rs, mice receiving PBS with no cells were used as the only control group in our study. Second, MSCs were given at only one time point. In the present study, we aimed to evaluate the activation of MyD88-NFκB signaling which is an early immune response to infections and tissue damage. Therefore, we designed to administer MSCs to mice at one hour after LPS-induced ALI. There may be some discrepancy in the effects of MSCs if the time point changed. It remains an important issue to determine the optimal time point of MSC treatment for patients with ALI in a real-life scenario. Third, we did not give consideration to the issue that MSCs may have some different characteristics depending on the donors and other conditions. Although important for experimental studies and clinical utility, how to ameliorate the bio-diversity still remains a question.
Isolation of MSCs from Umbilical Cords
This study was approved by the Institutional Review Board of the Chung Shan Medical University Hospital (CS 14103), and written informed consents were obtained from the donors. UCMSCs were collected, isolated, and identified as in our previous reports [25,26,35,36]. Briefly, umbilical cords were obtained from full-term infants immediately after birth. The cord blood vessels were removed carefully to retain Wharton's jelly, which was digested with collagenase and then placed in the culture medium (high-glucose DMEM with 10% fetal bovine serum). The cells were incubated at 37 °C in a humidified atmosphere under 5% CO2. The medium with suspension of non-adhered cells was discarded after 48 h and thereafter replaced twice a week. Upon reaching 80-90% confluence, the cells were detached with trypsin-EDTA (Gibco, Carlsbad, CA, USA) and re-plated for subculture. The cultured MSCs of passage 5 were used for further studies.
Identification of UCMSCs
The criteria of International Society for Cellular Therapy were used to characterize UCMSCs [37]. To evaluate surface marker expression, cultured UCMSCs were detached, washed, and resuspended in PBS. After fixing and blocking, the cells were immunolabeled with fluorescein isothiocyanate or phycoerythrin-conjugated mouse antihuman antibodies specific to CD34, CD45, CD14, CD73, CD90, or CD105. Nonspecific mouse IgG served as isotype control. All reagents were purchased from BD Biosciences. Data were analyzed by flow cytometry (FACSCalibur; BD Biosciences, San Jose, CA, USA) with CellQuest software.
LPS-Induced ALI in Mice
The experimental protocol was approved by the Institutional Animal Care and Use Committee of the Chung Shan Medical University Experimental Animal Center (IACUC Approval No: 1598). Eight-week-old female C57BL/6 mice were provided by BioLASCO Taiwan and maintained in a temperature- and humidity-controlled environment with free access to food and water for two weeks before the experiment. LPS-induced ALI was performed after the mice were anesthetized by the inhalation of isoflurane vapor mixed with oxygen. Mice were suspended by their cranial incisors with the tongue extracted to prevent swallowing reflex. LPS from Escherichia coli O55:B5 (Sigma-Aldrich, St. Louis, MO, USA) at the dose of 15 mg/kg was pipetted into the deep throat, and the nares were pinched to enhance liquid aspiration. Mice were randomly divided into two groups, and there were eight mice in each group. Mice of the MSC group (n = 8) received intraperitoneal injections of one million MSCs in 0.5 mL sterile PBS (Gibco, Gaithersburg, MD, USA) at one hour after LPS challenge. Mice of the control group (n = 8) received sterile PBS in a volume of 0.5 mL with no cells at the same time point.
Collection of Samples
To determine the initial changes in levels of circulating cytokines, blood samples were harvested from retro-orbital sinus bleeding at 6 h after LPS challenge. Then, all mice were sacrificed at 48 h after LPS-induced ALI. Bronchoalveolar lavage was performed after anesthetization with intramuscular injection of ketamine (75 mg/kg) and xylazine (5 mg/kg). A 20-gauge catheter was placed into the trachea through which 1 mL of PBS was flushed back and forth five times. The BALF was collected, and the volume was recorded. The protein concentration in the BALF was measured by the Bradford assay (Bio-Rad, Hercules, CA, USA) in accordance with the manufacturer's instructions. Finally, cardiac puncture was performed to obtain blood samples. The anterior chest wall was surgically removed, and bilateral lungs were excised.
Determination of Cytokine Levels in the Serum and BALF
To determine circulating cytokine levels, serum was separated by centrifugation at 10,000× g for 10 min immediately after collection. The concentrations of IL-6, TNF-α, MCP-1, IL-1β, and IL-8 in the serum and BALF were measured separately by cytometric bead array immunoassay (BD CBA Mouse Soluble Protein Flex Set System; BD Biosciences, San Jose, CA, USA), according to the manufacturer's instructions. Data were analyzed using flow cytometry (FACSCanto; BD Biosciences, San Jose, CA, USA) with the FCAP array software. Reactions were performed in duplicate for the serum and triplicate for the BALF.
Lung Histopathology and Immunohistochemistry
The harvested lungs were fixed in 10% formalin for 24 h. Then, the specimens were embedded in paraffin and stained with hematoxylin and eosin for histologic assessment. Two pathologists independently evaluated the pathologic changes and graded the degree of lung injury based on the semiquantitative scoring system. Seven main characteristics were used as scoring parameters, including proportion of intra-alveolar macrophages in the infiltrate, septal edema, congestion, degree of neutrophil infiltration, septal mononuclear cell infiltration, alveolar hemorrhage, and alveolar edema [38]. Each parameter was scored from 0 to 4 (0 = normal, 4 = most severe). The total score was obtained by adding the values for each parameter for each animal.
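As a minimal illustration of how this composite score is assembled (the parameter names follow the text; the grades in the example are invented):

```python
# Seven scoring parameters, each graded 0 (normal) to 4 (most severe).
PARAMETERS = [
    "intra_alveolar_macrophages", "septal_edema", "congestion",
    "neutrophil_infiltration", "septal_mononuclear_infiltration",
    "alveolar_hemorrhage", "alveolar_edema",
]

def lung_injury_score(grades):
    """Total score for one animal: the sum of the seven parameter grades."""
    assert set(grades) == set(PARAMETERS) and all(0 <= grades[p] <= 4 for p in PARAMETERS)
    return sum(grades[p] for p in PARAMETERS)

# Invented example grades for a single animal.
example = dict(zip(PARAMETERS, [1, 1, 0, 2, 1, 1, 1]))
print(lung_injury_score(example))   # -> 7
```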
Assessment of MyD88-NFκB Activation by Western Blot Analysis
After lysis, lung tissue samples (50 µg) were run on 12.5% sodium dodecyl sulfate polyacrylamide gels and transferred to polyvinylidene fluoride membranes. After blocking with 5% bovine serum albumin in Tris-buffered saline, the membranes were incubated overnight at 4 °C with primary antibodies, anti-MyD88 (Gene Tex, GTX112987), or anti-phospho-NFκB (Gene Tex, GTX50254). Then, the membranes reacted with horseradish peroxidase-conjugated secondary antibody (Gene Tex, GTX213110-01) at room temperature for 1 h. As a loading control, the same blots were re-probed with anti-GAPDH (Gene Tex, GTX100118) and anti-mouse horseradish peroxidase antibodies. All samples were performed in triplicate.
Statistical Analysis
Data analysis was performed using SPSS 16.0 for Windows. For continuous variables, the Mann-Whitney U test was used to compare groups. A value of p < 0.05 was considered to be statistically significant.
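For reference, a group comparison of the kind reported here can be reproduced with SciPy; the cytokine values below are invented placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Illustrative serum IL-6 values (pg/mL) for two groups of 8 mice each.
control = [310, 295, 402, 388, 356, 341, 299, 377]
msc = [180, 205, 221, 198, 240, 187, 210, 232]

stat, p = mannwhitneyu(control, msc, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")   # p < 0.05 is taken as statistically significant
```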
Conclusions
Results from this study demonstrated potent immunomodulatory effects from intraperitoneal administration of human UCMSCs in wild-type mice exposed to LPS, which is a well-characterized animal model of ALI. MSCs effectively alleviated the surge of inflammatory cytokines and attenuated lung injury. For the first time, we documented that MSCs exerted their immunomodulatory influence on mice with ALI through down-regulation of the MyD88-NFκB signaling pathway. As a promising source of MSCs for clinical use, our data suggested that the systemic administration of UCMSCs can be a viable treatment option for human ALI. Further clinical trials are warranted.
| 5,782 | 2022-05-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Free energy and boundary anomalies on $\mathbb{S}^a\times \mathbb{H}^b$ spaces
We compute free energies as well as conformal anomalies associated with boundaries for a conformal free scalar field. To that matter, we introduce the family of spaces of the form $\mathbb{S}^a\times \mathbb{H}^b$, which are conformally related to $\mathbb{S}^{a+b}$. For the case of $a=1$, related to the entanglement entropy across $\mathbb{S}^{b-1}$, we provide some new explicit computations of entanglement entropies at weak coupling. We then compute the free energy for spaces $\mathbb{S}^a\times \mathbb{H}^b$ for different values of $a$ and $b$. For spaces $\mathbb{S}^{2n+1}\times \mathbb{H}^{2k}$ we find an exact match with the free energy on $\mathbb{S}^{2n+2k+1}$. For $\mathbb{H}^{2k+1}$ and $\mathbb{S}^{3}\times \mathbb{H}^{3}$ we find conformal anomalies originating from boundary terms. We also compute the free energy for strongly coupled theories through holography, obtaining similar results.
Introduction
The response of a conformal field theory (CFT) to curvature encodes important information about the CFT. On general grounds, upon a Weyl rescaling, a CFT on an even-dimensional curved space suffers from anomalies, which are proportional to the central charges which characterize the CFT [1,2]. In turn, on odd-dimensional manifolds without boundary, it is well known that there are no conformal anomalies; in particular, no invariant term of odd dimension can be constructed. However, in the presence of boundaries, CFTs may have conformal anomalies originating from boundary terms, both in even and odd dimensional spaces. While there have been some early discussions on this and related issues (see e.g. [3,4,5,6,7,8]), the topic has been only recently revived starting with [9,10,11,12,13,14,15] (see also [16,17,18,19,20,21,22,23]).
In this paper we will be interested in explicit calculations of such effects. We will investigate these boundary anomalies for an interesting class of geometries, namely, Euclidean spaces of the form $\mathbb{S}^a \times \mathbb{H}^b$, where $\mathbb{S}^a$ is the $a$-dimensional sphere and $\mathbb{H}^b$ is the $b$-dimensional hyperbolic space. When $\mathbb{S}^a$ and $\mathbb{H}^b$ have the same radii, these spaces are conformally related to $\mathbb{S}^{a+b}$. To see this, we write the metric of $\mathbb{S}^{a+b}$ as
$$ds^2_{\mathbb{S}^{a+b}} = d\phi^2 + \cos^2\phi\, ds^2_{\mathbb{S}^{a}} + \sin^2\phi\, ds^2_{\mathbb{S}^{b-1}}\,. \qquad (1.1)$$
Introducing a new coordinate $y$ defined by
$$\tan\phi = \sinh y\,, \qquad (1.2)$$
we have that
$$ds^2_{\mathbb{S}^{a+b}} = \frac{1}{\cosh^2 y}\left( ds^2_{\mathbb{S}^{a}} + dy^2 + \sinh^2 y\, ds^2_{\mathbb{S}^{b-1}}\right)\,. \qquad (1.3)$$
Then, upon stripping off the conformal factor, in the parenthesis we recognize the metric of $\mathbb{S}^a \times \mathbb{H}^b$. Note that the case $a = 1$ corresponds to $\mathbb{S}^1 \times \mathbb{H}^b$. This space can be conformally mapped to $\mathbb{R}^{1,b}$, covering the causal development of the $\mathbb{S}^{b-1}$ inside it. Using this fact one can argue [24] that the entanglement entropy across the sphere (which equals minus the sphere free energy in odd dimensions) maps to the thermal entropy in $\mathbb{S}^1 \times \mathbb{H}^b$, thus making these spaces particularly relevant. The family $\mathbb{S}^a \times \mathbb{H}^b$ is then a natural generalization of $\mathbb{S}^1 \times \mathbb{H}^b$. Note that, being conformal to the sphere $\mathbb{S}^{a+b}$, this family can also be conformally mapped to $\mathbb{R}^{a+b}$.
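The identities behind this conformal map are elementary but easy to check mechanically; the short sympy sketch below (not part of the original paper) verifies numerically that, with $\tan\phi = \sinh y$, one has $d\phi = dy/\cosh y$, $\cos^2\phi = 1/\cosh^2 y$ and $\sin^2\phi = \sinh^2 y/\cosh^2 y$, which is precisely what produces the overall factor $1/\cosh^2 y$ in (1.3).

```python
import sympy as sp

y = sp.symbols('y', positive=True)
phi = sp.atan(sp.sinh(y))            # the coordinate change (1.2): tan(phi) = sinh(y)

checks = {
    "dphi/dy - 1/cosh(y)":        sp.diff(phi, y) - 1/sp.cosh(y),
    "cos(phi)**2 - 1/cosh(y)**2": sp.cos(phi)**2 - 1/sp.cosh(y)**2,
    "sin(phi)**2 - tanh(y)**2":   sp.sin(phi)**2 - sp.tanh(y)**2,
}

# Each difference should vanish identically; evaluate at a sample point as a quick check.
for name, expr in checks.items():
    print(name, "->", float(expr.subs(y, 0.7)))   # expect ~0 up to rounding
```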
Another interesting case is $b = 2$, as it can be regarded as the near-horizon geometry of an extremal black hole. In this case one identifies the $\mathbb{S}^1$ inside $\mathbb{H}^2$ as the thermal circle. Then, by a thermodynamic argument one identifies the free energy on $\mathbb{S}^{2k+2}$ with the corresponding thermal entropy on $\mathbb{S}^{2k} \times \mathbb{H}^2$ [25,26].
For our purposes, let us differentiate bulk anomalies from boundary anomalies. Bulk anomalies appear only in even dimensions and are represented by conformal invariant terms constructed in terms of the Riemann tensor and covariant derivatives. This includes the a-anomaly, given by the Euler density, and c-anomalies, constructed in terms of the Weyl tensor. In the presence of boundaries, there are many other conformal invariant terms containing the extrinsic curvature that can contribute to the integrated trace of the stress tensor [5,9,13,15].
Since for the class of spaces $\mathbb{S}^a \times \mathbb{H}^b$ the Weyl tensor vanishes, bulk conformal anomalies can only appear when the Euler number of the space is non-vanishing, namely when both $a$ and $b$ are even numbers. Denoting the free energy for a CFT on $\mathbb{S}^a \times \mathbb{H}^b$ as $F_{(a,b)}$ (note that $(a+b,0)$ corresponds to the $\mathbb{S}^{a+b}$ case), in the absence of conformal anomalies one would have $F_{(a,b)} = F_{(a+b,0)}$ for all $a, b$. A mismatch $F_{(a,b)} \neq F_{(a+b,0)}$ for some $a$ and $b$ can only occur when conformal anomalies are present. In particular, when $a+b$ is an odd number, or else if $a$ is an odd number, the CFT can only have boundary conformal anomalies. In this paper we will compute $F_{(a,b)}$ for a number of cases. The calculation will be done at zero coupling by considering free conformal scalars on $\mathbb{S}^a \times \mathbb{H}^b$. We also discuss the holographic (strong coupling) calculation for theories admitting a holographic dual in terms of Einstein gravity.
Regularizing with a UV cut-off, in the presence of bulk conformal anomalies the free energy will be proportional to log RΛ. Since R is the overall scale of the geometry, a constant Weyl re-scaling induces a re-scaling of R. Thus, the free energy is sensitive to the conformal anomaly. Specifically, the counterterm that eliminates the logarithmic divergence is the one responsible for the trace anomaly in the stress tensor and the coefficient in front of the log term is directly related to the anomaly.
The structure of this paper is as follows: in section 2 we set up the computation of the free energy on S a × H b for free scalars. Since this will be compared against the S a+b free energy, in the same section we review the results for F (a+b,0) . In section 3 we compute the free energies F (a,b) for a number of cases. We first start with a = 1 and explicitly perform the computation of F (1,b) for b = 2, · · · , 7. While the cases of b = 2, 3 have appeared in [29], the cases b = 4, · · · , 7 offer new explicit and remarkable tests of the identity between thermal entropy on S 1 × H b and entanglement entropy -or sphere free energy-(see also [30]). We then focus on the cases for which the total dimension a + b is odd or in cases involving odd spheres, where no bulk Weyl anomalies are expected. In these cases we find that F (a,b) = F (a+b,0) as long as b is even. In section 4 we compute holographically the free energy for a (strongly coupled) CFT admitting a holographic dual in terms of Einstein gravity. We finish in section 5 with some concluding remarks. Appendix A reviews the computation of the regularized volume of the hyperbolic space. For completeness, appendix B provides a review of the holographic computation of the free energy for a CFT on S a+b . Finally, in appendix C we include a proof that the family of spaces S a × AdS b admits Killing spinors. Hence, supersymmetric theories can be defined on these spaces.
The weak coupling computation: preliminaries
The action is that of a conformally coupled scalar on S a × H b, and the free energy is given by the corresponding functional determinant, with ∆ being the scalar Laplace operator; on our spaces the conformal coupling to the (constant) curvature simply acts as a mass term. This can be computed as usual by the heat kernel method, or by a more explicit sum over the eigenvalues of the Laplacian. Let us begin by reviewing the case of S d , which plays a pivotal role in our discussion. It is important to recall that the free energy suffers from UV divergences. Introducing a UV cut-off Λ, in general F includes a sum of terms proportional to (RΛ)^{d−2n} for n = 0, 1, · · · , plus a finite piece and, in even dimensions, a logarithmic term. Counterterms can be added to remove the UV divergences. The coefficient of the logarithmic term in even dimensions is free of ambiguities. In odd dimensions, the finite term depends on the scheme (in particular, it would change under a shift Λ → Λ + constant). However, the finite part in odd dimensions is universal in the sense that it does not change under a re-scaling of Λ by a constant. On the other hand, the free energy on a non-compact space such as S a × H b suffers, in addition, from IR divergences. Similarly, there are universal terms once the power-law divergences have been subtracted. The dimensional regularization of the volume of H b used in appendix A automatically leaves the universal terms (a finite term for even b, a logarithmically divergent term for odd b). In odd-dimensional spaces of the form S 2n+1 × H 2k we will be able to match the finite part of the free energy with the finite part on S 2n+2k+1 . This of course requires using the same scheme for subtracting power divergences on both sides.
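For orientation, the standard expressions being referred to can be written as follows (a textbook reconstruction, since the displayed equations did not survive extraction; the original normalization and equation numbering may differ). For unit radii, R = a(a − 1) − b(b − 1) is constant, so the conformal coupling indeed acts as a mass:

```latex
% Standard conformally coupled scalar (reconstruction; not the paper's own labels).
S[\phi] \;=\; \frac{1}{2}\int d^{\,d}x\,\sqrt{g}\,\Big[(\partial\phi)^2 + \xi_d\,R\,\phi^2\Big],
\qquad \xi_d=\frac{d-2}{4(d-1)},\qquad d=a+b,
\qquad
F \;=\; -\log Z \;=\; \tfrac12\,\log\det\!\big(-\Delta+\xi_d R\big).
```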
A particular case in this family are the spaces S 1 × H 2k+1 , which have been extensively studied in the literature, starting with [24]. As we will review below, the free energy has no UV logarithmic divergences on these spaces. Instead, there is an IR logarithmic divergence originating from the volume of H 2k+1 . In connecting the results on S 1 × H 2k+1 with the results on S 2k+2 one uses a UV/IR connection between the cutoffs, implied by the conformal map between both spaces [24] (see also [29,13]). One can extend this idea to the more general family of spaces S 2n+1 × H 2k+1 , or to the spaces H 2k+1 themselves. There are no UV logarithmic divergences on these spaces, but, likewise, there is an IR logarithmic divergence originating from the volume of H 2k+1 . In comparing with the results on the sphere, one may similarly expect a relation between the corresponding IR and UV cutoffs implied by the conformal map. We give an argument in appendix A.
The free energy F (d,0) = −log Z for a conformal free scalar on S d is given in [31]. In this paper we will explicitly study the cases d = 3, ..., 8. The free energy on the sphere S a+b is given by (2.7). The even case was obtained by doing dimensional regularization d → d − ǫ in the ǫ → 0^+ limit, where the log(RΛ) gets traded for the 1/ǫ pole. The even case reflects the a-type conformal anomaly, which is proportional to the Euler characteristic of S d (χ(S d ) = 2 for d even, χ(S d ) = 0 for d odd).
The heat kernel
The free energy can be computed from the heat kernel (proper-time) formula (2.8), where δ is a UV regulator and K H b , K S a are the heat kernels for the Laplace operators of scalar fields on the H b and S a spaces, respectively. Heat kernels on spheres and hyperbolic spaces have been extensively discussed in the literature. Formulas for the heat kernel on hyperbolic spaces for real scalars are given e.g. in appendix A of [32]; for odd b = 2n + 1 there is a simple closed form. The equal-point kernel is obtained by setting ρ = 0 and integrating over the H b space. For future reference, the results for b = 3, 5, 7 are quoted in (2.10). The even b case can also be found in appendix A of [32]. However, for even b the heat kernel is slightly more complicated, and it is more convenient to compute the determinant directly from the known expressions for the eigenvalues of the Laplace operator, as explained below.
Similar formulas for the heat kernel on spheres can be found, e.g., in section (2.1) of [33]. Here we quote the cases that will be used in this paper; for S 3 , the coincident-point kernel involves the sum over (n + 1)^2 e^{−n(n+2)t}, cf. (2.11). The formula for K S 1 is obtained after Poisson resummation. For the S 3 case, we used the simpler form given in [34], which exploits the fact that S 3 is the SU(2) group manifold. Note also that the heat kernel for the S 1 assumes a generic length β for the circle.
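The Poisson resummation behind the K_{S^1} formula can be checked numerically. The following short sketch (ours, not the paper's code) verifies that the sum over momentum modes on a circle of length β equals the winding-mode sum:

```python
import numpy as np

def k_s1_momentum(t, beta, nmax=2000):
    """Coincident-point S^1 heat kernel as a sum over momentum modes 2*pi*n/beta."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(-(2 * np.pi * n / beta) ** 2 * t)) / beta

def k_s1_winding(t, beta, mmax=2000):
    """Same kernel after Poisson resummation: a sum over windings of the circle."""
    m = np.arange(-mmax, mmax + 1)
    return np.sum(np.exp(-(m * beta) ** 2 / (4 * t))) / np.sqrt(4 * np.pi * t)

for t in (0.01, 0.1, 1.0, 10.0):
    a, b = k_s1_momentum(t, beta=2 * np.pi), k_s1_winding(t, beta=2 * np.pi)
    print(f"t={t:6.2f}  momentum sum={a:.12f}  winding sum={b:.12f}")
```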
Summing over eigenvalues
Alternatively, the determinants can also be computed from the explicit expression for the eigenvalues of the Laplace operator and their degeneracies. For scalar fields, the eigenvalues of the Laplacian on S a are l(l + a − 1) (l = 0, 1, · · · ), with degeneracy d_l. We then turn to H b , whose boundary is the sphere S b−1 ; the Laplace operator and the density of states on this space have been studied in [35,36]. The eigenvalues will be denoted by the continuous variable λ, with a density of eigenvalues Φ (b) ; the explicit forms of the densities that we will need are collected in (2.16). Substituting these expressions into (2.3) and (2.4), we thus obtain the representation (2.17) of the free energy. The free energy will still depend on the volume of the H b space. This volume is divergent but it can be regularized [37] as discussed in appendix A.
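For bookkeeping, the sphere spectral data and the regularized hyperbolic volumes used repeatedly below can be tabulated with a small script. This is a sketch based on standard formulas (not taken from the paper); it reproduces the values V_{H^2} = −2π, V_{H^4} = 4π^2/3 and the 1/ǫ coefficients −2π, π^2, −π^3/3 quoted later in the text:

```python
from math import factorial
from fractions import Fraction

def sphere_eigenvalue(l, a):
    """Scalar Laplacian eigenvalue on the unit S^a: l(l + a - 1)."""
    return l * (l + a - 1)

def sphere_degeneracy(l, a):
    """Dimension of the space of degree-l spherical harmonics on S^a."""
    if l == 0:
        return 1
    return (2 * l + a - 1) * factorial(l + a - 2) // (factorial(l) * factorial(a - 1))

def regularized_volume_H(b):
    """Regularized volume of unit H^b: finite for even b, coefficient of 1/eps for odd b.
    Standard dimensional-regularization results, returned as multiples of pi**(b//2)."""
    k = b // 2
    if b % 2 == 0:
        coeff = Fraction((-4) ** k * factorial(k), factorial(2 * k))
        return f"{coeff} * pi^{k} (finite)"
    coeff = Fraction(2 * (-1) ** k, factorial(k))
    return f"{coeff} * pi^{k} / eps (pole)"

print([sphere_degeneracy(l, 3) for l in range(5)])   # S^3: (l+1)^2 -> 1, 4, 9, 16, 25
for b in range(2, 8):
    print(f"H^{b}:", regularized_volume_H(b))
```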
3 Free conformal scalar on S a × H b
Spaces of the form S 1 × H b
Let us begin by considering free real scalar fields on S 1 × H b . In this case we can view the S 1 as a thermal circle. Assuming its length to be β (note that only the case β = 2π is conformally related to S 1+b ), one may compute a β-dependent free energy. Then, it follows that the associated thermal entropy coincides with the entanglement entropy across a spherical surface in R 1,b−2 [24], eq. (3.1). (Footnote 7: It has been proposed [38] that the Renyi entropy is the same when computed on the geometries S n_q × H d−n , for any n, where S n_q is a branched sphere obtained by a conic defect along any circle of S n . In the presence of conformal anomalies, it is very unclear whether this could hold in general, aside from the known S 1_q × H d−1 case, but it would be interesting to explore this proposal by explicit calculations.)
This entanglement entropy is, in turn, equal to minus the S 1+b free energy, S (1,b) = −F (1+b,0) . We may then combine these ingredients to write (3.2). Typically, for spaces of odd dimension, β∂_β F (1,b) |_{β=2π} identically vanishes [29] (see also [39]). For these spaces, the mass coming from the conformal coupling to curvature is M^2 = −(b − 1)^2/4. By shifting the integration variable λ − (b − 1)^2/4 → λ, the free energy (2.17) takes the form (3.5). The sum over l can be computed using the product representation of sinh (see e.g. appendix B in [29]). The resulting formula is readily generalized to the case of an S 1 of length β = 2πq by replacing sinh(π√λ) → sinh(πq√λ). In the odd b case, the free energy can be most directly computed by using the heat kernel. Note that when b = 2k + 1, one has M^2 = −k^2 and the conformal coupling to curvature exactly cancels the t exponential in the H b heat kernel.
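For later reference, the thermodynamic relations implicit in (3.1)–(3.2) above can be summarized as follows (our reconstruction from the surrounding discussion and from section 5; F = −log Z throughout):

```latex
% Reconstruction of the relations used in (3.1)-(3.2).
S^{(1,b)} \;=\; \big(\beta\,\partial_\beta-1\big)\,F^{(1,b)}\Big|_{\beta=2\pi},
\qquad
S^{(1,b)} \;=\; -\,F^{(1+b,0)}
\;\;\Longrightarrow\;\;
F^{(1+b,0)} \;=\; F^{(1,b)} \;-\; \beta\,\partial_\beta F^{(1,b)}\Big|_{\beta=2\pi}.
```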
S 1 × H 2
We first compute the free energy for β = 2π, following [29]. Making use of the explicit expression (2.16) for the eigenvalue density, the free energy (3.5) takes the form of a λ integral, which can be split into two pieces. The second integral diverges. As in [29], we regulate it by subtracting the R 3 free energy density, which yields the regularized free energy. The λ integrals can be computed in closed form (cf. (3.10)), and one finally finds the known value of the free energy on S 3 , thus reproducing that result. The formulas are readily generalized to an S 1 of length β = 2πq (see also [29]). One finds (3.13), and it then follows that β∂_β F (1,2) |_{β=2π} = 0. Thus, in this case, there is no new contribution from the term β∂_β F (1,b) . The same feature holds for all even b cases.
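The S^3 value being reproduced is the one commonly quoted in the literature for a single real conformal scalar (we quote it from the literature, not from the equations dropped above); a quick numerical evaluation of that closed form is:

```python
from mpmath import mp, log, zeta, pi

mp.dps = 30
# Literature value of F = -log Z for a free conformal real scalar on the round S^3.
F_S3 = (2 * log(2) - 3 * zeta(3) / pi ** 2) / 16
print(F_S3)   # ~ 0.0638
```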
S 1 × H 3
Let us now consider the case S 1 × H 3 (also considered in the Discussion section of [29]). From (2.8) and (2.10) we obtain the proper-time representation of the free energy; computing the integral and the infinite sum gives (3.16). We can now use (3.2) to find the entropy (3.17). Switching from DREG to cutoff-regulated quantities, this coincides with the S 4 free energy F (4,0) (cf. eq. (2.7)), if one identifies the UV cutoff in F (4,0) with the IR cutoff in (3.17). A justification of this identification, based on mapping the two cutoffs using the conformal map between the two spaces, is given in section 2 of [24] (see also [29,13,30] and appendix B).
The same result can be obtained by the alternative method of summing over eigenvalues, starting from (2.17) with the H 3 eigenvalue density. The second term in the resulting expression diverges. As in the case of S 1 × H 2 , this divergence can be regularized by subtracting the flat-theory free energy density. Computing the remaining integral, we reproduce (3.16).
S 1 × H 4
From (2.16) and (3.5), we obtain the corresponding expression for the free energy at β = 2πq. Again, we first separate out the divergent integral by reorganizing the different terms and then subtract the flat-theory free energy density. One can check that β∂_β F (1,4) |_{β=2π} = 0, so there is no contribution from the term β∂_β F (1,b) . Hence in what follows we set q = 1.
For the first integral we use (3.10), together with a companion integral of the same type. Computing the remaining integral, we find the expected match with the free energy F (5,0) on S 5 .
S 1 × H 5
Using the heat kernel formula for the free energy (2.8), with the heat kernel K H 5 given in (2.10), we obtain the proper-time representation; computing the integral and the infinite sum gives the free energy. The entanglement entropy is then computed from (3.1), setting β = 2π at the end. Using that V_{H^5} = π^2/ǫ, we finally obtain the result for S (1,5) .
Thus, if one identifies IR and UV cutoffs (see discussion above), −S (1,5) matches the free energy F (6,0) of a scalar in S 6 , given in (2.7).
S 1 × H 6
In this case, the formulas (3.5) and (2.16) give the free energy at β = 2πq. As in previous cases, we rewrite the formula by separating out the divergent piece representing the flat-theory free energy and then subtract this divergence, which leads to (3.30). As in all even b cases, one finds that there is no contribution from the term β∂_β F (1,b) , which vanishes identically. The integrals in (3.30) can be computed by expanding the log and resumming the result after integration. Setting q = 1, we find (3.31), which exactly agrees with the free energy F (7,0) on S 7 ; see (2.7).
S 1 × H 7
In this case, the free energy in the heat kernel representation (2.8), with K H 7 given in (2.10), takes a proper-time form whose integral and infinite sum can be computed explicitly. The entanglement entropy (3.1) is then obtained using V_{H^7} = −π^3/(3ǫ). Thus, we see that this exactly matches the free energy of a conformal scalar field on S 8 , assuming that the UV and IR cutoffs can be identified by the arguments of [24,13] (see the discussion above and appendix B).
Spaces of the form S 2n+1 × H 2k
Since in odd dimensions the bulk anomaly vanishes, a conformal anomaly, if present, could only originate from boundary contributions. Let us first concentrate on spaces of the form S 2n+1 × H 2k . For these cases we shall find that F (2n+2k+1,0) = F (2n+1,2k) . The match is striking, since the result involves a non-trivial combination of Riemann ζ-functions, arising after a long calculation that uses expressions which are very different from the expressions used for the spheres S 2n+2k+1 .
S 3 × H 2
Our starting point is (2.17). The eigenvalues of the Laplace operator on S 3 have degeneracy (l + 1)^2. By shifting λ by 1/4 and l by 1, one obtains a sum over l which can be regularized as follows. We consider an auxiliary sum S_1(m^2); our original sum is then obtained as S_1(m^2 = 0). By differentiating S_1 twice with respect to m^2 and performing the sum over l, we get an expression which can be integrated twice (with m^2 then set to zero), leading to a closed formula for Σ_1(λ). Making use of this result, and using that V_{H^2} = −2π, we obtain the free energy as an integral of Σ_1(λ). The asymptotic λ → ∞ behavior of Σ_1(λ) determines the divergent piece; subtracting it gives the regularized free energy. To compute the remaining integral one can introduce a new integration variable x = e^{−2π√λ} and expand the polylogarithms in powers of x. After some algebra, we find a result which exactly matches the free energy F (5,0) (2.7) on the S 5 .
S 3 × H 4
Using (2.17) and the expression (2.16) for the eigenvalue density on H 4 , we are led to a formula for the free energy. Computing the sum over l as in the previous case, and using V_{H^4} = 4π^2/3, we obtain an integral representation. Subtracting the flat-theory free energy leaves a finite integral; computing it, we again find an exact match with the free energy F (7,0) on S 7 , given in (2.7).
S 5 × H 2
From the general formula (2.17), using the expression (2.16) for the eigenvalue density, shifting λ by 1/4 and using that V_{H^2} = −2π, we obtain the free energy as a sum over l and an integral over λ. The sum over l is regularized along the same lines as above. We now introduce an auxiliary sum S_2(m^2), which can be calculated by differentiating three times with respect to m^2, i.e. by considering ∂^3_{m^2} S_2(m^2), then integrating three times and setting m^2 = 0. Making use of this, and subtracting the large-λ asymptotic behavior of Σ_2, we arrive at the regularized free energy (3.53). Computing the integrals, we finally find a result which, strikingly, exactly matches the free energy F (7,0) (2.7) on S 7 .
3.3 Spaces of the form S 2n × H 2k+1
H 2k+1
The simplest subclass in S 2n × H 2k+1 is when n = 0. The spaces H 2k+1 are of course also related to S 2k+1 by a Weyl transformation (the precise Weyl transformation is easily found by recalling that both spaces are conformally flat). Below it will be shown that, in this case, F (2k+1,0) ≠ F (0,2k+1) . While F (2k+1,0) , corresponding to the free energy of an odd sphere, is a transcendental number involving zeta functions, see (2.7), F (0,2k+1) instead contains a 1/ǫ divergence (in DREG, or a log ρ 0 in terms of the IR cutoff) multiplying a rational number. Since this implies a logarithmic dependence on the scale, it suggests the presence of a boundary conformal anomaly for the conformal field theory on H 2k+1 (below this will be confirmed independently for H 3 ).
In this case the conformal mass is M^2 = −3/4. From (2.8) and (2.10), we find the free energy on H 3 as a proper-time integral. This contains power-law divergences as δ → 0. The regularized free energy is obtained by subtracting these terms, which gives the result (3.57). This agrees with the result of [41]. We can reproduce the same result from the sum over eigenvalues. From (2.17), after a shift in the λ integration variable, we arrive at a divergent integral (3.58), which we regulate by zeta function regularization: we introduce a parameter s and define a regulated integral whose analytic continuation to the s → 0 limit yields (3.58) as the coefficient of −s in the expansion in powers of s. This reproduces (3.57).
We may relate this result to the anomalous trace of the stress energy tensor. A pole in ǫ in DREG for the volume of H 3 corresponds to a term log ρ 0 , 1/ǫ → log ρ 0 , if one regulates the IR divergence in terms of a cut-off ρ 0 (see appendix A). The presence of a log ρ 0 term (given the UV/IR connection discussed in appendix A) suggests the existence of a conformal anomaly due to boundary terms (note that a constant Weyl scaling can be implemented by scaling ρ 0 ). The contribution of boundary terms to the trace of the stress tensor has been computed in [15,16]. While these formulas have been derived for compact spaces with boundary, we will naively extend them to our regularized hyperbolic space.
The general formula for the conformal anomaly containing the boundary contribution is given in section 5 of [15], where, for a conformally coupled scalar with Dirichlet boundary conditions, c_1 = −1, c_2 = 1. Here Θ̄ is the trace-free extrinsic curvature of the boundary metric and χ(∂M_3) is the Euler number of the boundary metric. As for the former, recall that the H 3 metric is ds^2 = dy^2 + sinh^2 y dΩ_2^2. Introducing sinh y = ρ, the metric becomes ds^2 = (1 + ρ^2)^{−1} dρ^2 + ρ^2 dΩ_2^2. Thus, we can borrow the computation from footnote 10 below with b = 0, which shows that the trace-free part of Θ vanishes. Since, on the other hand, the boundary is an S 2 , for which χ(S 2 ) = 2, we find that a naive application of the formulas in [15,16] shows that the CFT on H 3 has a conformal anomaly due to boundary terms, whose coefficient is consistent with (3.57).
H 5
We now repeat the computation for H 5 . Regularizing the proper-time integral as before to subtract the power-law divergent terms in δ, we find the result (3.62). Likewise, we can recover the same result from explicitly summing over eigenvalues. From (2.17), we now get an integral which can be computed by zeta-function regularization as above: by analytic continuation to s → 0, the free energy (3.63) arises as the coefficient of −s. This reproduces (3.62).
In conclusion, we find a log ρ 0 term in the free energy for the CFT on H 5 with a precise coefficient. The result is different from S 5 , suggesting the presence of a conformal anomaly, whose origin must be boundary contributions to the trace of the stress tensor.
H 7
The conformal mass is now M^2 = −35/4. From (2.8) and (2.10) we obtain the proper-time representation; subtracting the power-law divergent terms in δ as above, we find the result (3.66). We can again reproduce the same result from the explicit sum over eigenvalues, now starting from (2.17): using ζ function regularization, we recover the result (3.66) above. Thus we expect that the CFT on H 7 should also have a conformal anomaly originating from boundary terms.
S 2 × H 3
From the general formula (2.17), using the explicit form of the density of eigenvalues given in (2.16) and shifting the integration variable λ → λ + 1, we can introduce Schwinger's proper-time parameter and compute the integral over λ; this recovers the heat kernel formula (2.8). The resulting integral contains non-physical divergences in the δ → 0 limit. The finite part is obtained by an appropriate subtraction, defining the regularized free energy accordingly. Computing the integral over t (or, alternatively, keeping the finite part in the δ → 0 limit of (3.71)) leaves a sum over l, which can be computed using zeta function regularization with the degeneracies given by (2.15). This leaves a finite number containing combinations of ζ′(−4), γ_E and derivatives of Γ functions, which indeed suggests strong dependence on the regularization scheme.
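The zeta-function regularization used here and in the following subsections replaces divergent power sums over l by analytically continued Riemann zeta values. A minimal generic illustration of the technique (not the paper's actual sums) is:

```python
from mpmath import zeta

def zeta_reg_power_sum(coeffs):
    """Assign a finite value to sum_{l>=1} (c0 + c1*l + c2*l^2 + ...) via the replacement
    sum_{l>=1} l^n -> zeta(-n), i.e. analytic continuation of the Riemann zeta function."""
    return sum(c * zeta(-n) for n, c in enumerate(coeffs))

# Examples: sum l -> zeta(-1) = -1/12, sum l^2 -> zeta(-2) = 0, sum l^3 -> zeta(-3) = 1/120.
print(zeta_reg_power_sum([0, 1]))        # -1/12
print(zeta_reg_power_sum([0, 0, 1]))     # 0
print(zeta_reg_power_sum([0, 1, 0, 2]))  # -1/12 + 2/120 = -1/15
```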
S 2 × H 5
Using (2.17), with Φ (5) given in (2.16), we can follow the same steps as before. After shifting λ → λ + 4, and upon introducing a proper-time parameter, the λ integral is easily done. Computing the t integral, keeping the physical finite terms in the δ → 0 limit, and performing the l sum as above, we find (3.78). In ζ-function regularization, this is proportional to a linear combination of ζ(−4) and ζ(−6), and therefore it vanishes. As in the previous case, this result strictly holds for the coefficient of the 1/ǫ pole in V H 5 = π^2/ǫ. In dimensional regularization there is a residual O(ǫ^0) finite part which appears to be strongly sensitive to the regularization scheme.
S 4 × H 3
For this space, we start again from (2.17). Using the expression for the eigenvalue density Φ (3) given in (2.16), shifting λ − 1 → λ and, just as in the previous cases, introducing a proper-time parameter, the integral over λ is easily done. Keeping the finite term in the δ → 0 expansion of the integral, we find (3.83). Using ζ function regularization, as in the previous case, we find a vanishing coefficient of the 1/ǫ pole (as in the two previous cases, we omit the calculation of the much more subtle O(ǫ^0) term).
Spaces of the form S 2n+1 × H 2k+1
We have already studied one subclass of these spaces, namely the case n = 0 corresponding to S 1 × H 2k+1 . We found that the free energy exhibits a boundary anomaly and is proportional to 1/ǫ = log ρ 0 . The free energy does not match the free energy of S 2k+2 , but for S 1 × H 2k+1 there is a thermodynamic interpretation by which one can identify the entanglement entropy and check that it matches the entanglement entropy on S 2k+2 . For the cases with n > 0, there is no thermodynamic interpretation, as one does not have a thermal circle. However, as in the n = 0 case, the free energy on the spaces S 2n+1 × H 2k+1 also exhibits a logarithmic IR divergent term log ρ 0 , which, through the UV/IR connection, indicates the presence of a boundary conformal anomaly. In what follows, we will compute it for the case S 3 × H 3 , which is the example of lowest dimension in this class and already illustrates the main features.
S 3 × H 3
Using the heat kernel (for coincident points) for S 3 and adding the mass coming from the conformal coupling to curvature, we obtain the free energy as a proper-time integral. Computing the finite part of the integral and the sum (using ζ function regularization), and using that V_{S^3} = 2π^2 and V_{H^3} = −2π/ǫ, we find the result (3.87). As in the case of H 3 discussed in section 3.3.1, the presence of an ǫ pole (1/ǫ = log ρ 0 ) suggests that the CFT on S 3 × H 3 has a boundary conformal anomaly. Note that the coefficients of the IR and UV logarithmic terms in F (3,3) and F (6,0) differ by a factor 1/2. The CFT on both spaces S 3 × H 3 and S 6 has a conformal anomaly, but the origin is different. The space S 3 × H 3 has vanishing Weyl tensor and vanishing Euler characteristic, since χ(S 3 ) = 0. Thus the trace of the stress tensor can only receive contributions from boundary terms. We have already seen this feature explicitly in the H 3 example in (3.60). We will return to this in section 4. As a check, let us now compute F (3,3) by the alternative method of explicitly summing over eigenvalues. From (2.17), substituting Φ (3) given in (2.16) and upon a shift λ → λ − 1 in the integration variable, we find an integral which can be regulated by introducing a parameter s and extracting the linear term in s in the expansion in powers of s. Then, computing the sum over l using ζ function regularization, we reproduce the result (3.87).
Strongly coupled fields on S a × H b
Let us consider a general CFT in d = a + b dimensions at strong coupling, admitting a gravity dual as a solution of the Einstein-Hilbert action in D = a + b + 1 dimensions with a non-zero cosmological constant. A vacuum solution is AdS D , which can be written in coordinates adapted to an S a × AdS b slicing, eq. (4.2). As r → ∞, the metric is asymptotic to a boundary of the form S a × AdS b ; therefore, (4.2) describes AdS D space with boundary S a × AdS b . Thus, this space is the natural candidate to support the holographic dual of the CFT on S a × AdS b . We can now Wick rotate the AdS b into H b to find the holographic dual of a generic CFT a+b on S a × H b .
Free energy from gravity
Let us evaluate the on-shell action on our background. Using the value of the scalar curvature R = −D(D − 1), we compute the Einstein-Hilbert contribution, introducing a cut-off r 0 in the radial coordinate. As is well known, in order to have a well-defined variational problem, the action should be supplemented by the Gibbons-Hawking surface term, which must then be evaluated on-shell to compute the holographic free energy. Here h_ab is the induced metric on the boundary and Θ is the extrinsic curvature. With the normal vector and the boundary metric at hand, the Gibbons-Hawking term can be evaluated on the cut-off surface. (Footnote 10: As a check, note that n = √(1 + r^2) ∂_r, so n_r = √(1 + r^2). Then Θ_{µν} = −γ^ρ_µ ∇_ρ n_ν , so Θ_{ab} = −γ^c_a ∇_c n_b , where latin indices stand for boundary indices. In turn, since ∇_ρ n_ν = ∂_ρ n_ν + Γ^α_{νρ} n_α , we have ∇_c n_b = ∂_c n_b + Γ^r_{bc} n_r = Γ^r_{bc} n_r = (1/2) g^{rr} ∂_r g_{bc} n_r . Therefore, with g_{ab} = γ_{ab}, Θ^a_b = (√(1+r^2)/2) γ^{ac} ∂_r γ_{cb}.) In addition, in order to implement holographic renormalization, we need to add counterterms (to be evaluated on-shell as well). Up to dimension D = 6, they are given in [42]; here R_ab is the Ricci curvature of the boundary metric and R is its scalar curvature (the second and third counterterms are strictly needed only for D > 3 and D > 5, respectively). Using these ingredients, we are in a position to holographically compute the free energy of a strongly coupled CFT dual on S a × H b as F^{holo}_{(a,b)} = S^{os}_{EH} + S^{os}_{GH} + S^{os}_{CT} . (4.14) The case a = 1 has already been considered in [42]. Thus, in the following we will discuss the remaining cases.
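It is instructive to see where the power-law and logarithmic divergences in r_0 come from. The sketch below (ours; overall normalization and the Gibbons-Hawking/counterterm pieces are omitted) expands the radial factor of the bulk volume integrand at large r: a 1/r term, which integrates to log r_0, appears exactly when the boundary dimension D − 1 is even, which is where a bulk Weyl anomaly is expected:

```python
import sympy as sp

r = sp.symbols("r", positive=True)

def integrand_expansion(D, order=12):
    """Large-r expansion of r^(D-1)/sqrt(1+r^2), the radial factor of the on-shell
    bulk volume for global Euclidean AdS_D (normalization dropped)."""
    f = r ** (D - 1) / sp.sqrt(1 + r ** 2)
    return sp.series(f, r, sp.oo, order).removeO()

for D in (5, 6, 7):
    c = integrand_expansion(D).coeff(r, -1)
    print(f"D = {D}: coefficient of 1/r is {c}  -> log(r0) divergence: {c != 0}")
```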
S 2n+1 × H 2k
These cases contain no logarithmically divergent log r 0 term. Expanding the renormalized action (4.14) at large r 0 , we find that the leading term is cut-off independent; it is given in (4.15) (recall that here a = 2n + 1 and b = 2k). Since b = 2k, there is no logarithmic IR divergent term in the volume of H b (see appendix A). Substituting the values of the volumes of S a and H b , with D = a + b + 1, and using (B.8) in the last step, this shows that, just as in the free scalar model of section 3, the free energy for a strongly coupled CFT on S 2n+1 × H 2k equals that on S 2n+1+2k , thus showing the absence of conformal anomalies, in particular the absence of boundary conformal anomalies.
S 2n × H 2k
Let us now consider cases of even total dimension, concentrating on spaces of the form S 2n × H 2k . Being even-dimensional spaces, one should in general expect the presence of bulk conformal anomalies, showing up in the free energy through a logarithmic term in log r 0 . Starting with the case of total dimension 4, a straightforward application of (4.14) yields the logarithmic terms quoted in (4.17). The S 4 and H 4 cases were originally computed in [42], and are formally the same up to a factor of two due to the volume ratio V S 4 /V H 4 = 2.
Let us now move to the case of total dimension 6, where we find the logarithmic terms quoted in (4.18). The cases of the S 6 and the H 6 were also computed in [42], and they are formally identical up to a relative factor of 2 originating from the volume factors, as described above.
On the other hand, the remaining cases yield a free energy equal to that of the S 6 . In order to understand these results, note first that the boundary of H 2k is S 2k−1 , an odd-dimensional space. While in [5,13,15] conformal boundary anomalies have also been proposed for odd-dimensional boundaries, here we are finding that, at least for our spaces, there are no conformal boundary anomalies when the dimension of the boundary is odd (see section 5 for further comments on this). In turn, the bulk conformal anomaly must come from the A-type anomaly, and hence the ratio of free energies must be proportional to the ratio of Euler numbers. The Euler number of the non-compact hyperbolic space can be defined as usual by including a suitable boundary term in the definition. This gives χ(H 2k ) = 1. Therefore χ(S 2n × H 2k ) = χ(S 2n+2k ) as long as n ≠ 0, while χ(H 2k ) = (1/2) χ(S 2k ), thus precisely matching the pattern of free energies (4.17), (4.18) which we have found.
S 2n × H 2k+1
Let us briefly comment on the case S 2n × H 2k+1 . The particular case n = 0, k = 1 was computed in [42], where it was shown to vanish. This result holds for all spaces of the form H 2k+1 and, moreover, for all spaces of the form S 2n × H 2k+1 . This can be seen as follows. After expanding (4.14) in powers of r 0 , we find that there is no finite (nor any logarithmic) term; the case S 2 × H 3 illustrates this explicitly. Note that inside V H 2k+1 there is a hidden log R ρ 0 , where ρ 0 is a cutoff in the hyperbolic space H 2k+1 (this corresponds to the pole in DREG, see appendix A). But all terms vanish as r 0 → ∞. Remarkably, this is entirely consistent with our findings for conformal free scalars on S 2n × H 2k+1 , where the free energy was found to vanish, at least for the coefficient of V H 2k+1 in dimensional regularization.
It would be interesting to understand if there is a finite remnant, perhaps originating from surface terms on the boundary of H 2k+1 , and whether this matches with the free energy (B.8) of S 2n+2k+1 .
S 2n+1 × H 2k+1
Here we consider as an example S 3 × H 3 . Using our expressions above, we find the result (4.20). Recall, nevertheless, that V H 3 contains a log R ρ 0 term (or an ǫ pole, in DREG). Therefore, making use of our formula (A.3) and the volume of the 3-sphere (A.4), we arrive at (4.21). Note that this is neither the free energy on S 6 , S 4 × H 2 , or S 2 × H 4 , nor the free energy on H 6 , thus explicitly showing the presence of a conformal anomaly. We recall that there is an A-anomaly on the spaces S 2n × H 6−2n . On the other hand, the A-anomaly vanishes on S 2n+1 × H 2k+1 , since the Euler characteristic is zero, but there can still be a contribution to the conformal anomaly from boundary terms (see [15] for a general construction).
The logarithmic dependence on the scale suggests that the conformal anomaly produced by boundary terms can be read from the coefficient of the log term in (4.21). It is interesting that this coefficient differs from the A anomaly coefficient on S 6 , given in (4.18), by a factor 1/2, and that the same relative factor appears in the ratio of the coefficients of IR and UV log terms in F (3,3) and F (6,0) computed at weak coupling for a free conformal scalar (see (3.87), (2.7)). This could be a consequence of the form of the boundary contribution to the anomaly. From the expressions for two-dimensional and four-dimensional boundaries discussed in [15], one may guess that such boundary anomaly involves either the Euler number of the boundary or the Weyl tensor and trace-free part of the extrinsic curvature (c.f. (3.60)). Since the latter two tensors vanish in our case, the boundary anomaly would be given by a general expression of the form T = c ∂ χ(∂M) for some coefficient c ∂ . Our findings suggest that, at least as long as n = 0, c ∂ is proportional to the a central charge with some universal coefficient whose origin would be very interesting to clarify.
Discussion
Boundary conformal anomalies have been comparatively poorly studied with respect to their bulk counterparts. Indeed, to the best of our knowledge, there is no comprehensive study of these in arbitrary dimension. In this paper we have introduced an interesting class of spaces, namely S a × H b , conformally related to S a+b where boundary anomalies play an important role.
We have studied a free conformal scalar as well as strongly coupled CFT's (the latter through holography) on S a × H b . The case a = 1 is somewhat special, as it permits an interpretation in terms of entanglement entropy across a b−2-dimensional sphere. Through this connection it is possible to argue that the relation between F (1+b,0) and F (1,b) is precisely given by (3.2). It is worth noting that F (1+b,0) − F (1,b) = −β∂ β F (1,b) ≡ ∆ b , when evaluated at β = 2π, measures the total conformal anomaly ∆ b , coming in principle both from bulk and boundary contributions (see e.g. [30]). Recall now that ∆ b is zero for even b = 2k, while, as argued in [13], ∆ b contains boundary contributions for odd b = 2k + 1. The boundary anomaly is supported on the even-dimensional S 2k at the boundary of H 2k+1 .
More generally, consider the families of spaces S 2n+1 × H 2k and S 2n+1 × H 2k+1 . The bulk conformal anomaly vanishes on these spaces because they have zero Euler characteristic and vanishing Weyl tensor (so both the a and c bulk anomaly contributions vanish). For the first family of spaces, S 2n+1 × H 2k , our results, both at weak coupling and at strong coupling, show that F (2n+1,2k) = F (2n+1+2k,0) , implying that the boundary conformal anomaly also vanishes. However, for the second family of spaces, we find F (2n+1,2k+1) ≠ F (2n+2k+2,0) . This shows that, at least in the class of spaces S 2n+1 × H b , boundary conformal anomalies only appear when the boundary space S b−1 has even dimension.
In the case of H 3 , we related our result to the general expression for the boundary contribution to the trace of the stress tensor, given in [15]. We stress that it is the coefficient of an IR logarithmic divergence that is related to the conformal anomaly. This is related to the underlying UV/IR connection discussed in appendix A (see also [24,13]). This implies that a short-distance cutoff δ on a conformally related geometry is equivalent to an IR cutoff ρ 0 on H 3 , δ ∼ 1/ρ 0 . In addition, assuming this connection, we used the formula in [15] to compute the trace of the stress tensor, finding a perfect match with the prediction coming from the coefficient of the IR logarithmic divergence. It would be extremely interesting to undertake a general analysis of the boundary anomalies extending the work of [15], perhaps leading to a prediction of the boundary anomalies found here, presumably in terms of the central charges of the CFT.
As another example of this, we have studied the case of S 3 ×H 3 , where we have explicitly seen the appearance of the boundary anomaly. Interestingly, we have seen that the ratio of coefficients of the IR and UV logarithmic terms in F (3,3) and F (6,0) is 1/2, both for the weak coupling computation as well as for the holographic computation. We have also computed F (5,5) , finding an IR logarithmic term whose coefficient is 1/2 the coefficient in the UV logarithmic term of F (10,0) . It would be very interesting to prove such patterns, compare them to holography and understand them from the general form of the trace of the stress tensor.
The odd-dimensional cases S 2n × H 2k+1 remain puzzling. Since they are free from bulk conformal anomalies, they may only suffer from boundary anomalies. Indeed, the case n = 0, k = 1 allowed us to explicitly test this by matching our result to the prediction of [15]. However, for n ≠ 0, we seem to find that there is no logarithmic term (i.e. no pole in DREG). The absence of a logarithmic term suggests that there is no boundary anomaly either. In turn, at strong coupling the holographic computation gives a vanishing free energy irrespective of the value of n. It would be important to understand these cases at least qualitatively, since, in principle, extra counterterms due to boundary effects might give new non-trivial contributions [8,19,20,21,22].
We also showed that supersymmetric field theories can be defined on spaces S a × H b (see appendix C). It would be extremely interesting to compute the partition function on S a × H b by supersymmetric localization for supersymmetric gauge theories in various dimensions. In particular, in the large N limit, the localization results could be directly compared with our results for the holographic free energy.
A Regularized volume of the hyperbolic space
We will also make use of the explicit formula for the volume of an a-dimensional sphere, V_{S^a} = 2π^{(a+1)/2}/Γ((a+1)/2) (A.4). Let us now discuss the connection between the IR cutoff on S a × H 2k+1 and the UV cutoff on S a+2k+1 induced by the conformal map, generalizing the arguments of [24] for S 1 × H 2k+1 to our case. From (1.3), we see that a covariant, short-distance UV cutoff δ^2_{S^{2k+1}} on the sphere is related to a covariant UV cutoff δ^2_{H^{2k+1}} on the hyperbolic space by the conformal factor, δ^2_{S^{2k+1}} = δ^2_{H^{2k+1}}/cosh^2 y ≃ δ^2_{H^{2k+1}}/ρ_0^2, where we have used ρ = sinh y, ρ 0 ≫ 1. Thus the UV momentum cutoff on the sphere Λ ≡ 1/δ_{S^{2k+1}} is linearly related to the IR cutoff ρ 0 on the volume of H 2k+1 . The case a = 0 has to be treated separately, since the conformal map is different. Considering for example H 2k+1 , a covariant short-distance cutoff on the sphere is again related to the covariant UV cutoff δ^2_{H^{2k+1}} on the hyperbolic space through the conformal factor, leading to the same linear relation between Λ = 1/δ_{S^{2k+1}} and ρ 0 .
B Free energy on spheres
In this appendix we include the holographic derivation of the free energy for a CFT on S D−1 . To that end, let us consider Euclidean AdS D with metric ds^2 = dr^2/(1 + r^2) + r^2 ds^2_{S^{D−1}}. In turn, the induced metric on the boundary is dξ^a h_{ab} dξ^b = r^2 ds^2_{S^{D−1}}, while the normal vector to the boundary is n = √(1 + r^2) ∂_r. With these ingredients the Gibbons-Hawking term can be evaluated at the cut-off surface, and the counterterms follow from the induced metric evaluated there. We can now compute the free energy by adding the Einstein-Hilbert, Gibbons-Hawking and counterterm contributions upon expanding for large r 0 . For odd D, the resulting formula is to be understood in dimensional regularization, since there is a 1/ǫ pole associated with a logarithmic term (see [42]). Finally, using the formula (A.4) for the volume of the S D−1 , we obtain (B.8).
C Killing spinors on S a × AdS b
Being conformally related to spheres, one may expect that the family of geometries S a × AdS b can support supersymmetric theories. In this appendix we explicitly study the supersymmetry of the geometries under consideration (see [51] for related developments). Consider the product metric on S a × AdS b . The conformal Killing spinor equation takes the form ∇_µ ǫ = c Γ_µ η, where ǫ is the Poincaré spinor, η the superconformal spinor and c some constant depending on the dimensionality. Note that, because of the product structure of the space, the spin connection appearing in the covariant derivative has either all indices along the sphere S a or all indices along the AdS b . Let us denote the sphere indices by i, j, · · · and the AdS indices by I, J, · · · , and assume that ǫ = ǫ_S ⊗ ǫ_AdS and η = η_S ⊗ η_AdS . We now need to write down the Dirac matrices for S a × AdS b . The decomposition depends on a, b being odd or even. In order to illustrate the details, let us first consider the (a even, b odd) case. We will use some results from [52]. The Dirac matrices are built as tensor products of γ_i , γ_I and γ_S , which are, respectively, the S a Dirac matrices, the AdS b Dirac matrices and the S a chirality operator satisfying γ_S^2 = 1 (C.5). Then the Killing spinor equation reduces to a set of conditions on the two factors. Since ∇̸ ǫ = ∇̸_S ǫ_S ⊗ ǫ_AdS + γ_S ǫ_S ⊗ ∇̸_AdS ǫ_AdS (C.10), the equation is satisfied if we take ǫ_AdS = η_AdS and γ_S ǫ_S = η_S . In turn, (C.8) reduces to ∇_i ǫ_S = c γ_i γ_S ǫ_S and ∇_I ǫ_AdS = c γ_I ǫ_AdS (C.13). These equations are precisely the Killing spinor equations on S a and AdS b . It should be stressed that this is true for S a and H b of equal radii, which is precisely the case in which the geometry is conformal to S a+b . The other cases follow in a similar fashion. For instance, let us now consider (a even, b even). There are two choices for the Dirac matrices [52]: i) Γ_i = γ_i ⊗ 1, which is formally the same computation as the case (a even, b odd), just taking into account that now b is even; thus ǫ = ǫ_S ⊗ ǫ_AdS , where the sphere and AdS Killing spinors satisfy, respectively, the equations above with c = 1/2 (C.14); and ii) a second choice, which is formally the same computation as the case (a odd, b even), now setting a to be an even number.
"Mathematics",
"Physics"
] |
Citizenship Training through sMOOCs: A Participative and Intercreative Learning
sMOOCs (social massive open online courses) have revealed themselves as a remarkable opportunity to foster the culture of participation, open knowledge, and sustainability. Due to their communicative potential, they make it possible for participants to interact, to create ubiquitous learning, and to build knowledge in a collective way. This educational and communicative line has set the basis for the European ECO (e-learning, communication, open data) Project, the object of our study, which, beyond training teachers, is decidedly betting on open life-long education. The results presented in the study have been elicited by following a quantitative methodology, through the analysis of a "sMOOC Step by Step" community, intended to become an educational gate to students' empowerment, shared knowledge, and participation in the course. Results show that collaborative work practices organized by teachers in that virtual learning community encourage educational changes. Both the degree of satisfaction with the learning achieved and the way students perceive its direct applicability to real-life professional contexts prove the effectiveness of this training model. Our research has expanded, aiming to discover the opportunities sMOOCs offer for teacher training and to assess the motivation shown by the virtual learning community towards such an educational reality.
Introduction
MOOCs provide members in the virtual learning community with a wide range of opportunities for interaction and communication, using their applications in different ways according to learning concepts, activities, resources, media, methodology, assessment processes, and interactivity. Advantages of this kind of training include the interactivity between members of the virtual community, the promotion of the universities or educational institutions hosting the courses, and the possibility to reconsider the curricular elements structuring online training courses. However, there are some disadvantages to be taken into account as well, such as the success of "package content", which means, in other words, reverting back to educational approaches characteristic of the late 20th century, with content and resources linearly structured with no intention of fostering a pedagogical change. Regarding their benefits and risks [1] in higher education, MOOCs can be classified into three types: xMOOCs, cMOOCs and sMOOCs. The first type follows the xMOOC model, applicable to most MOOCs currently online [2,3]. With a clear objectivist and instructivist approach, xMOOCs either offer a new educational trend or follow a traditional learning model based on videos and short multiple-choice quizzes. The communicative model they represent, specifically in the case of sMOOCs, breaks the unidirectionality previously associated with this term in some of its modalities (xMOOC) and takes a step further on social interaction (cMOOC), guaranteeing a bidirectionality reinforced by a multitude of connections between different nodes, constantly being created and developed. Intercreativity (Osuna & Camarero, 2016) is therefore prioritized, so that knowledge flows among all Spain. Additionally, two off-EU institutions have also contributed to the project: the University of Quilmes in Argentina and the Manuela Beltrán University in Colombia. The main distinguishing feature of this macro-project, whose MOOCs have engaged over 55,000 students and trained over 200 e-teachers, is to turn participants into autonomous e-teachers, able to develop their own sMOOC courses. The purpose of the ECO Project and its sMOOC-based pedagogical approach is to enable training for everyone, providing them with the necessary tools to step forward and take control of their own educational process within a ubiquitous environment (anywhere, anytime, and from any device) [12]. On a wider scope, the goal is to create a multicultural, intercreative environment for knowledge, purposely built through the collaboration of all participants and their degree of engagement in educational institutions. This way, through this training model, we contribute to the construction of collective intelligence which is, at the same time, accessible to everyone. A more democratic social space becomes a reality [13], on the one hand, through participation on social media and a break with closed-down educational structures and, on the other hand, through a collaborative construction of knowledge. ECO provides online training in all fields of knowledge with an educommunicative approach, with special attention paid to new ways of learning through the connectivist learning theory [14]. This reality has been made possible thanks to the preparation of the teaching team, whose members have encouraged students' digital empowerment [15], enabling them to create their own sMOOCs within the project's learning scenarios.
We firmly believe that collaboration among equals and the creation of networks in sMOOC courses, even if done gradually and through hybrid forms, open up new horizons for development toward learning and knowledge sharing at college higher education [16]. Success in these courses is based on an interactive participation [17] which spans beyond the course platform applied to social software, through all those spaces which contribute to the architecture of participation [18], resulting in a committed participation by individuals who position themselves as active cultural agents. The purpose of this research work is to analyze new education formats, to be able to provide a better response to the requirements of the current era. This proposal for the capacitation of the citizens also makes open and ever-changing knowledge possible, aiming to train innovative professional educators, but also to gather together, through virtual learning communities, all those who are meant to be trained as a community of practice [19], which will have an impact on improving the social layer from a sustainability perspective.
Materials and Methods
The goal of the "sMOOC Step by Step" is to motivate students enough to become e-teachers and, as such, create their own sMOOCs. At the end of the sMOOC, participants should be able to answer the following questions: "Why is a sMOOC worth doing? How is a sMOOC built? How is a sMOOC designed? What content does the sMOOC focus on? How can we make a sMOOC accessible and successful? How can we assess a sMOOC and how can we use the data it contains?". Thus, the course's contents, including materials and activities, were designed purposely.
Overall, the purpose of this research work is to analyze new education formats, to be able to provide a better response to the requirement of the current era. In particular, from an operational point of view, the present research work intends:
• Objective 1: To examine the professional profile of participants in the sMOOC;
• Objective 2: To assess the degree of satisfaction, as seen from the students' perspective, concerning three factors: the course's activities, resources, and services; the learning achieved; and its applicability to professional life.
The present research aims to show how 250 participants perceived the "sMOOC Step by Step" course through its first and second iterations (January and June 2015, respectively), bearing in mind that the educational and communicative practices carried out were based on the culture of participation. The study was carried out through a voluntary questionnaire, accessible from the course's virtual platform, which was completed by 250 participants out of the 3416 who enrolled in the MOOC. The questionnaire consisted of 30 questions, most of which provided multiple-choice answers on a Likert scale, in order to gauge the degree of satisfaction with the learning acquired, the interaction, and the usefulness of the different dimensions covered in the course. The questionnaire was validated by experts through a series of interviews before being administered. As our sample consists of a significant number of students attending the sMOOC, we assume that the data can be generalized, since they offer a representative picture of the digital scenario in which the course was developed.
The dissemination of the MOOC was launched with a fortnight's notice and was open to all citizens. During that time, the MOOC accepted pre-enrollment pending its opening. From the start date, the contents, documents, forums, working groups, etc. were opened. The platform hosting the sMOOC reported on the confidentiality of the data and other standard ethical issues, which each person had to accept and consent to before enrolling in the course. In the initial part, a form was incorporated to detect previous knowledge and attend to the expectations of the students. The sMOOC was organized into thematic cores, each including an introductory and explanatory video, a specific discussion forum, a microblogging discussion associated with a course hashtag, and a series of activities related to challenge-based learning. It is worth highlighting the gamification elements designed to encourage students' motivation, participation, and involvement, including progress bars, likes, the possibility of following other students, scores achieved in the activities and the karma level obtained, in addition to the serious games developed around the theme of the sMOOC.
This descriptive project, grounded in a specific reality, is also supported by hypotheses which have steered our analysis in a specific direction and channelled the information gathered through quantitative techniques. These hypotheses have provided an explanation for the studied phenomena and a structure for the final report of results. The hypotheses have been phrased as follows:
Hypothesis 1. Students show a high level of satisfaction with the "sMOOC Step by Step" course;
Hypothesis 2. The overall degree of satisfaction with the course corresponds to the students' perception of the sMOOC's activities, resources and services, the learning achieved, and its applicability to professional life.
The analysis carried out is based on quantitative techniques and is therefore framed within positivist research methods. This methodology is based on an approach oriented toward the generalization of the results, focusing on facts that are observable and able to be measured through experimental control and statistical analysis. The quantitative method used clearly responds to the hypotheses put forward in this study and which are intended to produce concise results, in order to facilitate the veracity of the conclusions formulated. In order to define the problem area and suggest specific proposals for research, a systematic search for information was developed in which the researcher presents the research on the data they wish to obtain.
A quantitative methodology was followed because it makes it possible to study a phenomenon in a standardized way, greatly limiting the interference of the researcher's biased input, whether conscious or not [20]. Data compilation was carried out from the feedback obtained through a questionnaire [21] displayed at the end of the sMOOC; significantly, the questionnaire was available to students from the moment they enrolled. We chose this descriptive method because we consider it the best-known quantitative data-compilation technique, providing a fast, accurate, and convenient way to describe tendencies, frequencies of opinions, and attitudes shown by a specific sector of the population. It also allows locating the outreach and distribution of a given phenomenon. The questionnaire was administered using LimeSurvey software, which allows answers to be classified and certain variables to be associated with population traits, so that correlations among variables can later be studied with SPSS software. Such tools have proven helpful in specifying each answer precisely, in order to determine sMOOC students' opinions, expectations, and criteria concerning this educational model, which encourages the culture of participation and its interest in social change.
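To make this workflow concrete, the sketch below is ours, not the authors': in practice one would load the LimeSurvey CSV export instead of the synthetic data used here, and the original analysis was done with SPSS. It shows how Likert responses can be tabulated as relative frequencies of the kind reported in Table 1:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 250  # number of questionnaire respondents

# Synthetic stand-in for the LimeSurvey export; real column names would come from the export.
df = pd.DataFrame({
    "X1_age": rng.integers(19, 74, n),
    "X18_collab_activities": rng.integers(1, 6, n),   # Likert scale 1-5
    "X21_documents": rng.integers(1, 6, n),
    "X22_audiovisual": rng.integers(1, 6, n),
    "X24_tech_support": rng.integers(1, 6, n),
})

# Relative frequency (%) of each Likert level, per satisfaction item.
for item in ["X18_collab_activities", "X21_documents", "X22_audiovisual", "X24_tech_support"]:
    freq = df[item].value_counts(normalize=True).sort_index() * 100
    print(item, freq.round(1).to_dict())
```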
Students' Profile
The sample includes students between 19 and 73 years old, 52.1% of them female and 47.9% male. A prevalence of the educational sector can be observed, with 48.6% of participants working in education, which supports the first hypothesis to be confirmed. The greatest interest was perceived among those working in the field of education, who saw an opportunity to improve their professional teaching skills in their regular classroom work. All other percentages follow at a certain distance, such as 6.2% of participants working in IT or mathematics, 3.7% in administrative positions and office work, and 2.7% in cooperation or social work. Educators are therefore one of the sectors working on the ECO project's sMOOCs the most. The significant participation of teachers should be favourably considered, insofar as teachers are perceived as agents of social change who, moreover, show interest in training through this type of course, as well as in engaging in specialized and continuous training.
Students' Overall Degree of Satisfaction
Through the analysis of the frequencies referring to students' degree of satisfaction, varied results can be observed, as shown in Table 1. On the one hand, satisfaction was high concerning audiovisual materials and the design of collaborative activities. On the other hand, participants are still critical of the technical support provided during the course. The quality of the curricular material of this training model and the didactic approach are clearly valued. It is observed, though, that digital MOOC platforms, in general, are not yet responding at an appropriate level of interaction for these types of courses. As implied by the presented frequencies, a good level of satisfaction among sMOOC participants can be perceived in three out of the four studied factors. A closer examination of the profile of "sMOOC Step by Step" students by X1_age and X3_residence is advisable, in order to determine whether there is a correspondence among the four variables shown in Table 2 (X22, X21, X18 and X24). The values obtained for the Chi-squared tests, aimed at checking the independence of these variables, are shown in Table 2, in order to be able to subsequently measure their positive or negative correlation. Students' assessment of audiovisual materials, provided documents, collaborative practices, and sMOOC technical support clearly depends on the age variable. Calculating Kendall's and Spearman's correlation coefficients (Table 3) among the ordinal variables of the previous table, a significant negative correlation can be observed between variables X1 (age) and X21 (assessment of course documents). It can therefore be inferred that the older the age in the sample, the lower the degree of satisfaction with the course documents. By contrast, a positive correlation can be observed among variables X18 (assessment of collaborative activities design), X21, X22 (assessment of audiovisual materials), and X24 (assessment of technical support), from which it can be inferred that the higher the satisfaction with audiovisual materials, the higher the satisfaction with the provided documents, with the collaborative tasks carried out, and with the technical support offered. The positive experience in these training environments predisposes the participants to express their general satisfaction in all the didactic areas that motivate their learning. These data also show that the older the participants, the less satisfied they are with the materials used throughout the course, owing to their greater learning experience and their more critical view of learning materials.
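The independence tests and rank correlations reported in Tables 2 and 3 correspond to standard routines. The following hedged sketch uses synthetic data again (the paper's variable names are kept, but the numbers are not theirs, and the original analysis was done in SPSS):

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 250
age = rng.integers(19, 74, n)
# Synthetic satisfaction scores with a mild negative dependence on age, for illustration only.
docs = np.clip(np.round(4.5 - 0.02 * (age - 19) + rng.normal(0, 1, n)), 1, 5)
audio = np.clip(docs + rng.integers(-1, 2, n), 1, 5)

# Chi-squared test of independence between age group and satisfaction with documents (Table 2).
age_group = pd.cut(age, bins=[18, 30, 45, 60, 75])
table = pd.crosstab(age_group, docs)
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Kendall's tau and Spearman's rho between age and satisfaction items (Table 3).
print("Kendall :", stats.kendalltau(age, docs))
print("Spearman:", stats.spearmanr(age, docs))
print("X21 vs X22 Spearman:", stats.spearmanr(docs, audio))
```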
Degree of Satisfaction Perceived by Students Concerning the Learning Achieved and its Applicability to Their Professional Life
In addition to sMOOC students' degree of satisfaction, further study of the real degree of learning achieved and the transfer of that content to students' professional activity was considered necessary. Such a purpose involved studying the X63 (How_much_did_you_learn_in_MOOC) and X62 (Applications_MOOC_content_in_daily_professional_life) variables. The information provided by Table 4 confirms that 73.7% of the students participating in the sMOOC claim to agree very much or to some extent with the idea of this educational model being adequate for their training. Moreover, the data show that this type of learning is positively valued as far as its impact on their professional life is concerned. Such data are further reinforced by the results provided by Table 5, according to which 59.9% of the participating students confirm that the learning is applicable to their regular professional lives and to the construction of a sustainable society.
Participation and Empowerment of Students
Measures of success reported by researchers show that the completion rate in these courses is lower than in traditional e-learning courses. Moreover, their massive scale sets a noticeable tendency toward transmissive learning methods [22,23]. The Educational Goals 2021 proposed by the Organization of Ibero-American States outlines a series of standards which these courses should meet as a quality requirement; for instance, students should actively participate in their own learning, as well as create and share their knowledge [24,25]. In addition to those studied in the previous epigraph, there are further variables in Table 6 to be considered, such as the promotion of discussion and personal reflection, interaction among students, and creativity, aimed at minimizing effects such as the high drop-out rates in this kind of course. Kendall's and Spearman's correlation coefficients for the ordinal variables in Table 7 show a significant positive correlation among variables X29, X30, X31, and X32, from which we can infer that the more discussion and reflection are encouraged during the course, the higher students' commitment is, as well as their mutual interaction and creativity, as seen in Table 6. Based on this fact, we could be glimpsing a possible solution to counteract the high drop-out rates, which are related to motivation [26,27], commitment, or degree of engagement. Participants motivated by this training model will be more predisposed to developing their learning in virtual scenarios and keeping a constant degree of participation and involvement, thus decreasing drop-out rates. Table 6. Students' overall assessment on whether the course encourages discussion and personal reflection, peer-to-peer interaction, and creativity. X29 = Encourages discussion and personal reflection on the field tackled. X30 = Promotes learner involvement in the course. X31 = Promotes interaction with other learners in the course. X32 = Promotes student creativity.
Discussion
The 2016 research has permitted the verification of hypotheses contrasted with the results obtained from the sample. A firm articulation between the construction of knowledge and a strong social dimension (collaborative learning) is proving essential, as well as between the flexibility required by sMOOC students and the working pace they need to achieve their goals. Therefore, the elimination of rigid learning paths, highly structured tasks with fixed sequences, and the tiresome interdependence of sequential tasks (all of them typical xMOOC features that reduce flexibility and increase the distance between teachers and students) has been positively valued. In addition, a robust interaction among all the members in the learning community makes the difference when compared to the cMOOC model. In sMOOCs, teachers engage as mediators and facilitators to encourage students' collective learning, while in cMOOCs there is no such role for teachers. As a matter of fact, the teachers' accompanying role empowers participants toward a socioconstructivist learning and a bidirectional, horizontal communication among them all.
As we conclude, students' satisfaction with documents, audiovisual materials, and collaborative practices, together with their satisfaction with the degree of interaction among members of the educational community, clearly depends on the age factor. Moreover, it is highlighted that, in addition to the previously mentioned correlation, the degree of satisfaction concerning the learning achieved by students is mainly assessed as "very much" and "to a large extent", while the level of transferability of contents to students' professional life is assessed as "to a large extent" and "to some extent". Results are therefore good in both cases, especially concerning the degree of learning. Hence, it can be stated that students' level of satisfaction corresponds to the learning achieved and its applicability to their daily professional life and to the construction of a sustainable society.
Participants state that members in the virtual learning community are potential peers in the learning process, which leads us to think of good rates of motivation and participation achieved. Social interaction experienced through the course can be positively considered, which proves the effort made by the "sMOOC Step by Step" teaching team to foster interaction between participants.
Obviously not only the teachers, but also the students share this belief and, subsequently, created a practice community within the course which made collaborative work real. There, the students shared interesting links, discussed the concepts studied in the forums, assessed the gamified activities that they had previously carried out, etc. Students at "sMOOC Step by Step" have experienced intercreative practices and collaborative learning, through which they have committed to becoming active cultural agents. Moreover, it is noteworthy how 700 participants voluntarily decided to join forces, found common learning subjects, and jointly created and developed 70 sMOOCs. There is a high level of satisfaction toward this way of teaching, including peer assessments, far away from traditional models. In this sense, students' self-assessment, introduced on cMOOCs and normalized on sMOOCs, responds satisfactorily to students' demands [28], offering learning opportunities on a third level of interactivity, also known as "multidirectional" [29]. Answers on the questionnaire reflect the idea of a learning community showing willingness to participate, since they themselves are the ones getting enriched by others' contributions, thus appropriating a role traditionally granted to teachers. To that end, they agreed they would have to participate and be mutually responsible for their learning. Doubts about their empowerment or commitment with their own learning are nowhere to be found throughout the course.
From this perspective, intercreativity and participation of contributors in the learning process have been valuated. We should not forget that one of the main educational challenges of the present century is to provide a sound basis for continuous learning, making it available to the largest possible amount of people, thus helping to fight the digital gap still present in some societies and excluded groups. sMOOCs should support an educational practice contextualized in the current media, thus increasing the possibilities for interaction and the creation of a richer, more diversified learning environment, through which people resort to a wider range of materials, contexts, and situations in order to participate in the educational experience. We are referring to a series of educational and communicational strategies implemented in the "sMOOC Step by Step" and aimed at promoting social change and the revolution of citizens, toward a new educational reality in which formal and informal educational contexts will merge into one. The purpose pursued by an sMOOC cannot be carried out if we do not allow in course structures and on the same platforms the empowerment of students, a space aimed at the participation of the virtual community of learning [30] and projected by social software, opening the way to a new collaborative style of knowledge construction toward a community of practice, a collective intellect. From this perspective, sMOOCs show an intelligent crowd that is built through the architecture of participation and that is projected toward consolidation as a community of practice.
sMOOCs stand out as a clear point of attraction within the scope of Higher Education. In this study, the students' perspective has centered the analysis of the satisfaction degree regarding the learning achieved throughout the sMOOC and its transfer to professional contexts [31]. The goals have been reached, moreover, from a training model based on participation and students' engagement in their own training process. The study, however, also shows some limitations. Firstly, the obtained data only correspond to students' personal perception. Such a perception should eventually be cross-checked by experts in massive, open, online education, so that students' degree of satisfaction can be compared to the actual learning achieved through the course. Secondly, there is a clear bias regarding the measurement of the satisfaction degree, because the only opinions available are those of students who voluntarily filled the questionnaire. There are cases of students who completed the course but then declined to fill the questionnaire form, as well as some others who, in a less expected move, answered the questionnaire without having completed the sMOOC. Finally, 200 of the students who had completed the "sMOOC Step by Step" course decided to become e-teachers and created teams to jointly implement all they had learnt in the design and development of a total of 70 sMOOCs.
Conclusions
sMOOCs present themselves as educational proposals with a great potential for continuous and updated training for people, especially for active teachers, thus allowing an educational change able to meet the current society's demands. Due, on the one hand, to the high degree of heterogeneity of sMOOC participants in terms of competences, previous knowledge, and motivation and, on the other hand, due to these courses' nonformal nature, students should play a central role in the educational process, assuming an active, co-responsible approach to their own learning. Knowledge is constructed through reflection, practice (creation, production), and dialogue in a social context of collaboration [32], which fosters collaborative working. Success in this type of courses shall be measured bearing in mind the goals and aims of the participating subjects.
The sMOOC proposal at the ECO European Project shall permit the possibility to adjust itself to the ever-changing aims of each participant through the course and respond to their motivation and learning demands. Collaborative working and the joint construction of open knowledge are fostered and supported by teaching materials provided in varied formats, making the most of spaces and tools provided by virtual environments. Fostering collaborative learning within the community requires from the team of teachers and facilitators a careful planning, interventions through the process to solve conflicts, and a final analysis of team-working [33]. Such a reality is built from communicative tools promoting debate, exchange of experiences, and the construction of a different way of learning based on collaboration and social openness online. This communicational model is completed by further pedagogical aspects intending to achieve quality learning on these environments, to pay attention to group processes and their impact on the construction of knowledge [34]. This situation requires an in-depth analysis of key aspects such as teachers' professional competences [35] in higher education. Those competences are not limited to using certain technological tools or performing mechanical tasks [36] but also include reaching out for pedagogical and learning management requirements, which are essential for the transformation of an online, open college experience [37].
We need a greater effort invested in providing alternative learning routes focused on intercreativity, the collaborative construction of learning, and sustainable practices, but the technological evolution of sMOOCs and teachers' own training prevent us from progressing according to the demands of the digital society. All aspects concerning communication and interaction among participants in educational environments, albeit highly valued, are still subject to constraints which obstruct the construction of a bidirectional approach. Therefore, we consider it necessary to insist on continuous improvement in these areas, in order to achieve even higher levels of satisfaction and improved learning transfer for a sustainable society. We must keep researching how to constantly improve future sMOOC editions, committing to a new model that favours the transfer of the knowledge acquired by students, in order to foster professional transformation. Once improvements are made to this formative model, its connections with students' levels of empowerment will be studied.
Flipping SU(5) out of Trouble
Minimal supersymmetric SU(5) GUTs are being squeezed by the recent values of alpha_s, sin^2 theta_W, the lower limit on the lifetime for p to nubar K decay, and other experimental data. We show how the minimal flipped SU(5) GUT survives these perils, accommodating the experimental values of alpha_s and sin^2 theta_W and other constraints, while yielding a p to e/mu+ pi0 lifetime beyond the present experimental limit but potentially accessible to a further round of experiments. We exemplify our analysis using a set of benchmark supersymmetric scenarios proposed recently in a constrained MSSM framework.
One of the key pieces of circumstantial evidence in favour of grand unification has long been the consistency of the gauge couplings measured at low energies with a common value at some very high energy scale, once renormalization effects are taken into account. This consistency is significantly improved when light supersymmetric particles are included in the renormalization-group running, in which case the agreement improves to the per-mille level [1].
However, this circumstantial evidence is not universally accepted as convincing. For example, it has recently been suggested that the logarithmic unification of the gauge couplings is as fortuitous as the apparent similarity in the sizes of the sun and moon [2]. Alternatively, it has been argued that the unification scale could be as low as 1 TeV, either as a result of power-law running of the effective gauge couplings in theories with more than four dimensions [3], or in theories with many copies of the SU(3) × SU(2) × U(1) gauge group in four dimensions [4].
For some time now, detailed calculations have served to emphasize [5] how much fine tuning is needed in models with power-law running to reproduce the effortless success of supersymmetric grand unification with logarithmic running of the gauge couplings. Moreover, data from particle physics and cosmology provide independent hints for low-energy supersymmetry. Precision electroweak data favour quite strongly a low-mass Higgs boson [6], as required in the minimal supersymmetric extension of the Standard Model (MSSM) [7], and the lightest supersymmetric particle is a perfect candidate [8] for the cold dark matter thought by astrophysicists to infest the Universe. Many studies have shown that these and other low-energy data, such as those on $b \to s\gamma$ decay [9] and $g_\mu - 2$ [10], are completely consistent with low-energy supersymmetry, and a number of benchmark supersymmetric scenarios have been proposed [11].
Issues arise, however, when one considers specific supersymmetric grand unified theories. One is the exact value of $\sin^2\theta_W$, which acquires important corrections from threshold effects at the electroweak scale, associated with the spectrum of MSSM particles [12,13], and at the grand unification scale, associated with the spectrum of GUT supermultiplets [12,14]. Precision measurements indicate a small deviation of $\sin^2\theta_W$ even from the value predicted in a minimal supersymmetric SU(5) GUT, assuming the range of $\alpha_s(M_Z)$ now indicated by experiment [15].
The second issue is the lifetime of the proton. Minimal supersymmetric SU(5) avoids the catastrophically rapid $p \to e^+\pi^0$ decay that scuppered non-supersymmetric SU(5). However, supersymmetric SU(5) predicts $p \to \bar\nu K^+$ decay through $d = 5$ operators at a rate that may be too fast [16] to satisfy the presently available lower limit on the lifetime for this decay [17,18]. The latter requires the SU(5) colour-triplet Higgs particles to weigh $> 7.6 \times 10^{16}$ GeV, whereas conventional SU(5) unification for $\alpha_s(M_Z) = 0.1185 \pm 0.002$, $\sin^2\theta_W = 0.23117 \pm 0.00016$ and $\alpha_{em}(M_Z) = 1/(127.943 \pm 0.027)$ [18] would impose an upper limit of $3.6 \times 10^{15}$ GeV at the 90% confidence level [16]. This problem becomes particularly acute if the sparticle spectrum is relatively light, as would be indicated if the present experimental and theoretical central values of $g_\mu - 2$ [10] remain unchanged as the errors are reduced.
The simplest way to avoid these potential pitfalls is to flip SU(5) [19,20]. As is well known, flipped SU(5) offers the possibility of decoupling somewhat the scales at which the Standard Model SU(3), SU(2) and U(1) factors are unified. This would allow the U(1) gauge coupling to become smaller than in minimal supersymmetric SU(5), for the same value of $\alpha_s(M_Z)$ [13]. Moreover, in addition to having a longer $p \to e/\mu^+\pi^0$ lifetime than non-supersymmetric SU(5), flipped SU(5) also suppresses the $d = 5$ operators that are dangerous in minimal supersymmetric SU(5), by virtue of its economical missing-partner mechanism [19].
In this paper, we re-analyze the issues of $\sin^2\theta_W$ and proton decay in flipped SU(5) [13], in view of the most recent precise measurements of $\alpha_s(M_Z)$ and $\sin^2\theta_W$, and the latest limits on supersymmetric particles. We study these issues in the MSSM, constraining the soft supersymmetry-breaking gaugino masses $m_{1/2}$ and scalar masses $m_0$ to be universal at the GUT scale (CMSSM), making both a general analysis in the $(m_{1/2}, m_0)$ plane and also more detailed specific analyses of benchmark CMSSM parameter choices that respect all the available experimental constraints [11]. We find that the $p \to e/\mu^+\pi^0$ decay lifetime exceeds the present experimental lower limit [17], with a significant likelihood that it may be accessible to the next round of experiments [21]. We recall the ambiguities and characteristic ratios of proton decay modes in flipped SU(5).
We first recall the lowest-order expression for $\alpha_s(M_Z)$ in conventional SU(5) GUTs, namely
$$\alpha_s(M_Z) = \frac{7\,\alpha_{em}(M_Z)}{15\,\sin^2\theta_W(M_Z) - 3}\,. \qquad (1)$$
The present central experimental value of $\alpha_s(M_Z) = 0.118$ is obtained if one takes $\sin^2\theta_W = 0.231$ and $\alpha^{-1} = 128$, indicating that supersymmetric grand unification is in the right ball-park. However, at the next order, one should include two-loop corrections $\delta_{2loop}$ as well as electroweak and GUT threshold corrections, which we denote by $\delta_{light}$ and $\delta_{heavy}$. Their effects can be included by making the following substitution in (1) [12]:
$$\sin^2\theta_W \;\to\; \sin^2\theta_W - \left(\delta_{2loop} + \delta_{light} + \delta_{heavy}\right), \qquad (2)$$
where $\delta_{2loop} \approx 0.0030$, whereas $\delta_{light}$ and $\delta_{heavy}$ can have either sign. If one neglects $\delta_{light}$ and $\delta_{heavy}$, the conventional SU(5) prediction increases to $\alpha_s(M_Z) \approx 0.130$ [15]. A value of $\alpha_s(M_Z)$ within one standard deviation of the present central experimental value requires $\delta_{light}$ and/or $\delta_{heavy}$ to be non-negligible, so that the combination $(\delta_{2loop} + \delta_{light} + \delta_{heavy})$ is suppressed. However, in large regions of parameter space $\delta_{light} > 0$, which does not help. Moreover, in conventional SU(5), as was pointed out in [12,15], a compensatory value of $\delta_{heavy}$ is difficult to reconcile with proton-decay constraints. This problem is exacerbated by the most recent lower limit on $\tau(p \to \bar\nu K^+)$ [17].
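As a quick sanity check of the lowest-order relation written in (1) (as reconstructed here), the short script below reproduces the two numbers quoted in the text; the inputs are the rounded central values given above, and the function name is our own.

```python
# Quick numerical check of the lowest-order relation alpha_s = 7*alpha / (15*sin2w - 3),
# as reconstructed above; inputs are the rounded central values quoted in the text.
def alpha_s_lowest_order(sin2w: float, alpha_inv: float) -> float:
    """Lowest-order supersymmetric-SU(5) prediction for alpha_s(M_Z)."""
    return 7.0 / (alpha_inv * (15.0 * sin2w - 3.0))

print(alpha_s_lowest_order(0.231, 128.0))           # ~0.118, the quoted central value
print(alpha_s_lowest_order(0.231 - 0.0030, 128.0))  # ~0.130 after the two-loop shift
```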
As has been advertised previously [13], an alternative way to lower $\alpha_s(M_Z)$ is to flip SU(5). In a flipped SU(5) model, there is a first unification scale $M_{32}$ at which the SU(3) and SU(2) gauge couplings become equal, which is given to lowest order by [24]
$$\ln\frac{M_{32}}{M_Z} = \frac{2\pi}{b_2 - b_3}\left(\frac{1}{\alpha_2(M_Z)} - \frac{1}{\alpha_3(M_Z)}\right), \qquad (3)$$
where $\alpha_2 = \alpha/\sin^2\theta_W$, $\alpha_3 = \alpha_s(M_Z)$, and the one-loop beta-function coefficients are $b_2 = +1$, $b_3 = -3$. The hypercharge gauge coupling $\alpha_Y = \frac{5}{3}(\alpha/\cos^2\theta_W)$ has, in general, a lower value $\alpha'_1$ at the scale $M_{32}$:
$$\frac{1}{\alpha'_1(M_{32})} = \frac{1}{\alpha_Y(M_Z)} - \frac{b_Y}{2\pi}\ln\frac{M_{32}}{M_Z}, \qquad (4)$$
where $b_Y = 33/5$. Above the scale $M_{32}$, the gauge group is the full SU(5) $\times$ U(1), with the U(1) gauge coupling $\alpha_1$ related to $\alpha'_1$ and the SU(5) gauge coupling $\alpha_5$ by a matching condition (5). The SU(5) and U(1) gauge couplings then become equal at some higher scale $M_{51}$. The maximum possible value of $M_{32}$, namely $M_{32}^{\max}$, is obtained by substituting $\alpha'_1 = \alpha_5(M_{32})$ into (5), and coincides with the unification scale in conventional SU(5) [18]. In general, one has $M_{32} \le M_{32}^{\max}$ (6), and the flipped SU(5) prediction (7) for $\alpha_s(M_Z)$ is in general smaller than in minimal SU(5), for the same value of $\sin^2\theta_W$. The next-to-leading-order corrections to (7) are also obtained by the substitution in (2). Numerically, an increase of $\sim 10\%$ in the denominator in (1), which would compensate for the decrease due to $\delta_{2loop}$, could be achieved simply by setting $M_{32} \approx \frac{1}{3}M_{32}^{\max}$ in (7).
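The order of magnitude of $M_{32}$ implied by this one-loop running can be checked with a few lines of code; the snippet below is a sketch using the relation written in (3) above (as reconstructed here) with the central values quoted in the text, ignoring all threshold and two-loop effects.

```python
# Sketch of the one-loop estimate of M_32 from the running of alpha_2 and alpha_3,
# using ln(M_32/M_Z) = 2*pi*(1/alpha_2 - 1/alpha_3)/(b_2 - b_3) with b_2 = +1, b_3 = -3.
# Inputs are the rounded central values quoted in the text; thresholds and two-loop
# corrections are ignored, so only the order of magnitude is meaningful.
import math

M_Z = 91.19            # GeV
alpha_em_inv = 128.0
sin2w = 0.231
alpha_s = 0.1185

inv_alpha_2 = alpha_em_inv * sin2w     # 1/alpha_2 = sin^2(theta_W)/alpha
inv_alpha_3 = 1.0 / alpha_s
b2, b3 = 1.0, -3.0

log_ratio = 2.0 * math.pi * (inv_alpha_2 - inv_alpha_3) / (b2 - b3)
M_32 = M_Z * math.exp(log_ratio)
print(f"M_32 ~ {M_32:.2e} GeV")        # of order 10^16 GeV
```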
In order to understand the implications for $\tau(p \to e/\mu^+\pi^0)$ decay, we first calculate $M_{32}$, using (7) with $\sin^2\theta_W$ replaced by $\sin^2\theta_W - \delta_{2loop}$, leaving for later the discussion of the possible effects of $\delta_{light,heavy}$. We now explore the possible consequences of $\delta_{light}$ for $M_{32}$, following [12,13]. We approximate the $\delta_{light}$ correction by the expression given in [12,13], in which $L(x) = \ln(x/M_Z)$. As already mentioned, we assume that the soft supersymmetry-breaking scalar masses $m_0$, gaugino masses $m_{1/2}$ and trilinear coefficients $A_0$ are universal at the GUT scale (CMSSM). We used ISASUGRA [25] to calculate the sparticle spectra in terms of the input parameters listed in (8). The unknown parameters in (8) were constrained by requiring that electroweak symmetry breaking be triggered by radiative corrections, so that the correct overall electroweak scale and the ratio $\tan\beta$ of Higgs v.e.v.'s fix $|\mu|$ and $m_A$ in terms of $m_{1/2}$ and $m_0$. Before making a more general survey, we recall that a number of benchmark CMSSM scenarios have been proposed [11], which include these constraints and are consistent with all the experimental limits on sparticle masses, the LEP lower limit on $m_h$, the world-average value of $b \to s\gamma$ decay, the preferred range $0.1 < \Omega_\chi h^2 < 0.3$ of the supersymmetric relic density, and $g_\mu - 2$ within 2$\sigma$ of the present experimental value. These points all have $A_0 = 0$, but otherwise span the possible ranges of $m_{1/2}$, $m_0$, $\tan\beta$ and feature both signs of $\mu$. Fig. 1 also shows the change in $M_{32}$ induced by the values of $\delta_{light}$ in these benchmark models, assuming a fixed value $\alpha_s(M_Z) = 0.1185$. (Heavy singlet neutrinos were not used in the renormalization-group equations.) In general, these benchmark models increase $M_{32}$ for any fixed value of $\alpha_s(M_Z)$ and $\sin^2\theta_W$. As $\alpha_s(M_Z)$ varies, the predicted value of $M_{32}$ in each model varies in the same way, as indicated by the sloping lines. We recall that the estimated error in $\alpha_s(M_Z)$ is about 0.002, corresponding to an uncertainty in $M_{32}$ of the order of 20%, and hence, since the lifetime scales as the fourth power of $M_{32}$ for dimension-six operators, a corresponding uncertainty in the proton lifetime of a factor of about two. The error associated with the uncertainty in $\sin^2\theta_W$ is somewhat smaller.
We now turn to the calculation of $\tau(p \to e/\mu^+\pi^0)$. We recall first that the form of the effective dimension-6 operator (9) in flipped SU(5) is different [24,26] from that in conventional SU(5) [27,28]; in (9), $\theta_c$ is the Cabibbo angle, and there also appear two unknown but irrelevant CP-violating phases $\eta_{11,21}$ and lepton flavour eigenstates $\nu_L$ and $\ell_L$ that are related to mass eigenstates by unknown but relevant mixing matrices (10). Despite our ignorance of the mixing matrices (10), some characteristic flipped SU(5) predictions (11) can be made [24]. In the light of recent experimental evidence for near-maximal neutrino mixing, it is reasonable to think that (at least some of) the $e/\mu$ entries in $U_\ell$ are $O(1)$. In what follows, we assume that the lepton mixing factors $|U_{\ell\,11,12}|^2$ are indeed $O(1)$, and do not lead to large numerical suppressions of both the $p \to e^+\pi^0$ and $p \to \mu^+\pi^0$ decay rates. Note that there is no corresponding suppression of the $p \to \bar\nu\pi^+$ and $n \to \bar\nu\pi^0$ decay rates, since all the neutrino flavours are summed over. However, without further information, we are unable to predict the ratio of the $p \to e^+X$ and $p \to \mu^+X$ decay rates. Hereafter, wherever we refer to $p \to e^+\pi^0$ decay, this mixing-angle ambiguity should be understood.
The $p \to e^+\pi^0$ decay amplitude is proportional to the overall normalization of the proton wave function at the origin. The relevant matrix elements are $\alpha$ and $\beta$, defined in (12). The reduced matrix elements $\alpha, \beta$ have recently been re-evaluated in a lattice approach [29], yielding values that are very similar to each other and somewhat larger than had often been assumed previously, therefore exacerbating the proton-stability problem for conventional supersymmetric SU(5). Here, we use the new central value $\alpha = \beta = 0.015$ GeV$^3$ for reference. The error quoted on this determination is below 10%, corresponding to an uncertainty of less than 20% in $\tau(p \to e^+\pi^0)$, which would be negligible compared with other uncertainties in our calculation. Thus, we obtain an estimate for the lifetime, based on [26,16] and references therein. We see in Fig. 2 that the 'bulk' regions of the parameter space preferred by astrophysics and cosmology, which occur at relatively small values of $(m_{1/2}, m_0)$, generally correspond to $\tau(p \to e^+\pi^0) \sim (1-2) \times 10^{35}$ y. However, these 'bulk' regions are generally disfavoured by the experimental lower limit on $m_h$ and/or by $b \to s\gamma$ decay. Larger values of $\tau(p \to e^+\pi^0)$ are found in the 'tail' regions of the cosmological parameter space, which occur at large $m_{1/2}$ where $\chi$-$\tilde\ell$ coannihilation may be important, and at larger $m_{1/2}$ and $m_0$ where resonant direct-channel annihilation via the heavier Higgs bosons $A, H$ may be important. (The horizontal spacing between the points sampled was comparable to the thickness of these lines; for fuller discussions of the implementations of these constraints with and without ISASUGRA, see [11,30].) We turn finally to the possible implications of the GUT threshold effect $\delta_{heavy}$ [12,14].
We turn finally to the possible implications of the GUT threshold effect δ heavy [12,14]. A 5 The horizontal spacing between points sampled was comparable to the thickness of these lines. 6 For fuller discussions of the implementations of these constraints with and without ISASUGRA, see [11,30]. general expression for this in flipped SU(5) is given in [12]: where M H 3 = λ 4 |V | and MH 3 = λ 5 |V | are the masses of the heavy triplet Higgs supermultiplets, the X, Y gauge bosons and gauginos have common masses M V = g 5 |V | where V is the common v.e.v. of the 10 and 10 Higgs supermultiplets, λ 4,5 are (largely unconstrained) Yukawa couplings, g 5 is the SU(5) gauge coupling, and r ≡ max{g 5 , λ 4 , λ 5 }. Thanks to the economical missing-partner mechanism of flipped SU (5), the H 3 andH 3 do not mix, and hence do not contribute significantly to proton decay. Thus there is no strong constraint on M H 3 ,H 3 from proton decay in flipped SU (5), and it is possible that M H 3 ,H 3 < M V (i.e., r = g 5 ). In this case, we can see from (15) that δ heavy < 0 naturally. For instance, as pointed out in [13], if λ 4 , λ 5 ∼ 1 8 g 5 , then δ heavy ≈ −0.0030, which completely compensates the δ 2loop contribution. We also recall that, in general, including δ heavy leads to a re-scaling of the M 32 /M max 32 : We display in Fig. 3 the possible numerical effects of δ heavy on τ (p → e/µ + π 0 ) in the various benchmark scenarios, assuming the plausible ranges −0.0016 < δ heavy < 0.0005 [13]. The boundary between the different shadings for each strip corresponds to the case where δ heavy = 0. The left (red) parts of the strips show how much τ (p → e + π 0 ) could be reduced by a judicious choice of δ heavy , and the right (blue) parts of the strips show how much τ (p → e + π 0 ) could be increased. The inner bars correspond to the uncertainty in sin 2 θ W . On the optimistic side, we see that some models could yield τ (p → e + π 0 ) < 10 35 y, and all models might have τ (p → e + π 0 ) < 5 × 10 35 y. However, on the pessimistic side, in no model can we exclude the possibility that τ (p → e + π 0 ) > 10 36 y.
We recall that a new generation of massive water-Čerenkov detectors weighing up to $10^6$ tonnes is being proposed [21], which may be sensitive to $\tau(p \to e^+\pi^0) < 10^{35}$ y. According to our calculations, such an experiment has a chance of detecting proton decay in flipped SU(5), though nothing can of course be guaranteed. We recall that there is a mixing-angle ambiguity (11) in the final-state charged lepton, so any such next-generation detector should be equipped to detect $e^+$ and/or $\mu^+$ equally well. We also recall [24,26] that flipped SU(5) makes predictions (11) for ratios of decay rates involving strange particles, neutrinos and charged leptons that differ characteristically from those of conventional SU(5). Comparing the rates for $e^+$, $\mu^+$ and neutrino modes would give novel insights into GUTs as well as mixing patterns.
Figure 3: For each of the CMSSM benchmark points, this plot shows, by the lighter outer bars, the range of $\tau(p \to e/\mu^+\pi^0)$ attained by varying $\delta_{heavy}$ over the range $-0.0016$ to $+0.0005$ [13]. The central boundary of the narrow inner bars (red, blue) corresponds to the effect of $\delta_{light}$ alone, with $\delta_{heavy} = 0$, while the narrow bars themselves represent the uncertainty in $\sin^2\theta_W$. We see that heavy threshold effects could make $\tau(p \to e/\mu^+\pi^0)$ slightly shorter or considerably longer.
We conclude that flipped SU(5) evades two of the pitfalls of conventional supersymmetric SU(5). As we have shown in this paper, it offers the possibility of lowering the prediction for $\alpha_s(M_Z)$ for any given value of $\sin^2\theta_W$ and choice of sparticle spectrum. As for proton decay, we first recall that flipped SU(5) suppresses $p \to \bar\nu K^+$ decay naturally via its economical missing-partner mechanism. As in conventional supersymmetric SU(5), the lifetime for $p \to e/\mu^+\pi^0$ decay generally exceeds the present experimental lower limit. However, as we have shown in this paper, the flipped SU(5) mechanism for reducing $\alpha_s(M_Z)$ reduces the scale $M_{32}$ at which colour SU(3) and electroweak SU(2) are unified, bringing $\tau(p \to e/\mu^+\pi^0)$ tantalizingly close to the prospective sensitivity of the next round of experiments. Proton decay has historically been an embarrassment for minimal SU(5) GUTs, first in their non-supersymmetric guise and more recently in their minimal supersymmetric version. The answer may be to flip SU(5) out of trouble.
COMPARATIVE ANALYSIS OF NORMALIZATION TECHNIQUES IN THE CONTEXT OF MCDM PROBLEMS
Normalization is an essential step in data analysis and for MCDM methods. This study aims to outline the positive and negative features of the normalization techniques that can be used in MCDM problems. In order to compare the different normalization techniques, fourteen sets representing different scenarios of decision problems were used. According to the results, if the decision-maker chooses to take the alternative with the highest value in the criteria and avoid the one with the lowest value, or vice versa, optimization-based normalization techniques should be preferred, whereas the reference-based normalization techniques are considered appropriate for situations where there are ideal values determined by the decision-maker for each criterion. However, if the decision-maker believes that the values in the criteria do not represent a monotonically increasing or decreasing benefit/cost, then non-linear normalization techniques should be used. Also, in the event of a change in the conditions mentioned above, the decision-maker may opt for mixed normalization techniques. However, some data structures, such as the presence of zero and negative values in the decision matrix, can prevent the use of some normalization techniques. The choice of the normalization technique may also be affected by the problem of rank reversal, the range of normalized values, the need to obtain the same optimization aspect for all criteria, and the validity of results.
Introduction
In quantitative research, researchers often try to use methods appropriate for the data structures. To do this, the data is first collected and then compiled. In the compilation process, it is always important to create the data structure required by the relevant method using scaling techniques.
In the scaling process, the unit of measurement, the size, and the level of the criteria are changed alongside one or more of the transformations, re-measurement, normalization, or weighting operations. The differences in various features of the criteria such as measurement levels, size of the range, importance levels and reflecting the decision maker's preferences effectively are prominent reasons for scaling. The other reason for scaling is the need to meet the assumptions of the method used for the research or the decision problem. In this context, the primary purpose of scaling is to provide the appropriate measurements or data structure for the proper method or analysis. Normalization is one of the critical processes used in scaling data (Jensen, 1984;Roberts, 1984;Lootsma, 1999;Tavşancıl, 2006;Kainulainen et al., 2009;Sarraf et al., 2013;Jahan & Edwards, 2015;Podviezko & Podvezko, 2015;Gardziejczyk & Zabicki, 2017).
There are numerous application areas for normalization including data mining, multivariate statistics and multi-criteria decision making (MCDM) among others. This study will however focus on the effects of the normalization processes on solutions obtained using MCDM methods.
Normalization is used to obtain criteria that have the same weight, are dimensionless, and are suitable for compensatory processes in MCDM problems. Normalization also enables the decision maker to show his preferences regarding the problem to a certain extent. There are many normalization techniques in the literature to achieve this. The choice of the normalization technique depends on the structure of problems and the assumptions of MCDM methods. Although not yet sufficient, studies on the comparison of normalization techniques have been increasing in the past decades. These studies, however, usually include a small number of techniques. Similarly, studies considering the selection of the normalization technique, and the criteria to be used in this selection process are also limited. Another important issue is that every normalization technique cannot be suitable for all decision problems. It is, therefore, necessary to investigate the extent to which the normalization techniques have achieved their purposes of development, their roles in the problem, and the MCDM method, their dimensionlessness, and comparability. In this context, this study will examine the practical comparisons of the normalization techniques, determine the positive and negative features, and outline the selection process of normalization techniques suitable for different data structures. The purpose of the study is thus to provide different perspectives on normalization techniques and a holistic framework for researchers and decision makers.
Normalization
Normalization is a scaling process used to make the criteria comparable by eliminating the optimization orientation (benefit, cost), the unit of measurement, and the variation range. Through normalization, the data is converted to a specific norm or standard. Another term often used interchangeably with normalization is standardization. However, standardization is a normalization process that eliminates unit differences and transforms values to a specific range, such as 0-1, in all criteria. In general, normalization techniques are expected to equalize the effect levels of all criteria (regardless of the weighting process), process the zero and negative values, generate the same normalized value for different units of measure that can be converted into each other (as in the case of g/cm³ and kg/m³), and not cause rank reversal problems while also ensuring symmetry in the cost and benefit optimization orientation. The normalization technique that has these features is considered successful (Pavličić, 2001; Jahan & Edwards, 2015; Podviezko & Podvezko, 2015).
In the analysis of normalization techniques in the MCDM literature, the decision matrix given in equation (1) will be used, namely $X = [x_{ij}]_{m \times n}$, with m alternatives and n criteria. The rows of the decision matrix contain the alternatives while the columns carry the criteria. Each of its cells/elements shows the quality, feature, or performance value of an alternative in the relevant criterion. The elements of the decision matrix in equation (1) are expressed as $x_{ij}$, the performance or result value of alternative i in criterion j. The following section of the study outlines the classifications of normalization techniques.
Classification of Normalization Techniques
Classification of normalization techniques makes it easier to identify the similarities and differences of the techniques, standardize the concepts in the field, and examine the increasing number of techniques. Various approaches can be used in the classification of normalization techniques. However, the most common classifications in the literature are done according to the distance measurements, the linearity of the normalization process or the optimization orientation of the criteria (Milani et al., 2005;Yoon & Kim, 1989;Zeng et al., 2013;Jahan & Edwards, 2015).
Distance measurements are the most commonly used in the normalization process. A distance-based normalization is the ratio of the distance of an alternative from the starting point (the 0 vector) to the combined distance of all alternatives from the starting point in the relevant criterion. In distance-based normalization processes, Eq. (2) is used in the $L_p$ metric (Yoon & Kim, 1989, p. 22):
$$n_{ij} = \frac{x_{ij}}{\left(\sum_{i=1}^{m} x_{ij}^{\,p}\right)^{1/p}} \qquad (2)$$
Eq. (2) is used for the benefit criteria. For the cost criteria, the values are first converted to benefit with the transformation $1/x_{ij}$. In Eq. (2), Manhattan distance normalization is performed for p = 1, Euclidean distance normalization for p = 2, and Tchebycheff distance normalization for p = ∞ (Yoon & Kim, 1989). In the literature, Manhattan distance normalization is called Sum-Based Linear Normalization, while Euclidean distance normalization is known as Vector Normalization.
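The following sketch (our own illustration, not code from the cited sources) implements Eq. (2) for the three special cases just mentioned; the decision matrix is made up and all criteria are treated as benefit criteria.

```python
# Minimal sketch of the distance-based (L_p) normalizations described above:
# p=1 gives Sum-Based Linear (Manhattan), p=2 gives Vector (Euclidean),
# and p=inf gives Tchebycheff (division by the column maximum).
# Benefit criteria assumed; cost criteria would first be transformed, e.g. via 1/x_ij.
import numpy as np

def lp_normalize(X: np.ndarray, p: float) -> np.ndarray:
    """Normalize each column of the decision matrix X by its L_p norm."""
    X = np.asarray(X, dtype=float)
    if np.isinf(p):
        denom = X.max(axis=0)
    else:
        denom = (np.abs(X) ** p).sum(axis=0) ** (1.0 / p)
    return X / denom

X = np.array([[250.0, 16, 12], [200.0, 32, 8], [300.0, 8, 16]])  # illustrative 3x3 matrix
print(lp_normalize(X, 1))        # Manhattan / sum-based linear
print(lp_normalize(X, 2))        # Euclidean / vector normalization
print(lp_normalize(X, np.inf))   # Tchebycheff / max normalization
```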
For normalization processes that are not based on distance, a specific value is used. Often, these values are the maximum and the minimum in the criterion. Similarly, reference values or large fixed numbers can be used.
The linearity of the normalization process is that the utilities or values in the criterion increase or decrease monotonously in a specific direction. In non-monotonic normalization processes, there is no continuous increase or decrease of acceptable performance values within the criterion in a certain direction. For example, in the criterion with a normal distribution, the linear normalization process will not be appropriate if the desired/ideal values are within three or four standard deviations of the mean. Z score normalization, some reference-based normalizations, and nonlinear normalizations are some examples of non-monotonic normalization techniques (Zavadskas & Turskis, 2008;Zeng et al. 2013).
We can divide normalization techniques into two fundamental classes based on whether they consider the optimization orientation of the criteria. However, some normalization techniques provide a mixed/integrated normalization process with the idea that the optimization orientation and reference value are vital for different criteria that can be found at the same time in the decision problem. The following section will examine the normalization techniques which depend on the optimization orientation, those that are independent of the optimization orientation, and those that have a mixed structure.
Normalization Techniques Depending on the Optimization Orientation
Most normalization techniques provide normalization according to the optimization orientation of the criteria. The optimization orientation is divided into two -benefit and cost. The benefit optimization orientation implies that the increase in the performance values of the alternatives evaluated in criterion j is preferred to the decrease. The cost optimization orientation is that the decline in the performance values of the alternatives in criterion j is preferred to the increase. In general, we can say that the highest (maximum) value in benefit-orientation criteria and the lowest (minimum) value in cost-orientation criteria are preferred.
Normalization techniques depending on the optimization orientation are given in Table 1 (Brauers & Zavadskas, 2006; Zavadskas & Turskis, 2008; Fayazbakhsh et al., 2009; Jahan & Edwards, 2015; Gardziejczyk & Zabicki, 2017). These techniques mainly use performance value totals, the maximum value, and the minimum value in a criterion. It is possible to further divide the normalization techniques depending on the optimization orientation into four sub-classes: sum-based, maximum or minimum value-based, range-based, and others. Of the techniques in Table 1, N1, N2, and N3 are sum-based, N4-N9 are maximum-minimum value-based, N10 is range-based, and N11 falls under the other category.
In sum-based normalization techniques, the sum of performance values within the criterion is used. It seems that the normalization techniques in this class may lead to the rank reversals problem due to the changes (adding or removing alternatives) in the alternative set. For example, when the alternative performance with the highest performance value is removed from criterion j, which has a benefit optimization orientation, the maximum value used in N4 will change. Change of the maximum value in criterion j will require the recalculation of normalized values. Also, when the numerator or denominator values change, all normalized values to be generated by the techniques in Table 1 will change in criterion j.
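A small numeric illustration of this sensitivity is given below; the criterion values are invented, and the max-based rule n_ij = x_ij / max_i(x_ij) stands in for the N4-style techniques discussed here.

```python
# Minimal illustration of the sensitivity described above: removing the alternative
# that holds the column maximum changes every normalized value produced by the
# max-based rule n_ij = x_ij / max_i(x_ij) (an N4-style technique). Data are made up.
import numpy as np

def max_normalize(col: np.ndarray) -> np.ndarray:
    return col / col.max()

criterion = np.array([40.0, 55.0, 80.0, 65.0])   # benefit criterion; A3 holds the maximum
print(max_normalize(criterion))                  # [0.5    0.6875 1.     0.8125]

reduced = np.delete(criterion, 2)                # drop A3, the best alternative
print(max_normalize(reduced))                    # [0.615 0.846 1.   ] -> all values shift
```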
Another critical issue is the presence of the criterion that has negative values in the decision problem. The presence of negative values in a criterion can prevent effective solutions in N1 and N2 depending on the nature of the problem. The negative and positive values in N1 can be said to be mutually offsetting. Also, the negative performance values (xij) in N2 cause negative normalized values. And these situations can prevent the realization of effective solutions.
The range of normalized values obtained by the normalization techniques in Table 1 are different from each other. The sum of all normalized values is equal to 1 in N1 for benefit-orientation criteria and N3 for all type criteria. Also, N1, N2, N3 are expected to generate nij in the range of 0-1. From these techniques, N3 was found to be useful if the values in a criterion are quite different from each other (Jahan & Edwards, 2015, p. 338).
Maximum/minimum value-based normalization techniques are said to be less successful than the sum-based normalization techniques in handling the scale effect. Also, some of the techniques in this class cannot be effectively applied to cost criteria. In some cases, the normalized values may be higher than 1 in the techniques of this category. This situation is generally undesirable in some MCDM methods (Jahan & Edwards, 2015, p. 338). Among the techniques in this class, it is aimed to provide normalization in the range of 0-1, with the highest value being 1 in N4 and the lowest value being 1 in N5. After normalization using N6, the best performance value depending on the optimization orientation of criteria is expected to equal 1 while all normalized values are expected to fall within the range of 0-1. In N7, the normalized values are expected to be in the range of 0-1 while the best value is equal to 1. Since N8 produces large normalized values, it does not seem appropriate for most MCDM methods. The range of normalized values obtained in N9 can be expected to be smaller than the normalized values created by most techniques.
N10 is one of the most used techniques in the normalization steps of MCDM methods. Providing range-based normalization, N10 is successful in handling the scale effect (Jahan & Edwards 2015, p. 338). Normalized values in N10 are expected to be in the range of 0 and 1. Z score normalization is frequently used in the application of multivariate statistical methods. Fayazbakhsh et al. (2009) used the Z score normalization as dependent of the optimization orientation. Optimization orientation dependent Z score normalization can generate negative values, but normalized values are usually around 0. This situation restricts N11's usage in MCDM methods (Jahan & Edwards, 2015).
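The sketch below (an illustration of ours, with made-up data) shows a range-based rule in the spirit of N10 and a z-score rule in the spirit of N11, written for a benefit criterion; note how the z-score values cluster around 0 and include negatives, which is the restriction mentioned above.

```python
# Sketch of range-based (N10-style min-max) and z-score (N11-style) normalization
# for a benefit criterion; example data are made up.
import numpy as np

def min_max_normalize(col: np.ndarray) -> np.ndarray:
    """Range-based normalization: the best value maps to 1, the worst to 0."""
    return (col - col.min()) / (col.max() - col.min())

def z_score_normalize(col: np.ndarray) -> np.ndarray:
    """Z-score normalization: values centred on 0, expressed in standard deviations."""
    return (col - col.mean()) / col.std(ddof=0)

criterion = np.array([40.0, 55.0, 80.0, 65.0])
print(min_max_normalize(criterion))   # values in [0, 1]
print(z_score_normalize(criterion))   # values scattered around 0, including negatives
```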
Normalization techniques were developed to be used for specific purposes or expectations. We have highlighted these purposes and expectations for the optimization orientation dependent normalization techniques. On the other hand, we will test whether the intended or expected normalized values of these normalization techniques will always be obtained in the application section.
Normalization Techniques Independent of the Optimization Orientation
For normalization techniques independent of the optimization orientation, a specific reference value/range or a constant is used instead of the optimization orientation of the criteria. For the techniques in this category, one or more of the maximum value, minimum value, mean, standard deviation, reference (ideal/target) value/range, adjustable constant number, and data distributions are used in the normalization process (Wu, 2002; Shih et al., 2007; Jahan et al., 2011; Jahan et al., 2012; Alpar, 2013; Saranya & Manikandan, 2013; Jahan & Edwards, 2015; Gardziejczyk & Zabicki, 2017; Aytekin, 2020).
Such values as the maximum and the minimum are also used by the optimization orientation-dependent techniques. The normalization techniques independent of the optimization orientation, however, have no formula change based on the optimization orientation and use the same equation for all criteria types. Normalization techniques independent of the optimization orientation are given in Table 2. In the normalization techniques in Table 2, R_j denotes the reference value determined for criterion j. The reference value is the base, source, or guide point that reflects the decision-maker's preferences in the decision problem. The reference value can be identified subjectively by the decision-maker, or it can be determined with the help of scientific tools or techniques (Aytekin, 2020). The arithmetic mean is generally used as the reference value in N12. The normalized values obtained in N12 are mainly in the range of 0 to ±3. Jahan et al. (2011) used N13 in their extension of VIKOR. N14 and N15 allow the decision-maker to adjust the performance values within the criteria according to the average or the standard deviation. The ρ parameter in N16 is determined by the digit value of the largest absolute number in the decision matrix; thus, N16 can generate normalized values between -1 and +1. N17, N18, and N19 are techniques that provide normalization based on a reference. Jahan et al. (2012), in their study on material selection, proposed N17, a reference-value-based extension of Weitendorf's Linear Normalization. Wu (2002) suggested using N18 in Gray Relational Analysis if the reference value is determined between the maximum and minimum values. Aytekin (2020) proposed N19 by integrating reference-based and decimal normalization processes. In this study, N19 is revised to equate the best-normalized value to 1 and the worst-normalized value to 0 according to the reference value. N20 and N21 provide normalization based on range; N20 aims to obtain normalized values between -1 and +1, and N21 values in the 0-1 range.
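A hedged sketch of a reference-based rule in the spirit of the techniques above is given below; it scores performance by closeness to a decision-maker's reference value R_j, so that the value nearest the reference approaches 1 and the farthest maps to 0. This is one plausible rule of our own, not necessarily the exact formulas of N17-N19 listed in Table 2.

```python
# Illustrative reference-based normalization: closeness to a reference value R_j is rewarded.
# This is a plausible sketch, not necessarily the exact form of N17-N19 in Table 2.
import numpy as np

def reference_normalize(col: np.ndarray, reference: float) -> np.ndarray:
    distance = np.abs(col - reference)
    return 1.0 - distance / distance.max()

criterion = np.array([40.0, 55.0, 80.0, 65.0])
print(reference_normalize(criterion, reference=58.0))  # A2 is closest to the reference, A3 farthest
```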
Integrated-Mixed Normalization Techniques
The different structures of MCDM problems always force researchers to seek new solutions. In MCDM problems, optimization orientation-based or reference-based approaches are generally adopted for solutions. However, it may be possible for decision-makers to solve the problem using reference values for some criteria and optimization orientations for others. Similarly, some of the criteria in the decision matrix may be monotonically increasing or decreasing, while others may not. In these cases, for instance, it is possible to use the maximum or the minimum value (according to the optimization orientation) as a reference value, or to create a solution by applying transformations to existing techniques. For such cases, there are normalization techniques developed for these integrated/mixed situations (Zhou et al., 2006; Zeng et al., 2013; Jahan & Edwards, 2015). The integrated-mixed normalization techniques are given in Table 3. Zhou et al. (2006) proposed a technique that determines the ratio of $x_{ij}$ to the reference value or to the maximum/minimum value. Zeng et al. (2013) developed N23 for an extension of VIKOR to provide practical solutions to problems in the health sector. Zeng et al. (2013) stated that criteria with a normal distribution are common in health-sector problems and that an absolute deviation from the average is acceptable in these criteria. They also stated that normalization could be achieved with N23 even when the criteria include monotonically increasing or decreasing ones.
Apart from the normalization techniques examined in the study, there are still more normalization techniques developed for different purposes like membership functions (rough numbers, triangle, trapezoidal, etc.) which are used in the normalization processes within the framework of fuzzy logic or rough sets (Sharma et al., 2018;Vafaei et al., 2018a;Roy et al., 2019). A comparative review of the normalization techniques given in Table 1, Table 2, and Table 3 will be carried out in the following section of the study.
Comparisons of Normalization Techniques in the Literature
Normalization techniques have an essential role in the solution of MCDM problems. Normalization processes used in the vast majority of MCDM methods enable the criteria of various structures to be dimensionless so that they can be directly compared. However, every normalization technique cannot be suitable for all decision problems. For example, some techniques do not provide eligible normalization for criteria with negative values or 0. Other than this, the expected range of normalized values and the rank reversal problem likely to be encountered are among the other determining factors in the selection of the normalization technique. On the other hand, it is not possible to evaluate MCDM methods independently of the normalization techniques they contain. Changing the normalization process included in an MCDM method results in the creation of a new extension/derivative of the method.
There are many normalization techniques and MCDM methods in the literature. The structure of problems and the assumptions of MCDM methods are prominent factors for choosing the normalization technique. In this context, although not yet sufficient, studies on the comparison of normalization techniques have been increasing in the recent past. These studies, however, usually include a small number of techniques and exclude most. Similarly, those that consider the selection of the normalization technique, and the criteria to be used in this selection process are limited.
In one of the prominent studies in comparing normalization techniques, Çelen (2014) examined the deposit banks using the FAHP-TOPSIS integrated model. The study compared N1, N2, N4, and N10. The consistency and validity of the normalization results were evaluated under four conditions. According to the first of these conditions, the distribution of the normalized values should be similar when compared to other techniques. Under this condition, inferences could be made by looking at the mean, standard deviation, smallest-largest values, and Kolmogorov-Smirnov normality test. In the second condition, the first three and the last three of the values to be obtained by normalization techniques should be the same. The third condition states that the correlations of these rankings should be high while the fourth condition emphasizes the need for the normalization techniques to produce similar scores, and this condition was examined with correlation coefficient values. The Pearson correlation coefficient was used for measuring similarities (Çelen, 2014, p. 201-203).
The use of the Kolmogorov-Smirnov normality test and Pearson correlation coefficient in the study carried out by Çelen (2014) can be seen to be unsuitable for MCDM problems. This is because MCDM problems mostly contain data structures that are not normally distributed, and the normal distribution is not generally sought in MCDM problems. Normalized values provided by normalization techniques may also have different structures. While some normalization techniques limit the data to values in a particular range, some ensure that obtained values are close to zero, while others provide only positive one-way data. It is thus clear that the structures of the decision problem and the MCDM method have a direct influence on the determination of the normalization process. The primary purpose of the normalization process performed in MCDM problems is not to obtain data with normal distribution, but to get comparable data with equal weight. However, it should be accepted that Çelen (2014) gives a different perspective on the comparison of normalization techniques. Chakraborty and Yeh (2007) suggested RCI (Ranking Consistency Index) to compare normalization techniques. In RCI, a normalization technique is evaluated based on ranking consistency with other normalization techniques. For this, it is necessary to simulate the decision matrix of at least 4x4 and at most 20x20. Finally, the results of the different normalization techniques are analyzed.
In another study conducted for the selection of normalization techniques, Vafaei et al. (2018a) discussed the appropriate normalization technique for TOPSIS using the reviews previously held in the literature. Accordingly, RCI, mean and distribution measures of normalized values, Kolmogorov-Smirnov normality test, ranking consistency of normalization techniques in terms of the first three and last three rows, Pearson and Spearman correlations were used in the comparison of normalization techniques and order of suitability for TOPSIS. The authors concluded that Vector Normalization is the best technique for TOPSIS in similarity with the study of Chakraborty and Yeh (2009). They also stated that comparison based on the normal distribution is questionable (Vafaei et al., 2018a).
Some of the processes mentioned in the comparison of normalization techniques do not always seem to be possible due to the structural features of MCDM problems. It is often not possible for the criteria to have a normal distribution. Furthermore, when the number of normalization techniques to be compared is high, and the size of the decision matrix exceeds 20x20, the use of RCI may not be effective. Another critical problem is that using the Pearson correlation coefficient to examine the correlation of the rankings does not give accurate results. Consequently, in the application part of this study, the Spearman rank correlation coefficient will be used to analyze the rank correlations of the normalization techniques. The ability to give effective normalization in different criteria structures, differences between normalized values, and their use in MCDM methods will also be examined.
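In that spirit, the comparison of two orderings produced by different normalization techniques can be carried out as in the sketch below; the rankings shown are invented for illustration.

```python
# Sketch of the ranking-comparison step described above: Spearman's rank correlation
# between the alternative orderings produced by two different normalization techniques
# (the rankings used here are made up for illustration).
from scipy.stats import spearmanr

ranking_vector_norm = [1, 2, 3, 4, 5, 6]    # ranking of A1..A6 under vector normalization
ranking_min_max     = [1, 3, 2, 4, 6, 5]    # ranking of A1..A6 under min-max normalization

rho, p_value = spearmanr(ranking_vector_norm, ranking_min_max)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```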
Normalization is one of the essential process steps in MCDM methods, and it directly affects the solution to the problem. The normalization process used in the MCDM methods should not be considered independent of the method itself. For example, TOPSIS uses Euclidean distance in solving the decision problem. In this context, vector normalization, which is the second moment according to the starting vector (0), is used in the normalization process in TOPSIS. Like TOPSIS, many other MCDM methods also have normalization procedures for the intended solution. Table 4 shows the MCDM methods and the normalization techniques used in the original forms of these methods (the first form, not extension form) and the studies comparing them depending on different normalization techniques.
As seen in Table 4, the most recommended normalization techniques are Sum-Based Linear Normalization and Vector Normalization. The normalization technique should be chosen by considering the nature of the decision problems. In the following section of the study, an applied comparison of the normalization techniques will be given.
Applied Comparative Analysis of Normalization Techniques with Different Scenarios
In this part of the study, a comparison of normalization techniques will be highlighted. The twenty-three normalization techniques in the second section will be compared over a randomly and purposefully generated data set for a valid comparison. These sets were created by considering different scenarios to reflect the general structure of MCDM problems.
The literature and scenarios were taken as a basis in determining the number of criteria and alternatives for decision matrices. To this end, studies that conducted the literature review of MCDM methods were examined. The conclusion is that the most observed numbers were four for criteria, and five for alternatives (Durucasu et al., 2017). The numbers in this study were considered important as they provide guidance. For this study, however, it was decided that it would be appropriate to have six criteria and six alternatives to reflect the structural differences of scenarios and decision problems to be used in comparing normalization techniques.
After determining the number of criteria and the number of alternatives, the different scenarios to be created were decided. Each scenario is named with a different set. To ensure that the criteria in Set 1 differ from each other, the following were established: K1, whose variation range is quite wide compared to the other criteria; K2, with a variation range of 0-1; K3, containing 0 and positive values; K4, containing negative values and 0; K5, containing negative, 0, and positive values together; and K6, whose values are all negative. The optimization directions of all criteria were set as benefit (maximum) in Set 1, and as cost (minimum) in Set 2. In Set 3, a scenario was created in which all criteria have the benefit orientation and the value ranges do not intersect; accordingly, the ranges are 1-10 for K1, 11-100 for K2, 101-1,000 for K3, 1,001-10,000 for K4, 10,001-100,000 for K5, and 100,001-1,000,000 for K6. Set 4 uses the same decision matrix as Set 3 but with all optimization orientations set as cost. In Set 5 to Set 11, the aim is to investigate the effects of adding and removing alternatives from the decision problem. Set 12 was created to examine whether units that can be converted into each other are normalized with the same values; the optimization orientations of its criteria are benefit, while they are cost in Set 13.
Set 14 was created based on the values obtained with scaling techniques commonly used in MCDM problems. In this regard, the ten-point direct rating scale for K1, Saaty's Fundamental (Linear Priority) Scale for K2, a Likert-type scale for K3, the DEMATEL Scoring Scale for K4, the Semantic Scale for K5, and the hundred-point direct rating scale for K6 were used to determine the values of the alternatives. The ten-point direct rating scale allows alternatives to be evaluated in the range of 1-10. Alternatives are evaluated in the range of 1-9 by pairwise comparisons using Saaty's Fundamental (Linear Priority) Scale. The DEMATEL Scoring Scale is based on determining the interactions between the alternatives by pairwise comparisons in the range of 0-4 (1-5 in some research). The Semantic Scale generates values in the range of 0-100 using pairwise comparisons of alternatives on a 0-6 scale (1-7 in some research). The hundred-point direct rating scale allows alternatives to be evaluated in the range of 1-100 (Saaty, 1977; Bana e Costa & Vansnick, 1994; Wu, 2008).
The fourteen sets in Table 5 will be used in the comparison of the normalization techniques. These sets contain decision matrices created for different scenarios, each with six criteria and six alternatives. MS Excel was used to generate the performance values of the alternatives randomly, using the formulas =RANDBETWEEN(lower_bound_value; upper_bound_value) and =RAND(). However, some values were assigned purposely: for instance, in Set 1 to examine the effects of 0, and in Set 3 to support the rank reversal checks carried out in Sets 6-11. The values that can be converted into each other were also purposely assigned in Set 12. To examine the rank reversal problem, SAW, one of the simplest and most basic MCDM methods, was used to solve the decision problems. In addition, MS Excel, SANNA, and SPSS 25.0 were used in the analysis. In Table 5, to show the effect of removing an alternative from the decision matrix, A6, which ranked first in the solutions obtained with SAW in Set 3, was removed from the decision problem in Set 5. In Set 6, A5, which took the last place in the solutions obtained with SAW in Set 3, was removed. In Set 7, both A5 and A6 of Set 3 are excluded from the decision matrix. In Set 8, A7, a new alternative with the best values in all criteria, was added to the decision matrix of Set 3; in Set 9, A8, a new alternative with the worst values in all criteria, was added. Set 10 was created by simultaneously adding A7 and A8 to Set 3, and Set 11 by changing the optimization orientations of Set 10. The values in Set 12 and Set 13 represent values that can be converted into each other. In Set 14, the values of K2, K4, and K5 were created by pairwise comparison: the operations performed with Saaty's Fundamental Scale for K2 and the DEMATEL Scoring Scale for K4 produced values between 0 and 1 for each alternative, and these values were assigned following the structure of those scales. Thus, the performance of the normalization techniques can be analyzed and compared from multiple perspectives with Sets 1-14.
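The generation step can also be reproduced outside of MS Excel. The Python sketch below mimics the RANDBETWEEN and RAND formulas with NumPy; the ranges shown loosely follow the Set 3 scenario described above, but the seed and the exact draws are illustrative assumptions rather than the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed, chosen only for reproducibility
n_alternatives = 6

# Non-intersecting ranges loosely following the Set 3 scenario.
ranges = {"K1": (1, 10), "K2": (11, 100), "K3": (101, 1_000),
          "K4": (1_001, 10_000), "K5": (10_001, 100_000), "K6": (100_001, 1_000_000)}

# RANDBETWEEN(a; b) analogue: uniform integers in [a, b].
decision_matrix = {c: rng.integers(low, high + 1, size=n_alternatives)
                   for c, (low, high) in ranges.items()}

# RAND() analogue: uniform values in [0, 1), e.g. for a 0-1 criterion such as K2 in Set 1.
k2_like = rng.random(n_alternatives)

print(decision_matrix)
print(k2_like.round(3))
```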
To examine the rank reversal problem, the rankings of the alternatives were obtained via SAW using the normalized values. In the ranking process, the criteria were considered to have equal weights. For the reference-based normalization techniques, the best value in each criterion according to its optimization orientation was used as the reference value.
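A minimal sketch of this ranking step is given below: SAW scores with equal weights on a small hypothetical matrix, followed by a naive check for rank reversal after removing the worst-ranked alternative and re-normalizing. The normalization technique (a simple max normalization) and the data are illustrative assumptions, not the exact procedure behind Table 6.

```python
import numpy as np
from scipy.stats import rankdata

def max_normalize(X):
    # Benefit-oriented max normalization (illustrative technique): x / column max.
    return X / X.max(axis=0)

def saw_ranks(X):
    # SAW with equal criterion weights on the normalized matrix; rank 1 = best.
    scores = max_normalize(X).mean(axis=1)
    return rankdata(-scores, method="ordinal")

# Hypothetical raw decision matrix: rows = alternatives, columns = benefit criteria.
X = np.array([[250.0, 16.0, 12.0],
              [200.0, 16.0,  8.0],
              [300.0, 32.0, 16.0],
              [275.0, 32.0,  8.0]])

ranks_full = saw_ranks(X)

# Remove the worst-ranked alternative and redo normalization + ranking,
# the kind of manipulation used in Sets 5-11.
keep = ranks_full != ranks_full.max()
ranks_reduced = saw_ranks(X[keep])

# A rank reversal occurred if the relative order of the kept alternatives changed.
reversal = not np.array_equal(rankdata(ranks_full[keep], method="ordinal"), ranks_reduced)
print(ranks_full, ranks_reduced, reversal)
```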
The issues determined in the applications of normalization techniques in Sets 1-14 are shown in Table 6. The table contains information on the techniques regarding the sets in which the normalization process is not completed, the number of rank reversals, the maximum-minimum normalized values observed in the benefit/cost criteria, the ability to cope with the same values expressed in different units, and providing the same optimization orientation.
The completion of the normalization process, one of the issues in Table 6, in all possible data types and in a meaningful way (by cleaning all units and making the values dimensionless) shows the robustness and usability of a technique. At the end of the normalization process, normalized values between 0 and 1 are preferred, while different ranges of normalized values across criteria are undesirable. For example, having normalized values between -1 and +1 in one criterion and between 0 and 10 in another is not desirable; such a structure changes the effects/weights of the criteria on the decision problem and makes the solution of the decision problem invalid. The success of the normalization techniques will be examined by looking at the maximum and minimum normalized values. Normalization techniques are also expected to handle unit differences successfully. For example, a distance may be expressed in any of the units kilometers, hectometers, decameters, meters, decimeters, or centimeters; the unit of measurement used does not change the length of the distance. The normalization technique should therefore give the same normalized values to quantities that measure the same thing and can be converted into each other. This standpoint is considered when examining how the techniques cope with different units. The last issue in Table 6 is the test of whether the technique gives all values in the same optimization orientation, which is often the benefit orientation; this indicates whether the normalization technique gives one-dimensional values in all criteria.
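Two of these checks are easy to express in code. The sketch below tests the 0-1 range and the unit-invariance property for a simple min-max normalization; the technique and the distance values are chosen purely for illustration and do not correspond to a specific Nk in the study.

```python
import numpy as np

def minmax_normalize(x):
    # Illustrative technique only: (x - min) / (max - min).
    return (x - x.min()) / (x.max() - x.min())

distances_km = np.array([1.2, 3.5, 0.8, 2.0])
distances_m = distances_km * 1000.0            # the same distances expressed in meters

n_km = minmax_normalize(distances_km)
n_m = minmax_normalize(distances_m)

print("all values within 0-1:", bool(((n_km >= 0) & (n_km <= 1)).all()))
print("unit differences removed:", np.allclose(n_km, n_m))
```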
When the normalization techniques were analyzed in terms of rank reversals for Sets 1-14, only N16, N18, and N19 were found not to have rank reversal problems. However, a dramatic change in the ρ value in only a specific criterion can cause rank reversals in N16. N18, which achieved normalization in only four sets, is subject to rank reversals due to the maximum and minimum values in the decision matrix. In N19, if the reference and ρ values are determined depending on the first decision matrix or independently from the decision matrix, rank reversals are not observed; however, if the reference and ρ values change with changes in the decision matrix, rank reversals should be expected. Rank reversal problems were observed in the other normalization techniques, among which N12 had the highest number of rank reversals. The range of the values obtained by the normalization techniques in the benefit criteria can also be observed in Table 6. Accordingly, the techniques providing normalization in the range of 0-1 are N3, N10, N12, N13, N17, N19, N21, and N23. N2 and N16 give normalized values between -1 and +1. The sum of the normalized values produced by N1 in a benefit criterion is expected to be 1; nevertheless, N1 generated normalized values of 1.406 and -0.891 in K5 in Set 1. The following techniques were also seen to produce negative normalized values: N1, N2, N4, N5, N6, N7, N8, N9, N11, N14, N15, N16, N20, and N22.
In general, normalized values are expected to fall within a certain range in all criteria as a result of normalization, and it is preferable for the normalized values to lie in the range of 0-1. In Sets 1-14, the average of the normalized values obtained by the normalization techniques (excluding N8 and N9, which can produce structurally high normalized values) was 0.7057. An examination of the normalized values in the benefit criteria reveals that N4, N5, N6, N8, N9, N14, N15, and N22 could produce normalized values larger than five in absolute value. Among these, N8 and N9 produced the largest normalized values: 2033.33 and 413.44, respectively.
Set 12 and Set 13 were created to examine whether units that can be transformed into each other are normalized in the same way. Table 6 shows that technique N3 failed in this regard. Another important comparison point is whether normalized values can be used as-is in MCDM methods without additional processing. Although a significant majority of MCDM methods involve processing steps according to the cost-benefit optimization orientation, the normalization technique should shorten this process. In this context, the techniques providing one-dimensionality are N1, N2, N3, N4, N5, N6, N7, N8, N10, N11, N13, N19, N22, and N23.
The applications performed in Sets 1-14 provide an opportunity to see the positive and negative features of the normalization techniques. In the following section, a general evaluation of each technique is given. N1 does not provide normalization in the cost criteria as successfully as it does in the benefit criteria. N1 cannot complete the normalization process for cells that have 0 in the cost criteria. Besides, N1 produced quite large normalized values, such as 108 and -16, in the cost criteria. Moreover, as can be seen in K5 in Set 1, if negative, 0, and positive values are included in a benefit criterion, N1 can produce negative normalized values or values greater than 1. N1 is also prone to rank reversal problems. On the other hand, N1 can give values corresponding to the performances of the alternatives in the range of 0-1 in cases where the criterion has a benefit optimization orientation and all values are positive; thus, the weight of an alternative in the relevant criterion can be easily determined. N1 was also found to have successfully removed unit differences.
N2 produced negative values in Set 1 in the normalization of K4, K5, and K6. It also generated normalized values larger than 1 in Set 2. The main reason for this is the negative values in the decision matrix. N2 is also prone to rank reversal problems as it uses the sum of squares of the decision matrix elements. On the other hand, N2 completed the normalization processes in all criteria. N2 was also found to have successfully removed unit differences and gave normalized values in the benefit orientation.
N3 could not complete the normalization of the criteria having zero and negative values. Although N3 provided normalization in the range of 0-1, most of the values tended to be close to zero. The rank reversal problem in N3 was less frequent than in many other techniques. It was determined that N3 could not successfully remove unit differences. On the other hand, N3 can give normalized values in the benefit orientation. Another critical point is that N3 provides non-linear normalization; since most normalization techniques have a monotonic increasing/decreasing structure, N3 can be said to be an option for decision problems with a different structure.
N4 cannot perform normalization in criteria where zero is the maximum value. Also, it produces negative normalized values in criteria having zero, negative, and positive values together. N4 gives quite large normalized values only for criteria including negative values. There may be rank reversals with N4. On the other hand, it was seen that N4 successfully removed unit differences and gave normalized values in the benefit orientation.
In the cost criteria, N5 cannot complete the normalization when 0 is the minimum value, and in that case it assigns a normalized value of 0 to all other values. The use of the minimum value of the decision matrix in the normalization process with N5 can lead to rank reversal problems. N5 can produce quite large normalized values in criteria that have negative and zero values. However, it gives the same normalized values if the same value is expressed in different units, and it gives all normalized values in the benefit orientation.
N6 could not complete the normalization operations for K4, which contains negative values and zero, in Set 1. Similarly, in Set 2, N6 could not provide normalization for the zero values in K3, K4, and K5 due to the cost optimization orientations of these criteria. Also, it gave quite large normalized values, such as 20.33 in K6, which had negative values in Set 1 and Set 2. Another negative feature of N6 is that it is prone to the rank reversal problem. However, N6 was found to provide normalization in the range of 0-1 in criteria where all values are positive. It is also successful in coping with different units and in providing all normalized values in the benefit orientation.
At first glance, N7 may be assumed not to produce negative normalized values because it operates with absolute values. Nevertheless, negative normalized values were observed for K5 and K6 in Set 1 and Set 2, and normalized values such as -18.33 produced in these sets are also quite large in absolute value. N7 provided normalization in the range of 0-1 in the other sets. As N7 depends on the maximum and minimum values in the decision matrix in its normalization processes, it is prone to the rank reversal problem. On the other hand, N7 is successful in dealing with unit differences and in providing all normalized values in the same optimization orientation.
N8 did not provide normalization for any cell in K4 in Set 1, or for the cells with zero values in K4, K5, and K6 in Set 2. Also, while N8 is expected to produce 100 as the normalized value for the best value according to the optimization orientation of each criterion, quite large normalized values such as -356.25 and 2033.33 were observed in Set 1 and Set 2. The rank reversal problem was also identified in the applications done in Sets 1-4. Positive features of N8 include removing unit differences and giving all normalized values in the same optimization orientation.
Another technique that is prone to the problem of rank reversal is N9. N9 could not complete normalization in K4, where zero is the maximum value, in Set 1 and Set 2. Moreover, the normalized values created for Set 1 and Set 2 included quite large values, such as 413.44 and 8406.70, as well as negative values. N9 was found to be able to remove unit differences. On the other hand, it was unable to provide the benefit orientation in all criteria; in this context, the same ranks were seen for Set 12 and Set 13, whose optimization orientations are different. Another critical point is that N9 produces non-linear normalized values, so it can be stated that N9 is an option for decision problems with different structures.
N10 provided normalization in the range between 0 and 1 in all sets. N10 was also successful in dealing with unit differences and in providing values in the benefit orientation. Among the normalization techniques, N10 is one of the most robust and successful, but it can cause rank reversals.
N11 provided normalization in all sets. The maximum and minimum normalized values observed with N11 are 2.03 in Set 9 and -1.91 in Set 4. The rank reversal problem was also observed in normalizations with N11. On the other hand, it was successful in removing unit differences and in generating normalized values in the benefit orientation. N11 is one of the techniques with a non-monotonic structure; with these features, N11 can be regarded as an option for decision problems with different structures.
Techniques N1 to N11 perform the normalization process according to the optimization orientation, while N12 to N21 provide normalization independent of the optimization orientation. In normalization techniques independent of the optimization orientation, specific parameters such as reference, mean, digit value, range, standard deviation, or fixed values are used for normalization. However, this study used the maximum/minimum values as reference values following the optimization orientations of the criteria. Furthermore, when the references are determined independently from the decision matrix, the reactions of the techniques will be reported.
The arithmetic mean is used as the reference value in the normalization processes performed with N12 in Sets 1-14. In this case, normalized values in the range of 0-1 were obtained in all sets. In some sets, the same normalized values were generated for criteria with different values. At this point, it should be noted that values in certain ranges under the normal distribution curve receive the same normalized values, and a non-monotonic process is performed. Considering that most normalization techniques have a monotonic increasing/decreasing structure, N12 can be said to differ significantly from other techniques. On the other hand, these features, which distinguish N12 from other techniques, prevent it from giving all values in the benefit orientation. Furthermore, N12 was the technique in which the rank reversal problem was most common in Sets 1-14. N12 does, however, have the capability of removing unit differences.
N13 was observed to cause the rank reversal problem. On the other hand, it provides normalization in the range of 0-1 in all sets and is successful in coping with unit differences and in providing all values in the benefit orientation; it gave 0.3678 as the minimum normalized value in all sets. N14 performs normalization by equating the average value to 1. If a criterion has a large range, normalized values will not be obtained within the desired range using N14. Also, normalization based on the average value can lead to rank reversal problems, and N14 cannot give all normalized values in the benefit orientation. It does, however, have the capability of removing unit differences. N15 shares the same features as N14, except that it performs the normalization by setting the standard deviation equal to 1.
In N16, the digit value of the largest absolute number in the decision matrix is taken as the basis. When the maximum or the minimum value changes dramatically in a criterion, N16 can cause the rank reversal problem. However, no rank reversal was observed in comparisons made in Sets 1-14 with N16. N16 cannot provide all values in the benefit orientation, but it produces normalized values between -1 and +1 in all sets. Also, N16 is successful in removing unit differences.
Although N17 provides normalization based on reference, it also uses the maximum and minimum values in the criterion. This situation can cause the rank reversal problem. N17 produces normalized values in the range of 0-1, succeeds in coping with unit differences, and provides all normalized values in the same optimization orientation.
N18 was the technique with the highest number of processing errors in Sets 1-14, since its reference values were determined depending on the optimization orientation of the criteria. Revisions were made on Set 1 and Set 3 to measure the reaction of N18 and of the other reference-based techniques, N12, N13, N17, and N19. For this purpose, new reference values were determined to be 10% better and 10% worse than the best values in the criteria, and the resulting normalized values were examined. N18 could not complete the normalization process if a reference was 0. Also, if the reference was higher than the maximum value in the criterion, N18 produced negative and large normalized values, such as -8.85 and -9.37. For the other techniques, no situation beyond the issues already determined in the context of Sets 1-14 was encountered. The fact that N18 uses the maximum value in the criterion in the normalization process can also lead to the rank reversal problem. N18 can remove unit differences; on the other hand, it gives the values in the benefit orientation only if the reference is less than the minimum value in the decision matrix.
N19, one of the reference-based normalization techniques, produces normalized values in the range of 0-1 in all sets, although it never produces a normalized value of exactly 0. N19 does not cause the rank reversal problem if the reference and ρ values are determined depending on the first decision matrix or independently from the decision matrix. In the applications carried out in the context of Sets 1-14, rank reversal was not observed in N19. Besides, N19 can remove unit differences and provide all values in the benefit orientation.
N20 uses the average of the maximum and minimum values and the range in the normalization process; these features, however, cause rank reversal. Unable to provide all values in the benefit orientation, N20 produces normalized values between -1 and +1, but it succeeded in coping with unit differences. N21 applies the formula used for the benefit orientation of N10 in the same way to all criteria. Being independent of the optimization orientation prevents N21 from providing all values in the benefit orientation. N21 provided normalization in the range of 0-1 in all sets and managed to cope with unit differences, but it also caused the rank reversal problem.
Normalization techniques from N1 to N21 provide normalization either by considering the optimization orientation of the criteria or reference or specific values. The emergence of integrated-mixed normalization techniques in the literature has given researchers different perspectives. In integrated-mixed normalization techniques, the optimization orientations of the criteria or reference/specific values are used under certain conditions. With N22, which is one of the integrated-mixed normalization techniques, normalization could not be achieved in cells that had zero value in Set 1 and Set 2. Also, quite large normalized values, such as 20.33 and -3.56, were observed in normalization depending on the optimization orientation, similar to the techniques using maximum and minimum values in criteria. N22 can cause the rank reversal problem, but it can remove unit differences and gives all normalized values in the benefit orientation.
N23 gives normalized values in the range of 0-1 in all sets and is also successful in dealing with unit differences and providing all normalized values in the benefit orientation. Additionally, it produces non-monotonic normalization, but it can cause the rank reversal problem.
Alternatives in Sets 1-14 were ranked by SAW using the normalized values obtained by the normalization techniques. The similarities between these rankings were examined using the Spearman rank correlation coefficient, and the results are presented in Table 7.
An examination of the correlation values in Table 7 reveals that the techniques that did not complete all normalization processes in criteria having the benefit optimization orientation have a low or insignificant correlation with the other techniques. The same can be said for the techniques that do not have a linear structure. Among the twenty-three techniques, N12 has the lowest correlation with the others. N3, N9, N14, N15, N16, N20, and N21 have a low level of correlation with the other techniques. As seen in Table 7, the technique that has the most inverse correlations with other techniques is N18; this can be attributed to the low number of normalization processes completed by N18 in Sets 1-14. The other techniques were found to provide a high level of correlation (r_s > 0.8) with each other in general.
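The correlation step itself is straightforward to reproduce. The sketch below computes pairwise Spearman rank correlations for a few hypothetical rankings; the ranking vectors are invented and only illustrate the computation behind Table 7, not its actual values.

```python
from scipy.stats import spearmanr

# Hypothetical SAW rankings of six alternatives under three normalization techniques.
rankings = {
    "N2":  [1, 2, 3, 4, 5, 6],
    "N10": [1, 2, 4, 3, 5, 6],
    "N12": [3, 1, 6, 2, 5, 4],
}

techniques = list(rankings)
for i, a in enumerate(techniques):
    for b in techniques[i + 1:]:
        rs, p = spearmanr(rankings[a], rankings[b])
        print(f"{a} vs {b}: rs = {rs:.3f} (p = {p:.3f})")
```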
Conclusions
Normalization is a scaling process that is frequently used in creating the data structure required by the method to be used in quantitative studies. The normalization process directly affects the results of the analyses to be performed. It would therefore not be appropriate to use traditional normalization techniques for such a process without questioning their suitability.
In MCDM methods, normalization is used to obtain criteria that have the same weight, are dimensionless, and are suitable for compensatory processes. Although the vast majority of MCDM methods provide normalization within themselves, the nature of decision problems makes it necessary to review these processes. On the other hand, changing the normalization procedure of an MCDM method in effect leads to the emergence of a new extension of that method. To this end, this study aimed to outline the positive and negative features of the normalization techniques that are widely used, or can be used, in MCDM problems, thus giving decision-makers or researchers an insight into the situations in which the considered normalization techniques work well and those in which they should not be used.
Eleven techniques based on the optimization orientations of the criteria, ten techniques independent of the optimization orientations, and two integrated-mixed techniques were evaluated on fourteen sets reflecting different criteria and data structures. In the comparison, the emphasis was placed on the ranges of the normalized values obtained by the techniques, the presence of the rank reversal problem, normalization performance in criteria with different structures, the capacity to remove unit differences, the ability to give all normalized values in the same optimization orientation, and the completion of normalization in all sets. The comparisons in Sets 1-14 showed the importance of the normalization technique selection process. Normalization may not be completed if the technique is not chosen according to the data structure, and in that case the validity of the results becomes disputable. The results showed that some normalization techniques are not able to achieve the purposes of their development or use in some cases. N1, N3, N4, N5, N6, N7, N8, N9, and N22 could not complete the normalization procedures in all or part of the criteria with zero or negative values in the decision matrix. N1, N4, N5, N6, N7, N8, N9, N14, N15, and N22 produced quite large normalized values, and they did not produce normalized values within a certain range, such as 0-1, in all criteria; these techniques gave very high normalized values for the differently structured criteria in Set 1 and Set 2. The rank reversal problem was not observed for N16 and N19 in Sets 1-14. Indeed, if the reference and ρ values are not changed after being determined at the beginning of the problem, N19 stands out as the only technique that does not cause any rank reversal problems; moreover, N19 does not rely on values such as the maximum or minimum, which change when the decision matrix changes.
Normalization techniques have different structures. However, the nature of the decision matrix, the preferences of the decision-maker, and the properties of the MCDM method to be used in the solution of the decision problem should also be considered in the selection of the normalization technique. If the aim is to favour the highest performance value in a criterion and avoid the alternative with the lowest performance value, or vice versa, it would be appropriate to use techniques that provide normalization depending on the optimization orientation. If the decision-maker has reference/ideal/utopic values determined for each criterion, it will be necessary to use reference-based normalization techniques. If the decision-maker considers that the values in the criteria do not represent a monotonic increasing/decreasing benefit/cost, then non-monotonic or non-linear techniques should be used. However, data containing zero or negative values may prevent the use of some normalization techniques, as determined in the application section. Also, the rank reversal problem, the ranges of the normalized values, the capability of removing unit differences, the ability to give all normalized values in the same orientation, and the robustness and validity of the results will affect the choice of the normalization technique.
This study presents a comparison of normalization techniques in different criteria structures. It sought to highlight the positive and negative features of the techniques in question thereby guiding decision-makers or researchers on the selection of techniques. The study is also envisioned to provide a perspective on the development of new normalization techniques and the creation of new extensions of MCDM methods by replacing the normalization techniques originally included in the MCDM methods with other techniques. Normalization techniques continue to be applied in most areas where data analysis is required. It is, therefore, necessary to conduct comparisons in other fields such as data mining as well. It may also be beneficial for future research to examine the normalization processes associated with fuzzy and rough sets used in MCDM problems.
"Mathematics",
"Business",
"Computer Science"
] |
Does the Fundamental Metallicity Relation Evolve with Redshift? I: The Correlation Between Offsets from the Mass-Metallicity Relation and Star Formation Rate
The scatter about the mass-metallicity relation (MZR) has a correlation with the star formation rate (SFR) of galaxies. The lack of evidence of evolution in correlated scatter at $z\lesssim2.5$ leads many to refer to the relationship between mass, metallicity, and SFR as the Fundamental Metallicity Relation (FMR). Yet, recent high-redshift (z>3) JWST observations have challenged the fundamental (i.e., redshift-invariant) nature of the FMR. In this work, we show that the cosmological simulations Illustris, IllustrisTNG, and EAGLE all predict MZRs that exhibit scatter with a secondary dependence on SFR up to $z=8$. We introduce the concept of a "strong" FMR, where the strength of correlated scatter does not evolve with time, and a "weak" FMR, where there is some time evolution. We find that each simulation analysed has a weak FMR -- there is non-negligible evolution in the strength of the correlation with SFR. Furthermore, we show that the scatter is reduced an additional ~10-40% at $z\gtrsim3$ when using a weak FMR, compared to assuming a strong FMR. These results highlight the importance of avoiding coarse redshift binning when assessing the FMR.
INTRODUCTION
The metal content of galaxies provides key insights into galaxy evolution. Stellar winds and supernovae explosions eject metals formed in stars into the interstellar medium (ISM). Metals then mix via galactic winds (e.g., Lacey & Fall 1985; Koeppen 1994) and turbulence (e.g., Elmegreen 1999) within the disc while pristine gas accretion from the circumgalactic medium (CGM) and outflows dilute the metal content (e.g., Somerville & Davé 2015). Thus, the metal content (metallicity) of the gas within a galaxy is sensitive to such processes, providing a window into the evolutionary processes within a galaxy (Dalcanton 2007; Kewley et al. 2019; Maiolino & Mannucci 2019).
Evidence for the sensitivity of metal content to the gas dynamics within a galaxy is perhaps most clearly seen within the relationship between the stellar mass of a galaxy and its gas-phase metallicity. This mass-metallicity relationship (MZR) describes a relationship of increasing metal content in galaxies with increasing stellar mass (Tremonti et al. 2004; Lee et al. 2006). At low stellar masses, the MZR is well-described as a power-law, whereas at high masses (log[M*/M⊙] > 10.5) the MZR plateaus (e.g., Tremonti et al. 2004; Zahid et al. 2014; Blanc et al. 2019). Furthermore, at a fixed stellar mass, low (high) metallicity galaxies have systematically elevated (depressed) gas masses (Bothwell et al. 2013; Scholte & Saintonge 2023) and SFRs (Ellison et al. 2008; Mannucci et al. 2010). The inverse relationship between a galaxy's metal content and SFR (or gas content) at a fixed stellar mass has been seen in the gas phase in observations (e.g., Lara-López et al. 2010; Bothwell et al. 2016; Alsing et al. 2024; Yang et al. 2024) and simulations (e.g., De Rossi et al. 2017; Torrey et al. 2018), as well as for stellar metallicities in simulations (De Rossi et al. 2018; Fontanot et al. 2021; Garcia et al. 2024; Looser et al. 2024) and recent observations (Looser et al. 2024). This secondary dependence on SFR and gas content is qualitatively well-described with basic competing physical drivers: (i) as new pristine gas is accreted onto a galaxy, it drives galaxies toward higher gas fractions, higher star formation rates (SFRs), and lower metallicities, while (ii) galaxies will persistently tend to consume gas and produce new metals, driving galaxies toward lower gas fractions, lower SFRs, and higher metallicities (e.g., Davé et al. 2011; Dayal et al. 2013; Lilly et al. 2013; De Rossi et al. 2015; Torrey et al. 2018).
It is therefore expected that secondary dependence would remain present for galaxies across a wide redshift range given the ubiquity of these physical drivers. At higher redshift the MZR has been seen to persist (albeit with a lowered overall normalisation; e.g., Savaglio et al. 2005; Maiolino et al. 2008; Zahid et al. 2011; Langeroodi et al. 2023) along with the secondary dependence on SFR (e.g., Belli et al. 2013; Salim et al. 2015; Sanders et al. 2018, 2021). Critically, it has been put forth that a single, redshift-invariant plane can be used to describe both the general evolution of the MZR as well as the secondary correlations (Mannucci et al. 2010). This single surface/relation that can describe the metallicity of galaxies over a wide mass and redshift range is referred to as the fundamental metallicity relation (FMR). Despite the success of characterising galactic metallicities at z ≲ 2.5 (∼80% of cosmic history), Mannucci et al. (2010) report some evidence for deviations from the FMR at z > 3. JWST observations have recently corroborated the existence of deviations from the FMR at z > 3 (Heintz et al. 2023; Curti et al. 2023; Langeroodi & Hjorth 2023; Nakajima et al. 2023).
To remain truly redshift invariant, the FMR must capture two distinct features of the MZR simultaneously: (i) the existence of a secondary relationship with SFR at fixed redshift, and (ii) the redshift evolution (or lack thereof) in the normalisation. It is therefore possible that a change in either the MZR's secondary correlation with SFR or the redshift evolution of the normalisation of the MZR (or perhaps a combination thereof) may indicate FMR evolution. Many of the previously mentioned studies investigating high-redshift galaxy populations apply a z ∼ 0 calibrated FMR to higher redshift data (e.g., Mannucci et al. 2010; Wuyts et al. 2012; Belli et al. 2013; Sanders et al. 2021; Curti et al. 2023; Langeroodi & Hjorth 2023; Nakajima et al. 2023). However, it is unclear how to effectively decouple (and subsequently interpret) the observed evolution at high redshift in these frameworks. Some work has been done observationally looking at higher redshifts independently to specifically isolate the scatter about the MZR (e.g., Salim et al. 2015; Sanders et al. 2015, 2018; Li et al. 2023; Pistis et al. 2023). These works find that there may be some evolution within the scatter about the MZR at intermediate redshifts (Pistis et al. 2023 suggest potentially as low as z ∼ 0.63). Yet, there are comparatively few simulation results systematically examining the strength of the secondary dependence on gas content and/or SFR at individual redshifts.
In this work, we investigate the redshift evolution of the MZR's secondary dependence on SFR from the perspective of the cosmological simulations Illustris, IllustrisTNG, and EAGLE. The rest of the paper is as follows: in §2 we describe the simulations we use, our galaxy selection criteria, and summarize definitions of the FMR. In §3 we present the redshift evolution of the FMR as found in simulations. In §4 we quantify the impact of the new framework on the scatter about the MZR, discuss the advantages and challenges of the new framework, and then discuss potential impacts of the physical models. Finally, in §5 we present our conclusions.
METHODS
We use the Illustris, IllustrisTNG, and EAGLE cosmological simulations to investigate the dependence of the gas-phase metallicity on stellar mass and star formation. Each of these simulations has a sub-grid ISM pressurisation model, which creates "smooth" stellar feedback. We believe that generic results from all three of these simulations should constitute a fair sampling of predictions from sub-grid ISM pressurisation models owing to their appreciably different physical implementations.
Here we briefly describe each of the simulations used in this analysis and the galaxy selection criteria we employ, and present a new framework for interpreting the Mannucci et al. (2010; hereafter M10) FMR projection. All measurements are reported in physical units.
Illustris
The original Illustris suite of cosmological simulations (Vogelsberger et al. 2013, 2014a,b; Genel et al. 2014; Torrey et al. 2014) was run with the moving-mesh code arepo (Springel 2010). The Illustris model accounts for many important astrophysical processes, including gravity, hydrodynamics, star formation/stellar evolution, chemical enrichment, radiative cooling and heating of the ISM, stellar feedback, black hole growth, and AGN feedback. The unresolved star-forming ISM uses the Springel & Hernquist (2003) equation of state, wherein new star particles are created from regions of dense (n_H > 0.13 cm^-3) gas. The masses of the stars within the star particle are drawn from a Chabrier (2003) initial mass function (IMF) and metallicities are adopted from the ISM where they are born. As the stars evolve, they eventually return their mass and metals back into the ISM. The stellar mass return and yields used allow for the direct simulation of time-dependent return and heavy metal enrichment, explicitly tracking nine different chemical species (H, He, C, N, O, Ne, Mg, Si, and Fe).
IllustrisTNG
IllustrisTNG (The Next Generation; Marinacci et al. 2018; Naiman et al. 2018; Nelson et al. 2018; Pillepich et al. 2018b; Springel et al. 2018; Pillepich et al. 2019; Nelson et al. 2019a,b; hereafter TNG) is the successor to the original Illustris simulations, alleviating some of the deficiencies of, and updating, the original Illustris model. As such, the Illustris and TNG models are similar, yet have appreciably different physical implementations (see Weinberger et al. 2017; Pillepich et al. 2018a, for a complete list of differences between the models). A critical difference between the Illustris and TNG models in the context of this work is TNG's implementation of redshift-scaling winds. The TNG model employs a wind velocity floor not present in the original Illustris model in order to prevent low-mass haloes from having unphysically large mass loading factors. Consequently, low-redshift star formation is suppressed in the TNG model. TNG implements the same equation of state for the dense star-forming ISM as Illustris (Springel & Hernquist 2003). As in Illustris, new star particles are created from dense gas using the Chabrier (2003) IMF. Furthermore, TNG tracks the same nine chemical species as Illustris, while also following a tenth "other metals" field as a proxy for metals not explicitly monitored.
EAGLE
Unlike Illustris and TNG, "Evolution and Assembly of GaLaxies and their Environment" (EAGLE; Crain et al. 2015; Schaye et al. 2015; McAlpine et al. 2016) employs a heavily modified version of the smoothed particle hydrodynamics (SPH) code gadget-3 (Springel 2005; anarchy, see Schaye et al. 2015, Appendix A). EAGLE includes many of the same baryonic processes (star formation, chemical enrichment, radiative cooling and heating, etc.) as Illustris and TNG. The dense (unresolved) ISM in EAGLE is also treated with a sub-grid equation of state (Schaye & Dalla Vecchia 2008; hereafter SDV08), much like that of Springel & Hernquist (2003). The SDV08 prescription forms stars according to a Chabrier (2003) IMF from the dense ISM gas. The density threshold for star formation is given by the metallicity-dependent transition from atomic to molecular gas computed by Schaye (2004), with an additional temperature-dependent criterion (Schaye et al. 2015). Stellar populations evolve according to the Wiersma et al. (2009) evolutionary model and eventually return their mass and metals back into the ISM. EAGLE explicitly tracks eleven different chemical species (H, He, C, N, O, Ne, Mg, Si, S, Ca, and Fe).
The full EAGLE suite comprises several simulations ranging in size from (12 Mpc)^3 to (100 Mpc)^3. We use data products at an intermediate resolution (2 × 1504^3 particles) run with a box size of (100 Mpc)^3, referred to as RefL0100N1504 (hereafter simply EAGLE), as a fair comparison to the selected Illustris and TNG runs.
Galaxy selection
All three simulations in this work select gravitationally-bound substructures using subfind (Springel et al. 2001; Dolag et al. 2009), which identifies self-bound collections of particles from within friends-of-friends groups (Davis et al. 1985). We limit our analysis to central galaxies that we consider 'well-resolved' (i.e., containing ∼100 star particles and ∼500 gas particles); thus we restrict the sample to galaxies with stellar mass log(M*[M⊙]) > 8.0 and gas mass log(Mgas[M⊙]) > 8.5. We place an upper stellar mass limit of log(M*[M⊙]) = 12.0, which does not exclude any galaxies at most redshifts. Following a number of previous works (see, e.g., Donnari et al. 2019; Nelson et al. 2021; Hemler et al. 2021; Garcia et al. 2023), we exclude quiescent galaxies by defining a specific star formation main sequence (sSFMS). We do so by fitting a linear least-squares regression to the median sSFR-M* relation at stellar mass log(M*[M⊙]) < 10.2 in mass bins of 0.2 dex. The sSFMS above log(M*[M⊙]) = 10.2 is extrapolated from the regression. Galaxies that fall more than 0.5 dex below the sSFMS are not included in our sample. As we show in Garcia et al. (2024; that paper's Appendix B), our key results (using stellar metallicities) are not sensitive to our sample selection. We obtain the same result here in the gas phase: our key results are qualitatively unchanged by the same variations in selection criteria as Garcia et al. (2024) (see Appendix A).
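A minimal sketch of this quiescent-galaxy cut is given below; the galaxy arrays are mock data generated on the fly (not simulation output), while the 0.2 dex bins, the log(M*) = 10.2 fitting limit, and the 0.5 dex threshold follow the description above.

```python
import numpy as np

# Mock sample standing in for simulated central galaxies.
rng = np.random.default_rng(0)
log_mstar = rng.uniform(8.0, 11.5, 2000)                                   # log10(M*/Msun)
log_ssfr = -0.3 * (log_mstar - 10.0) - 9.8 + rng.normal(0.0, 0.4, 2000)    # log10(sSFR / yr^-1)

# Median sSFR in 0.2 dex mass bins below log(M*) = 10.2.
bin_edges = np.arange(8.0, 10.2 + 0.2, 0.2)
centers, medians = [], []
for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    sel = (log_mstar >= lo) & (log_mstar < hi)
    if sel.any():
        centers.append(0.5 * (lo + hi))
        medians.append(np.median(log_ssfr[sel]))

# Linear least-squares fit, extrapolated above log(M*) = 10.2.
slope, intercept = np.polyfit(centers, medians, 1)
ssfms = slope * log_mstar + intercept

# Keep galaxies no more than 0.5 dex below the sSFMS.
star_forming = log_ssfr > (ssfms - 0.5)
print(f"{star_forming.sum()} of {star_forming.size} mock galaxies kept")
```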
As metallicity measurements are typically limited to star-forming regions in observations (Kewley & Ellison 2008; Kewley et al. 2019), all of the analysis of gas-phase metallicities presented here is based only on star-forming gas (as defined in Section 2.1 for Illustris/TNG and Section 2.3 for EAGLE).

[Figure 1 caption: Decision tree for the αZR; see Section 2.5 for full details. This shows the different relationships that can be included under the umbrella metallicity relation (αZR; see Equation 1). First is the traditional MZR, where α = 0.0, and second is the FMR, where α ≠ 0.0. The FMR can be further broken into two categories, strong and weak, depending on whether α varies as a function of redshift (weak) or not (strong).]
Definitions of the FMR
M10 propose that the 3D relationship between stellar mass, gas-phase metallicity, and star formation rate (SFR) can be projected into 2D using a linear combination of the stellar mass and star formation:

μ_α = log(M_*) − α log(SFR),    (1)

where α is a free parameter that ranges from 0 to 1 [2]. The free parameter α holds all the diagnostic power on the strength of the MZR's secondary dependence on SFR. By varying α, the distribution of galaxies in μ_α-metallicity space varies. We define a μ_α-metallicity relation (αZR) for each α as a linear least-squares regression [3] of the data. We compute the αZR for α = 0.0 to 1.0 in steps of 0.01 and obtain the residuals about each regression. The projection that yields the minimum scatter in the residuals (smallest standard deviation) is deemed the best fit. The α value associated with this minimum scatter projection is henceforth referred to as α_min. We define an uncertainty on α_min by assuming that a projection that has scatter within 5% of the minimum value is a plausible candidate for the true α_min (following Garcia et al. 2024). We show these α_min values in Figure 2.
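The α_min determination described above (scan α from 0 to 1 in steps of 0.01, fit a linear μ_α-metallicity relation at each step, and keep the α with the smallest residual scatter) can be sketched as follows; the galaxy sample is mock data with an artificial SFR dependence built in, so the recovered numbers are illustrative only.

```python
import numpy as np

# Mock galaxy sample with an SFR dependence deliberately injected into the metallicity.
rng = np.random.default_rng(1)
log_mstar = rng.uniform(8.0, 11.0, 5000)
log_sfr = 0.9 * (log_mstar - 10.0) + rng.normal(0.0, 0.3, 5000)
metallicity = (8.7 + 0.3 * (log_mstar - 10.0)
               - 0.15 * (log_sfr - log_sfr.mean())
               + rng.normal(0.0, 0.05, 5000))          # toy 12 + log(O/H)

alphas = np.arange(0.0, 1.01, 0.01)
scatters = np.empty_like(alphas)
for i, alpha in enumerate(alphas):
    mu = log_mstar - alpha * log_sfr                    # M10 projection (Equation 1)
    slope, intercept = np.polyfit(mu, metallicity, 1)   # linear mu_alpha-Z relation
    scatters[i] = (metallicity - (slope * mu + intercept)).std()

alpha_min = alphas[scatters.argmin()]
plausible = alphas[scatters <= 1.05 * scatters.min()]   # within 5% of the minimum scatter
print(alpha_min, plausible.min(), plausible.max())
```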
α_min physically represents the direction in which to project the 3D mass-metallicity-SFR (M*−Z_gas−SFR) space into a minimum scatter distribution in 2D μ_α−Z space. Thus, the αZR is the relation of merit in the 2D projection of the M*−Z_gas−SFR relation. There are two outcomes: either (i) α_min = 0.0, wherein the canonical MZR is recovered, or (ii) α_min ≠ 0.0, wherein an FMR is recovered. In this way, the αZR can be thought of as a superset of relations containing the MZR, the strong FMR, and the weak FMR (relationships illustrated in Figure 1). Framing the FMR in this way underscores the decisions required in establishing the FMR. Previous studies have been somewhat restrictive in regards to these decisions. We therefore highlight the need to take a deliberate approach to our definitions to build a framework by which potential redshift evolution can be assessed.

[Footnote 2] In reality, the correlated scatter about the MZR has some non-negligible mass dependence. In fact, the correlation with SFR has been seen to weaken, or even invert, at high stellar mass (e.g., Yates et al. 2012; Alsing et al. 2024). Parameterisations exist that account for this mass dependence (e.g., Curti et al. 2020). We opt not to present other forms of the FMR in this work, as an exercise on the extent to which the M10 projection can describe the scatter at fixed redshift (see further discussion in Section 4.2).

[Footnote 3] M10 use a fourth-order polynomial for fitting. This practice is inconsistent in the literature, with many (e.g., Andrews & Martini 2013) considering a linear regression. We show in Appendix B that using a fourth-order polynomial instead of a linear regression does not significantly alter our α_min determination.
Traditionally (as in, e.g., M10), the FMR is defined by determining α_min at z = 0. This value has been seen to be roughly constant at z ≲ 2.5 (e.g., M10; Andrews & Martini 2013). We henceforth refer to the idea that α_min does not vary as a function of redshift as the strong FMR. A single α_min can describe both the MZR's secondary dependence and its normalisation evolution in the strong FMR. In this work, we investigate the claim that α_min is constant over time by identifying the α_min value that minimizes scatter at each redshift independently. This procedure allows α_min to (potentially) vary as a function of redshift. Here we introduce the concept of a "weak" FMR. We define the weak FMR as a counterpoint to the strong FMR: α_min ≠ 0, but α_min is not constant as a function of redshift (see the illustrated relationship in Figure 1).
There are actually more parameters beyond α_min by which the FMR is defined: the parameters of the regression (in our case, slope and intercept). These additional parameters add complexity to the interpretation of the evolution. Regressions are inherently linked to the α_min determination, yet the parameters of the fit can have a profound impact on the interpretation of FMR evolution irrespective of α_min variations. The impact of these parameters is beyond the scope of this work, since we only examine each redshift bin independently here and the effect of the regression parameters is only felt when comparing different redshift bins. We do, however, address the impact of these parameters in a companion work (Garcia et al. in prep).
Does α_min vary as a function of redshift?
We use the best-fit α_min values derived as a function of redshift to evaluate whether the scatter about the MZR evolves significantly with redshift. We find that α_min ≠ 0 at all redshifts in each of the three simulations (Figure 2). Based on the first step of the decision tree in Figure 1, the non-zero α_min values show there is an FMR in each simulation. The secondary dependence on SFR is present, at least to some extent, within the scatter of all the MZRs analysed here. It should be noted, however, that the uncertainty [4] on the TNG z = 0 α_min value does include α = 0. This implies a somewhat weak dependence on SFR at this redshift. In Garcia et al. (2024), we attribute the lack of a relationship at z = 0 in TNG to the redshift scaling of winds within the TNG model [5]. Briefly, the effect of adding winds that change with redshift suppresses low-redshift star formation and increases the efficiency of high-redshift stellar feedback compared to the Illustris model (see Pillepich et al. 2018a). It is therefore likely that the suppressed low-redshift star formation causes the large uncertainty on α_min at z = 0. As such, features of α_min are sensitive to details of the wind implementation/strength prescribed by the model on which it is built (see Section 4.3 for further discussion).
Overplotted on Figure 2 (gray squares) are three observationally determined values of α_min from M10 (0.32), Andrews & Martini (2013; 0.66), and Curti et al. (2020; 0.55). Each of these values was determined using SDSS galaxies at z ≈ 0 (offset horizontally for clarity). Deviations in the observational values are attributed primarily to: (i) different metallicity calibrations, (ii) using individual galaxies versus galaxy stacks (as in Andrews & Martini 2013), and (iii) selection biases towards higher star-forming galaxies. Simulations are not directly affected by metallicity calibrations in the same way as observations. The sample selection criteria (outlined in Section 2.4) should help mitigate the effect of the selection function biases of observations; though we select star-forming galaxies, we do not select only the highest star-forming galaxies. In spite of these potential differences, it is worth noting that the M10 α_min value agrees fairly well with the TNG and Illustris derived values at z = 0, although the uncertainty on the TNG α_min is large enough to also include the Curti et al. (2020) value, which is a factor of ∼1.5 higher. Similarly, the Andrews & Martini (2013) value of 0.66 agrees fairly well with the derived value from EAGLE at z = 0, though we caution that their analysis was done with galaxy stacks whereas we use individual galaxies here.
Furthermore, we find that the α_min values show some level of redshift evolution in all three simulations (Figure 2 and Table 1). Interestingly, each simulation has qualitatively different redshift evolution: the TNG α_min values vary significantly from z = 0 to z = 1 but then level off, the Illustris α_min values increase monotonically with redshift, and the EAGLE values decrease monotonically as a function of redshift.
We conduct a one-sample t-test to validate this apparent redshift evolution in each simulation. The null hypothesis is that the mean α_min is equal to α_min at z = 0 (i.e., there is no redshift evolution). Given the uncertainty associated with each α_min, we compute the sample mean by weighting each α_min by the reciprocal of its squared uncertainty [6]. Additionally, we normalize the t-statistic by the estimated error on the mean (the reciprocal of the sum of the squared weights). We find that the t-statistics are −6.23, 13.34, and 7.04 in Illustris, TNG, and EAGLE (respectively). These correspond to p-values of 2.5 × 10^−4 for Illustris, 9.5 × 10^−7 for TNG, and 1.0 × 10^−4 for EAGLE. We therefore reject the null hypothesis at a significance level of 0.05 in each simulation, indicating statistically significant redshift evolution of α_min. From the decision tree of Figure 1, the strong FMR is ruled out in favour of a weak FMR for each individual simulation. It is interesting to note that if we instead test using the z = 1 value of α_min in TNG, we obtain a t-statistic of 1.64 (p-value of 0.139); we could not reject the null hypothesis of no evolution of the FMR at the 0.05 significance level in that case (see the above discussion on redshift-scaling winds in the TNG model).

[Footnote 4] Uncertainties on α_min correspond to the uncertainty in the minimum dispersion (see Section 2.5 for the definition).

[Footnote 5] We show this in Garcia et al. (2024) for stellar metallicities. That work also demonstrates that stellar and gas-phase metallicities are related to each other (see Section 4.1 of that work). Therefore, the same physical mechanism suppressing the correlated scatter for stellar metallicities is likely what is suppressing α_min for the gas phase.

[Footnote 6] We note that we make the simplifying assumption of symmetric uncertainty by defining the offset as the average of the upper and lower offsets. We additionally verify that choosing instead the magnitude of either the upper or lower offset does not change the result.
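A hedged sketch of this weighted one-sample test is given below; the α_min values and uncertainties are placeholders rather than the measured ones, and the inverse-variance error-on-the-mean convention used here is one common choice that may differ in detail from the one adopted in the text.

```python
import numpy as np
from scipy.stats import t as t_dist

# Placeholder alpha_min values and (symmetrised) uncertainties for z = 0..8.
alpha_min = np.array([0.30, 0.36, 0.41, 0.45, 0.47, 0.50, 0.52, 0.55, 0.57])
sigma = np.array([0.06, 0.05, 0.04, 0.04, 0.03, 0.03, 0.03, 0.04, 0.05])

weights = 1.0 / sigma**2
weighted_mean = np.sum(weights * alpha_min) / np.sum(weights)
mean_error = np.sqrt(1.0 / np.sum(weights))     # one common inverse-variance convention

# Null hypothesis: the (weighted) mean alpha_min equals the z = 0 value.
t_stat = (weighted_mean - alpha_min[0]) / mean_error
p_value = 2.0 * t_dist.sf(abs(t_stat), df=alpha_min.size - 1)
print(t_stat, p_value)
```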
Our result in EAGLE indicating significant redshift evolution seemingly contradicts a previous study finding that the FMR is in place and does not evolve out to z ≈ 5 in EAGLE (De Rossi et al. 2017). There is a subtle difference in the analysis between the two works, however: De Rossi et al. (2017) do not parameterise the FMR to test α_min variations. They qualitatively examine the secondary dependence within the MZR and show that an M*−Z_gas−SFR relation exists at z = 0−5 (i.e., there is at least a weak FMR over these redshift ranges). We likewise find that an M*−Z_gas−SFR relation exists at z = 0−5 in EAGLE via non-zero α_min values (i.e., there is at least a weak FMR over these redshift ranges), consistent with De Rossi et al. (2017). Despite the persistence of the M*−Z_gas−SFR relation, we confirm that there is a weak FMR in EAGLE by using the M10 projection of the FMR. It should be noted that the uncertainties of the z = 0 and z = 5 values do overlap in EAGLE. The subtlety of the redshift evolution may therefore be difficult to detect without fitting each redshift independently.
Scatter assuming different FMRs
The derived α_min values show that Illustris, TNG, and EAGLE all have weak FMRs. We now examine the impact of marginalising over variations in α_min when assuming a strong FMR. Specifically, we want to quantify how the scatter changes when assuming a strong versus a weak FMR. This will provide a quantitative metric for assessing how important considerations of an evolving FMR are. If the scatter were to remain unchanged, or change only marginally, the need for a weak FMR would be minimal.
To this end, we define three different ratios for quantitatively evaluating the importance of using a weak FMR. We consider ratios of the scatters about: (i) the weak FMR compared to the MZR, (ii) the strong FMR compared to the MZR, and (iii) the weak FMR compared to the strong FMR. Figure 3 illustrates these three different ratios evaluated at each redshift for Illustris (orange diamonds), TNG (green stars), and EAGLE (blue circles). In the following discussion, we note that we calculate the scatter about the MZR in the same way as we did to determine α_min in the previous section (i.e., using the projection from Equation 1).
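The three ratios can be computed from the same scatter measure used for the α scan. The sketch below defines the scatter about a linear μ_α-metallicity fit and shows how σ_MZR, σ_weak, and σ_strong would be evaluated for one redshift bin; the sample is mock data and the α values are placeholders, not the fitted ones.

```python
import numpy as np

def scatter_about_fit(mu, z_gas):
    # Standard deviation of residuals about the linear mu_alpha-Z fit.
    slope, intercept = np.polyfit(mu, z_gas, 1)
    return (z_gas - (slope * mu + intercept)).std()

def sigma_at_alpha(log_mstar, log_sfr, z_gas, alpha):
    return scatter_about_fit(log_mstar - alpha * log_sfr, z_gas)

# Mock per-redshift sample; alpha values below are placeholders.
rng = np.random.default_rng(2)
m = rng.uniform(8.0, 11.0, 2000)
s = 0.9 * (m - 10.0) + rng.normal(0.0, 0.3, 2000)
z = 8.7 + 0.3 * (m - 10.0) - 0.15 * (s - s.mean()) + rng.normal(0.0, 0.05, 2000)

sigma_MZR = sigma_at_alpha(m, s, z, 0.0)       # alpha = 0 (plain MZR)
sigma_weak = sigma_at_alpha(m, s, z, 0.33)     # alpha_min fitted at this redshift (placeholder)
sigma_strong = sigma_at_alpha(m, s, z, 0.20)   # alpha_min fixed at its z = 0 value (placeholder)

print(sigma_weak / sigma_MZR, sigma_strong / sigma_MZR, sigma_weak / sigma_strong)
```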
The left panel of Figure 3 shows the standard deviation of the residuals (henceforth, scatter) about each redshift's weak FMR (α_min determined at that redshift) normalised by the scatter about the MZR (α_min = 0) as a function of redshift (σ_weak/σ_MZR). We find for all redshifts across the three simulations that the weak FMR reduces the scatter by ∼10−30% compared to the MZR. The exception is TNG at z = 0, with a scatter reduction of less than 5%, which falls within the nominal uncertainty on α_min and implies there is functionally no difference between the scatter of the MZR and FMR at this redshift. This z = 0 TNG exception was discussed previously in the context of the α_min value (see Section 3.1) and the lack of a relation was attributed to the redshift-scaling winds in the TNG model (see Pillepich et al. 2018a). The scatter reduction is roughly constant as a function of redshift in both TNG and EAGLE at around ∼20% (barring the aforementioned TNG exception). Scatter reduction in Illustris ranges from ≲10% at z = 0 to nearly 30% at z = 8.
The middle panel of Figure 3 shows the scatter at each redshift assuming a strong FMR compared to that of the MZR at that redshift (σ_strong/σ_MZR). We define the strong FMR fit analogously to observations: we apply the α_min value determined at z = 0 to all redshifts. In TNG, we find a similar trend to that of σ_weak/σ_MZR: a roughly constant scatter reduction as a function of redshift, albeit at a reduced value of ∼5−10% (see the previous discussion about the exception at z = 0). The scatter reduction in Illustris is similarly constant, at around 10%. Evidently, the redshift evolution in Illustris seen
previously with σ_weak/σ_MZR disappears when assuming a strong FMR. σ_strong/σ_MZR actually increases nearly monotonically in EAGLE as a function of redshift: the strong FMR fit on the low-redshift bins is significantly better than on the highest redshifts. Remarkably, assuming a strong FMR actually begins to increase the scatter by 10−40% compared to the MZR at high redshift (z > 5) in EAGLE. The concept of an FMR is one that relies on minimizing scatter compared to the MZR, yet at the highest redshifts in EAGLE it achieves the opposite. This is a clear failure of the strong FMR in EAGLE, as well as a cautionary tale for interpreting future high-redshift FMR observations.
Finally, the right panel of Figure 3 shows the ratio of the scatter of the weak FMR divided by the scatter of the strong FMR evaluated at each redshift (σ_weak/σ_strong). The ratio σ_weak/σ_strong is of particular interest as it provides a diagnostic for how well an assumed strong FMR characterises galaxies at higher redshift compared to their minimum-scatter projection. The ratio is unity at z = 0 by construction, since the strong FMR assumes the z = 0 α_min value for all redshifts. In Illustris and EAGLE, the scatter reduction of the weak FMR at z ≤ 2 is less than 5%. The relatively small decrease in the scatter in these two simulations implies that the strong FMR might approximately hold at these low redshifts (qualitatively consistent with previous observational findings; M10; Cresci et al. 2019). The scatter reduction at z ≥ 3, however, is ≳10% for Illustris and EAGLE. Both have monotonically decreasing ratios of scatter in the high-z regime, out to a 20% decrease in Illustris and nearly 40% in EAGLE. On the other hand, the scatter reduction in TNG stays roughly constant at around 15% at z > 0.
Overall, the additional 10−40% decrease in scatter obtained by using a weak FMR indicates that high-redshift galaxy populations are different from the low-redshift systems. The strong FMR does not effectively characterise these high-redshift galaxies. This marked shift in the efficacy of the strong FMR further supports the idea that there is some time evolution within the FMR in Illustris, TNG, and EAGLE.
It is worth noting how deceptive the lack of evolution in the strong FMR scatter reduction ratio (central panel of Figure 3) is for Illustris and TNG. Looking at the reduction in scatter of the strong FMR by itself in these two simulations may lead one to conclude that a strong FMR holds: the strong FMR does reduce scatter at all redshifts, even by a roughly constant amount. Indeed, it is remarkable that the strong FMR reduces the scatter by a similar amount at z = 0 as at z = 8 despite ignoring a variation of a factor of > 2 in α_min. However, we emphasize that using the weak FMR significantly improves the characterisation of these galaxy populations, particularly at high redshift.
In summary, by determining α_min at each redshift independently, we find that the scatter can be reduced by an additional ∼10−40% compared to an assumed strong FMR. We therefore conclude that the variations in α_min are significant, particularly at high redshift. The significant variations beyond z ≳ 3 imply that the strong FMR is not even a good approximation in the early universe in our simulations.
What do variations in α_min mean?
The FMR stems from the idea that in the MZR, at a fixed stellar mass, the metallicities and SFRs of galaxies are anti-correlated. Using α_min is an attempt to represent the strength of the correlation between metallicity and SFR; however, it does not actually tell us explicitly about that relationship. Rather, α_min values are tuned to minimize scatter. It is therefore critical to develop an understanding of the strength of the correlation between metallicity and star formation rate that is not just a scatter-minimisation tool. To build this understanding, we first take the FMR regression as defined in Section 2.5; we show in Appendix B that the choice of regression order (first- versus fourth-order) does not significantly impact our results⁸. In Illustris, the slope of Δ (offsets from the MZR) versus SFR is shallow at z = 0 and gets steeper with increasing redshift. This behaviour is consistent with the α_min variations seen in Illustris: α_min is small at z = 0 in Illustris and increases with increasing redshift (see Figure 2). We find that the z = 0 slope in TNG is significantly weaker than the z > 0 slopes (central panel of Figure 4), consistent with the α_min values from TNG. It should be noted, however, that the α_min values at z ≥ 5 are all the same, whereas the slopes of Δ versus SFR change slightly at z ≥ 5. Finally, in EAGLE, we find that the slope of Δ versus SFR is steepest at z = 0 and shallows with increasing redshift, again consistent with the behaviour of α_min as a function of redshift. We emphasize that there is an additional term, the slope of the FMR regression from Equation 2, included in the slope of Δ versus SFR. We therefore caution against too strong a comparison between α_min and slopes in Δ−SFR space: changes in the slope of the FMR may cause a change in the slope of Δ versus SFR, and Δ is only proportional to α_min log SFR up to that slope. Furthermore, we note that slopes in Δ−SFR space have some dependence on the stellar mass bin chosen. This behaviour is consistent with more recent parameterisations of the FMR (e.g., Curti et al. 2020) that find that the MZR turnover mass has some dependence on SFR. While an interesting area of future exploration, the stellar mass dependence of these slopes (and α_min) is beyond the scope of this work. We therefore caution that the model presented here neglects variations in the role of SFR as a function of stellar mass. However, since Equation 3 explicitly assumes a thin mass bin, the simple model presented here should be relatively robust to stellar mass variations, provided the Δ−SFR slopes are not highly sensitive to mass (which we find to be predominantly the case in each simulation at each redshift).
8 We note, however, that while α_min is related to the strength of the (anti-)correlation between Δ and SFR, in actuality there is an additional scaling with the slope of the FMR.
In summary, the slope between offsets from the MZR and SFR offers a more straightforward way to understand the (potential) evolution in the relationship between metallicity and SFR suggested by the α_min variations.
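A minimal sketch of this slope diagnostic is given below (illustrative only, not the paper's pipeline). It assumes that Δ is each galaxy's metallicity offset from the median MZR within a thin, hypothetical stellar-mass bin, and that the quoted slope is an ordinary least-squares fit of Δ against log SFR.

import numpy as np

def delta_vs_logsfr_slope(logM, logSFR, Z, mass_lo=10.0, mass_hi=10.2):
    """OLS slope of MZR offsets (Delta) against log SFR within one thin stellar-mass bin."""
    in_bin = (logM >= mass_lo) & (logM < mass_hi)
    delta = Z[in_bin] - np.median(Z[in_bin])      # offset from the binned MZR
    slope, _ = np.polyfit(logSFR[in_bin], delta, 1)
    return slope

# A negative slope in a bin reflects the metallicity-SFR anti-correlation that the FMR encodes;
# tracking this slope across redshift bins mirrors the alpha_min evolution discussed above.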
Advantages and challenges of a weak FMR framework
The key advantage of fitting each redshift independently is to more effectively minimize the scatter. The weak FMR gives us a clear-cut metric for the strength of the MZR's secondary dependence on SFR that is completely independent of the evolution of the normalisation of the MZR. Independence from other redshift populations removes the possibility of conflating evolution of the normalisation of the MZR with evolution of the scatter. By using a z = 0 derived α_min at all redshifts (i.e., a strong FMR) we suppress any potential variation of α_min as a function of redshift. As a consequence, the strong FMR does not optimally reduce scatter across redshift (discussed in more detail in Section 3.2). Using a weak FMR assumption therefore allows a more careful examination of the extent to which the observed FMR has variations.
A challenge in performing a similar analysis in observations is the amount of data available. For example, lower-redshift galaxy populations are well sampled (e.g., M10 use 141,825 SDSS z ∼ 0 galaxies), but at higher redshift sampling becomes more difficult (e.g., the Nakajima et al. 2023 and Curti et al. 2023 analyses use fewer than 200 objects spanning a wider redshift range of z = 3−10). It is possible that subtle changes can be measured at lower redshifts (see Pistis et al. 2023 for a potential detection of MZR scatter variations at z ∼ 0.63); however, the most significant scatter reduction happens in the high-redshift (z ≳ 3) populations (Figure 3). More complete samples of galaxy populations at these early times with, e.g., JWST are therefore required in order to undertake any weak-FMR-style analysis and detect significant deviations from the z = 0 α_min values. Moreover, a redshift-complete sample would still be limited by our understanding of metallicity in the high-redshift universe. Recently, work has been done to obtain reliable metallicity diagnostics at z > 4 using JWST/NIRSpec (e.g., Nakajima et al. 2023; Shapley et al. 2023; Sanders et al. 2024). However, more complete galaxy samples are required, particularly at the low metallicities seen at this epoch, to fully characterise these diagnostics. As such, it is currently difficult to ensure that α_min values determined observationally are fair comparisons across the broad redshift range examined in this work.
Dependence on small scale physics implementations
We find that Illustris, TNG, and EAGLE have weak FMRs. The α_min values are not the same, however, nor do they evolve in the same fashion in the different models. The value of α_min at any given redshift is a complicated by-product of a number of different physical processes. We therefore caution the reader against drawing conclusions on α_min (and the evolution thereof) from the aggregation of the three individual models. While we have some qualitative understanding of how α_min is set (or changed), the exact mechanisms setting α_min are not entirely clear in detail.
What is clear is that α_min is sensitive to the physics driving galaxy evolution. For example, in Section 3.1 we attributed the lowered α_min values in TNG at z = 0 to the redshift-dependent wind prescription in the TNG model. Through this example, the sensitivity of α_min to the input physics of the simulation models becomes clear. The redshift-dependent winds in TNG increase wind velocities at low redshift, which suppresses star formation. This star formation suppression likely plays a significant role in the overall decrease of α_min seen at low redshifts in TNG. We therefore propose the evolution (or lack thereof) in the scatter about the MZR as a testable prediction to constrain the physical models of the simulations.
All three models examined here rely on effective equation-of-state sub-grid models for the dense, unresolved ISM (Springel & Hernquist 2003 for Illustris/TNG and Schaye & Dalla Vecchia 2008 for EAGLE). In recent years, however, high-resolution simulation modelling has begun to directly resolve the sites of star formation (e.g., the Feedback In Realistic Environments model; Hopkins et al. 2014). The stellar feedback in such simulations is much burstier than in the models presented here. We believe that bursty stellar feedback events should suppress α_min values compared to Illustris, TNG, and EAGLE. Subgrid pressurization lends support to the ISM that is not coupled to star formation, and therefore blunts rapid variations in star formation and stellar feedback. Models without subgrid pressurization (like FIRE) do not have this source of ISM support, and therefore exhibit more rapid (i.e., bursty) variations in star formation and stellar feedback. Bursts may therefore curtail the effectiveness of star formation rates in regulating the gas-phase metallicity of a galaxy. The redshift variations in α_min may therefore provide constraining power on the extent to which galaxies' feedback is bursty or smooth, although it should be noted that even within these smoother feedback models there is some disparity.
CONCLUSIONS
We select central star-forming galaxies with stellar mass 8.0 < log(M*[M⊙]) < 12.0 and gas mass log(M_gas[M⊙]) > 8.5 at z = 0−8 in the cosmological simulations Illustris, IllustrisTNG, and EAGLE. We investigate the extent to which the M10 parameterisation (Equation 1; ZR) of the fundamental metallicity relation (FMR) holds. The parameter of merit in the ZR is α_min, which is a parameter tuned to minimize scatter in the relation. Physically, α_min sets a projection direction of the mass-metallicity-SFR space onto a 2D space with minimal scatter. Many observational studies have claimed that this projection direction does not evolve with redshift (Mannucci et al. 2010; Cresci et al. 2019).
We discuss a new framework in which to examine the ZR as a superset of the MZR (α_min = 0) and the FMR (α_min ≠ 0). We further define both a strong and a weak FMR. A strong FMR indicates that α_min is constant as a function of redshift. Conversely, a weak FMR is one in which α_min varies with redshift (see Figure 1 for the complete illustrated relationship of the ZR). More generally, the strong FMR states that the M10 parameterisation can describe both the scatter and the normalisation of the MZR at the same time.
Our conclusions are as follows: • We find that α_min ≠ 0 for all redshifts in Illustris, TNG, and EAGLE. This shows that there is an FMR in each of these simulations. We note, however, that the uncertainty in α_min in TNG at z = 0 includes α_min = 0.0. We attribute this to the increased suppression of low-redshift star formation in the TNG model.
• Furthermore, we find that there is non-negligible evolution in α_min as a function of redshift (Figure 2). This result suggests that the FMR in Illustris, TNG, and EAGLE is a weak FMR.
• We find that the weak FMR (α_min determined at each redshift independently) consistently reduces scatter by around 10−30% compared to the MZR (left panel of Figure 3). The strong FMR also reduces the scatter compared to the MZR, albeit to a lesser extent than the weak FMR. At high z in EAGLE, however, using the strong FMR actually increases scatter compared to the MZR (centre panel of Figure 3). Overall, we find that at z ≳ 3 fitting galaxies with a weak FMR can reduce scatter ∼5−40% more than using the strong FMR (right panel of Figure 3).
• We suggest that the interpretation of α_min variations is better understood in the context of the slope of Δ (offsets from the MZR) as a function of log SFR (see Figure 4). In this context, the weak FMR suggests that the relationship between metallicity and SFR changes through cosmic time, whereas the strong FMR suggests that it does not change. We also show that the slope in Δ−log SFR space is proportional to α_min (see Equation 3).
Obtaining one relationship that describes the metal evolution of all galaxies across time is an ambitious goal. It is worth appreciating how well a simple linear combination of two parameters can begin to achieve that goal at low redshift. Yet it is not perfect. To begin to rectify this, we develop a substantial overhaul of the current FMR paradigm (summarized in Figure 5). The results from this work show that Illustris, TNG, and EAGLE indicate deviations from the strong FMR. It is presently unclear whether the same is true in observations. Understanding whether the FMR in observations is weak or strong will aid in understanding the recent JWST observations suggesting high-redshift FMR evolution.
Figures 1 and 5. We acknowledge the Virgo Consortium for making their simulation data available. The EAGLE simulations were performed using the DiRAC-2 facility at Durham, managed by the ICC, and the PRACE facility Curie based in France at
Figure 2. α_min values as a function of redshift in Illustris, TNG, and EAGLE, plotted as orange triangles, green stars, and blue circles, respectively. The error bars are obtained by finding the values that bring the scatter to within 5% of the minimized scatter. The gray squares are observational values of α_min from M10, Andrews & Martini (2013), and Curti et al. (2020), determined at z ≈ 0 via SDSS (offset from z = 0 for aesthetic purposes).
Figure 3. Reduction in scatter for the weak FMR versus the MZR, the strong FMR versus the MZR, and the weak FMR versus the strong FMR. Left: the scatter about the FMR obtained by fitting α_min at each redshift individually (σ_weak) divided by the scatter in the MZR at each redshift (σ_MZR), as a function of redshift. The dark and light gray shaded regions (in all panels) represent 5% and 10% variations, respectively, of each ratio. Centre: same as the left panel, but the numerator is the scatter about the FMR evaluated in each redshift bin with the z = 0 calibrated α_min (σ_strong). Right: the previous two panels divided by each other, i.e. the scatter obtained by determining α_min at each redshift independently (σ_weak) divided by that obtained using the z = 0 α_min value (σ_strong).
Figure 5. Summary of key points. The strong FMR (left, red) is where α_min, a parameter tuned to minimise scatter about the MZR, is constant as a function of redshift. Consequently, in thin mass bins, the offsets from the MZR, Δ, as a function of (log) SFR have roughly the same slope at all redshifts (although there is a dependence on the slope of the FMR; see Section 4.1 for more details). The weak FMR (right, blue) is where α_min varies as a function of redshift. In this scenario, the individual redshifts have different strengths of correlation between offsets from the MZR and (log) SFR.
Figure A1. Determination of α_min as a function of redshift for Illustris (top), TNG (centre), and EAGLE (bottom) for the four different sSFMS variations.
Figure B1.
Table 1. All α_min values at z = 0−8 for Illustris, TNG, and EAGLE. These α_min values are determined at each redshift individually. The superscripts are the upper limits of the uncertainty while the subscripts are the lower limits.
TGCC, CEA, Bruyères-le-Châtel. AMG and PT acknowledge support from NSF-AST 2346977. KG is supported by the Australian Research Council through the Discovery Early Career Researcher Award (DECRA) Fellowship (project number DE220100766) funded by the Australian Government. KG is supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. RJW acknowledges support from the European Research Council via ERC Consolidator Grant KETJU (no. 818930) | 10,347.4 | 2024-03-13T00:00:00.000 | [
"Physics"
] |
The effect of varying levels of vehicle automation on drivers’ lane changing behaviour
Much of the Human Factors research into vehicle automation has focused on driver responses to critical scenarios where a crash might occur. However, there is less knowledge about the effects of vehicle automation on drivers’ behaviour during non-critical take-over situations, such as driver-initiated lane-changing or overtaking. The current driving simulator study, conducted as part of the EC-funded AdaptIVe project, addresses this issue. It uses a within-subjects design to compare drivers’ lane-changing behaviour in conventional manual driving, partially automated driving (PAD) and conditionally automated driving (CAD). In PAD, drivers were required to re-take control from an automated driving system in order to overtake a slow moving vehicle, while in CAD, the driver used the indicator lever to initiate a system-performed overtaking manoeuvre. Results showed that while drivers’ acceptance of both the PAD and CAD systems was high, they generally preferred CAD. A comparison of overtaking positions showed that drivers initiated overtaking manoeuvres slightly later in PAD than in manual driving or CAD. In addition, when compared to conventional driving, drivers had higher deviations in lane positioning and speed, along with higher lateral accelerations during lane changes following PAD. These results indicate that even in situations which are not time-critical, drivers’ vehicle control after automation is degraded compared to conventional driving.
Introduction
Advanced driver assistance systems (ADAS) are becoming increasingly accessible, with systems such as the Volvo IntelliSafe Autopilot [1] and the Tesla Model S Autopilot [2] currently providing vehicle automation at SAE Level 2 [3]. The next step in vehicle automation development will be the trial of vehicles operating at SAE Level 3, where the vehicle provides sustained lateral and longitudinal vehicle control, with the understanding that the driver will intervene when requested to do so [3]. Although this increased automation of the driving task has the potential to lead to safety benefits such as a reduced number of crashes [4], along with potentially reducing vehicle emissions [5], it will also result in a fundamental shift in the drivers' role from that of an active participant to a passive supervisor [6,7]. The impact of this role change is likely to lead to reduced situation awareness, or knowledge of what's happening in the environment [8], and "out-of-the-loop" performance problems, which have been shown to impair drivers' ability to assume manual vehicle control in a timely and appropriate manner [9][10][11][12][13][14]. The effects of the changing demands on drivers' attention and involvement in the driving task are likely to vary depending on the level of automation, as defined by SAE [3]. Until recently, much of the research into the effects of automation has focused on drivers' responses to critical situations where the automated system reaches a limitation, and a transfer of control back to the driver is required. The majority of these studies have used driving simulators to investigate the impact of automation on driver behaviour during the transition. Some of the most highly researched issues arising during these critical transitions of control include (i) response times to critical and imminent take-over requests [10,15]; (ii) the pattern of drivers' eye movements during the transition of control [12,14]; (iii) brake and steering patterns after retaking control [16,17]; and (iv) vehicle positioning and stabilisation in the moments after a takeover request [13,18]. Results have shown that while drivers can respond quite quickly to these take-over requests, they are associated with costs in terms of vehicle control [10,13]. For example, when compared to manual driving, results show that following resumption of control from automation, drivers exhibit sharper trajectories and increased levels of high-frequency steering activity, along with increased lateral and longitudinal accelerations, and higher brake pedal inputs [10,16,19]. These effects are exacerbated when the driver engages in other, non-driving related tasks while the automation is on [7].
Although there is mounting evidence to suggest that drivers' performance suffers during system-initiated transfers of control, less is known about the quality of driver-initiated takeovers in non-urgent scenarios. With an increasing number of vehicles having functionality such as Adaptive Cruise Control (ACC) and Lane Keeping Assist (LKA) as standard, these driver-initiated transfers are likely to become more common, for example when drivers wish to change the vehicle's trajectory to overtake a lead vehicle, or to exit a motorway. In these types of situations drivers have more control over the take-over process, and can take some time to regain situation awareness before resuming control. A recent paper by Eriksson & Stanton [20] showed that when drivers were given a takeover request without a time restriction, there was large variability across participants in the time taken to resume control. In particular, there was a significant increase in response time when drivers were engaged in a secondary task during automation-resumption times ranged from 1.97 s to 25.75 s. Engagement in a secondary task did not lead to any significant increase in corrective steering actions, as measured by the standard deviation of steering angular rate. However, there was no comparison between drivers' vehicle control performance with an automated system and conventional, manual driving. Thus, more research is needed to gain a clearer understanding of whether there are any performance decrements associated with drivers' vehicle control in these non-critical situations, and whether the effects vary in any way at different levels of automation. The current study addresses this issue by examining drivers' behaviour during lane changes in manual driving, partially automated driving (PAD), and conditionally automated driving (CAD).
Changing lane represents a safety-relevant driving manoeuvre which incorporates many of the critical aspects of driving. These include basic vehicle control elements, such as smoothly steering from one lane to an adjacent lane, and higher-order perceptual elements, such as maintaining situation awareness, decision-making and decision-execution [21][22][23]. Problems when changing lane can have a negative impact on both traffic safety and traffic flow [24], with approximately 539,000 two-vehicle lane change crashes occurring in the U.S. in 1999 [25]. It is possible that having to re-take control from an automated system to initiate a lane change will increase this risk. Therefore, it is important to gain an understanding of the effects of automation on drivers' overtaking performance.
Previous studies have developed models of drivers' decision-making during lane change and overtaking manoeuvres, identifying a number of key issues which drivers need to consider. These include the choice of lane, gap acceptance, relative speed, distance to the vehicle ahead, and distance to the point at which a lane change must be completed (e.g. [26][27][28][29]). However, little is known about the effects of these factors on drivers' experience of overtaking while using different levels of automation. A study by Abe, Sato, and Itoh [30] showed that drivers had different requirements for passing bicycles and scooters during automated driving compared to when they were in control of the vehicle. They reported higher levels of trust and comfort when a larger lateral distance and earlier steering timing was adopted in automation, even if this did not match their manual driving behaviour. However, the study only examined drivers' subjective evaluations of the overtaking scenarios during automation, and drivers did not have any control over the overtaking manoeuvre itself.
Current study
The aim of the current study was to consider the above issues, by examining drivers' experiences and vehicle control while changing lanes in manual driving, partially automated driving (PAD), and conditionally automated driving (CAD). We looked at how, and when, drivers initiated an overtaking manoeuvre during manual driving, and compared this to when they were interacting with a PAD and CAD system. In PAD, drivers were required to resume manual control of the vehicle in order to make a lane change, while in CAD, the automated system controlled all aspects of the driving task including the lane change, but drivers used the indicator lever to initiate the manoeuvre.
In particular, the study sought to address the following questions: 1. Are there any differences between manual driving, PAD, and CAD, regarding the time at which drivers initiate an overtaking manoeuvre?
2. Are there any differences in the distance to a lead vehicle at which drivers overtake in manual, PAD, and CAD?
3. Are there any differences in drivers' vehicle control, as measured by lateral and longitudinal accelerations and lateral positioning during the overtaking manoeuvre, when drivers are fully in control (manual), compared to when they are required to resume control from automation (PAD)?
4. Are there any differences in drivers' subjective evaluation of PAD and CAD systems?
Participants
Following approval from the University of Leeds Research Ethics Committee (Reference Number LTTRAN-054), 30 participants were recruited for the study. One participant dropped out, leaving a total of 29 participants who completed the experiment (15 male), with an age range of 21-60 years (M = 34.21 years, SD = 8.94). All participants held a full driving licence for a minimum of 2 years (M = 13.62 years, SD = 9.62) and were regular drivers, driving an average of 8092.00 miles per year (SD = 7151.28). Participants were recruited via the University of Leeds Driving Simulator database, and received a payment of £20 in appreciation of their time.
Materials & design
The experiment took place in the University of Leeds Driving Simulator (UoLDS), which consists of a Jaguar S-type cab with all driver controls operational. The vehicle is housed in a 4 m spherical projection dome and has a 300° field-of-view projection system. A Seeing Machines faceLAB eye-tracker was used to record eye movements at a rate of 60 Hz. All drives were completed on a three-lane motorway, which included straight and curved sections of road. It should be noted that this experiment was designed around a UK road, where vehicles travel on the left. There was a continuous stream of slow-moving traffic on the inside lane (left-hand lane) and no traffic in the outside lane (right-hand lane, see Fig 1). The speed limit was set at 70 mph, which is the national speed limit in the UK.
This study adopted a repeated-measures design with three drives: 1. A manual drive, where drivers had full control of the vehicle and were asked to overtake any vehicle travelling more slowly than them in the centre lane (SAE level 0).
2. A partially automated drive (PAD), operating at SAE Level 2, in which the automated system controlled speed, lane positioning, and distance to vehicles ahead (minimum forward headway of 2 s). However, drivers were required to disengage automation and resume manual control to overtake any slow-moving lead vehicles. Vehicle automation could be disengaged by pressing a button on the steering wheel, turning the steering wheel more than 2°, or pressing the brake or accelerator pedals. After completing an overtaking manoeuvre, drivers were required to re-engage the automation by pressing a button on the steering wheel.
3. A conditionally automated drive (CAD), operating at SAE Level 3, in which automation performed the vehicle control aspects of the driving task, including any overtaking manoeuvres. However, drivers had to use the indicator lever to initiate a lane change manoeuvre in either direction, and were required to monitor the system and the driving scene.
The order in which participants experienced each drive was counterbalanced. For the manual drive, participants were asked to travel in the centre lane, and drive at the speed limit. For both the PAD and CAD drives, automation was only available when the driver was in the centre lane and travelling at a speed of approximately 70 mph. Drivers were instructed to engage automation as soon as possible at the start of both automated drives.
There were a total of 12 overtaking events in each drive, all initiated on straight segments of the road. For each of these events, a vehicle entered the driver's lane from the slow lane (left lane in the UK), at a distance of approximately 180 m ahead of the driver, and travelled at a speed of 50 mph, approximately 20 mph slower than the driver's vehicle (see ego vehicle in Fig 1). Each event ended once the driver had returned to the middle lane and re-engaged automation if required. There was a 30 second gap between each event.
Procedure
On arrival at UoLDS participants were briefed about the experiment and filled out a consent form and initial questionnaire containing questions about their age, gender, mileage, etc. To assess whether participants' behaviour was affected by their general attitudes towards automation, eight questions were administered using a seven-point anchored scale. All participants then completed a practice drive, accompanied by the experimenter, where they became accustomed to the simulator environment and vehicle controls. During the practice drive, they first drove manually for approximately 10 minutes and were encouraged to change lanes a number of times. Participants were then given the opportunity to practice the automated drive. They were asked to engage the automation by pressing a button on the steering wheel, after which they completed six overtaking manoeuvres. After the practice drive, participants completed the first experimental drive. This was followed by another short practice drive and the second and third experimental drives. At the start of each drive, they were reminded to overtake every slow moving lead vehicle, and to return to the centre lane once they had done so. Participants were allowed a short break after each drive, during which the next drive was set up in the simulator. Immediately after each of the PAD and CAD drives, they completed a questionnaire, which incorporated questions on system acceptance [31], the System Usability Scale [32], and a Human-Machine Interface (HMI) Evaluation Scale (adapted from [33]). At the end of the experiment, they completed a final questionnaire which included items on their preferred system, and a series of questions about their attitudes towards automation. Only the system acceptance and preferred system items are reported in this paper.
Statistical analysis
Statistical analyses were completed using IBM SPSS v21. Shapiro-Wilk tests showed that the data for maximum lateral accelerations and speed variance were not normally distributed. As the maximum lateral acceleration data were strongly positively skewed, logarithmic transformations were used for the analyses. The speed variance data were moderately positively skewed, and therefore square root transformations were applied based on the recommendations of Tabachnick and Fidell [34]. Analysis of variance (ANOVA) results are based on the transformed responses, while the graphs represent the original units. An alpha value of 0.05 was used as the criterion for statistical significance, and partial eta squared was used to measure effect sizes. Where Mauchly's test indicated a violation of sphericity, degrees of freedom were Greenhouse-Geisser corrected.
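The same workflow can be sketched outside SPSS as follows (illustrative only; column names such as 'participant', 'drive', and 'event' are hypothetical, and sphericity corrections such as Greenhouse-Geisser would still need to be applied separately).

import numpy as np
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def transform_if_skewed(values, skew="strong"):
    """Shapiro-Wilk check, then log-transform strongly skewed or sqrt-transform moderately skewed data."""
    w_stat, p = stats.shapiro(values)
    if p < 0.05:
        return np.log(values) if skew == "strong" else np.sqrt(values)
    return values

# df is assumed to be a long-format pandas DataFrame with one row per participant x drive x event:
# df["max_lat_acc_t"] = transform_if_skewed(df["max_lat_acc"].to_numpy(), skew="strong")
# aov = AnovaRM(df, depvar="max_lat_acc_t", subject="participant",
#               within=["drive", "event"], aggregate_func="mean").fit()
# print(aov)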
Response time
Although they were not explicitly instructed to do so, almost all drivers used their indicator in all three drives (N = 24). Therefore, drivers' indication time was taken as the first signal of a decision to change lane. Response time was measured as the time from when the lead vehicle entered the driver's lane to the time the indicator was first pressed.
A 2-way repeated-measures ANOVA examining the effects of Drive (manual, PAD, and CAD), and Event (1-12)
Inverse time to collision and forward headway
Drivers use the looming retinal image of a lead vehicle as a cue for detecting its deceleration rate [35], and inverse time to collision (invTTC) provides a measure of this visual looming effect [17,[35][36][37]]. To establish whether the looming effect of the lead vehicle had any effect on the time taken to initiate an overtaking manoeuvre, a 2-way repeated measures ANOVA examining invTTC at the time of indication was calculated. The independent variables were Drive (manual, PAD, & CAD) and Event (1-12). There was no significant effect of Drive (F(2,56) = 1.92, p = 0.16) or Event (F(6.50,181.94) = 0.63, p = 0.80) on invTTC at indicator time. Therefore, it appears that the looming effect was not different in any of the three drives. The invTTC values ranged from 0.09 s⁻¹ in CAD and PAD to 0.10 s⁻¹ in manual, suggesting that drivers adopted a 10-11 second time to collision as a comfortable overtaking time in all three drives.
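Both metrics reported in this subsection follow from the standard kinematic definitions sketched below (illustrative, not the study's code); the 100 m gap in the example is hypothetical, while the speeds match the scenario's nominal 70 mph ego and 50 mph lead speeds.

def inverse_ttc(gap_m, ego_speed_ms, lead_speed_ms):
    """Inverse time-to-collision in s^-1; positive while closing on the lead vehicle."""
    return (ego_speed_ms - lead_speed_ms) / gap_m

def time_headway(gap_m, ego_speed_ms):
    """Time headway to the lead vehicle in seconds."""
    return gap_m / ego_speed_ms

MPH_TO_MS = 0.44704
print(inverse_ttc(100.0, 70 * MPH_TO_MS, 50 * MPH_TO_MS))  # ~0.089 s^-1, i.e. TTC of roughly 11 s
print(time_headway(100.0, 70 * MPH_TO_MS))                 # ~3.2 s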
To further explore the effects of the distance to the lead vehicle on overtaking manoeuvres, a 2-way repeated measures ANOVA was conducted with Time Headway (to the lead vehicle) at indicator time as the dependent variable. There was a significant effect of Drive (F(1. Taken together, these results imply that drivers in PAD were likely to take a little extra time to understand both the driving situation and how the system was working prior to initiating an overtaking manoeuvre. However, the fact that there were no significant differences in TTC across the groups suggests that the deceleration caused by the ACC ameliorated the relationship between speed and distance which would otherwise have increased the criticality of any looming effect.
Vehicle control during manoeuvres
Numerous studies have explored lane changing trajectories during manual driving and automated driving under various conditions, for example as a result of driver distraction [38,39], during visual occlusion [40,41], and in different traffic densities [28,42]. The following section uses some of the metrics identified in these studies to understand how PAD affected factors such as drivers' lateral positioning, speed profile, and steering behaviour following a driver-initiated resumption of control in non-critical situations. As CAD did not require any vehicle control input from drivers, it is not included in the following analyses.
Automation disengagement method. In PAD, drivers could disengage automation by either pressing a button on the steering wheel, turning the steering wheel more than 2˚, or pressing the accelerator or brake pedals. As shown in Fig 4, the majority of disengagements occurred by turning the steering wheel, followed by button press disengagements, and use of the accelerator pedal. This is perhaps unsurprising as the lane-change manoeuvre required participants to use the steering wheel to change their trajectory. The brake pedal was not used as a disengagement tool by any participant in this experiment.
Lateral position. The standard deviation of lateral position (SDLP) relative to the centre of the road was used to provide a measure of the quality of the steering movement during the overtaking manoeuvre [21,39]. A two-way ANOVA was conducted to examine the effect of Drive and Event on SDLP. As all drivers completed their overtaking manoeuvre at a different time and position along the road, the start of each driver's overtaking trajectory was anchored around the point at which the lead vehicle appeared in their lane, and measured for 40 seconds after this point. This time window was sufficient to ensure that all lane changes were captured.
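A minimal sketch of the SDLP computation over this anchored window is given below (illustrative only); it assumes lateral position is sampled at a fixed rate, is expressed relative to the road centre, and that t0 marks the moment the lead vehicle appears in the driver's lane.

import numpy as np

def sdlp(time_s, lateral_pos_m, t0, window_s=40.0):
    """Standard deviation of lateral position over the 40 s following lead-vehicle appearance."""
    in_window = (time_s >= t0) & (time_s < t0 + window_s)
    return np.std(lateral_pos_m[in_window])

# One SDLP value per participant, drive, and event:
# sdlp_value = sdlp(t, lat_pos, t_lead_appeared)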
Results indicate that SDLP was significantly larger in PAD (M = 1.45 m, SE = 0.03) than in manual driving (M = 1.39 m, SE = 0.03; F(1,28) = 13.31, p<0.01, ηp² = 0.32; see Fig 5). There was no significant effect of Event (F(5.53, 154.93) = 1.57, p = 0.11) and no interaction effect (F(6.39, 178.97) = 0.46, p = 0.93). As shown in the top graph of Fig 5, drivers started the manoeuvre later in PAD and had a slightly sharper trajectory than in manual driving, confirming the earlier analyses of indicator response time and time headway.
Speed profiles. In order to compare speed behaviour during manual and PAD, drivers' speed profiles were also anchored around the lead vehicle appearance and measured for 40 seconds after this point. A two-way repeated measures ANOVA on mean speed during this time showed no significant effect of Drive (F(1, 28) = 2.37, p = 0.14) or Event (F(5.27, 147.58) = 0.81, p = 0.63) across the 24 manoeuvres (manual and PAD). However, a second two-way ANOVA on the standard deviation of speed during the overtaking manoeuvre found that speed variance was significantly higher in PAD, compared to manual driving (F(1,28) = 49.63, p<0.001, ηp 2 = 0.64). The bottom graph in Fig 5 shows that this variance lasted across the overtaking manoeuvre, suggesting that drivers were less consistent in maintaining their speed after resuming control from automation. These results suggest that the process of turning off the automated system, and resuming control of the brake and accelerator pedals led to fluctuations in speed as drivers became accustomed to the force required to control the vehicle. The speed instability remained across the 12 overtaking manoeuvres, suggesting that the destabilising effects of resuming control from automation did not reduce with repeated exposures. There was no significant effect of Event on the standard deviation of speed (F(11,308) = 1.09, p = 0.37), nor was there any interaction effect (F(11, 308) = 1.07, p = 0.39).
Lateral acceleration. To further explore drivers' vehicle control during the overtaking manoeuvre, maximum lateral acceleration in manual driving and PAD were compared. This measure is considered to be a good indicator of the level of sharpness or jerkiness associated with a lane change [16].
As the overtaking manoeuvre involved changing lanes in two different directions (into and out of the third lane), the metrics for exiting and re-entering the lane were considered separately for this analysis. Previous studies have shown that steering wheel movements during a lane change consist of three sub-movements, the first of which usually provides the greatest change in positioning and the sharpest movement [21,43]. We expected that this movement would occur prior to the point at which the greatest deviation in road position occurred. Thus, the maximum lateral acceleration for the lane exit was measured from the point at which the lead vehicle appeared to the point at which the greatest deviation in road position to the right occurred. The maximum lateral acceleration for lane re-entry was measured from this point to the point at which the greatest deviation in road positioning to the left was achieved.
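A sketch of this two-phase split is shown below (illustrative, not the study's code); it assumes a right-positive lateral position signal on the UK road, so the greatest rightward deviation is the maximum of the series, and that each input series already starts at the lead vehicle's appearance.

import numpy as np

def peak_lat_acc_by_phase(lat_pos_m, lat_acc_ms2):
    """Peak |lateral acceleration| for lane exit and lane re-entry, split at the greatest rightward deviation."""
    abs_acc = np.abs(lat_acc_ms2)
    split = np.argmax(lat_pos_m)                          # greatest deviation into the overtaking lane
    reentry_end = split + np.argmin(lat_pos_m[split:])    # greatest deviation back to the left afterwards
    return abs_acc[:split + 1].max(), abs_acc[split:reentry_end + 1].max()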
A three-way repeated measures ANOVA was conducted on drivers' maximum lateral acceleration, with Drive (manual and PAD), Event (1-12), and Direction (lane exit, lane re-entry) as the independent variables. Results indicate a significant main effect of Drive on maximum lateral acceleration during the overtaking manoeuvres (F(1,28) There were a number of significant interaction effects. Firstly, there was a significant interaction between Drive and Event (F(6.72,188.13) = 2.94, p<0.01, ηp² = 0.10). There was a reduction in maximum lateral accelerations across events in PAD, which led to a decrease in the differences between PAD and manual driving. There was also a significant interaction between Drive and Direction (F(1,28) = 21.89, p<0.001, ηp² = 0.44), with a much larger difference in maximum lateral accelerations between PAD and manual driving during lane exit than lane re-entry. Finally, there was a significant three-way interaction between Drive, Event, and Direction (F(6.81,190.57) = 2.43, p<0.01, ηp² = 0.08), which is displayed in Fig 6. The largest differences in maximum lateral accelerations between manual driving and PAD occurred while moving into the overtaking lane during the first six events. The size of the Drive differences diminished across the final six events, suggesting that drivers had learned to re-take control more smoothly after around the sixth event. However, the lack of overlap between the error bars shows that lateral accelerations during PAD were still significantly higher than in manual driving. On re-entry to the centre lane after overtaking, the difference in maximum lateral acceleration between manual and PAD was smaller, suggesting that drivers' vehicle control in PAD had become more stable over the time taken to complete the overtaking manoeuvre. Nevertheless, there was still a sizeable difference for the majority of drivers during the first five events. The maximum lateral acceleration values were higher for both manual driving and PAD when re-entering the centre lane, suggesting that regardless of condition, drivers moved sharply back into the middle lane once they had overtaken the slow-moving vehicle.
Subjective evaluation
The final analyses focused on gaining an understanding of drivers' subjective evaluations of both vehicle automation systems. In this paper, we focused on drivers' evaluations based on two different questions. At the end of the experiment, drivers were asked to select their preferred automated system, CAD or PAD. The majority of drivers (60%) preferred the conditionally automated system to the partially automated one (36.7%), with 3.3% of participants not selecting a favourite.
Drivers were also asked to provide ratings of system acceptance using Van der Laan et al.'s [31] scale, comprising items measuring how useful and satisfying users found each system. Results showed that there were no significant differences in the ratings of system usefulness (t(28) = 2.03, p = 0.05). However, participants rated the CAD system as being significantly more satisfying to use (t(28) = 2.63, p<0.05; see Fig 7).
Discussion
Although there is increasing evidence to suggest that vehicle automation leads to performance decrements during transfers of control in critical situations [10,11,13,16], there has been little investigation of the quality of driver-initiated transfers in non-urgent situations. This is an important issue, as users of SAE Level 2 and Level 3 vehicle automation are likely to encounter these types of non-urgent situations on a regular basis, for example, if they wish to change lane during motorway driving. Therefore, the aim of the current study was to address two main gaps in the literature. To begin, the study provides one of the first investigations into the vehicle control implications of driver-initiated transitions from automation. Secondly, the study provides a comparison of the effects of different levels of automation on drivers' vehicle control in situations which are not time-critical, by comparing how and when they initiated an overtaking manoeuvre in manual driving, PAD, and CAD.
As outlined in the Introduction, the study specifically addressed four main questions. Our first two questions investigated whether there were any differences between manual driving, PAD, and CAD, regarding the time taken by drivers to initiate an overtaking manoeuvre, and the distance to the lead vehicle at which they initiated this manoeuvre. Eriksson & Stanton [20] showed that when drivers were given a takeover request without a time restriction, transition times were substantially longer than those reported in time-critical studies. However, the transitions in their study were initiated by a system reminder, and were not linked to any changes in the driving environment. Our results show that when asked to respond to elements in the environment i.e. a slow moving lead vehicle, drivers had slightly longer response times in PAD than in manual driving or CAD, and got closer to the lead vehicle before initiating a lane-change. This provides some support for the idea that drivers will take additional time, when available, to regain an understanding of the situation before re-entering the vehicle control loop. However, on average this process only took one extra second, and may just have been a result of drivers moving their hands and feet into position for driving, or checking the system to see who was in control. This implies that even in non-urgent situations, where the ACC would protect them from a crash, drivers do not take much time to re-orient themselves to the situation prior to taking control from automation. There were no significant differences between the inverse TTC values at indicator time, suggesting that the looming effects were the same in all three drives. For the automated drives, the ACC adapted drivers' speed to maintain a minimum time headway of 2 seconds. As drivers initiated their overtaking manoeuvres at approximately a 3 second headway, it is likely that the ACC had started to decelerate, thus minimising the effects on TTC of any slight variations in headway.
Our third question was to establish whether there were any differences in the quality of the overtaking manoeuvre during manual driving compared to during the resumption of control from PAD. Our results provide evidence that even in driver-initiated transfers, with low criticality, there are still significant differences in vehicle control between manual driving and PAD. Drivers displayed greater fluctuations in their speed and lateral position when re-taking control from automation. It is possible that this is a function of the way in which automation was de-activated. For example, if drivers de-activated using the steering wheel, the very action of having to turn the steering wheel more than 2 degrees to turn off automation may have contributed to a sharp trajectory for some drivers. Thus, it may be that this method of disengagement should be avoided when vehicle manufacturers are designing their disengagement criteria. In addition, the process of transferring control of the brake and accelerator pedals is likely to lead to fluctuations in speed while drivers become accustomed to the force required for normal vehicle control. Merat et al. [13] found that it took drivers 35-40 seconds to stabilise their lateral vehicle control after a transfer from automation. The entire overtaking manoeuvre in the current study took less than 30 seconds, suggesting that during a simple overtaking manoeuvre there is not sufficient time for adequate vehicle stabilisation. Interestingly, it appears that increased exposure improved drivers' ability to control some elements of the transition, with an examination of maximum lateral accelerations showing that the difference between manual and PAD reduced during the final six events when the maximum lateral accelerations in PAD became more consistent. This builds on previous research with both ACC and higher levels of automation, which shows that drivers who are familiar with a system are more likely to respond appropriately [15,44,45]. However, although the ability to control the vehicle after a transition improved over time, at least regarding lateral accelerations, responses were still higher in PAD than in manual, suggesting that the learning effect cannot fully mitigate the detrimental effects of being out of the loop during the transfer of control. This variability in speed and vehicle positioning could have the potential to cause confusion for other traffic, and may lead to dangerous interactions if there are other vehicles travelling in the overtaking lane.
Our final question was to evaluate whether there were any differences in drivers' evaluations of using different levels of driving automation. A number of authors suggest that automated driving systems should attempt to mirror individuals' driving styles to increase acceptance and use of these systems (e.g. [46]). However, although drivers enjoyed using both automated systems, they preferred the CAD system, even though its lane-change trajectory was quite different from that adopted by drivers in manual and PAD. This suggests that, given a choice, drivers prefer not to have to intervene with the automated system, even when not engaging in other tasks. In addition, the requirement for automated systems to mirror an individual's driving style may be less important than previously suggested, a finding supported by two recent studies which showed that drivers did not necessarily prefer an automated system that matched their driving style [30,47]. These findings have implications for the potential success of endeavours to decrease vehicle emissions and improve traffic flow through increased vehicle automation and electrification [48,49]. If drivers are happy to use an automated system which doesn't match their driving style, then they are more likely to accept a vehicle which adopts a slower speed or smoother trajectory than they would when driving themselves.
As with all studies, there are some limitations which must be acknowledged. The current study required drivers to overtake 12 times per drive, with each overtaking event occurring in very similar circumstances. The repetitive nature of the task is likely to have impacted on their behaviour, which may have been more varied if the conditions surrounding the overtaking process were changeable. In addition, there was never any traffic in the overtaking lane, meaning the lead vehicle was the only element of the road environment to influence drivers' responses. Additional research is needed to understand if the same responses would be made if drivers also needed to consider the size of the gap available in the overtaking lane. It would also be interesting to understand whether drivers would choose to overtake at all if not instructed to do so.
Conclusions
The current study compares drivers' overtaking behaviour in manual driving, PAD, and CAD; providing insights into chosen headways and vehicle control capabilities in non-urgent situations. Drivers appeared to enjoy using both PAD and CAD systems, suggesting that acceptance of these systems is likely to be high, at least as long as there are no system failures.
Previous research has tended to focus on the effects of vehicle automation during system-initiated transfers of control in critical situations. By focusing on non-urgent, driver-initiated transfers of control, the results of this study provide an important contribution to our understanding of the impacts of different levels of automation on driving performance. The vehicle control metrics indicate that even in non-urgent situations, there are safety implications of retaking control from vehicle automation, which must be considered when designing these systems. Our results show that the additional second taken by drivers to initiate a lane change in PAD was not sufficient for them to regain full situation awareness, with increased variability in vehicle positioning, and both longitudinal and lateral speed, remaining an issue throughout the overtaking manoeuvre. This suggests that even when a driver has control of when to re-enter the driving loop, the effects of being out-of-the-loop remain, which has implications for vehicle manufacturers designing for transitions of control. The results highlight the importance of considering the most effective disengagement criteria, and emphasize the possible difficulties associated with SAE Level 2 and Level 3 systems, which will require drivers to re-enter the driving loop occasionally. Further research is required to understand if solutions such as providing a more informative HMI or shared haptic control [50, 51], or solutions which embed the automated vehicle technology within smart infrastructure [52], would enable a smoother and safer transfer of control in these situations. | 7,812 | 2018-02-21T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Absorption effects in particle oscillations
Particle oscillations in absorbing matter are considered. The approach based on the optical potential is shown to be inapplicable in the strong absorption region. Models with Hermitian Hamiltonian are analyzed.
Introduction
In particle oscillations in the medium, absorption can play an important role, for example in the K⁰–K̄⁰ [1][2][3][4] and n–n̄ [5][6][7][8] oscillations. In this paper we consider n–n̄ transitions in the medium followed by annihilation, n → n̄ → M. (1) Here M denotes the annihilation mesons. The reason for considering this process is that the absorption (annihilation) of the n̄ is extremely strong.
In the standard approach (later on referred to as the potential model) the n̄-medium interaction is described by the antineutron optical potential U_n̄. We have objections to this model (Sect. 2). The alternative models with a Hermitian Hamiltonian are considered in Sect. 3. This fact should be emphasized particularly.
In Sect. 5 the results are summarized. The problems of the model with a Hermitian Hamiltonian are pointed out as well. The restriction on the free-space n–n̄ oscillation time τ critically depends on the description of absorption. In this regard, the main goal of this paper is to consider the absorption model itself.
Potential model
We consider process (1). In the standard approach [5][6][7] the n–n̄ transitions in the medium are described by coupled Schrödinger equations (2) with Im U_n̄ = −Γ/2 and n̄(0, x) = 0. Here U_n and U_n̄ are the potential of n and the optical potential of n̄, respectively; ε_nn̄ is a small parameter with ε_nn̄ = 1/τ, where τ is the free-space n–n̄ oscillation time, and Γ is the annihilation width of the n̄.
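Equations (2) are not reproduced legibly in this copy. For orientation, the coupled equations of the potential model are usually written in the n–n̄ oscillation literature in the form below (shown only as the standard form, with kinetic terms absorbed into the potentials or the plane-wave ansatz; the paper's notation may differ slightly).

\begin{align}
\left(i\partial_t - U_n\right) n(t,\mathbf{x}) &= \epsilon_{n\bar n}\,\bar n(t,\mathbf{x}),\\
\left(i\partial_t - U_{\bar n}\right) \bar n(t,\mathbf{x}) &= \epsilon_{n\bar n}\, n(t,\mathbf{x}),
\qquad \mathrm{Im}\,U_{\bar n} = -\Gamma/2, \quad \bar n(0,\mathbf{x}) = 0 .
\end{align}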
In the lowest order in ε_nn̄ the process width Γ_pot is given by Eq. (3) [5][6][7]. The optical potential U_n̄ is the basic element of the model. In this connection the following problems arise: 1. The optical model was developed for Schrödinger-type equations. The physical meaning of Im U_n̄ follows from the corresponding continuity equation. Coupled Eqs. (2) give rise to Eq. (4), but the continuity equation cannot be derived from (4).
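Equation (3) is likewise not legible here. The lowest-order width of the potential model is commonly quoted in the literature in the form below, which also exhibits the Γ-dependence criticized in point 3 (again shown only as the standard expression, not necessarily the paper's exact Eq. (3)).

\begin{equation}
\Gamma_{\mathrm{pot}} \simeq \frac{\epsilon_{n\bar n}^{2}\,\Gamma}{\left(\mathrm{Re}\,U_{\bar n}-U_{n}\right)^{2}+\Gamma^{2}/4}\,.
\end{equation}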
2. To get $\Gamma_{\rm pot}$, the optical theorem or the condition of probability conservation is used. However, the S-matrix is essentially non-unitary.
3. The structure and Γ-dependence of (3) provoke some objections. Due to this an alternative model should be considered.
Models with a Hermitian Hamiltonian
The interaction Hamiltonian of process (1) is given by Eq. (5), where $\mathcal{H}_{n\bar{n}}$ and $\mathcal{H}$ are the Hamiltonians of $n\bar{n}$ conversion [5] and of the $\bar{n}$-medium interaction, respectively. The background neutron potential is included in the neutron wave function, Eq. (6).
Model with a bare propagator
The $n\bar{n}$ conversion comes from the exchange of Higgs bosons with $m_H > 10^5$ GeV. The $\bar{n}$ then annihilates in a time $\tau_a \sim 1/\Gamma$. We deal with a two-step process with a characteristic time $\tau_a$.
The general definition of the antineutron annihilation amplitude $M_a$ is given by Eq. (7). Here $|0_{\bar{n}} p\rangle$ is the state of the medium containing the $\bar{n}$ with 4-momentum $p = (\epsilon, \mathbf{p})$; $\langle M|$ denotes the annihilation mesons, and $N$ includes the normalization factors of the wave functions.
The antineutron annihilation width $\Gamma$ is expressed through $M_a$; in this expression $N_1$ is the normalization factor.
The amplitude $M_1$ of process (1), in the lowest order in $\mathcal{H}_{n\bar{n}}$, contains the antineutron propagator $G_0$. Since $\mathbf{p}_{\bar{n}} = \mathbf{p}$ and $\epsilon_{\bar{n}} = \epsilon$, one has $G_0 \sim 1/0$. $M_a$ contains all the $\bar{n}$-medium interactions followed by annihilation, including antineutron rescattering in the initial state. So in this case the antineutron propagator is bare.
We thus deal with an infrared singularity. To solve this problem, a field-theoretical approach with a finite time interval has been proposed [9]. The probability of process (1) was found in [10] to be given by Eq. (12), where $W_f$ is the free-space $n\bar{n}$ transition probability. Equation (12) leads to a very strong restriction on the free-space $n\bar{n}$ oscillation time: $\tau = 10^{16}$ yr.
Auxiliary process
Starting from (5) and (6) we have obtained the singular amplitude $M_1$. To gain a better understanding of the problem, we consider the $n\bar{n}$ transitions in the medium followed by $\beta^+$-decay, process (13). The neutron wave function is given by (6). The interaction Hamiltonian (14) contains $V$, defined by (2), and $\mathcal{H}_W$, the Hamiltonian of the decay $\bar{n} \to \bar{p} e^+ \nu$. In the lowest order in $\mathcal{H}_{n\bar{n}}$ the amplitude $M_2$ contains $M_d$, the amplitude of the $\beta^+$-decay, and $G$, the antineutron propagator.
The corresponding process width is $\Gamma_2$. Here the propagator is dressed due to the additional field $V$.
Model with a dressed propagator
We return to process (1). Let us try to construct a model with a dressed propagator. By analogy with (14), in the Hamiltonian $\mathcal{H}$ (see (5)) we separate out the scalar field $V_1$, with $\mathcal{H}_a$ the annihilation Hamiltonian. The antineutron annihilation amplitude $M_{an}$ is now defined through $\mathcal{H}_a$. With this interaction Hamiltonian, in the lowest order in $\mathcal{H}_{n\bar{n}}$ the amplitude $M_3$ of process (1) contains the dressed antineutron propagator $G_d$, with $V_1$ playing the role of the antineutron self-energy $\Sigma$. $M_3$ corresponds to the first order in $\mathcal{H}_{n\bar{n}}$ and to all orders in $V_1$ and $\mathcal{H}_a$. Compared to (7), $M_{an}$ is calculated through the reduced Hamiltonian $\mathcal{H}_a$ instead of $\mathcal{H}$.
The corresponding process width is $\Gamma_3$, Eq. (21). The amplitude $M_3$ is non-singular because the propagator is dressed. The antineutron self-energy $\Sigma = V_1$ appears due to the separation of the field $V_1$. This procedure seems artificial and unjustified. There are no similar problems for process (13), since the self-energy and the decay of $\bar{n}$ are generated by different fields, $\mathcal{H}_W$ and $V_1$. This point should be given particular emphasis.
In any case $\Gamma_{an} \sim \Gamma$, and so Eq. (22) follows.
Discussion
First of all we compare the potential model with the model with a dressed propagator. In (21) we have to take the same parameters as in the potential model, $V_1 = V$ and $\Gamma_{an} = \Gamma$; then we get Eq. (23). The same conclusion has been drawn in [8], where it was shown that double counting leads to full cancellation of the leading terms. However, in [8] the model with a bare propagator was considered, and the approach with a finite time interval was used, which can provoke additional questions. The consideration given above is more transparent.
If we want to remove double counting, we have to make direct calculations of the off-diagonal matrix element (see Sect. 3).
Let us compare the Γ-dependence of the results. In (21) one should use realistic parameters.
We take $V_1 = \mathrm{Re}\,V$; then we obtain Eq. (24) and, therefore, $\Gamma_3 \sim \Gamma$. For the $K^0\bar{K}^0$ transitions in the medium followed by decay and regeneration of the $K^0_S$ component, an identical $\Gamma$-dependence takes place [11,12]. In the potential model $\Gamma_{\rm pot} \sim \Gamma$ only for light absorption. Indeed, if $\Gamma/2 \ll |\mathrm{Re}\,V|$, then Eq. (25) follows. In the first approximation (25) coincides with (24). This agreement was expected, since the dominant role is played by $\mathrm{Re}\,U_{\bar{n}}$.
We now consider the difference between the results in the region of strong absorption, as seen from the ratio $r = \Gamma_3/\Gamma_{\rm pot}$. For nuclear matter we take $\Gamma = 100$ MeV. If $|\mathrm{Re}\,V| = 50$ MeV, then $r = 1$; if $|\mathrm{Re}\,V| = 10$ MeV, then $r = 25$. When $|\mathrm{Re}\,V|$ decreases, $\Gamma_3$ and $r$ increase.
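The explicit formulas for $\Gamma_{\rm pot}$, $\Gamma_3$, and the ratio $r$ were lost in extraction. The short check below is a sketch under the assumption that, in the strong-absorption region, $\Gamma_{\rm pot} \approx 4\epsilon_{n\bar n}^2/\Gamma$ and $\Gamma_3 \approx \epsilon_{n\bar n}^2\Gamma/(\mathrm{Re}\,V)^2$, so that $r = \Gamma^2/(4(\mathrm{Re}\,V)^2)$ independently of $\epsilon_{n\bar n}$; these assumed forms reproduce the values quoted in the text but are not taken from the original equations.

```python
# Rough numerical cross-check of the ratio r = Gamma_3 / Gamma_pot in the
# strong-absorption region, under the ASSUMED forms
#   Gamma_pot ~ 4 eps^2 / Gamma   and   Gamma_3 ~ eps^2 * Gamma / (ReV)^2,
# which give r = Gamma^2 / (4 ReV^2) independently of eps.

def ratio_r(gamma_mev: float, re_v_mev: float) -> float:
    """Ratio of the dressed-propagator width to the potential-model width."""
    return gamma_mev**2 / (4.0 * re_v_mev**2)

if __name__ == "__main__":
    gamma = 100.0  # annihilation width taken for nuclear matter, MeV
    for re_v in (50.0, 10.0):
        print(f"|ReV| = {re_v:5.1f} MeV  ->  r = {ratio_r(gamma, re_v):.1f}")
    # Prints r = 1.0 for |ReV| = 50 MeV and r = 25.0 for |ReV| = 10 MeV,
    # matching the values quoted in the text.
```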
In the oscillation of other particles (for example, K 0K 0 ) the difference between Γ 3 and Γ pot is less, however this difference can be essential for the problem under study.
For the realistic parameters $\Gamma = 100$ MeV and $|\mathrm{Re}\,V| = 10$ MeV, the lower limit on the free-space $n\bar{n}$ oscillation time is $\tau = 1.2 \cdot 10^9$ s. When $V_1 = 0$, the model with a dressed propagator converts into the model with a bare propagator, which gives $\tau = 10^{16}$ yr. On this basis one can accept that the lower limit on the free-space $n\bar{n}$ oscillation time lies in the range $10^{16}\ \mathrm{yr} > \tau > 1.2 \cdot 10^9$ s.
Thus we conclude the following: (1) The smaller $|\mathrm{Re}\,V|$ (the antineutron self-energy), the greater the difference between the results (see (27)); the difference is maximal for the model with a bare propagator.
Finally, in the strong absorption region the model with an optical potential is inapplicable.
In the models with a Hermitian Hamiltonian the optical potential is not used. The conclusions made above do not depend on the specific models of the blocks M a and M an .
Conclusion
The potential model is applicable only in the case of slight absorption.
If absorption is strong, the potential model is inapplicable: (1) It contains double counting.
This is the main statement of this paper. The chief drawback of the model with a dressed propagator is that the procedure of separating out $V_1$ is artificial and unjustified. In our opinion the model with a bare propagator is preferable; there are many arguments in favor of it [10]. The only objection to this model is that it gives a result which differs essentially from the result of the potential model. The potential model has been considered above.
Since the problem is one of great subtlety, further investigations in the framework of the approach with a Hermitian Hamiltonian are needed. | 2,227.4 | 2018-09-08T00:00:00.000 | [
"Physics"
] |
A Sub-solar Fe/O, logT~7.5 Gas Component Permeating the Milky Way's CGM
Our study focuses on characterizing the highly ionized gas within the Milky Way's (MW) Circumgalactic Medium (CGM) that gives rise to ionic transitions in the X-ray band 2-25 Å. Utilizing stacked Chandra ACIS-S MEG and LETG spectra toward QSO sightlines, we employ the self-consistent hybrid ionization code PHASE to model our data. The stacked spectra are optimally described by three distinct gas phase components: a warm (log T $\sim$ 5.5), a warm-hot (log T $\sim$ 6), and a hot (log T $\sim$ 7.5) component. These findings confirm the presence of the hot component in the MW's CGM, indicating its coexistence with a warm and a warm-hot gas phase. We find this hot component to be homogeneous in temperature but inhomogeneous in column density. The gas in the hot component requires over-abundances relative to solar to be consistent with the Dispersion Measure (DM) from the Galactic halo reported in the literature. For the hot phase we estimated a DM = $55.1^{+29.9}_{-23.7}$ pc cm$^{-3}$. We conclude that this phase is either enriched in Oxygen, Silicon, and Sulfur, or has metallicity over 6 times the solar value, or a combination of both. We do not detect Fe L-shell absorption lines, implying O/Fe $\geq$ 4. The non-solar abundance ratios found in the super-virial gas component in the Galactic halo suggest that this phase arises from Galactic feedback.
INTRODUCTION
The circumgalactic medium (CGM) refers to the gaseous component located beyond the galactic disc and within the galactic virial radius. The CGM is a mixed medium with complex structures like filaments, bubbles, and multiphase regions. It comprises ionized and neutral gas with different temperatures and densities (e.g., Tumlinson et al. 2017; Mathur 2022).
The CGM is thought to play a crucial role in regulating the exchange of matter and energy between the galactic disc and its environment (e.g., Kereš et al. 2005, Zheng et al. 2015). Numerical simulations have shown that shock-heating processes might have heated and ionized the gas during galaxy formation, preventing it from falling into the galactic disc. On the other hand, feedback processes such as supernovae and galactic winds could have expelled large amounts of material into the CGM (e.g., Stinson et al. 2012). Processes such as the infall of material (commonly pristine gas) from the intergalactic medium towards the galactic disc and the expulsion of metals formed in stars from the disc into the surrounding region make the CGM an important clue to understanding galactic formation and evolution (e.g., Tumlinson et al. 2017; Li et al. 2018). These studies suggest that the CGM is a large reservoir of gas that can fuel future star formation within the galaxy and the place where the missing baryons and metals could reside.
In the local universe, at galactic scales, the number of baryons observed in galaxies with luminosity less than or near the Schechter L★ is only about half of the baryonic mass predicted by Big Bang nucleosynthesis and inferred from the density fluctuations of the cosmic microwave background. For less massive galaxies, even more mass is missing (e.g., Kirkman et al. 2003 and references therein; Planck Collaboration et al. 2016; McGaugh et al. 2010). Coupled with the Missing Baryon Problem, there is also the Missing Metal Problem at galactic scales. The number of metals expected from the stars observed and from the star formation history of galaxies is about two times larger than the number of metals we can observe (Peeples et al. 2014).
Theoretical studies suggest that the missing baryons and missing metals in galaxies might reside in the CGM; however, its diffuse nature makes it difficult to study this material (e.g., Feldmann et al. 2013;Mathur et al. 2021;Mathur et al. 2023).
Because of our unique point of reference, studying the CGM of the Milky Way is much easier than studying it in external galaxies. A common way to detect the CGM of the Milky Way is by studying it in absorption against bright background sources, such as quasars (e.g., Gupta et al. 2012, Mathur 2022).
The CGM is expected to be close to the galaxy's virial temperature. For a Milky Way-like galaxy, this temperature is about log(T/K) ∼ 6. We will call the gas at this high temperature the warm-hot component. For the Milky Way, this gas phase has been studied in emission and absorption using UV and X-ray spectra (e.g., Wang et al. 2005; Gupta et al. 2012; Das et al. 2019a). Gupta et al. (2012) detected oxygen absorption lines at z = 0 towards extragalactic sight lines and, combining their results with the emission results from Henley et al. (2010), derived that the warm-hot phase of the Milky Way CGM is a massive component of about log(M/M⊙) ∼ 10, which extends over a vast region around the galactic disc; their results suggest that it is in this component where the missing baryons and metals could reside. In recent studies, Das et al. (2019a) and Das et al. (2021b) detected a hotter gas component in the Milky Way CGM for the first time. They found a gas phase with super-virial temperature at log(T/K) ≳ 7. Hereafter, we will refer to this component as hot. Using deep XMM-Newton RGS observations of the blazar 1ES 1553+113, Das et al. (2019a) detected in absorption Ne X K associated with this hot component. Later on, in 2021, Das et al. (2021b) also found Si XIV K and Ne X K in the line of sight towards the blazar Mkn 421. Their results in these two lines of sight indicate that the CGM is a multiphase system with warm (log(T/K) ∼ 5.5), warm-hot (log(T/K) ∼ 6), and hot (log(T/K) ∼ 7.5) components.
Focused on these gas components, and using Chandra observations towards 47 different sightlines together with a stacking technique, Lara-DI et al. (2023) (hereafter LDI-2023a) were able to detect Si XIV K and, for the first time, S XVI K at z = 0 in absorption. Their discovery confirms the presence of a hot component in the Milky Way CGM, suggesting it is a widespread component throughout the entire CGM. The presence of the hot component was also confirmed by McClain et al. (2024) in the sightline toward NGC 3783.
The newly discovered hot component was also detected in emission (e.g., Das et al. 2019b; Bluem et al. 2022; Gupta et al. 2021; Bhattacharyya et al. 2023; Gupta et al. 2023). Bhattacharyya et al. (2023) studied in emission the hot phase of the CGM around the Mkn 421 sightline. Their study complements the work done by Das et al. (2021b) in the line of sight towards Mkn 421, showing that the emitting gas has higher density, possibly coming from regions close to the Galactic disc. On the other hand, the absorption measurements arise from low densities extending to the virial radius. They found a scatter in the temperature in both the warm-hot component and the hot component; this contributes to understanding the CGM as a multiphase system.
Despite these recent efforts, characterizing the CGM remains challenging. Information about its geometry, homogeneity, and how the different gas phases extend through the entire Milky Way CGM remains to be determined. The super-virial hot component is not expected from theoretical studies, and its discovery opens new questions crucial to better understanding galactic formation and evolution. This paper focuses on absorption studies of the Milky Way CGM. We use the LDI-2023a data sample to characterize the CGM by identifying the gas phases from which these and other absorption lines come. LDI-2023a focused on the spectral range 4-8 Å; now, we will focus on the study of the spectral range from 2 to 25 Å using a self-consistent ionization model (PHASE, Krongold et al. 2003) to fit the data. We structure our paper as follows. In Section 2, we present the sample selection. In Section 3, we describe the data analysis. In Section 4, we show our results, and in Section 5, we discuss their implications.
DATA SAMPLE
This paper uses the same data sample and stacked spectra used in LDI-2023a. The data sample consists of 47 different sight lines to QSOs, Seyfert-1 galaxies, and Blazars from Chandra X-ray Observatory public observations. This sample excludes changing-look Active Galactic Nuclei (AGN), which sometimes act like Seyfert-1 and sometimes like Seyfert-2 objects. We also exclude long observations with very high S/N that would dominate the stacked spectra in our analysis, along with the 1ES 1553+113 and Mkn 421 sightlines used by Das et al. (2019a) and Das et al. (2021b). Finally, NGC 4051 (z = 0.002), which presents at z = 0 a clear contribution of narrow absorption lines due to its Warm Absorber (WA), was also excluded (e.g., Krongold et al. 2007). In this way, we avoid intrinsic absorption lines of the WA contaminating our analysis. The sightlines chosen have high Galactic latitude, meaning our data has a small cross-section with the Milky Way's Interstellar Medium (ISM) (see LDI-2023a for more details).
The stacked spectra with MEG observations comprise 46 (10.96 Ms) of the 47 sightlines, while LETG has nine (1.09 Ms). Except for one sightline (TON S 180), all the sightlines covered by LETG are contained within MEG. The contribution of LETG sightlines to the overall exposure time of MEG amounts to about 10%. It is important to note that MEG data is dominated by sources with WA, whereas LETG is not.
We study the spectral range 2-25 Å, with a signal-to-noise (S/N) ratio of 2181 for MEG and 651 for LETG. In LDI-2023a, the complete list of sightlines, the Aitoff projection of these targets, and the complete table with the list of individual observations are included. See also LDI-2023a for details on how the data was reprocessed and stacked.
Fitting the Continuum
To analyze the MEG and LETG stacked spectra, we used the spectral fitting software XSPEC (v12.13.0) with χ² statistics. Errors presented in this paper correspond to the 1σ level.
We first modeled the continuum of each spectrum from 2.0 to 25.0 Å.We used a model comprising a power law (powerlaw), a black body (bbody) component, an ISM absorption component (Tbabs), and as many Gaussian lines (agauss) as required to account for AGN intrinsic absorption or emission features (i.e., lines produced intrinsically in the sources and not arising from the Milky Way's CGM).
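As an illustration of the continuum setup described above, a minimal PyXspec sketch is given below; the spectrum file name, the ignored energy range, the parameter starting values, and the use of a single agauss line are illustrative assumptions rather than the actual values adopted in this work.

```python
# Sketch of the continuum model described above (PyXspec interface to XSPEC).
# File names and starting values are illustrative placeholders.
from xspec import AllData, Model, Fit

AllData("meg_stacked.pha")           # hypothetical stacked spectrum file
AllData.ignore("**-0.5 6.2-**")      # keep ~2-25 Angstrom (about 0.5-6.2 keV)

# Power law + black body, absorbed by the ISM, plus one wavelength-space
# Gaussian (agauss) for an intrinsic AGN feature; add more agauss terms as needed.
m = Model("TBabs*(powerlaw + bbody + agauss)")
m.TBabs.nH = 0.02                    # 10^22 cm^-2, placeholder ISM column
m.powerlaw.PhoIndex = 1.8            # placeholder photon index
m.bbody.kT = 0.1                     # keV, placeholder
m.agauss.LineE = 18.97               # Angstrom, placeholder intrinsic line

Fit.statMethod = "chi"               # chi^2 statistics, as stated in the text
Fit.perform()
```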
PHASE
To model the absorption lines in the spectra arising from the gas components at z = 0 of the Milky Way's CGM, we used the hybrid (photo + collisional) ionization code PHASE (Krongold et al. 2003). This code allows the analysis of X-ray spectra by generating a synthetic spectrum that fits the data and gives the gas phase component that best describes it. PHASE has as free parameters the temperature of the gas (T), the hydrogen column density (N_H), the redshift (z), the micro-turbulent velocity of the gas (v_turb), the ionization parameter (U), and the abundances of the elements, which are set to solar by default.
The CGM is expected to have densities for which photoionization by the metagalactic background should be negligible. Therefore, in the spectroscopic modeling we fixed the ionization parameter of the gas to low values (log U ∼ −4). Since our study focuses on the Milky Way's CGM, we set the redshift z ≈ 0. The width of the lines was set to be less than the resolution element of the instrument; therefore, our model is constrained to fit narrow lines, with v_turb ≈ 10 km s⁻¹.
In this way, we added a PHASE component to the continuum model. This first component was set to fit the virial (warm-hot) component by considering only the spectral ranges where Ne IX and O VII appear and letting the model find the best log(T/K) and log(N_H/cm⁻²) for this component. Once we had the best temperature for this component, we extended our analysis to cover all spectral ranges and introduced an additional hotter component (hot) with a higher temperature. Finally, we included a warmer (warm) component with a lower temperature and fitted the three components simultaneously. However, it is important to note that our results do not depend on the order in which the PHASE components are added to the continuum.
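The sketch below, continuing the previous one, only illustrates this sequential strategy of adding one phase at a time. PHASE is a locally developed code, so the local-model package name, its parameter names (logT, logNH, Redshift), and the loading path are hypothetical placeholders, not its real interface.

```python
# Illustrative continuation of the previous sketch: adding absorption phases
# one at a time. "phase" here stands for a locally loaded XSPEC model
# (hypothetical name and parameters); it is NOT a standard XSPEC component.
from xspec import AllModels, Model, Fit

AllModels.lmod("phase", dirPath="/path/to/local/models")  # assumed local package

m = Model("phase*TBabs*(powerlaw + bbody)")   # start with the warm-hot phase only
m.phase.logT = 6.2                            # hypothetical parameter names
m.phase.logNH = 19.0
m.phase.Redshift = 0.0                        # z ~ 0 (Milky Way CGM)
Fit.perform()                                 # constrain on the Ne IX / O VII ranges

# Then add the hot and warm phases and refit all ranges simultaneously:
m = Model("phase*phase*phase*TBabs*(powerlaw + bbody)")
# ...set starting temperatures near logT ~ 5.5, 6, and 7.5 before refitting.
Fit.perform()
```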
Abundances
In the X-ray band, we do not have a diagnostic for Hydrogen, so we cannot constrain the absolute metallicity of the CGM.However, we can study the relative abundances of different elements.To do this, we allow each element's abundance to vary independently when necessary.
RESULTS
Figures 1 and 2 show the lines fitted with PHASE. The best model fitting the MEG and LETG data comprises three different gas components. Each component models different absorption lines in the rest frame of the Milky Way. According to our models, some ionic transitions span two different temperature components, while others are produced in a single gas phase.
In Table 2, we present the physical parameters of the three components modeled in the MEG and LETG datasets. We display the position of each absorption line, its Equivalent Width (EW), and its Ionic Column Density (N_ion). Next to the Si XIV K and S XVI K parameters in the hot component, we also include in columns (12) and (14) the EW and the ionic column density of the lines as reported in LDI-2023a.
ACIS-S HETG-MEG
The first component modeling the MEG data is a warm-hot component at log(T/K) = 6.19 +0.06/−0.08 with column density log(N_H/cm⁻²) = 19.12 +0.08/−0.10 (hereafter MEG-WARM-HOT). The absorption lines modeled with this component are N VI K, N VII K, Ne IX K, three O VII K transitions, and O VIII K; it improves the fit by a Δχ² of 61 for two free parameters. The second and hotter component (hereafter MEG-HOT) has log(T/K) = 7.50±0.03 and log(N_H/cm⁻²) = 21.30. The absorption lines modeled with this component are O VIII K, Si XIV K, and S XVI K; it improves the fit by a Δχ² of 57 for two additional free parameters. The third component comprising the model fitting the MEG data is a warm component at log(T/K) = 5.39 +0.13/−0.07 with column density log(N_H/cm⁻²) = 18.08 +0.31/−0.40 (hereafter MEG-WARM). Only the N VI K absorption line is modeled with this component, improving the fit marginally by a Δχ² of 2 for two additional free parameters, see Table 1. N VI K contributes to the warm and warm-hot components, while O VIII K contributes to both the warm-hot and hot components. On the other hand, N VII K, Ne IX K, and the O VII K lines contribute exclusively to the warm-hot component, and Si XIV and S XVI contribute exclusively to the hot component.
ACIS-S LETG
The best model fitting the LETG data comprises three distinct gas components. The first component (hereafter LETG-WARM-HOT) represents a warm-hot component with a temperature of log(T/K) = 6.56±0.12 and a column density of log(N_H/cm⁻²) = 19.56 +0.21/−0.27. This component models the Ne IX K, O VII K, and O VIII K absorption lines. It improves the fit only by a Δχ² of 10 for two additional free parameters.
The second and hotter component modeling the LETG data is a gas component with a temperature of log(T/K) = 7.50±0.05 and a column density of log(N_H/cm⁻²) = 21.75 +0.08/−0.14, referred to here as LETG-HOT. The absorption lines modeled with this component are O VIII K, S XVI K, and Si XIV K. This component improves the fit by a Δχ² of 31 for two additional free parameters.
Finally, the third component fitting this dataset is characterized by a temperature of log(T/K) = 5.64±0.06 and a column density of log(N_H/cm⁻²) = 19.76 +0.09/−0.11 (hereafter LETG-WARM). The absorption lines modeled with this component are O VI K and O VII K. This component improves the fit by a Δχ² of 36 for two additional free parameters, see Table 1.
In our results for the LETG data, we find O VI K contributing exclusively to LETG-WARM. O VII is present in the LETG-WARM and LETG-WARM-HOT components. Ne IX is found exclusively in the LETG-WARM-HOT component. O VIII contributes to both LETG-WARM-HOT and LETG-HOT, while Si XIV and S XVI are exclusively found in LETG-HOT.
Metallicity of the CGM
In the MEG data, we could not model MEG-HOT with a solar Fe abundance, since it grossly overpredicts the Fe absorption lines. In Figure 3, we present the spectrum of the MEG data in the range 10.4-11.2 Å. In this figure, we show how the model overpredicts the absorption lines of Fe when its abundance is that of Oxygen. To avoid this, the model must consider an abundance of Fe at least four times lower than that of Oxygen. We find that the abundances of the other elements correspond to a solar mixture.
In the LETG data, the model also overpredicts the Fe L-shell lines in the hot component. Figure 3 shows how the model overpredicts Fe when its abundance is that of Oxygen. In this case, the abundance of Fe in the hot component needs to be at least six times lower than that of Oxygen to properly model the data.
We also noticed that for LETG the code prefers ∼ 4 times more S/O than the solar value. In this same dataset, the code prefers ∼ 1.5 times solar Si/O. Nonetheless, the data is not very sensitive to the abundance of these two elements.
DISCUSSION
We have found that the Milky Way's CGM comprises at least three phases, a warm (log(T/K) ∼ 5.5), a warm-hot (log(T/K) ∼ 6), and a hot (log(T/K) ∼ 7.5) component, confirming previous results pointing to the third phase with high temperature (Das et al. 2019 and 2021; LDI-2023a). This hot phase has a column density at least an order of magnitude higher than the warm-hot and warm components.
Physical state of the gas in the hot component
It is interesting to note that we do not see Ne X K in MEG or LETG. This is consistent with the high temperature of the hot component. Ne X peaks around log(T/K) ∼ 6.7, and the fraction of Ne X decreases sharply for higher temperatures. At log(T/K) ∼ 7.5, Ne is almost completely ionized. Therefore, our data is consistent with not being able to detect Ne X in MEG or LETG, and this is a nice confirmation that the temperature of the gas reaches log(T/K) ∼ 7.5. The Ne X line should be very weak because the plasma is very hot; therefore, it is not detectable at the S/N of our data. In Figure 4 we show our model fitting the data at the position of the rest wavelength of Ne X. We see no prominent absorption line of this ion. Das et al. (2019a) and Das et al. (2021b) reported the detection of this line at the same temperature, but their model required a super-solar Ne/O abundance. If the plasma had a temperature a factor of two or three lower, say log(T/K) ∼ 7, the Ne X line would be very prominent and detectable in our spectra. This result excludes the presence of large quantities of gas in the CGM with temperatures log(T/K) over ∼ 6.5 and below ∼ 7.5 in the sightlines probed in the stacked spectra.
Comparison between detections of LDI-2023a and this work
As mentioned in § 1, LDI-2023a clearly detected Si XIV K and S XVI K in the same stacked spectra presented here. In Table 2, we show the EW and N_ion predicted by our model with PHASE, contrasted with those measured empirically by LDI-2023a over MEG and LETG with Gaussians. The striking similarity in these values shows that the detection of these elements is a robust result in these studies. Therefore, the results presented here fully confirm, with a self-consistent ionization model, the presence of a hot super-virial gas component in the Milky Way's CGM.
Statistical significance of each component
In Table 1, we show how much each component changes the χ² statistic for two free parameters (namely, the temperature and the column density of the gas). For the MEG data, we find that the gas components weighing most in the statistic are MEG-WARM-HOT with Δχ² = 61 and MEG-HOT with Δχ² = 57. The fact that these phases are statistically significant is not a surprise, given the number of absorption lines fit with the MEG-WARM-HOT component and given the high column density of the MEG-HOT component. On the other hand, MEG-WARM only changes the statistic by a Δχ² = 2. We note, however, that there is vast evidence of the presence of this component in UV data (e.g., Tumlinson et al. 2017 and references therein).
In LETG we find that LETG-WARM and LETG-HOT weigh similarly, with Δχ² = 36 and Δχ² = 31 respectively, making both components statistically significant. On the other hand, the LETG-WARM-HOT component only contributes Δχ² = 10 in our model, making it barely significant. This can be attributed to the low S/N ratio of these data, given that the column densities of this component found in MEG and LETG are similar. We notice, however, that the O VII K line is clearly detected in the data but with less significance (≈ 2σ).
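As a rough illustration of how these Δχ² values map onto detection significances, one can use a Wilks-type approximation for two additional free parameters. The short script below is only an approximation (the regularity conditions for an added component are not strictly met) and simply re-expresses the Δχ² values quoted above; it is not a calculation from the original analysis.

```python
# Approximate significance of each added gas phase from its Delta-chi^2,
# treating Delta-chi^2 as chi^2-distributed with 2 degrees of freedom
# (two extra free parameters: temperature and column density).
from scipy.stats import chi2, norm

delta_chi2 = {
    "MEG-WARM": 2, "MEG-WARM-HOT": 61, "MEG-HOT": 57,
    "LETG-WARM": 36, "LETG-WARM-HOT": 10, "LETG-HOT": 31,
}

for name, d in delta_chi2.items():
    p = chi2.sf(d, df=2)           # chance probability of such an improvement
    sigma = norm.isf(p / 2.0)      # two-sided Gaussian-equivalent significance
    print(f"{name:14s}  dchi2={d:3d}  p={p:.2e}  ~{sigma:.1f} sigma")
```

Under this approximation Δχ² = 10 corresponds to roughly 2.7σ and Δχ² = 2 to less than 1σ, consistent with the qualitative statements above.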
Comparison between MEG and LETG models
The results from MEG and LETG are consistent. In the warm-hot component, the model over both datasets has similar column densities (within the errors) and temperatures. These parameters are strikingly similar to what has been reported for individual lines of sight (e.g., Gupta et al. 2012). Thus, our results bring more evidence for the homogeneity of this component. This is in contrast to the results found in emission, where a large dispersion of up to an order of magnitude has been reported in the temperature (e.g., Henley et al. 2010; Bluem et al. 2022). The same happens with the temperature of the hot component, which is consistent within the errors between the two datasets. However, there is a factor of ∼ 3 difference in the column densities between the data of the two instruments. We note that only one source in LETG is not present in the MEG stacked spectrum. Nevertheless, there are 38 more sources in MEG than in LETG. Therefore, this result, along with the consistency between our findings and those found in individual lines of sight (Das et al. 2019a, Das et al. 2021b), suggests that this component might also be homogeneous in temperature but with some dispersion in column density.
In the warm component, there are some differences between MEG and LETG. These differences are marginal in temperature but become much larger in terms of column density. We find that this component is only constrained through one or two lines from one or two ions. Given this, the low S/N ratio in the LETG data, and the low significance of this component in the MEG data, we consider that while we have enough evidence for the presence of this component (e.g., Tumlinson et al. 2017), we cannot robustly constrain its parameters.
Atomic abundances in the hot component
We find additional evidence of inhomogeneities in the hot component, in addition to those in the column density reported above. In particular, the abundance of S XVI is ∼ 3.5 times larger in LETG than in MEG. Additionally, we find that Si/O in LETG is ∼ 2 times that in MEG. The abundance of Fe is strikingly below what is expected for a solar mixture. As observed in Figure 3, this is a strong observational constraint. In LETG we required O/Fe larger than solar by at least a factor of 6, and in MEG by at least a factor of 4. A sub-solar Fe/O was also found by Das et al. (2019a) and Das et al. (2021b). Abundances for other atoms are consistent with a solar mixture, but we notice that the sensitivity of both stacked spectra is very limited for many of the absorption transitions. These results, along with other claims of a non-solar mixture in the hot component (e.g., for Ne/O Das et al. 2019a, Das et al. 2021b, Gupta et al. 2021; for Si/O Das et al. 2021b), suggest that it has inhomogeneous element abundance mixtures which are different from solar.
Dispersion Measure of the hot phase
In the present work, the column density of the hot phase as measured in the MEG data is log(N_H/cm⁻²) = 21.30, where the Oxygen abundance (A_O) and the metallicity (Z) in the PHASE models discussed in this paper have been assumed to be solar (A_O = 4.9 × 10⁻⁴, Asplund et al. 2009, and Z = 1 Z⊙). The dispersion measure (DM), however, provides a constraint on the combination of these parameters.
These results imply that the observed hot gas is either enriched in Oxygen, Silicon, and Sulfur (the elements driving the detection of the hot component in this work), or has super-solar metallicity, or a combination of both. For the best value of DM_super-virial = 55.1 pc cm⁻³, and assuming solar abundance ratios, the metallicity required would be ∼ 9 times solar, or over 6 times solar assuming the 1σ lower limit. In § 5.5 we already discussed the over-solar mixture of these elements. This further suggests that the hot component finds its origin in galactic winds of enriched material.
Contribution from O II Kβ?
Finally, we note that the rest wavelength of O VI Kα is 22.034 Å. This wavelength also corresponds to the rest wavelength of O II Kβ (22.04 Å; Mathur et al. 2017). We are uncertain about the contribution of O II Kβ to the O VI Kα absorption line. However, this possible contamination by O II does not affect the detection of the warm component, since N VI K in MEG (EW = 1.63 mÅ), arising from this gas, is detected.
CONCLUSION
Our detections confirm the presence of the hot component of the Milky Way's CGM coexisting with warm and warm-hot phases. The characteristic high temperature of log(T/K) ∼ 7.5 of this component, where Si XIV and S XVI are detected, is further supported by the non-detection of Ne X, since Ne is expected to be almost completely ionized into Ne XI at those temperatures, and its abundance is not as high as that of Oxygen. This result also excludes the presence of large quantities of gas in the CGM with temperatures log(T/K) over ∼ 6.5 and below ∼ 7.5. This hot component permeates the Milky Way's CGM, and appears to be homogeneous in temperature and inhomogeneous in column density.
In our analysis of Lara-DI et al. (2023), we inspected each sightline for the presence of prominent absorption lines at z = 0. None of these individual sightlines presents such prominent features. Moreover, we conducted simulations to evaluate whether the observed detections could arise from only a few sightlines. We determined the likelihood of encountering, in an N-sightline stacked spectrum, individual spectra presenting an absorption line at least N times stronger than that detected in the stacked spectrum. Our analysis indicates that after stacking 40 spectra, this probability diminishes to a negligible value. Therefore, our results point to a hot phase that is widespread in the halo, since the final stacked spectra are the result of the contribution from many sightlines.
In order to be consistent with the DM obtained from FRBs in the literature, the hot CGM gas observed in the halo is either enriched in Oxygen, Silicon, and Sulfur, or has a metallicity of at least 4.4 times the solar value, or a combination of both. This, along with the sub-solar Fe/O found for this component, and that found in the ISM in other works, is key to understanding the heating, cooling, and mixing mechanisms within the CGM. In particular, our results provide tantalizing suggestions that the super-virial component of the CGM is the result of feedback from winds arising in the Galaxy.
Figure 1. We present the plots of absorption lines for different ionic transitions in MEG data modeled with PHASE. The data points are represented by black markers, the best-fit model is indicated by a red line, and the position of the theoretical rest wavelength is denoted by a vertical blue dashed line.
Figure 2. We present the plots of absorption lines for different ionic transitions in LETG data modeled with PHASE. The data points are represented by black markers, the best-fit model is indicated by a red line, and the position of the theoretical rest wavelength is denoted by a vertical blue dashed line.
Figure 3. These plots present the model in red overpredicting the Fe L-shell absorption lines in the MEG (top) and LETG (bottom) data when the Fe abundance is set to be that of Oxygen.
Figure 4. Here we present a spectral window of our MEG (top) and LETG (bottom) data with the model. The vertical blue line marks the position of the Ne X rest wavelength.
Table 1. Δχ² improvement in MEG and LETG when including a component modeling a different gas phase with two free parameters.
Table 2. Identified ionic absorption lines probing the super-virial hot component at z = 0. | 6,615 | 2024-07-23T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
Fore reef location influences spawning success and egg predation in lek-like mating territories of the bird wrasse, Gomphosus varius
Many fish spawn in aggregations, but little is understood about the dynamics governing the success of spawning interactions. Here, we evaluate the influence that location of lek-like mating territories has on spawning interactions of Gomphosus varius. We used direct observations of spawning and egg predation events as well as local population counts to compare the rates of spawning, spawning interruptions, and predation on the eggs of G. varius at Finger Reef, Apra Harbor, Guam. We hypothesized that spawning rates would be highest among seaward locations that facilitate transport of pelagic larvae from reefs and that those territories would subsequently experience higher densities of egg predators, egg predation rates, and spawning interruptions. Male spawning success was highly skewed by mating territory location, with holders of the outer, seaward mating territories being more successful than those males holding territories in the middle and inner areas of the aggregation site. Within the outer territories, male mating success was also skewed by location. Egg predation was observed occasionally and increased linearly with bird wrasse spawning frequency. The population densities of egg predators were distributed equally across the study area. Spawning interruptions occurred most frequently within the inner zone of the spawning aggregation due to greater male-male aggression in intraspecific competition for females and territories. This study provides evidence that reef location influences the spawning success, egg predation rates, and spawning interruption rates of fishes that reproduce using lek-like mating territories.
Introduction
The behaviors associated with spawning play an important role in the survival of fish and their offspring (Hunter 1981). Spawning is a time of increased vulnerability for fish that often draws the attention of predators, since recently spawned eggs are a common food item for many planktivores (Robertson and Hoffman 1977). Prior observations of spawning fish have noted the presence of planktivores; however, egg predation rates vary greatly across locations and species (Johannes 1978; Colin and Clavijo 1988; Colin and Bell 1991; Claydon 2004). In general, spawning by many fish species is thought to occur at times and locations that reduce the chance of predation, increase chances of egg dispersal, and increase the success of larval survival and settlement (Claydon 2004; Molloy et al. 2012). Even so, many species spawn during daylight and therefore must rely upon other biotic or abiotic factors that discourage egg predation and promote egg dispersal. For example, many fish species spawn in areas where eggs have the best chance of being carried away from predators by currents, such as near channels or at the edge of a reef, during prime tidal conditions, and at depths high enough above the substrate to prevent non-swimming predators from reaching their eggs (Robertson and Hoffman 1977; Thresher 1984; Claydon 2004). To increase reproductive success, spawning therefore usually occurs under such conditions of optimal current and tide, typically at the edges of reefs (Thresher 1984; Claydon 2004).
Spawning aggregations occur when a group of fish of the same species gather for the purpose of spawning. Spawning aggregations can be divided into two groups; transient and resident. Transient spawning aggregations are typically formed by larger pelagic or reef species and may involve a long migration of days to weeks to reach the spawning aggregation site. Resident spawning aggregations are usually formed by smaller reef species (although some much larger species form them as well) and occur within the home ranges of the individuals involved (Domeier and Colin 1997). Transient aggregations, typical of many fishery species such as groupers and snappers, form seasonally and are usually linked to a lunar cycle (Domeier 2012). Various other species form resident aggregations that may not have seasonality at low latitudes, are not necessarily linked to a lunar cycle, and may form every day (Domeier and Colin 1997). The dynamics of spawning aggregations are not well understood; how, why, and when they are formed can be explained by many factors (Domeier and Colin 1997). Many commercially important fish species spawn in aggregations, thus, there are ecological, evolutionary, and economic motivations to better understand this important phenomenon (Domeier 2012). Unfortunately, unfished, and thus fully functional spawning aggregations of many commercially important species may often occur at unknown locations or areas that are difficult to reach (Sadovy de Mitcheson et al. 2008). The wrasses (family Labridae) have a considerable number of species that form spawning aggregations (Claydon 2004).
To increase our understanding of fish spawning aggregations, we examine the intra- and interspecific interactions of the bird wrasse, Gomphosus varius Lacepede 1801, which forms resident spawning aggregations as temporary courtship territories using a lek-like mating system (Desvignes et al. 2017) and spawns in semilunar cycles (Kuwamura et al. 2016). We propose that locations on the fore reef, from seaward to shoreline, will influence spawning interactions and egg predation by planktivorous fishes on spawned gametes. Using resident G. varius courtship territories at Finger Reef, Guam, we compare spawning rates, spawning interruption rates, and predation rates on the eggs of G. varius across the fore reef zone. A lek-like system is characterized by female mate choice. In this system, males defend a temporary spawning territory, which may stand alone or be within a spawning aggregation site to which females migrate (Loiselle and Barlow 1978; Donaldson 1990; Chop 2008). These females then choose a male to spawn with and leave after spawning (Moyer and Yogo 1982; Colin and Bell 1991; Gladstone 1994; Molloy et al. 2012). The mating territories in this aggregation can be separated into three distinct categories based on their locations across the fore reef zone within the aggregation: outer, middle, and inner. The outer territories are in the deepest water and are the most seaward, located on the reef's edge. The inner territories are the shallowest and are closest to the shore. The middle territories are located in between the outer and inner territories. A territory within the lek is defined as an area that is held and protected temporarily by a male for courtship and mating (Arita and Kaneshiro 1985). We hypothesized that spawning rates would be higher at the seaward locations that better aid the dispersal of gametes from reef to pelagic settings. In response to higher spawning rates, we hypothesized that egg predator densities during spawning and egg predation rates would also be higher for mating territories in seaward locations. Additionally, we hypothesized that spawning interruption rates would be higher in the outer reef locations as fish abandon spawning to avoid interference from higher numbers of planktivorous egg predators in the seaward locations.
Study site
Apra Harbor, a deep-water commercial and naval port, is situated on the western coast of the island of Guam, Mariana Islands (Fig. 1). Finger Reef lies within the harbor and runs westward along the southern shore of the harbor. The depth of Finger Reef ranges from 1 to 6 m and the benthic composition is predominantly Porites rus coral. This site is frequented by recreational divers and snorkelers, and fish feeding by both has been observed here. Gomphosus varius courts and spawns using a lek-like mating system within a spawning aggregation that occupies about 800 m² of this location.
Data collection
Fish were not collected, killed, or harmed in any way over the course of this study. Territories of spawning male G. varius were located using snorkeling and SCUBA. An observer swam a transect along the reef and, when a courting male was observed, the location of his territory was marked with a color-coded zip tie, tied to the coral, and a photo that was tagged with a GPS point. We repeated these methods until all male mating territories within Finger Reef were marked (Fig. 1) and did not conduct further surveys for new territories throughout the course of this study. Gomphosus varius displays strong sexual dimorphism: all females are grey and terminal phase males holding territories are bright blue and green. It is not known if G. varius has functional initial phase males with similar coloring to that of females, but we did occasionally observe transient individuals whose coloring was midway between that of a female and a male. Nevertheless, this information was not needed for our study, as all observed pair spawns were between a sex-changed terminal phase male and a female. We were not able to identify individuals in this study.
To ensure that the behaviors of spawning fish did not change in the presence of an observer, we placed cameras near spawning males for an hour. We compared recorded courtship behaviors, successful spawns, and egg predation to direct observations, and no qualitative differences in behavior were observed. We did not identify specific individuals for our observations but rather considered that the location of the mating territory would dictate both the mating success of the individuals holding that territory and the level of egg predation attempts made there.
We found a total of eight active G. varius spawning territories on Finger Reef: three in the outer area, three in the middle area, and two in the inner area. A territory was considered active if a male was seen courting above the established territory at the beginning of spawning each day. Outer reef territories were located farthest from the shore while inner territories were located nearest the shore. We did not perform additional surveys to determine if new territories were formed over the course of the study, but none were noticed during our spawning observations. All data were collected between January 2018 and May 2018. We conducted over 36 h of active spawning observations, and spawning occurred on 17 of 20 observation days. Spawning occurred between 0900 and 1400H with the start and end time varying daily.
Each day, we randomly selected up to four territories, and the behavior of the male holding each territory was observed for 30 min. During these observations we recorded the following: 1) The number of successful spawning events. A successful spawn was defined as when a male and female complete a spawning rush that ends in the release of gametes, which appear as a milky cloud in the water column. 2) The number of interrupted spawning attempts. An interrupted spawning attempt was defined as when a pair begins a spawning rush but does not complete it with the release of gametes.
3) The sex of the individual aborting the spawning attempt. The individual that turned away from the rush was identified as the individual aborting spawning. Since G. varius displays strong sexual dimorphism, males were easily distinguished from females by their bright coloring. 4) The number of egg predation events. An egg predation event was defined as when a planktivore rushed through the gamete cloud immediately after a spawning rush. 5) The species of egg predator feeding on eggs. We repeated these methods at each mating territory on four separate occasions throughout the study.
To estimate the species composition, abundance, and density of G. varius and the egg predators within each spawning territory, we used NOAA's Stationary Point Count (nSPC) method (based upon Bohnsack and Bannerot 1986). We performed visual fish surveys within nine stationary point count cylinders with a diameter of 10 m (total area ~314 m²), with three each located within the inner, middle, and outer zones of the spawning aggregation site (Fig. 1). Surveys were completed while G. varius was spawning. Since some of the mating territories were closer than 10 m apart, the cylinders were counted with sufficient spacing between each to avoid overlapping while still surveying the general area of the mating territories within the reef zones. We conducted counts of G. varius and the egg predators Chromis atripectoralis Welander & Schultz 1951, Abudefduf sexfasciatus Lacepede 1801, A. vaigiensis Quoy & Gaimard 1825, and Thalassoma hardwicke Bennett 1830 within each cylinder for a five-minute period to assess the species present, and then for a 10-min period to estimate the abundance of each listed species, chosen based on prior observation. We did not record fish sizes. We replicated these surveys twice within each reef zone during the study period for a total of 18 survey samples.
Statistical analysis
Successful spawn and spawning interruption count data had zero-inflated, negative binomial distributions that we tested using hurdle models (Zuur et al. 2009). These models have two parts: the first describes the probability of a zero count and the second describes the expected rates of the non-zero counts. Tests of significance took both parts into account simultaneously. The idea behind a hurdle model is that for something to be observed, a hurdle must first be crossed. In an example related to this study, if the hurdle is crossed you would see a successful spawn; if the hurdle is not crossed, no successful spawns would be observed. In this model all zeros are treated the same: if there were not any spawns, or if there were but they were not seen, the count was still zero.
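The following sketch, in Python with statsmodels, shows a minimal two-part fit of this kind; the data file and column names are hypothetical, and the positive-count part uses an ordinary negative binomial rather than the zero-truncated distribution of a formal hurdle model, so it illustrates the idea rather than the exact specification used in this study.

```python
# Minimal two-part ("hurdle"-style) fit: a logistic model for zero vs. non-zero
# spawn counts, plus a negative binomial model for the positive counts.
# Column names are hypothetical; a full hurdle model would use a
# zero-truncated count distribution for the second part.
import pandas as pd
import statsmodels.formula.api as smf

obs = pd.read_csv("spawning_observations.csv")   # hypothetical file with
# columns: spawns (count per 30-min watch), zone ("inner"/"middle"/"outer")

obs["any_spawn"] = (obs["spawns"] > 0).astype(int)

# Part 1: probability of crossing the hurdle (any spawning observed).
part1 = smf.logit("any_spawn ~ C(zone)", data=obs).fit()

# Part 2: expected rate among watches in which spawning occurred.
positives = obs[obs["spawns"] > 0]
part2 = smf.negativebinomial("spawns ~ C(zone)", data=positives).fit()

print(part1.summary())
print(part2.summary())
```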
We used a one-way analysis of variance (ANOVA) to test for differences in densities (number of individuals per area) of planktivorous fishes between outer, middle, and inner zones, after confirming that the data fully conformed with the assumptions of ANOVA. Egg predation events were rare and could not be statistically compared between mating territory locations. Instead, we performed a linear regression model to determine the relationship between predation rates (predation events per 30-min observation period) and spawning rates (successful spawns per 30-min observation periods) after confirming that the data fully conformed with the assumptions of linear regressions.
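A minimal sketch of these two tests with SciPy is shown below; the file and variable names are placeholders and do not correspond to the study's actual data.

```python
# One-way ANOVA on planktivore densities across zones, and a simple linear
# regression of egg-predation rate on spawning rate (placeholders throughout).
import pandas as pd
from scipy import stats

dens = pd.read_csv("planktivore_densities.csv")   # hypothetical: zone, density
f_stat, p_anova = stats.f_oneway(
    dens.loc[dens.zone == "inner", "density"],
    dens.loc[dens.zone == "middle", "density"],
    dens.loc[dens.zone == "outer", "density"],
)

watch = pd.read_csv("watch_rates.csv")            # hypothetical: spawn_rate, predation_rate
reg = stats.linregress(watch["spawn_rate"], watch["predation_rate"])

print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.3f}")
print(f"Regression: R^2 = {reg.rvalue**2:.3f}, p = {reg.pvalue:.3f}")
```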
Observational results
Prior to courtship and spawning, groups of males were observed swimming around the site together, but no male-male aggression was seen during this time. Directly before courtship and spawning began, males positioned themselves above their territories that were usually located at a prominent Porites rus coral head. At this time, males chased both planktivorous fishes and conspecific males from their territories. Females began migrating to the spawning aggregation site and chose a territory where they waited to spawn. When females arrived at a male's territory, he began courtship by swimming in circles above her and fluttering his pectoral fins until a female swam up to him to initiate a spawning rush. Spawning was occasionally interrupted by a female that initially approached the male but then abandoned the spawning rush by turning away from the male and returned to the coral head or by the male chasing away other males and planktivores. Spawning occurred when a female swam up to meet the male and the pair rapidly swam towards the surface with their bellies touching, released their gametes near the surface, and returned to the bottom. After the release of gametes, various planktivorous species sometimes rushed to the gamete cloud and consumed gametes within it. After spawning, the male continued with courtship and territorial defense. Nearly all spawns observed were paired. Only two events of streaking by other terminal phase males were observed. Streaking occurs when a second male joins the spawning pair at the apex of their spawning rush (Warner et. al 1975). There were no observations of predation attempts on spawning adults by piscivores. Occasionally during spawning hours males appeared to "herd" females from other areas of the reef into their territories. A terminal phase male was seen chasing females from the middle and inner territories towards the outer territories. However, it wasn't clear if this male was holding an outer territory and attempting to get more females into his territory or if this was just aggressive behavior. This happened only twice, and these behaviors were not formally included in the study.
Statistical results
There was a significant difference in spawning rates between mating territory locations (outer, middle, and inner), with 98.8% of spawning occurring in the outer zone territories (Fig. 2) and 90.3% of all spawning occurring solely in one outer territory (OT2) (Fig. 3). For the first part of the hurdle model (the probability of getting a zero count), the mating territory location influenced the probability of spawning occurring, with the outer zone territories being significantly different from the inner and middle zone territories (z-statistics = −2.069 and −2.157 respectively, for the probability of obtaining zero counts, and P < 0.05 for both zero-inflated hurdle comparisons between mating territory locations). For the second part of the hurdle model (the expected rates of the non-zero counts), the mating territory location did not predict the spawning rate. Within the outer zone territories (OT 1, OT 2, and OT 3), mating territory location did not influence the probability of spawning occurring (the first part of the hurdle model) but it did predict the spawning rate (the second part of the hurdle model), with OT 2 being significantly different from OT 3 (z-statistic = −3.235 for the difference in spawning rates where non-zero counts existed, P < 0.01 for zero-inflated hurdle comparisons of OT 2 and OT 3).
All egg predation events occurred in one outer territory (OT 2). Egg predation was minimal during the study period, however, and it occurred after only 8.2% of all spawns (Figs. 2 and 3). Egg predation rates were positively and linearly correlated with spawning rates (Fig. 4, R 2 = 0.6115, P < 0.05). Furthermore, there was no significant difference in the densities of egg predators across mating territory locations (Figs. 5 and 6, F-statistic = 0.781, P = 0.476, one-way ANOVA).
There was a significant difference in both male and female spawning interruption rates between mating territory locations (Fig. 7). For females, the mating territory location did not influence the probability of female spawning abandonment occurring (the first part of the hurdle model). However, it did predict the rate of abandonment (the second part of the hurdle model) with the inner zone territories being significantly different from the middle and outer zone territories (z-statistic = −5.007 and − 4.528, respectively for the difference in spawning rates where non-zero counts existed, and P < 0.001 for both zero-inflated hurdle comparisons of mating territory locations). For males, mating territory location influenced the probability of male spawning abandonment occurring (the first part of the hurdle model) with the inner zone territories being significantly different from those in the middle zone (z-statistic = −2.289 for the probability of obtaining zero counts, and P < 0.01 for zero-inflated hurdle comparisons between mating territory locations). Mating territory location, however, did not predict the rate of male spawning abandonment (the second part of the hurdle model).
Discussion
Males in the outer zone territories were predicted to have higher spawning rates and our results support this. Only one spawning event was seen in each of the middle and inner territories. The days on which these spawns were observed were particularly busy spawning days for G. varius, as well as for many other wrasse species that spawn at Finger Reef. Females may become less selective in their mates when the wait time increases for spawning with a more desirable male, as fertilization success tends to decrease with time after ovulation (Kuwamura et al. 2016). More data needs to be collected to find a peak spawning time and to determine if there is a correlation between it and the spawning rates found in the middle and inner zone mating territories. Additionally, males that were positioned in the more successful outer zone territories for the day remained there for the entirety of spawning, but males positioned in the inner and middle zone territories did not remain within a single territory and it was unclear if territory possession changed throughout the day. Fiske et al. (1998) found that territory attendance (the time a male spent at their territory) was most highly correlated with male mating success. At Finger Reef, females may use attendance as a signal when choosing a mate. During several observation periods, females observed waiting at inner or middle territories left due to the absence of males.
Fig. 2 Spawning and predation rates across temporary mating territory locations of Gomphosus varius. Most spawning events and all predation occurred in the outer mating territories (P < 0.05)
Fig. 3 Spawning and predation rates across outer temporary mating territories of Gomphosus varius. Most spawning events and all predation occurred in outer temporary mating territory 2 (P < 0.01)
Nearly all of the spawning was done at a single territory (OT 2), which is not uncommon in lekking (Emlen and Oring 1977; Moyer and Yogo 1982; Arita and Kaneshiro 1985; Kirkpatrick and Ryan 1991; McDonald and Potts 1994; Petrie et al. 1999; Sherman 1999; Duraes et al. 2009). The reason for this highly skewed success is not well understood, but it has been hypothesized that males in a lek are kin or that there is a hierarchical system in leks (Sherman 1999). It would be useful to track territory possession and to collect genetic data to determine whether there is a hierarchy in place or if lekking males are kin, a possibility that seems doubtful given the life history strategies of pelagic spawning and larval dispersal (Hamner and Largier 2012). The presence of a hierarchy system at the spawning aggregation site is more likely. Males were often observed swimming side-by-side in small groups. Males may be using this lateral display behavior to size each other up and determine territory ownership for the day (Oliveira and Almada 1998). Another factor driving the success of the male at OT 2 is that the most successful males have often been found to be positioned in the center of leks (Fiske et al. 1998), and OT 2 is positioned between OT 1 and OT 3. This central position along the edge of the reef may offer extra protection for the spawning individuals and their gametes, thus making it a more desirable spawning location. OT 2, however, was the only territory that experienced gamete predation, so this does not hold true for egg predation.
Fig. 4 Linear regression between egg predation rates and spawning rates of Gomphosus varius (P < 0.05). Shaded area indicates a 95% confidence interval
Fig. 6 Species-specific distribution of planktivores and Gomphosus varius across temporary mating territory locations on Finger Reef, Guam. The species observed were as follows: Thalassoma hardwicke (Thha), Abudefduf sexfasciatus (Abse), A. vaigiensis (Abva), Chromis atripectoralis (Chat), and Gomphosus varius (Gova)
Fig. 7 Male and female spawning interruption rates across temporary mating territory locations of Gomphosus varius. The rates of interruption in the inner territories were significantly different from the middle and outer territories for both males and females (P < 0.01 and P < 0.001 respectively)
Egg predation rates were predicted to be higher in the outer territories. Every predation event occurred in the outer territories; because the other zones contained only zeroes, the data did not allow for a meaningful statistical comparison. Most gamete predation events were by a single terminal phase T. hardwicke male who held a mating territory in the same location as OT 2, was usually already in the area, and could feed on eggs intermittently. There was only one instance where a group of Abudefduf sexfasciatus and A. vaigiensis swarmed the recently spawned gametes of G. varius. Kuwamura et al. (2016) found that G. varius was often aggressive towards T. hardwicke as well as damselfishes during spawning and that, while predation events were rare, T. hardwicke occasionally attacked the eggs of G. varius. Additionally, predation rates were found to increase as spawning rates increased. This outcome is likely given that spawning events are obvious and quickly draw the attention of egg predators. Therefore, when spawning occurs frequently, the probability of egg predators encountering eggs may be higher. Spawning is known to draw the attention of predators and many observations of spawning fish have noted the presence of planktivores; however, egg predation rates vary greatly across locations and species based upon the availability of other planktonic food or how busy an aggregation site is on a given day (Johannes 1978; Colin and Clavijo 1988; Colin and Bell 1991; Claydon 2004). Interestingly, Colin and Bell (1991) did not observe predation on G. varius gametes during spawning at Enewetak, Marshall Islands. So why does predation occur only in some systems or locations? Could fish feeding by tourists at Finger Reef, which attracts various species of fishes, including planktivores, be related to this? The identification of similar G. varius spawning aggregation sites at Guam that do not have fish feeding would provide an opportunity to compare egg predation rates between the two sites.
Planktivore densities were predicted to be highest in the outer zone territories during the spawning period. While the outer zone territories had higher densities of planktivores, the difference was not significant. Planktivorous fish densities tend to be higher in deeper waters near reef edges that provide more plankton (Hobson and Chess 1978; Thresher and Colin 1986; Friedlander et al. 2010). The even distribution of planktivores at Finger Reef is likely due to the amount of group spawning by other species that occurs closer to the middle zone territories of G. varius. The majority of planktivores on Finger Reef were the damselfishes A. sexfasciatus and A. vaigiensis, which were often seen consuming the eggs of group-spawning T. hardwicke. Thalassoma hardwicke also had high densities at Finger Reef, most likely because they also have a spawning aggregation at this site. A comparison of planktivore densities across various spawning aggregation sites of G. varius or other wrasse species would help determine whether this pattern is common.
Outer zone spawning territories were predicted to experience higher spawning interruption rates. We thought that the increased threat of egg predation would result in more cautious spawning, but our results did not support this hypothesis. Instead, spawning interruptions primarily arose from direct interactions between bird wrasses, not with egg predators. The higher numbers of bird wrasse males in the inner zones of the spawning aggregation site led to increased intraspecific competition for females and territories. Sometimes multiple males would try to court a single female, and this often led to an altercation between rival males that interrupted spawning. Spawning interruptions by both females and males in the outer zone territories appeared to be more often associated with encroaching planktivores rather than with conspecifics, but occurred less frequently than intraspecific interruptions. It was sometimes difficult, however, to discern the reason for spawning interruption and more data should be collected to confirm these observations. Spawning aggregations provide an efficient way for fishes to increase their reproductive success. They also have social and economic importance from a fisheries management perspective (Sadovy de Mitcheson and Erisman 2012). Little is known about the characteristics and dynamics of the spawning aggregations of many species, thus making it important that we increase our understanding of these reproductive systems. Recognizing the role that spawning locations and egg predation pressures play in spawning success can help better inform fisheries management decisions. Protecting spawning aggregation sites, especially those that host multiple species, can benefit the entire ecosystem and sustain complex food webs (Erisman et al. 2017). The family Labridae has a considerable number of species that form resident spawning aggregations (Colin and Bell 1991), some of which use a lek-like mating system within an aggregation (Desvignes et al. 2017). This study provides insight into the influence that reef location has on a spawning aggregation with a lek-like mating system of G. varius at Finger Reef. This species is observed easily and serves as an excellent model species to further our understanding of reproductive behavior and spawning aggregation dynamics.
"Biology",
"Environmental Science"
] |
A Chlamydia trachomatis VD1-MOMP vaccine elicits cross-neutralizing and protective antibodies against C/C-related complex serovars
Ocular and urogenital infections with Chlamydia trachomatis (C.t.) are caused by a range of different serovars. The first C.t. vaccine in clinical development (CTH522/CAF®01) induced neutralizing antibodies directed to the variable domain 4 (VD4) region of major outer membrane protein (MOMP), covering predominantly B and intermediate groups of serovars. The VD1 region of MOMP contains neutralizing B-cell epitopes targeting serovars of the C and C-related complex. Using an immuno-repeat strategy, we extended the VD1 region of SvA and SvJ to include surrounding conserved segments, extVD1A and extVD1J, and repeated this region four times. The extVD1A*4 was most immunogenic with broad cross-surface and neutralizing reactivity against representative members of the C and C-related complex serovars. Importantly, in vitro results for extVD1A*4 translated into in vivo biological effects, demonstrated by in vivo neutralization of SvA and protection/cross-protection against intravaginal challenge with both SvA and the heterologous SvIa strain.
INTRODUCTION
C.t. infections cause several human diseases, including trachoma, the leading infectious cause of blindness, and a spectrum of diseases caused by sexually transmitted infections, among them infertility, ectopic pregnancy, and chronic pelvic pain. The infections are caused by a range of serovars separated into 2 major and 2 minor complexes. The serovars of the B complex include B/Ba, D/Da, E, F, L1 and L2, the C complex includes serovars A, C, H, I/Ia, J/Ja, and the two intermediate groups of serovars include F and G/Ga (B-related) and K and L3 (C-related). Serovars A-C, Ba are major causes of trachoma, D-K, Da, Ia, Ja are linked to genital sexually transmitted diseases (STDs), and L1-L3 are commonly associated with lymphogranuloma venereum 1,2. Vaccine development is ongoing and the ultimate goal is to design vaccines that cover all or the most prevalent serovars.
A Chlamydia vaccine, based on neutralizing B-cell epitopes, has been developed in our laboratory with the ability to promote both broadly neutralizing antibodies and high levels of T-cell immunity 3. The importance of broad serovar coverage became evident in early clinical trials with whole-cell vaccines in both humans and nonhuman primates (NHP). Human volunteers experimentally infected with ocular C.t. were protected against rechallenge with a homologous but not with a heterologous serovar 4-6. NHP vaccination with whole-cell vaccines, NHP studies using sera from ocular infections, and toxicity studies in mice all demonstrated that antibody specificities for different serovars correlated with protection/neutralization against the homologous serovar, but with little cross-reactivity [7][8][9].
The main target of surface-binding and neutralizing antibodies is the major outer membrane protein (MOMP). MOMP is a transmembrane protein consisting of five constant domains and four surface-exposed variable domains (VDs) 10,11. Serotype specificity is determined by the ompA gene, coding for MOMP, and is located within the four surface-exposed VDs, explaining serological reactivity [12][13][14]. Due to its abundance (60% of protein mass) in the outer membrane of the chlamydial elementary body (EB) 15, MOMP is an important vaccine target and has been extensively studied as a vaccine antigen, both in its native form (nMOMP) [16][17][18][19][20] and as recombinantly expressed versions (rMOMP) [21][22][23][24]. Superior protection of nMOMP has been attributed to strong conformational neutralizing epitopes, which can be difficult to obtain with a recombinantly expressed protein 20. However, the development of a broadly protective nMOMP vaccine is challenging due to the nature of C.t. as an intracellular bacterium and the complicated β-barrel transmembrane structure of MOMP 11,15,25,26. To address these concerns, our vaccine design strategy is based on selected VDs of MOMP harboring known neutralizing B-cell epitopes.
Antibody responses against VDs during infection have been mapped and characterized by monoclonal antibodies (MAbs) [27][28][29][30][31]. Recognition and neutralization of C.t. were either serovar- or serogroup-restricted, and no MOMP-specific MAb had the ability to target or neutralize all serovars. Antibodies directed against the highly conserved TTLNPTIAG sequence in the VD4 region 13,14,31,32 neutralize B and intermediate groups of serovars 14,31,32. We previously developed a multivalent vaccine construct Hirep-1 (heterologous immuno-repeat-1), based on VD4s and their surrounding conserved membrane anchors from the most prevalent serovars D-F. To avoid unwanted folding by formation of disulfide bridges, cysteines were exchanged with serines. This vaccine construct demonstrated in vitro and in vivo neutralization and protection against a vaginal challenge with both SvD and SvF 3,33. A CTH522 vaccine, built on the Hirep concept, has recently completed a clinical phase I trial with promising results 34. Humans vaccinated with CTH522 in combination with either the adjuvant CAF®01 or aluminum hydroxide induced high titers of CTH522-specific antibodies with the functional capacity to neutralize C.t. in vitro and, in addition, induced significant levels of CTH522-specific T cells. Neutralizing antibodies from this vaccine have the potential to target the B complex and the intermediate groups of serovars (SvD, E, F, G, K, B, L1, L2, L3), and to a lesser degree the C complex serovars (A, C, H, I, J), indicating different accessibility of this conserved region on the surface of different serovars 14,35. For C and C-related complex serovars, MAbs directed against the VD1 region have been demonstrated to be effective. Compared to the VD4 region, the VD1 region has a higher degree of serovar-restricted recognition and no VD1-specific MAb has been identified with the ability to target all C/C-related complex serovars [29][30][31]35.
Here, we explore the development of a vaccine construct based on the VD1 region and designed to target ocular and genital serovars from C complex serovars. Using our Hirep vaccine design, we produced immuno-repeats of extended regions of VD1 from SvA and SvJ/C, each comprising VD1 regions previously demonstrated to hold neutralizing B-cell epitopes and representing both ocular and genital strains. We compared the immunogenicity and neutralizing activity of the constructs and demonstrated strong cross-neutralizing potential with the VD1 construct from SvA. We finally demonstrated significant protection of extVD1 A *4/CAF®01-vaccinated mice against a vaginal challenge with C.t. SvA and SvIa, and we demonstrated a protective role of extVD1 A *4-specific antibodies in in vivo neutralization experiments challenging mice with C.t. SvA pretreated with sera from vaccinated animals.
RESULTS
Immunogenicity and neutralizing activity of extended VD1 constructs from SvA and SvJ
With the purpose of generating high titers of functional antibodies against the VD1 region of SvA and SvJ/C, we compared the immunogenicity of different constructs covering the VD1 regions of those serovars. Su et al. previously demonstrated that the VD1 region from C.t. SvA was nonimmunogenic in A/J mice and that a chimeric peptide composed of a colinear synthesis of the SvA T-cell epitope A8 and the VD1 region was necessary for induction of a VD1-specific antibody response 36. We used another approach and, instead of introducing a T-cell epitope from a distant part of MOMP, we analyzed the effect on the immunogenicity of the VD1 region when extending the VD1 region to cover the surrounding conserved parts. This approach was previously applied to the VD4 region with success 3,33. Extended versions of the VD1 regions from C.t. SvA and SvJ (extVD1 A and extVD1 J ) were designed (Table 1). Initially, we compared the immunogenicity of extVD1 A with A8-VD1 A and VD1 A (Supplementary Table 1 for sequences) in CAF01 adjuvant 37,38. After vaccination, the mice were bled and plasma or sera were tested for IgG reactivity against the VD1 A region, the extVD1 A region, against intact C.t. SvA/HAR-13 and in a neutralization assay, and we found extVD1 A to be significantly more immunogenic than A8-VD1 A and VD1 A alone (Supplementary Fig. 1).
Since extending the VD1 region to cover the surrounding conserved parts increased immunogenicity compared to A8-VD1 A , we continued by designing a recombinant protein based on four repeats of the extVD1 A sequence (extVD1 A *4, Table 1), in order to investigate if we could further enhance the immune response compared to the monomer, as previously published with the extVD4 regions 3. We further produced an immuno-repeat of extVD1 J (extVD1 J *4, Table 1). The immunogenicity of the two immuno-repeat constructs was compared to their respective monomers (extVD1 A and extVD1 J , Table 1). A/J mice were immunized with 10 µg of the individual constructs in CAF01. After vaccination, the mice were bled and plasma or sera were tested for IgG reactivity against the extVD1 regions (Fig. 1a, d), against intact C.t. (Fig. 1b, e) and for functional antibody activity by an in vitro neutralization assay (Fig. 1c, f). The immuno-repeat constructs induced a more than 10 times stronger IgG response compared to the monomers (Fig. 1a, b and Fig. 1d, e) - an IgG response composed of both IgG1 and IgG2a/IgG2b (Supplementary Fig. 2). The IgG response correlated with an enhanced ability to neutralize the homologous serovar (Fig. 1c, f). In particular, the SvA immuno-repeat construct induced a very potent neutralizing antibody response with a reciprocal 50% neutralization titer (NT 50 ) > 10,000. In comparison, the extVD1 J *4-specific serum had a weaker ability to neutralize C.t. SvJ and a reciprocal NT 50 titer of around 300 was detected.
Specificity of the antibody and T-cell responses
For a vaccine to be broadly effective against a range of serovars, it is paramount that B- and T-cell epitopes are located in conserved regions or that essential binding motifs are conserved among several serovars. To map the region(s) responsible for the neutralization observed after vaccination and to investigate the localization of the T-cell epitopes, we next investigated the specificity of both the B- and T-cell responses using overlapping peptides. Antibody responses were analyzed using 9-mer peptides with 8 aa overlap spanning the whole extVD1 A and extVD1 J regions (Fig. 2a, d). A number of B-cell epitope regions, both within the conserved and the specific parts of the constructs, were identified. The extVD1 A *4 construct induced a response to the previously identified C.t. SvA-neutralizing epitope DVAGLEKD (VD1 A minimal) located in VD1 A 31,36 (Fig. 2a). However, strong antibody responses were also found against three conserved segments C1-C3 (C1: MRMGYYGDFVFDRVLK, C2: VNKEFQMGAAPT, C3: NVARPNPAYGKHM) (Fig. 2a). Likewise, the extVD1 J construct induced antibody responses against the variable region (VD1 J ) and the same three conserved segments C1-C3 (Fig. 2d). In contrast to the narrow recognition pattern of VD1 A , the response to the variable VD1 J region was not as well-defined and seemed to cover the entire VD1 J region (AAPTTSDVAGLQNDPTTNVARP).
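The overlapping-peptide scans used for this epitope mapping (9-mers with an 8-amino-acid overlap for B-cell epitopes, and 20-22-mers with a 10-amino-acid overlap for T-cell epitopes) are simple to generate computationally. The Python sketch below is an illustration only, not the tool used in the study; the example sequence is the conserved C1 segment quoted above.

```python
def overlapping_peptides(sequence, length=9, overlap=8):
    """Slide a window of `length` residues along `sequence`,
    advancing by (length - overlap) residues per step."""
    step = length - overlap
    return [sequence[i:i + length]
            for i in range(0, len(sequence) - length + 1, step)]

# Example: 9-mers with 8-aa overlap across the conserved C1 segment.
c1 = "MRMGYYGDFVFDRVLK"
print(overlapping_peptides(c1))           # 8 peptides, each shifted by one residue
print(overlapping_peptides(c1, 12, 10))   # longer peptides with a 2-residue step
```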
To identify which region(s) were targeted by neutralizing antibodies, we designed peptides representing the conserved (C1-C3), the minimal VD1 A epitope DVAGLEKD (VD1 A minimal*4), the minimal VD1 J epitope DVAGLQND (VD1 J minimal*4), the whole VD1 J region AAPTTSDVAGLQNDPTTNVARP (VD1 J ), and the extVD1 A and extVD1 J as positive controls. We performed a competitive inhibition of neutralization assay, incubating the serum with and without a high concentration of the individual peptides before incubation with C.t. SvA or SvJ. We demonstrated that the sequence/region responsible for generating the major part of the neutralizing antibody response after vaccination, for both constructs, was located in the variable regions (Fig. 2b, e). With the extVD1 A *4-specific serum, the neutralizing ability was completely inhibited when incubating the serum with the extVD1 A peptide, and approximately 80% of the ability to neutralize was abrogated when incubating with the minimal VD1 A (DVAGLEKD) peptide (Fig. 2b). With the extVD1 J *4-specific serum, complete inhibition of neutralization was seen when incubating the serum with the whole extVD1 J *4 sequence. Incubating with the minimal VD1 J epitope representing DVAGLQND 31 reduced the ability to neutralize SvJ by 70%, demonstrating a main role of this region. However, inhibiting with a longer peptide spanning the complete VD1 J region AAPTTSDVAGLQNDPTTNVARP, a complete reduction in the neutralizing ability was detected, demonstrating that residues outside the minimal epitope also play a role in the neutralization of SvJ (Fig. 2e).
Fig. 2 Fine specificity of antibody and T-cell responses after extVD1 A *4 and extVD1 J *4 vaccination. A/J mice were immunized 3 times s.c. with either 10 µg of extVD1 A *4/CAF01 or extVD1 J *4/CAF01. Three weeks after the last vaccination, sera from immunized mice were pooled (n = 5-6), diluted 1:200, and the fine specificity of the IgG antibody responses was studied using a panel of biotinylated overlapping peptides (9-mers with 8 amino acid overlap) representing the extVD1 regions from SvA (a) and SvJ (d). Each bar represents the mean OD value of two determinations. Competitive peptide inhibition of in vitro neutralization of C.t. SvA and SvJ with peptides representing the four identified B-cell epitope regions in extVD1 A (b) and extVD1 J (e), respectively. Spleen cells were used to investigate the specific IFN-γ responses using panels of 20-22-mer peptides with 10 amino acid overlap (Supplementary Table 2) spanning extVD1 A (c) and extVD1 J (f) regions. Cells from 6 mice/group were pooled and tested in triplicates. Each bar represents mean ± SEM. The individual experiment was repeated 2-3 times with similar results.
The specificity of the T-cell response was analyzed by measuring the in vitro stimulatory properties of overlapping 20-22-mer peptides covering the extVD1 regions of MOMP from C.t. SvA and SvJ (corresponding to MOMP P4 to MOMP P8, Supplementary Table 2) on splenocytes from vaccinated mice (Fig. 2c, f). The dominant T-cell epitopes were mapped to different regions in the two constructs. For the SvA construct, a dominant T-cell epitope was located in the N-terminal highly conserved part of the construct (MOMP A P4), whereas the T-cell epitope after vaccination with the SvJ construct was located in MOMP J P8 spanning both variable and conserved regions.
Cross-neutralization of SvA and SvJ
Since a broad serovar coverage is important, we next investigated if extVD1 A *4-specific serum could cross-neutralize C.t. SvJ and vice versa. ExtVD1 A *4-specific serum cross-neutralized C.t. SvJ with a reciprocal NT 50 titer of around 3000 (Fig. 3a), a titer that was much higher than the titer (NT 50 = 300) obtained with the homologous extVD1 J *4-specific serum (dotted curve inserted from Fig. 2f). ExtVD1 J *4-specific serum was likewise able to cross-neutralize C.t. SvA with a reciprocal NT 50 titer of 2000 (Fig. 3b). This was, however, a much lower neutralization titer compared to the NT 50 titer of more than 10,000 obtained with the homologous extVD1 A *4-specific serum (dotted curve inserted from Fig. 2c). Since extVD1 A *4-specific serum was superior to extVD1 J *4-generated serum in all investigated aspects of the immune response, we decided to further investigate the biological effects of extVD1 A *4, i.e., protective efficacy in a vaginal challenge model and the ability to cross-bind and cross-neutralize other C.t. serovars.
In vivo effect of functional antibodies
To translate the in vitro activity of extVD1 A *4-specific antibodies into biological activity in vivo, we established a vaginal infection model with C.t. SvA in mice. To investigate the in vivo protective efficacy of extVD1 A *4-specific immune responses, A/J mice were vaccinated three times by the simultaneous (SIM) s.c. and i.n. routes (see "Methods"). Four weeks after the last vaccination, the extVD1 A *4-specific IgG and IgA antibodies were measured in vaginal wash samples. Statistically significant levels of both isotypes were detected. To investigate the functional role of antibodies in the initial phase of infection, independently of an adaptive T-cell response, we performed an in vivo neutralization experiment. Serum was isolated from A/J mice s.c. vaccinated with extVD1 A *4/CAF01 and diluted 32 times when mixed with C.t. SvA/HAR-13. Since C3H/HeN mice are considered to be more susceptible to C.t. compared to other mouse strains 39 and since the MHC haplotype of the mouse is insignificant in this assay, naive C3H/HeN mice were challenged with 10 µl of the mixture (1 × 10 6 IFU/mouse). Vaginal loads were monitored at PID 3 and 7 (Fig. 4d). Serum from extVD1 A *4/CAF01-vaccinated mice significantly reduced the ability of C.t. to establish a genital tract infection compared to control serum, at both day 3 and 7 post infection, indicating an important role of functional antibodies in controlling infection.
Cross-recognition of the VD1 regions of C.t. with extVD1 A *4-specific serum
Since the VD1 region plays a dominant role in neutralization of both SvA and SvJ (Fig. 2b, e), we investigated the ability of polyclonal extVD1 A *4-specific antibodies to recognize 20-22-mer peptides representing the majority of the variable VD1 region of sequence-related serovars (SvA/2497 (clinical isolate), C, H, I, Ia, J, and K) and of more distant serovars (SvD, E, F, G, and B) 13,14 (Fig. 5a, b). For comparison and as a positive control, we included the corresponding VD1 region from SvA/HAR-13. Strong cross-recognition of the VD1 regions of SvA/2497, C/J, I, and Ia was found, whereas weaker recognition of the VD1 region of SvK and SvH and no recognition of the VD1 regions from SvD, E, F, G, and B was detected (Fig. 5b).
Cross-recognition/neutralization of C.t. serovars with extVD1 A *4-specific serum
The biological effect of antibodies against C.t. can roughly be divided into neutralization of infectivity by direct blocking or by facilitating effector functions via complement or effector cells [40][41][42][43]. In both cases, a strong recognition of the bacterial surface is a prerequisite for an effector function. Besides the VD1 region, three conserved regions (C1-C3) were strongly recognized (Fig. 2a, d) and could potentially be involved in the surface binding of some serovars. We, therefore, measured the ability of extVD1 A *4-specific antibodies to recognize the bacterial surface not only of C/C-related serovars (SvA/2497, C, H, I, Ia, J, K) but also of B/B-related serovars (SvB, D, E, F, and G) where the VD1 region was not recognized. Pooled serum from extVD1 A *4-vaccinated mice was analyzed in triplicates in ELISA plates coated with EBs representing the different serovars. ExtVD1 A *4-specific serum strongly recognized the surfaces of SvA/2497, C, I, Ia, J, and K with reciprocal serum titers giving OD 450-620 = 0.5 ranging from 15,033 to 151,177. Much lower recognition of SvB, D, E, and F (titers from 32 to 1031) and no recognition of SvG and SvH were detected (Table 2). Except for SvK, this correlated with the VD1 recognition (Fig. 5b). ExtVD1 A *4-specific serum pools from 2 to 4 experiments were further analyzed for neutralizing ability. Strong reciprocal NT 50 titers were demonstrated against SvA/2497, C, I, Ia, J, and K ranging from 400 to 10,100, whereas the serum had no neutralizing ability against SvH, B, D, E, F, and G, correlating with both weak surface recognition and no VD1 recognition of those serovars (Table 2, Fig. 5b).
To demonstrate that the ability of extVD1 A *4-specific serum to cross-target other serovars was related to the VD1 region, we competitively inhibited the surface binding by incubating the extVD1 A *4-specific serum with and without a high concentration of a 22-mer peptide representing the VD1 region of SvA/HAR-13 (VD1 A/HAR-13 ; for sequence see Fig. 5a). To ensure that the inhibition of the VD1-specific antibodies was complete in all used serum concentrations, we measured the ability of the VD1 A/HAR-13 -blocked serum to bind to VD1 A/HAR-13 in an ELISA (Supplementary Fig. 3). The VD1 A/HAR-13 response was completely blocked, since no VD1 A/HAR-13 -specific antibodies were detected. Of significance, we found a VD1-independent recognition of the surface of all tested serovars; however, this was most pronounced for SvK (Fig. 6a). This finding could explain the strong surface recognition and neutralization of SvK, despite the lower recognition of the VD1 region, and suggests that regions/amino acids outside the VD1 region are also involved in the surface binding of SvK.
To investigate the impact of the VD1 and the VD1-independent surface recognition on the ability to cross-neutralize, we performed a competitive inhibition of neutralization assay, incubating VD1 A/HAR-13 -blocked extVD1 A *4-specific serum with the different C.t. serovars (Fig. 6b). Incubation of the serum with the peptide led to loss of the major part of the detected neutralization, demonstrating that the VD1 region is responsible for most of the observed cross-neutralization. However, for SvIa and SvK epitopes, amino acid residues outside the VD1 region could play a role for optimal binding of the neutralizing antibodies, since some level of neutralization was still detectable after VD1 A/HAR13 inhibition.
Heterologous protection of extVD1 A *4/CAF01 against a SvIa challenge
Following the demonstration of broad surface recognition and neutralization generated by extVD1 A *4, we finally investigated if the cross-reactivity of extVD1 A *4/CAF01-mediated immune responses could be translated into an in vivo effect against a heterologous serovar challenge. SvIa is a prevalent genital serovar of the C complex 44 , and therefore we decided to investigate if extVD1 A *4/CAF01-vaccinated mice were protected against an i.vag. SvIa challenge. A/J mice were vaccinated three times by the SIM vaccination strategy. Six weeks after the last vaccination, mice were challenged with 1 × 10 6 IFU of C.t. SvIa and swabbed at PID3, 7, and 10 (Fig. 7). Mice vaccinated with extVD1 A *4/CAF01 had reduced levels of IFU at PID 3, which reached a significant reduction at PID7 (***p < 0.001). At PID10, no bacteria could be detected in vaccinated mice.
DISCUSSION
Chlamydia diseases continue to cause morbidity and there is a need for a broadly protective vaccine covering circulating serovars. The current study focused on developing a vaccine construct with the ability to induce broadly neutralizing antibodies against C/C-related complex serovars (SvA, C, H, I/Ia, J, and K). Exploiting our immunorepeat vaccine approach 3,33,34 , two novel vaccine constructs, extVD1 A *4 and extVD1 J *4, were designed based on the VD1 region of MOMP. Both constructs were highly immunogenic. ExtVD1 A *4 induced broadly neutralizing antibodies against all tested members of the C/C-related complex, except for SvH. This translated into protective immunity in a mouse genital challenge model of both an ocular (SvA) and a genital (SvIa) strain.
A broad serovar coverage of a Chlamydia vaccine is highly preferable, as low serovar coverage could lead to serovar emergence or replacement, as has been observed following vaccination with the pneumococcal vaccine PCV-7, where a steady increase in pneumococcal disease caused by nonvaccine serotypes was reported in some populations 45. Although a vaccine against the most prevalent sexually transmitted serovars worldwide (D-F) 46-49 would have a significant impact, demographic differences have been reported 44,50. Our CTH522 vaccine, which has recently completed a clinical phase I trial 34, targets the prevalent SvD, E, F, and G with surface-binding and -neutralizing antibodies. With specificities against the VD4 region of MOMP, the CTH522 vaccine can target the B and intermediate groups of serovars, but to a lesser degree the C complex serovars. Therefore, to increase the range of antibody protection against multiple serovars, it would be beneficial to supplement the CTH522 vaccine with constructs that induce VD-specific antibodies with broad C complex serovar recognition.
Since neutralizing MAb antibodies against C/C-related complex serovars have previously been mapped to epitopes in the VD1 region, we took the approach of focusing on the VD1 region of SvA and SvJ/C and improved the immunogenicity of those by extending the VD1 sequence to cover surrounding constant domains (extVD1). We found that molecular repetition of the immunogens further strengthened the immune response by enhancing titers by more than one log 10 against the protein itself and the bacterial surface. These findings are in agreement with similar studies done in our laboratory on the VD4 region 3 , confirming that even low-valency repeated antigens (2-10) can be superior compared to monovalent antigens 3,51 .
For both extVD1 immuno-repeat constructs (extVD1 A *4 and extVD1 J *4), the B-cell epitopes were mapped to four major regions -the VD1 (DVAGLEKD within VD1 A and a broader region AAPTTSDVAGLQNDPTTNVARP within VD1 J ) and three conserved regions (C1-C3) (Fig. 2). Strong surface recognition and neutralization were detected with sera from both constructs. Structural reports on MOMP demonstrate accessibility of the VDs on the EB surface. This is in contrast to the constant domains (CDs) adjacent to the VDs that are not predicted to be displayed on the surface of EBs and hence not accessible for antibodies 11,15,22,25,26,52,53 . In support of this, competitive inhibition of neutralization with peptides covering the four B-cell epitope regions demonstrated that antibodies generated against the VD1 regions were responsible for the major part of the observed neutralization of the homologous strain (Fig. 2).
Previous studies investigating the reactivity of VD1-specific MAbs demonstrated that cross-reactivity and neutralization of related serovars were dependent on the ability to bind the VD1 region independently of serovar-specific amino acid residues within the main neutralizing region 29,30. Analyzing extVD1 A *4-specific serum, we detected strong cross-binding of the VD1 regions from SvC, I, Ia, J, and the clinical isolate A/2497, lower recognition of VD1 K and VD1 H , and no recognition of the VD1 regions from B/B-related complex serovars (Fig. 5). Except for SvK, a correlation between VD1 reactivity, surface recognition, and neutralization was demonstrated. This was confirmed with a competitive inhibition experiment using the VD1 A/HAR-13 peptide to compete for recognition. However, for all C/C-related complex serovars, we saw a constant level of surface recognition with serum depleted of VD1 A/HAR-13 -specific antibodies, and we speculate that this could be due to antibodies recognizing the conserved regions in close proximity to the VD1 regions (Fig. 6a).
Only for SvK and SvIa, this translated into minor non-VD1-specific contributions to neutralization. The exact understanding of this will be the subject for further investigations. The role of antibodies against C.t. is more than direct blocking of infection, i.e., by neutralization. A range of other indirect effector functions like opsonophagocytosis 54 , antibody-dependent complement deposition (ADCD) 55,56 , antibody-dependent cellular cytotoxicity (ADCC) 42 , or combinations thereof could potentially play a role.
In this study, we demonstrated an in vivo protective effect of antibodies by preincubating C.t. SvA with extVD1 A *4-specific serum before infecting C3H/HeN mice. This led to a 1-3 log 10 reduction in bacterial levels at days 3 and 7 post infection compared to incubation with control serum. Whether this in vivo effect is due to direct blocking of infection, other antibody-mediated effector functions, or a combination is not known. Our results are in line with other studies translating the in vitro neutralization effect of antibodies into in vivo protection, by passive transfer of monoclonal 57,58 or polyclonal antibodies 3,33,59 , or by vaccination 3 . All have generally resulted in reduced early bacterial shedding after a vaginal challenge in mice.
An optimal Chlamydia vaccine is composed of a combination of broadly neutralizing B-cell epitopes and conserved T-cell epitopes. An important role of CD4 T cells and IFN-γ has been demonstrated in a range of animal studies 3,60-65. In humans, CMI and IFN-γ have been associated with a reduced risk of reinfection 66, and HIV-infected women lacking CD4 T cells have an increased risk of developing C.t. pelvic inflammatory disease 67. Here, we identified T-cell epitopes in extVD1 A *4 in the conserved N-terminal part of the construct, which is highly preferable in contrast to serovar-specific T-cell epitopes. Of importance in relation to the use of extVD1 A *4 in a future human vaccine, this region overlaps a region described by Ortiz et al. to contain human CD4 T-cell epitopes in C.t.-infected patients 68,69.
To evaluate the protective efficacy, we established a SvA/Ia genital infection model in A/J mice. Although C.t. SvA is designated an ocular strain, it is well described that ocular serovars can cause genital infections, and vice versa 10,70,71. We immunized with a combined simultaneous s.c./i.n. (SIM) vaccination strategy previously published by Wern et al. 72, a strategy that improves mucosal immunity (IgA responses and increased B and CD4 T cells in the genital tract) 72. We detected strong IgG and IgA responses in the genital tract and found significant protection against an i.vag. challenge with C.t. SvA, with a median >1 log reduction in IFU at day 3 and sustained control throughout days 7 and 10 post infection. Whether a combined mucosal and parenteral vaccination strategy is necessary for a future vaccination protocol in humans remains unknown. Our previous studies demonstrated a similar level of protection of SIM and SC vaccination protocols 72, probably due to high serum IgG levels entering the genital tract in combination with rapid recruitment of circulating Th1/Th17 cells upon challenge 72,73. In the present study, we saw a highly significant protective effect of serum IgG when coating the SvA EBs with extVD1 A *4-specific serum before infection, and we have previously seen similar results using VD4-specific neutralizing antibodies 3, also in adoptive transfer experiments 33. In humans, IgG is the predominant secreted isotype 74. However, studies pointing to a protective role of sIgA in humans have been published 75, and similar observations were made in the minipig model, where vaginal sIgA correlated with accelerated clearance of C.t. 76. Of significant importance, we also demonstrated protection against i.vag. challenge with a heterologous strain (SvIa), which strengthens the possibility of using this construct to induce broad protection against C/C-related complex serovars. A strong cross-neutralizing antibody response, in combination with induction of conserved T-cell epitopes, could be the key to the observed in vivo protection and cross-protection against SvA/SvIa.
A limitation to the current study is the use of inbred mice in the study of antibody responses. Although antibody responses generally translate well across species, ongoing studies in our lab will investigate responses in outbred species like the guinea pig and NHP, to study if we can induce the same strength and levels of cross-neutralizing antibodies in those species, or if a mixture of multiple serovar-specific immuno-repeats should be considered in a future C.t. vaccine. Nonetheless, our studies demonstrate that if strong titers of neutralizing antibodies are obtained by vaccination, an effect on in vivo protection can be seen. Future perspectives are to combine the extVD1 A *4 with CTH522 in a vaccine that will strengthen both the T-cell responses together with a broad serovar coverage: ocular (SvA, B, and C) and genital (SvD, E, F, G, I, Ia, J, and K) strains. Since the development of vaccines comes with huge investments, this combined ocular/genital vaccine could serve a dual purpose as both STI vaccine and as a vaccine to help with the eradication of trachoma in combination with current control strategies.
Fig. 6 Competitive inhibition of surface recognition and neutralization. A/J mice were immunized with extVD1 A *4/CAF01 (n = 20) or CAF01 alone (n = 20) using the SIM vaccination protocol. ExtVD1 A *4-specific serum was pooled from 20 mice, prediluted, and mixed with 1 mg/ml of VD1 A/HAR-13 or 1× PBS buffer. After incubation, the mixture was added to C.t.-coated ELISA plates in duplicates and C.t.-specific IgG measured by ELISA (a). Each bar represents the mean of duplicate readings with and without the presence of the VD1 A/HAR-13 peptide. ExtVD1 A *4-specific serum was pooled from 20 mice, prediluted, and mixed with 1 mg/ml of VD1 A/HAR-13 or SPG buffer. After 45 min of incubation, the mixture was further incubated 1:1 with different C.t. serovars for 45 min before inoculation onto a HaK cell monolayer, incubated, fixed, and inclusions counted (b). Bar represents % neutralization with and without the presence of the VD1 A/HAR-13 peptide.
Antigen cloning and purification
ExtVD1 A *4 and extVD1 J *4 were produced as follows. Based on the MOMP amino acid sequences (NCBI YP_328507.1 and AAC31443.1) with the addition of six N-terminal histidines, synthetic DNA constructs were codon-optimized for expression in E. coli followed by insertion into the pJexpress 411 vector (ATUM). Expression was induced with IPTG in E. coli BL-21 (DE3) cells transformed with the synthetic DNA constructs. Inclusion bodies were isolated and extracts were loaded on a HisTrap column (GE Healthcare, Chicago, Illinois, USA), followed by anion exchange chromatography on a HiTrap Q HP column and dialysis to a 20 mM glycine buffer, pH 9.2. Protein concentrations were determined by BCA assay.
Immunization
Mice received a total of three immunizations at two-week intervals either subcutaneously (s.c.) at the base of the tail in a total volume of 200 µl or simultaneously with the intranasal (i.n.) route in a volume of 30 µl. Vaccination protocols were as follows: 3× s.c. (SC vaccination protocol) or 1st s.c., 2nd s.c. + i.n., and 3rd s.c. + i.n (simultaneous (SIM) vaccination protocol). The vaccines given by both routes consisted of 10 µg of antigen, except for the experiment shown in Supplementary Fig. 1, where 25 µg were used for s.c. vaccinations. For s.c. vaccinations, the antigens were diluted in Tris-buffer (pH 7.4) and mixed by vortexing with adjuvant consisting of 50 µg/dose of the glycolipid trehalose 6,6′-dibehenate (TDB) incorporated into 250 µg/dose of cationic liposomes composed of dimethyldioctadecyl-ammonium (DDA) (CAF®01). For the i.n. delivery, the vaccines were delivered without adjuvant. In challenge experiments, the mice were rested 6 weeks before challenge.
ELISA for antigen-specific antibodies in serum and vaginal washes
Blood was collected after the last vaccination for quantification of vaccine-specific antibodies by enzyme-linked immunosorbent assay (ELISA). Blood was collected from the periorbital vein plexus or the superficial temporal vein into Eppendorf tubes with EDTA (plasma) or without EDTA (serum). For isolation of serum, the tubes were centrifuged for 10 min at 10,000 × g. To separate plasma, samples were centrifuged for 10 min at 500 × g. Vaginal washes were collected by flushing the vagina with 100 µl of sterile 1× PBS and samples were stored at −80°C until analysis. Before dilution, the vaginal wash samples were treated with 25 µg/ml Bromelain (Sigma-Aldrich). Maxisorb plates (Nunc, Roskilde, Denmark) were coated with 50 µl of recombinant antigens (1 µg/ml), peptides (10 µg/ml), or C.t. serovars (10 µg/ml) overnight at 4°C, followed by blocking for 2 h in 1× PBS with 2% BSA for IgG and 1× PBS with 1% skimmed milk powder and 0.05% Tween for IgA. The plasma, serum, and vaginal wash samples were serially diluted in 1× PBS with 1% BSA for IgG or 1× PBS with 1% skimmed milk powder and 0.05% Tween for IgA before being added to coated microtiter plates. After washing, HRP-conjugated rabbit anti-mouse IgG (Invitrogen #61-6520), HRP-conjugated goat anti-mouse IgG1 (Southern Biotech #1070-05), HRP-conjugated rabbit anti-mouse IgG2a (Life Technologies #610220), HRP-conjugated goat anti-mouse IgG2b (Invitrogen #M32407), or biotinylated goat anti-mouse IgA (Southern Biotech #1040-08) was added. After 1 h of incubation, Streptavidin-HRP (BD Biosciences #554066) was added to the IgA plates. For all isotypes/subtypes, antigen-specific antibodies were detected using TMB-PLUS (Kem-En-TEC, Taastrup, Denmark). The reaction was stopped with H 2 SO 4 and OD (450-620 nm) was read using an ELISA reader. The results were presented either as titration curves or as absorbance (Abs) at one dilution, or serum titers were reported as the reciprocal of the dilution giving an Abs = 0.5 after subtraction of control serum.
Fig. 7 Protective effect of extVD1 A *4/CAF01-induced immune responses against SvIa challenge. A/J mice were immunized with extVD1 A *4/CAF01 (n = 8) or CAF01 alone (n = 9) using the SIM vaccination protocol. Six weeks post last vaccination, the mice were challenged i.vag. with 1 × 10 6 IFU of SvIa/mouse. Data are presented as log 10 IFU. Each line represents the median number of IFU with 25th and 75th percentiles recovered from vaginal swabs at days 3, 7, and 10 post infection. A Mann-Whitney test was used for comparison among groups. ***p < 0.001.
Reactivity against 9-mer overlapping biotinylated Pepsets was investigated by ELISA. Plates were coated with streptavidin, incubated with biotinylated peptides, blocked with skimmed milk powder, washed, and then followed the ELISA procedure described above.
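The endpoint titre defined above (the reciprocal dilution at which the background-corrected OD 450-620 falls to 0.5) can be estimated from a titration curve by interpolating on a log-dilution scale. The Python sketch below is only an illustration of that calculation; it is not the software used in the study, and the example readings are invented.

```python
import numpy as np

def endpoint_titre(dilutions, od_sample, od_control, cutoff=0.5):
    """Reciprocal dilution giving OD = cutoff after control subtraction,
    estimated by log-linear interpolation between adjacent dilutions."""
    d = np.asarray(dilutions, dtype=float)                  # reciprocal dilutions, ascending
    od = np.asarray(od_sample, float) - np.asarray(od_control, float)
    order = np.argsort(od)                                  # OD decreases as dilution increases
    return float(10 ** np.interp(cutoff, od[order], np.log10(d)[order]))

# Invented example titration curve:
dil = [100, 400, 1600, 6400, 25600]
print(endpoint_titre(dil, [2.6, 2.1, 1.2, 0.4, 0.1], [0.05] * 5))  # ~ 4.9e3
```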
Neutralization assay
In vitro neutralization assay. Hamster kidney cells (HaK) (ATCC® CCL-15) were maintained in RPMI 1640 supplemented with 1% (vol/vol) L-glutamine, 1% nonessential amino acids, 1% sodium pyruvate, 70 µM 2-mercaptoethanol, 10 µg/ml gentamicin, 1% HEPES, and 5% heat-inactivated fetal bovine serum at 37°C, 5% CO 2 . Cells were grown to confluence in 96-well flatbottom microtiter plates (Costar, Corning, NY, USA). The different C.t. stocks were diluted to a predetermined concentration in SPG buffer and mixed with heat-inactivated (56°C for 30 min) and serially diluted serum. The mixture was incubated for 45 min at 37°C and inoculated onto HaK cells in duplicates. After 2 h of incubation at 36°C on a rocking table, the cells were washed once with RPMI 1640 and further incubated 24 h at 37°C, 5% CO 2 in culture media containing 0.5% glucose and cycloheximide (1 µg/ml). The cells were fixed with 96% ethanol and inclusions were visualized by staining with polyclonal rabbit anti-rCT043 serum (produced in our lab), followed by Alexa 488-conjugated goat anti-rabbit immunoglobulin (1:500-1:1000) (Invitrogen #A11008). Cell staining was done with Propidium Iodide (Invitrogen). IFU was enumerated by fluorescence microscopy using an automated cell imaging system (ImageXpress Pico automated Cell imaging system (Molecular Devices, San Jose, California, USA)) counting 25% of each well or by manual counting. The results were calculated as percentage reduction in mean IFU relative to control serum. A serum dilution giving a 50% or greater reduction in IFU relative to the control was defined as neutralizing. The serum dilution giving a 50% reduction in IFU was named reciprocal 50% neutralization titer (NT 50 ).
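The reciprocal NT 50 defined above can be read off a serum dilution series in the same spirit, by interpolating the dilution at which the IFU reduction crosses 50%. The following Python sketch is a minimal illustration with hypothetical numbers, not the analysis code used in the study.

```python
import numpy as np

def nt50(dilutions, pct_neutralization):
    """Reciprocal 50% neutralization titre from a dilution series,
    interpolated on a log-dilution scale."""
    d = np.asarray(dilutions, dtype=float)            # reciprocal serum dilutions
    n = np.asarray(pct_neutralization, dtype=float)   # % reduction in IFU vs. control serum
    order = np.argsort(n)                             # neutralization falls as dilution rises
    return float(10 ** np.interp(50.0, n[order], np.log10(d)[order]))

# Hypothetical series: neutralization drops below 50% between 1:800 and 1:1600.
print(nt50([100, 200, 400, 800, 1600, 3200], [98, 95, 88, 70, 45, 20]))  # ~1400
```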
In vivo neutralization. C.t. SvA was incubated with heat-inactivated and sterile-filtered 32 times diluted serum from extVD1 A *4/CAF01 s.c.vaccinated A/J mice or serum from adjuvant control mice. After 30 min at 37°C, depo-provera-treated C3H/HeN mice were infected with 10 µl of the inoculum (a total of 1 × 10 6 IFU/mouse), swabbed at days 3 and 7 post infection, and IFU was determined as described in "Vaginal challenge and cultures".
Competitive inhibition of surface recognition and neutralization
In the competitive inhibition of surface recognition assay, extVD1 A *4-specific and control serum (pool of sera from 20 mice) were preincubated for 45 min at 37°C with 1 mg/ml of the VD1 A/HAR-13 peptide diluted in 1× PBS or with 1× PBS alone. Depending on the serovar being tested, predetermined dilutions of the extVD1 A *4-specific serum were used to ensure a response within the linear range of the titration curve (SvI/J: 7000×, SvC/Ia: 5000×, and SvK: 2500×). After incubation, the mixture was added to C.t.-coated ELISA plates in duplicates and C.t.-specific IgG was measured by ELISA as described in "ELISA for antigen-specific antibodies in serum and vaginal washes".
In the competitive inhibition of neutralization assay, extVD1 A *4-specific serum, extVD1 J *4-specific serum, and control serum (pool of sera from 20 mice) were preincubated for 45 min at 37°C with 1 mg/ml of peptides (C1, C2, C3, VD1 A minimal*4, VD1 A , extVD1 A , VD1 J minimal*4, and VD1 J or extVD1 J ) diluted in SPG buffer or SPG buffer alone prior to 45 min of incubation at 37°C with the different C.t. serovars. Depending on the serovar being tested, predetermined dilutions of the extVD1 A *4-specific serum were used when mixed with the VD1 A/HAR-13 peptide to ensure between 80 and 100% neutralization in the control wells (SvA/2497:1500×, SvC:500×, SvI:250×, SvJ/K:200×, and SvIa:50×). The extVD1 J *4-specific serum was prediluted 200 times when mixed with peptides. The mixtures were inoculated onto a HaK cell monolayer in duplicates. After 2 h of incubation at 36°C on a rocking table, the cells were washed once with RPMI 1640 and further incubated for 24 h at 37°C, 5% CO 2 , as described previously. Inclusions were fixed, stained, and counted, and percent neutralization was calculated as described in "Neutralization assay".
Vaginal challenge and cultures
Ten and three days before C.t. SvA or SvIa challenge, the oestrous cycle was synchronized by injection of 2.5 mg of medroxyprogesterone acetate (Depo-Provera, Pfizer, Ballerup, Denmark), increasing mouse susceptibility to chlamydial infection by prolonging dioestrus. The mice were challenged i.vag. with 1 × 10 6 C.t. SvA/HAR-13 or SvIa in 10 µl of SPG buffer. At different time points post infection (PID3, 7, 10, 14, and 17), the mice were swabbed. Swabs were vortexed with glass beads in 0.6 ml of SPG buffer and stored at −80°C until analysis. The infectious load was assessed by infecting 48-well plates seeded with McCoy cells with the swab material undiluted and twofold diluted. Inclusions were visualized by staining with polyclonal rabbit anti-MOMP serum made in our lab, followed by an Alexa 488-conjugated goat anti-rabbit immunoglobulin (1:500-1:1000, Invitrogen #A11008). Background staining was done with Propidium iodide (Invitrogen). IFU was enumerated by fluorescence microscopy either manually or by using an automated cell imaging system (ImageXpress Pico automated Cell imaging system) (Molecular Devices) counting 50% of each well. If no IFU were detected in the counted area, 100% of each well was counted manually. Culture-negative mice were assigned the limit of detection of 4 IFU/mouse, representing one IFU in the tested swab material (1/4 of the total swab material).
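The back-calculation from counted inclusions to IFU per mouse, including the detection limit of 4 IFU/mouse described above, can be sketched as follows. The fraction of swab material plated (1/4) follows the text; the other parameters are placeholders and the helper itself is hypothetical, not part of the study's workflow.

```python
def ifu_per_mouse(inclusions, fraction_of_well_counted=0.5,
                  dilution_factor=1, fraction_of_swab_plated=0.25):
    """Scale inclusions counted in part of one well back to the whole swab.
    Culture-negative samples are assigned the 4 IFU/mouse detection limit."""
    if inclusions == 0:
        return 4.0  # one IFU in the tested quarter of the swab material
    ifu_in_well = inclusions / fraction_of_well_counted * dilution_factor
    return ifu_in_well / fraction_of_swab_plated

print(ifu_per_mouse(37))   # 37 inclusions in half a well -> 296 IFU/mouse
print(ifu_per_mouse(0))    # below detection -> 4 IFU/mouse
```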
Statistical analysis
GraphPad Prism 8.3.0 was used for data handling, analysis, and graphic representation. Statistical analysis of vaginal wash samples and log 10 IFU was performed using the Mann-Whitney test or Kruskal-Wallis (one-way ANOVA) followed by Dunn's multiple-comparison test. Statistical analysis of the AUC data was done by the Mann-Whitney test. A p value of < 0.05 was considered significant.
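As a minimal illustration of the group comparison described above, the scipy call below runs a two-sided Mann-Whitney test on log 10 IFU values; the numbers are hypothetical and serve only to show the usage.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical log10 IFU loads at one time point (not data from the study).
vaccinated    = np.log10([4, 4, 4, 120, 350, 900, 1500, 4])
adjuvant_only = np.log10([2.1e4, 8.5e4, 1.3e5, 4.0e4, 6.2e4, 9.0e4, 2.5e5, 7.1e4, 3.3e4])

stat, p = mannwhitneyu(vaccinated, adjuvant_only, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```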
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
DATA AVAILABILITY
The authors confirm that all relevant data are included in the paper and available from the corresponding author upon request.
"Medicine",
"Biology"
] |
Comment on “Strong signature of the active Sun in 100 years of terrestrial insolation data” by W. Weber
An analysis of ground-based observations of solar irradiance was recently published in this journal, reporting an apparent increase of solar irradiance on the ground of the order of 1% between solar minima and maxima [1]. Since the corresponding variations in total solar irradiance on top of the atmosphere are accurately determined from satellite observations to be of the order of 0.1% only [2], the one order of magnitude stronger effect in the terrestrial insolation data was interpreted as evidence for cosmic-ray induced aerosol formation in the atmosphere. In my opinion, however, this result does not reflect reality. Using the energy budget of Earth's surface, I show that changes of ground-based insolation with the solar cycle of the order of 1% between solar minima and maxima would result in large surface air temperature variations which are inconsistent with the instrumental record. It would appear that the strong variations of terrestrial irradiance found by [1] are due to the uncorrected effects of volcanic or local aerosols and seasonal variations. Taking these effects into account, I find a variation of terrestrial insolation with solar activity which is of the same order as the one measured from space, bringing the surface energy budget into agreement with the solar signal detected in temperature data.
Earth's surface energy budget
Using a simple argument based on the energy budget at Earth's surface one can show that a variation of terrestrial insolation with solar activity on the percentage level - as reported in [1] - would lead to unrealistically large temperature fluctuations which are not observed in the instrumental surface temperature record. To first order, the relationship between surface temperature T_s and terrestrial insolation I_s is governed by the balance between the absorbed short-wave radiation I_s(1 − α_s) and the long-wave emission εσT_s^4 according to the Stefan-Boltzmann law, with the surface reflectivity or albedo α_s, the emissivity ε and the Stefan-Boltzmann constant σ. Therefore, changes in temperature dT are related to changes in insolation dI by dT/T_s = (1/4) dI/I_s, since changes of surface albedo can be neglected for such small irradiance variations. Conventional wisdom suggests that terrestrial insolation I_s varies with the solar cycle in the same way as the total solar irradiance (TSI) above the atmosphere, i.e. dI/I_s ≈ 0.1% [2]. Using this value and an average surface temperature of T_s = 288 K yields an expected temperature variation over the solar cycle of dT ≈ 0.07 K. This simple approximation ignores feedbacks in the climate system (due to clouds, water vapour, ice and changes in the lapse rate). The combination of these effects is generally considered to act as a positive feedback; the estimate above can thus be considered a lower limit.
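A quick numerical check of this lower-limit estimate (a sketch added here for illustration, not part of the original comment):

```python
# Energy balance I_s*(1 - alpha_s) = eps*sigma*T_s**4  =>  dT/T_s = (1/4)*dI/I_s
T_s = 288.0                              # mean surface temperature in K
for rel_dI in (0.001, 0.01):             # 0.1 % (TSI-like) and 1 % (as claimed in [1])
    dT = T_s * rel_dI / 4.0
    print(f"dI/I = {rel_dI:.1%}  ->  dT ~ {dT:.2f} K (no feedbacks)")
# 0.1 % gives ~0.07 K; 1 % gives ~0.7 K even before any positive feedbacks.
```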
A more appropriate estimate for the temperature response dT to changes in solar irradiance dI including these feedbacks is given by the relation dT = λ dF, with the change in radiative forcing dF = (1 − α) dI/4 (the change dI in incoming solar radiation I ≈ 1361 W m⁻² corrected for Earth's albedo α ≈ 0.3 and geometry) and the transient climate sensitivity λ [3] describing the short-term response of the climate system to changes in forcing. For a change of 0.1% in insolation this yields dT ≈ 0.1 K, in excellent agreement with the value derived from global surface air temperature data [4,5].
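The feedback-inclusive estimate can be checked in the same way. Note that the numerical value of the transient climate sensitivity used below is an assumption chosen only so that the quoted dT ≈ 0.1 K is reproduced; no explicit value is given in the text.

```python
I_tsi = 1361.0                 # W m^-2, total solar irradiance
alpha = 0.3                    # planetary albedo
dI    = 0.001 * I_tsi          # 0.1 % solar-cycle variation
dF    = (1.0 - alpha) * dI / 4.0          # change in radiative forcing
lam   = 0.4                    # K per W m^-2, assumed transient climate sensitivity
print(f"dF ~ {dF:.2f} W m^-2, dT ~ {lam * dF:.2f} K")   # ~0.24 W m^-2 and ~0.1 K
```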
This estimate is interesting for two reasons: First, it shows that most of the observed solar-cycle global temperature variation can be explained by changes in top-of-the-atmosphere irradiance alone. The remainder is likely due to well documented effects of changes in ultraviolet radiation, leaving little room for more speculative effects of cosmic rays. Secondly, a terrestrial insolation variation with solar activity one order of magnitude larger than the TSI changes (as suggested by the results in [1]) would lead to global temperature changes of dT ≈ 1 K between solar maxima and minima (Fig. 1b), a result clearly in conflict with the instrumental surface air temperature record (e.g. [6]), which does not exhibit such large variations over the 11-year solar cycle (see Fig. 1a). This discrepancy between the strong variation of terrestrial insolation reported in [1] and the energy budget at Earth's surface suggests that the data analysis in [1] is biased by systematic effects not related to solar activity changes, a hypothesis which will be explored in the next section.
Fig. 1 (a) Instrumental global surface air temperature record. Other global surface temperature datasets look very similar. (b) Monthly temperature anomalies due to the 11-year solar cycle according to [4] (black line) on the same scale as panel (a). The grey line indicates the response expected for a ten-times larger variation in terrestrial irradiance as suggested by [1], which is inconsistent with the temperature record shown in panel (a).
Analysis of the terrestrial insolation data
The analysis of ground-based insolation data in [1] neglects the effects of atmospheric aerosols from volcanoes or local pollution as well as seasonal variations, which in my opinion feigns a stronger influence of solar activity on terrestrial insolation. These arguments will be briefly summarised in this section; the complete re-analysis of the data is discussed in detail in [8].
The bias resulting from volcanic aerosols and pollution is illustrated in Fig. 2. During the time period 1924-1955 covered by the Smithsonian Astrophysical Observatory (SAO) data [9] analysed in [1], the years 1928-1934 and 1951-1955 were affected by aerosols from well documented volcanic eruptions and local pollution [10]. By coincidence, these periods overlap with two out of three of the solar minima during this interval (Fig. 2a). Since aerosols absorb and scatter incoming light, this results in a lower measured irradiance on the ground, as can be seen in Fig. 2b. Note that there is no apparent drop in irradiance during the minimum around 1944 which is unaffected by aerosols, demonstrating that the lower irradiance during the other two solar minima is indeed driven by aerosols rather than the low solar activity. (Note that the expected variation of the irradiance with solar activity of the order of 0.1% is too small to be seen with the naked eye on this scale.) In his reply [12] to this comment, Werner Weber observes that the scatter in the derived TSI during these periods of active volcanism is not larger than at other times and concludes that the SAO personnel did not take observations during these time intervals. This is not entirely convincing, however, since the effects of volcanic aerosols are visible for a few years after the eruptions, and there clearly exist data during these time periods. I would argue that the small scatter in the TSI demonstrates that the SAO method for correcting the observed irradiance for the measured effects of water vapour and aerosols works even at times of large volcanic aerosol loads in the atmosphere.
The seasonal bias in the data is a bit more intricate. Terrestrial irradiance exhibits a strong seasonal cycle (see Fig. 2b). Since days with low and high sunspot numbers are not distributed equally over all seasons in the SAO data, an analysis of trends of irradiance with sunspot number will be skewed if the seasonal variations are not corrected [8].
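As a concrete illustration of this point, the minimal sketch below (my own, not the re-analysis code of [8]) removes a monthly climatology before regressing irradiance on the sunspot number; skipping the deseasonalisation step on the same data changes the fitted slope.

```python
import numpy as np

def irradiance_sunspot_slope(months, irradiance, sunspots, deseasonalise=True):
    """Slope of (optionally deseasonalised) irradiance against sunspot number."""
    months = np.asarray(months)            # calendar month of each sample, 1..12
    irr = np.asarray(irradiance, dtype=float).copy()
    r = np.asarray(sunspots, dtype=float)
    if deseasonalise:
        # Subtract the mean irradiance of each calendar month (monthly climatology).
        for m in range(1, 13):
            sel = months == m
            if sel.any():
                irr[sel] -= irr[sel].mean()
    slope, _intercept = np.polyfit(r, irr, 1)
    return slope
```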
A reanalysis of the data corrected for seasonal variations and without the years affected by volcanic or other aerosols yields a trend of irradiance with sunspot number which is about a factor of ten smaller than the one reported in [1], corresponding to a variation of 0.1% between solar maxima and minima as for the TSI [8]. This variation of terrestrial insolation is in agreement with the temperature changes associated with the solar cycle [4,5], see the energy estimate presented in Sect. 1.
Since the publication of the original results in [1], Werner Weber has introduced an improved analysis technique for the SAO data, described in his reply to this comment [12]. Starting with the observation that the measured terrestrial irradiance shows a second-order dependence on the precipitable water content W and the brightness A of the solar aureole, a 'scatter function' I_sc can be constructed by fitting a second-order power series in W and A to the observed irradiance (see Fig. 2c for an example). Subtracting this scatter function I_sc from the irradiance measurements I yields the 'reduced irradiance' I_red = I − I_sc, which exhibits a drastically reduced scatter compared to the original irradiance (see Fig. 2d). This method is quite remarkable and is a legitimate technique to remove the effects of seasons and volcanic aerosols; physically it is equivalent to correcting the observed terrestrial irradiance for the effects of water vapour and aerosols in the atmosphere using empirical data.
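The idea can be made concrete with a small least-squares sketch (the variable names I_sc and I_red are mine; the fitting details in [12] may differ): the observed irradiance is approximated by a second-order power series in W and A, and the fit is subtracted from the measurements.

```python
import numpy as np

def scatter_reduction(I, W, A):
    """Fit I ~ c0 + c1*W + c2*A + c3*W^2 + c4*A^2 + c5*W*A and subtract the fit."""
    I, W, A = (np.asarray(x, dtype=float) for x in (I, W, A))
    X = np.column_stack([np.ones_like(W), W, A, W**2, A**2, W * A])
    coeffs, *_ = np.linalg.lstsq(X, I, rcond=None)
    I_sc = X @ coeffs          # 'scatter function'
    I_red = I - I_sc           # 'reduced irradiance' with much smaller scatter
    return I_sc, I_red
```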
In addition, however, Werner Weber argues that the scatter function I_sc depends strongly on the sunspot number R and that this R dependence has to be removed by applying a Legendre-type transformation I_sc → Ĩ_sc with Ĩ_sc,i = I_sc,i − C_i R_i. The variation of I_sc, however, is more likely caused by the effects of aerosols than by solar activity (see the discussion above and in [8]). Hence, by applying the transformation, an unrealistic R dependence of the reduced irradiance I_red is introduced in [12], leading to an artificially large variation of the terrestrial irradiance with sunspot number and a striking, yet misleading, correlation between sunspot number and strong variations in the reduced (and transformed) terrestrial irradiance shown in Fig. 4 in [12].
This effect is best illustrated by the apparent dip in the reduced terrestrial irradiance around the solar minimum in 1945 in [12]: looking at the original data, there is no decrease in terrestrial insolation and no increase in water content or aureole brightness during that time (see Fig. 2 in [12] and Fig. 2c in [8]), yet Fig. 2 in [12] shows an increase in the transformed scatter function Ĩ_sc and a corresponding dip in the reduced irradiance I_red = I − Ĩ_sc. These features can be naturally explained as artefacts of the transformation, which is based on the assumption that the scatter function I_sc depends on the sunspot number R. Nevertheless, the technique of scatter reduction (without the transformation) introduced in [12] might offer the possibility to improve the analysis of terrestrial irradiance trends, provided that the effects of solar activity and volcanism can be reliably separated. Indeed, the trends of the reduced (un-transformed) irradiance with sunspot number are in agreement with the results found in [8] and thus with the surface energy balance discussed in Sect. 1.
Discussion: Solar activity and Earth's climate
Quantifying the influence of solar activity on Earth's climate [13] is clearly an important issue, helping to distinguish between natural and anthropogenic causes of climate change. The energy balance of Earth's surface (Sect. 1) and investigations of the instrumental temperature record [4,5] show, however, that the temperature changes caused by the 11-year solar cycle amount to only about 0.1 K, much smaller than the observed 20th-century warming of about 0.7 K [6]. On longer timescales, extended periods of low solar activity ('grand minima') like the 17th-century Maunder Minimum [14] are associated with a global cooling of only a few tenths of a degree [15], although regional and seasonal temperature signatures are more pronounced. Furthermore, solar activity appears not to be the dominant cause of the "Little Ice Age" coinciding with the Maunder Minimum: both lower greenhouse gas concentrations and strong volcanic eruptions during the 17th century contributed to the observed cooling [16,17]. This also means that even a future grand solar minimum could not offset the much larger temperature rise caused by anthropogenic greenhouse-gas emissions [18,19].
"Environmental Science",
"Physics"
] |
Development and execution of an impact cratering application on a computational Grid
Impact cratering is an important geological process of special interest in Astrobiology. Its numerical simulation comprises the execution of a high number of tasks, since the search space of input parameter values includes the projectile diameter, the water depth and the impactor velocity. Furthermore, the execution time of each task is not uniform because of the different numerical properties of each experimental configuration. Grid technology is a promising platform to execute this kind of application, since it provides the end user with a performance much higher than that achievable within any single organization. However, the scheduling of each task on a Grid involves challenging issues due to the unpredictable and heterogeneous behavior of both the Grid and the numerical code. This paper evaluates the performance of a Grid infrastructure based on the Globus toolkit and the GridWay framework, which provides the adaptive and fault tolerance functionality required to harness Grid resources, in the simulation of the impact cratering process. The experiments have been performed on a testbed composed of resources shared by five sites interconnected by RedIRIS, the Spanish Research and Education Network.
Introduction
Impact cratering is an important geological process of special interest in Astrobiology that affects the surface of nearly all celestial bodies such as planets and satellites. The detailed morphologies of impact craters (see [23] for a detailed description) show many variations from small craters to craters with central peaks. Furthermore, a water layer at the target influences the lithology and morphology of the resultant crater. Therefore, marine-target impact cratering simulation plays an important role in studies which involve hypothetical Martian seas [22].
Our target application analyzes the threshold diameter for cratering the seafloor of a hypothetical Martian sea during the first steps of an impact. Results of this analysis can be used to develop search criteria for future investigations, including techniques that will be used in future Mars exploration missions to detect buried geological structures using ground penetrating radar surveys, such as the ones included in the ESA Mars Express and planned for NASA 2005 missions. The discovery of marine-target impact craters on Mars would also help to address the ongoing debate of whether large water bodies occupied the northern plains of Mars and help to constrain future paleoclimatic reconstructions [22]. In any case, this kind of study requires a huge amount of computing power, which is not usually available within a single organization.
In order to determine the range of the critical diameter of the projectile which can crater the seafloor, we perform a high number of simulations. Each computational task solves the equations of motion for compressible media, combined with the equations of state, over a subset of input parameter values, namely the projectile diameter, the water depth and the impactor velocity. Additionally, the execution time of each computational task is not uniform because of the different numerical properties of each experimental configuration. An enumeration of the resulting parameter sweep is sketched below.
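For illustration, the parameter sweep described above can be enumerated as follows; the eight diameter values are placeholders within the stated 60 m to 1 km range, since only the 60 and 80 m cases and the velocity and depth values are given explicitly in the text.

```python
from itertools import product

diameters_m = [60, 80, 120, 180, 280, 430, 660, 1000]   # 8 cases (assumed spacing)
velocities_km_s = [10, 20, 30]                           # 3 cases
water_depths_m = [100, 200, 400]                         # 3 cases

tasks = [
    {"diameter_m": d, "velocity_km_s": v, "water_depth_m": h}
    for d, v, h in product(diameters_m, velocities_km_s, water_depths_m)
]
print(len(tasks))   # 72 independent simulation tasks, one per hydrocode run
```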
Grid technology provides a way to access the geographically distributed resources needed for executing compute-intensive Parameter Sweep Applications (PSA), like the one described above. In spite of the relatively simple structure of a PSA, its reliable and efficient execution on computational Grids involves several issues, mainly due to the nature of the Grid itself. In particular, one of the most challenging problems that the Grid computing community has to deal with is the fact that Grids present unpredictably changing conditions, namely a high fault rate and dynamic resource availability, load and cost. Adaptive scheduling has been widely studied in the literature [1-3,26,27] and is generally accepted as the cure for the dynamism of the Grid. Moreover, the different execution times of different tasks make the use of an adaptive approach critical. The GridWay framework [17] achieves the robust and efficient execution of PSAs by combining adaptive scheduling and execution, to reflect the dynamic Grid characteristics, with the re-use of common files between tasks, to reduce the file transfer overhead. The aim of this paper is to describe and analyze the results obtained in the simulation of the impact cratering process on a Grid infrastructure based on Globus and GridWay.
In Section 2 we present the highly heterogeneous testbed used in this work. The functionality and internals of the GridWay framework are briefly described in Section 3. The target application is outlined in Section 4. We demonstrate that a Grid testbed based on Globus and GridWay provides the functionality and reliability needed to execute the simulation tasks. The performance results are described in Section 6, where some performance metrics are proposed in order to evaluate the Grid computing platform: the Grid speedup metric, which quantifies the benefits of being part of a Grid, and the resource load variability, which could be used to adjust the components of the Grid infrastructure in order to achieve higher efficiencies. Finally, some conclusions are presented in Section 7.
The research testbed
The management of jobs within the same department is addressed by many research and commercial systems [8]: Condor, Load Sharing Facility, Sun Grid Engine, Portable Batch System, LoadLeveler, etc. Some of these tools, such as Sun Grid Engine Enterprise Edition [16], also allow the interconnection of multiple departments within the same administrative domain. Other tools, such as Condor Flocking [9], even allow the interconnection of multiple domains, as long as they run the same distributed resource management software. However, they are unsuitable in computational Grids where resources are scattered across several administrative domains, each with its own security policies and distributed resource management systems.
The Globus toolkit [10] provides the services and libraries needed to enable secure multiple-domain operation with different resource management systems and access policies. Globus is a core Grid middleware that provides the following components, which can be used separately or together, to support Grid applications: Grid Security Infrastructure (GSI), Grid Resource Allocation Manager (GRAM), Global Access to Secondary Storage (GASS), Monitoring and Discovery Service (MDS), GridFTP and Replica Management Services.
Table 1 shows the characteristics of the machines in the research testbed, based on the Globus toolkit 2.X [10]. The testbed joins resources from five sites, all of them connected by the Spanish Research and Education Network, RedIRIS. The geographical distribution and interconnection network of the sites are shown in Fig. 1. This organization results in a highly heterogeneous testbed, since it presents several kinds of resources (PCs, clusters, SMP servers), processor architectures and speeds, Resource Management Systems (RMS), network links, etc. In the following experiments, cepheus is used as the client; it holds all the input files and receives the simulation results. In the case of clusters, we have limited the number of simultaneously used nodes to 5 in order not to saturate these systems, since they are at production level.
The GridWay framework
The Globus toolkit [10] supports the submission of applications to remote hosts by providing resource discovery, resource monitoring, resource allocation, and job control services. However, the user is responsible for manually performing all the submission stages in order to achieve any functionality: selection, preparation, submission, monitoring, migration and termination [24,25]. Hence, the development of applications for the Grid still requires a high level of expertise due to its complex nature. Moreover, Grid resources are also difficult to harness efficiently due to their heterogeneous and dynamic nature. In a previous work [17], we presented a new Globus experimental framework that allows an easier and more efficient execution of jobs on a dynamic Grid environment in a "submit and forget" fashion. The GridWay framework provides resource selection, job scheduling, reliable job execution, and automatic job migration to allow a robust and efficient execution of jobs in dynamic and heterogeneous Grid environments based on the Globus toolkit [10].
GridWay architecture
The architecture of the GridWay framework is depicted in Fig. 2. The user interacts with the framework through a programming or command line interface, which forwards client requests (submit, kill, stop, resume) to the dispatch manager. The dispatch manager periodically wakes up and tries to submit pending and rescheduled jobs to Grid resources. Once a job is allocated to a resource, a submission manager and a performance monitor are started to watch over its correct and efficient execution (see [17] for a detailed description of these components).
The framework has been designed to be modular, thus allowing extensibility and improvement of its capabilities. The following modules can be set on a per-job basis:
- The resource selector module, which is used by the dispatch manager to build a prioritized list of candidate resources following the preferences and requirements provided by the user.
- The performance evaluator module, which is used by the performance monitor to periodically evaluate the application performance, usually by analyzing a performance profile generated by the running application or by a monitor started along with the application.
Job execution
Job execution is performed in three steps by the following modules:
- The prologue module, which is responsible for creating the remote experiment directory and transferring the executable and all the files needed for remote execution, such as input or restart files corresponding to the execution architecture. These files can be specified as local files in the experiment directory or as remote files stored in a file server through a GridFTP URL. For the files declared by the user as shared, a reference is added to the remote GASS cache, so they can be re-used by other jobs submitted to the same resource.
- The wrapper module, which is responsible for executing the actual job and obtaining its exit code.
- The epilogue module, which is responsible for transferring back output files and cleaning up the remote experiment directory. At this point, references to shared files in the GASS cache are also removed.
Grid scheduling policy for PSAs
GridWay achieves an efficient execution of PSAs by combining adaptive scheduling, adaptive execution, and reuse of common files [19]. In fact, one of the main characteristics of the GridWay framework is the combination of adaptive techniques for both the scheduling and the execution [18] of Grid jobs:
- Adaptive scheduling: Reliable schedules can only be issued considering the dynamic characteristics of the available Grid resources [2,3,6]. In general, adaptive scheduling can consider factors such as availability, performance, load or proximity, which must be properly scaled according to the application needs and preferences. GridWay periodically gathers information from the Grid and from the running or completed jobs to adaptively schedule pending tasks according to the application demands and Grid resource status [19].
- Adaptive execution: In order to obtain a reasonable degree of both application performance and fault tolerance, a job must be able to migrate among the Grid resources, adapting itself to events dynamically generated by both the Grid and the running application [1,20,26]. GridWay evaluates each rescheduling event to decide if a migration is feasible and worthwhile [17]. Some reasons, like job cancellation or resource failure, make GridWay immediately start a migration process. Other reasons, like "better" resource discovery, make GridWay start a migration process only if the newly selected resource presents a high enough rank. In this case, the time to finalize and the migration cost are also considered [21].
- Reuse of common files: Efficient execution of PSAs can only be achieved by re-using shared files between tasks [6,13]. This is specially important not only to reduce the file transfer overhead, but also to prevent the saturation of the file server where these files are stored, which can occur in large-scale PSAs. Reuse of common files between tasks simultaneously submitted to the same resource is achieved by storing the executable file and some files declared by the user as shared in the GASS cache [19].
In the case of adaptive execution, the following rescheduling events, which can lead to a job migration if it is considered feasible and worthwhile, are considered [17,18]:
- Grid-initiated rescheduling events:
  * "Better" resource discovery (opportunistic migration [21]).
  * Job cancellation or suspension.
  * Resource or network failure.
- Application-initiated rescheduling events:
  * Change in the application demands.
In this work, we do not take advantage of all the GridWay features for adaptive execution, since they are not supported by the application. In order to fully support adaptive execution, the application must provide a set of restart files to resume execution from a previously saved checkpoint. Moreover, the application could optionally provide a performance profile to detect performance degradations in terms of application-intrinsic metrics, and it could also dynamically change its host requirements and preferences to guide its own scheduling process. We only consider adaptive execution to provide fault tolerance by restarting the execution from the beginning (see the following section).
Fault tolerance
GridWay provides the application with the fault detection capabilities needed in such a faulty environment:
- The GRAM job manager notifies submission failures as GRAM callbacks. This kind of failure includes connection, authentication, authorization, RSL parsing, executable or input staging, credential expiration and other failures.
- The job manager is probed periodically. If the job manager does not respond, then the GRAM gatekeeper is probed. If the gatekeeper responds, a new job manager is started to resume watching over the job. If the gatekeeper fails to respond, a resource or network failure has occurred. This is the approach followed in Condor/G [11].
- The standard output of prologue, wrapper and epilogue is parsed in order to detect failures. In the case of the wrapper, this is useful to capture the job exit code, which is used to determine whether the job was successfully executed or not. If the job exit code is not set, the job was prematurely terminated, so it failed or was intentionally cancelled.
When an unrecoverable failure is detected, GridWay retries the submission of prologue, wrapper or epilogue a number of times specified by the user and, when no more retries are left, it performs an action chosen by the user between two possibilities: stop the job for manual resumption later, or automatically generate a rescheduling event.
Related projects
The AppLeS project [2] has previously dealt with the concept of adaptive scheduling on Grids. AppLeS is currently focused on defining templates for characteristic applications, like APST for parameter sweep and AMWAT for master/worker applications. Also, Nimrod/G [3] dynamically optimizes the schedule to meet user-defined deadline and budget constraints. On the other hand, the need for a nomadic migration approach for adaptive execution on Grids has been previously discussed in the context of the GrADS project [20]. The tools developed by the above projects have been successfully applied to several applications, like drug design with Nimrod/G [4], computational biology with AppLeS [5], and numerical relativity with GrADS and Cactus [1].
The aim of the GridWay project is similar to that of the above projects: to simplify distributed heterogeneous computing. However, it has some remarkable differences. Our framework provides a submission agent that incorporates the runtime mechanisms needed for transparently executing jobs in a Grid by combining both adaptive scheduling and execution. Our modular architecture for job adaptation to a dynamic environment presents the following advantages:
- It is not bound to a specific class of application generated by a given programming environment, which extends its application range.
We would also like to mention that the experimental framework does not require new system software to be installed in the Grid resources; the framework is currently functional on any Grid testbed based on Globus. We believe that this is an important advantage because of socio-political issues, since cooperation between different research centers, administrators and users is always difficult.
Impact cratering simulations
The impact process can be described as a transfer of energy. The initial kinetic energy of the projectile does work on the target to create a hole - the crater - as well as heating the material of both projectile and target. We focus our attention on high-velocity impacts, which can be separated into several stages, each dominated by a specific set of major physical and mechanical processes.
The main stages are contact and shock compression, transient cavity growth by crater material ejection, and finally transient cavity modification. Impact cratering begins with a sufficient compression of the target and projectile materials. The energy released by the deceleration of the projectile results in the formation of shock waves and their propagation away from the impact point. The projectile's initial kinetic energy is redistributed into kinetic and internal energy of all colliding material. The internal energy heats both the projectile and the target and, for strong enough shock waves, this may result in melting and vaporization of material near the impact zone.
To describe the impact process we solve the equations of motion for compressible media using a hydrocode. The standard set of equations of motion expresses three basic laws: mass, momentum, and energy conservation. It must be combined with the equations of state (EOS), a system of relationships which allows us to describe the thermodynamic state of the materials of interest. In its basic form, an EOS should define the pressure in the material at a given density and temperature. In an extended form, an EOS should also define the phase state of the material (melting, vapor, dissociation, ionization) as well as all useful derivatives of the basic parameters and transport properties (sound speed, heat capacity, heat conductivity, etc.).
Numerical simulations use the Eulerian mode of SALE-B, a 2D hydrocode modified by Boris Ivanov based on SALES-2 [12]. The original hydrocode, Simplified Arbitrary Lagrangian-Eulerian (SALE), permits the study of the fluid dynamics of 2D viscous fluid flows at all speeds, from the incompressible limit to highly supersonic, with an implicit treatment of the pressure equation and a mesh rezoning/remapping philosophy [7]. The PDEs solved are the Navier-Stokes equations. The fluid pressure is determined from an EOS and supplemented with an artificial viscous pressure for the computation of shock waves. SALES-2 can also model elastic and plastic deformation and tensile failure.
In this study we deal with vertical impacts, as they reduce to 2D problems using radial symmetry. All simulations were conducted with spherical projectiles. The non-uniform computational mesh of the coarse simulations consists of 151 nodes in the horizontal direction and 231 nodes in the vertical direction; the mesh describes half of the crater domain because of axial symmetry. The mesh size progressively increases outwards from the center with a 1.05 coefficient to obtain a larger spatial domain. The central cell region around the impact point, where damage is greater and more extended than the crater area, is a regular mesh with a resolution of 80 nodes in both the x and y directions, and also describes half of the damaged zone. We use a resolution of 10 nodes to describe the projectile radius.
For a fixed water depth, we used 8 cases of projectile diameter in the range of 60 m to 1 km, and 3 cases of impactor velocity: 10, 20 and 30 km/s. Calculations were performed for 3 cases of water depth: 100, 200 and 400 m. Once the projectile velocity and the water depth of the hypothetical ocean are fixed, we search for the range of the critical diameter of the projectile which can crater the seafloor [15]. Therefore, in this study we have to compute 72 cases. Their execution on a Grid environment allows the diameter range of interest to be obtained within the research cycle time. Figures 3 and 4 show the timeframes of the opening cavities at 1 second, using the 60 and the 80 m impactors, respectively, with 200 m water depth and an impactor velocity of 10 km/s. The shape difference between the 60 m case and the 80 m case illustrates the effect of the water layer: in that case, the impactor diameter has to be larger than 80 m to crater the seafloor.
GridWay programming model
The GridWay application programming and command line interfaces allow scientists and engineers to express their computational problems in a Grid environment. The capture of the job exit code allows users to define complex jobs, where each job depends on the output and exit code of the previous one. They may even involve branching, looping and spawning of subtasks, allowing the exploitation of the workflow parallelism of certain types of applications [14].
Figure 5 shows a fragment of the job template used in the following experiments. Files are specified as "source destination" pairs separated by commas, where the destination file can be omitted if it has the same name as the source file.
The experiment files consist of the executable (∼0.5 MB), some parameter files (12 KB) for each task, and a table with values for the EOS equations (1.3 MB when compressed) shared by all the tasks. The final name of the executable is obtained by resolving the variable $GW_ARCH at runtime for the selected host. Similarly, the final names of the parameter files are obtained by resolving the variable $GW_TASK_ID at runtime for the current task. Once the job finishes, the standard output (0.5 MB) and the files with the figures of the simulation timeframes in PNG format (0.5 MB) are transferred back to the client.
The experiments have been performed with a resource selection script that queries MDS for potential execution hosts, attending to the following criteria:
- Host requirements are specified as an LDAP filter, which is used by the resource selector to query MDS and obtain a preliminary list of potential hosts. In the experiments below, we impose a minimum main memory of 100 MB, enough to accommodate each task: (Mds-Memory-Ram-Total-sizeMB>=100). The resource selector also performs a user authorization filter (via a GRAM ping request) on those hosts.
- A rank is assigned to each potential host following the preferences specified by the user in a ranking expression, which is a script that receives the monitoring data of the resources and outputs the rank value. Since our target application is a computing-intensive simulation, the ranking expression benefits those hosts with less workload and thus better performance; it combines FLOPS, the peak performance achievable by the host CPU, and CPU15, the average load in the last 15 minutes (a sketch is given below).
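Since the ranking expression itself is not reproduced here, the sketch below only illustrates the stated intent: favour hosts with high peak performance (FLOPS) and low recent load (CPU15). The specific formula and the load convention (a percentage) are assumptions, not the expression actually used in the experiments.

```python
def rank_host(flops: float, cpu15_percent: float) -> float:
    """Illustrative rank: peak performance discounted by the 15-minute load."""
    return flops * max(0.0, 1.0 - cpu15_percent / 100.0)

# Hypothetical monitoring data: (peak FLOPS, CPU15 in percent).
hosts = {"hostA": (2.0e9, 10.0), "hostB": (3.0e9, 90.0)}
ranked = sorted(hosts, key=lambda h: rank_host(*hosts[h]), reverse=True)
print(ranked)   # the less loaded host wins despite a lower peak performance
```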
Computational results and performance evaluation
The execution time of each task is different and, what is more important, unknown beforehand, since the convergence of the iterative algorithm strongly depends on the input parameters and the testbed resources are heterogeneous. Moreover, there is an additional difference generated by the changing resource load and availability. Therefore, adaptive scheduling is crucial for this application. Figure 6 shows the dynamic throughput, in terms of average turnaround time per job (i.e. the elapsed time divided by the number of completed jobs), as the experiment evolves. The total experiment time was 4.64 hours (4 hours, 38 minutes and 33 seconds), so the achieved throughput was 3.87 minutes (3 minutes and 52 seconds) per job or, equivalently, 15.51 jobs per hour.
Figures 7 and 8 show the schedule performed by GridWay, in terms of the number of jobs allocated to each resource and site, respectively. Most of the allocated jobs were successfully executed, but others failed and were dynamically rescheduled. Given these results, we can calculate the fault rate for each resource or site. The two failing resources (sites) show fault rates of 25% and 45%, respectively, resulting in an overall fault rate of 21%. These failures are mainly due to a known Globus problem (bug id 950) related to the NFS file systems and the PBS resource manager used in the clusters, which causes the job manager to be unable to retrieve the standard output and error of the job. This problem is mitigated, but not avoided, on babieca, where a patch related to this bug was applied.
Figure 9 shows the achieved throughput, also in terms of average turnaround time per job, for each site and for the whole testbed under the above schedule. On the right axis, the distributed or Grid speed-up, i.e. the performance gain obtained by each site, is also shown. We introduced the Grid speed-up as a valuable metric for resource users and managers on each site in order to quantify the benefits of being part of a Grid. Performance metrics like this can help to curb their selfishness in sharing resources on the Grid [25]. It is defined as S_site = T_site / T_Grid, where T_Grid is the Grid turnaround time, i.e. the waiting time from the application execution request until all the tasks are completed and all the results are available when all the resources in the testbed are used, and T_site is the site turnaround time, i.e. the turnaround time when only the resources of a given site are used. This metric should be obtained or estimated in a distributed way for each site. A minimal calculation is sketched below.
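The calculation below uses the definition as reconstructed above; the Grid turnaround time is the measured 4.64 h experiment time, while the single-site turnaround time is a hypothetical value used only for illustration.

```python
def grid_speedup(t_site_hours: float, t_grid_hours: float) -> float:
    """S_site = T_site / T_Grid: the gain a site obtains from joining the Grid."""
    return t_site_hours / t_grid_hours

# T_Grid = 4.64 h was measured in this experiment; T_site = 12 h is hypothetical.
print(f"{grid_speedup(t_site_hours=12.0, t_grid_hours=4.64):.2f}")
```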
Table 2 shows the mean, standard deviation and coefficient of variance (CV) for the transfer and execution times on each resource. Table 3 shows the same statistics for the wall times on each resource. These values are nearly the same as for the execution time, since that is the dominating term.
In the case of the transfer time, we can see differences due to the heterogeneity of network links and resource configurations. For example, khafre runs the 2.2 version of the Globus toolkit, which has a polling period for the GRAM job manager of 30 seconds (whereas the 2.4 version polling period is 10 seconds). That produces a mean transfer time of one minute (two polling periods) with a very low standard deviation. The results also report a very high standard deviation on babieca, which has a nearly flat probability density function, revealing several problems in the GRAM job manager. It is interesting to note that the best mean transfer time corresponds to platon, although it is located in a different site from the client. This is due to the file reuse policy implemented by GridWay, as platon is an SMP node with two processors that executes two simultaneous tasks sharing common files.
In the case of the execution time, there are two sources of variance: the dynamism and heterogeneity of the Grid resources, and the different time needed by each task to converge. Processor speeds have the greater impact on the mean, while the use of an RMS like PBS in some clusters or the existence of SMP nodes makes the standard deviation greater.
Conclusions
The Globus toolkit provides a way to access the distributed resources needed for executing the compute- and data-intensive applications required in several research and engineering fields. However, the user is responsible for manually performing all the submission steps in order to achieve any functionality, and the adaptive execution of applications is not supported. The GridWay framework provides the runtime mechanisms needed for submitting applications and dynamically adapting their execution.
The suitability of a Grid environment based on the GridWay framework and the Globus toolkit has been demonstrated for the execution of a high throughput computing application that simulates impact cratering. The application comprises the execution of a high number of tasks that exhibit different execution times due to both the heterogeneity and dynamism of the Grid resources and the convergence properties of the algorithm. Such a computing platform will help to develop search criteria for future investigations and exploration missions to Mars. Moreover, if these investigations are successful in their hunt, they would also help to address the ongoing debate of whether large water bodies existed on Mars and, therefore, would help to constrain future paleoclimatic reconstructions.
The Grid speed-up has been introduced as a valuable metric for resource users and managers in order to realize the benefits of sharing resources over the Grid. On the other hand, the study of the execution and transfer times provides a measure of the variability in the resource load and could be monitored when adjusting the components of a Grid in order to improve its performance.
Fig. 1. Geographical distribution of the sites in Spain and the interconnection network provided by RedIRIS.
Fig. 3. Timeframes of the opening cavities at 1 second, using the 60 m impactor with 200 m water depth and an impactor velocity of 10 km/s.
Fig. 4. Timeframes of the opening cavities at 1 second, using the 80 m impactor with 200 m water depth and an impactor velocity of 10 km/s.
Fig. 6. Dynamic throughput, in terms of average turnaround time per job.
Fig. 7. Schedule performed by GridWay, in terms of jobs allocated to each resource.
Fig. 8. Schedule performed by GridWay, in terms of jobs allocated to each site.
Fig. 9. Throughput, in terms of average turnaround time per job (left-hand axis and values on top of columns) and distributed speed-up (right-hand axis and values inside columns), for each site and for the whole testbed (rightmost column, labelled as "All").
Table 1. Characteristics of the machines in the research testbed.
Table 2. Mean, standard deviation and coefficient of variance (CV) for the transfer and execution times on each resource.
"Computer Science",
"Engineering",
"Geology"
] |
CANET: Quantized Neural Network Inference With 8-bit Carry-Aware Accumulator
Neural network quantization represents weights and activations with few bits, greatly reducing the overhead of multiplications. However, due to the recursive accumulation operations, high-precision accumulators are still required in multiply-accumulate (MAC) units to avoid overflow, incurring significant computational overhead. This constraint limits the efficient deployment of quantized NNs on resource-constrained platforms. To address this problem, we present a novel framework named CANET, which adapts the 8-bit quantized model to execute MAC operations with 8-bit accumulators. CANET not only employs 8-bit carry-aware accumulators to represent overflow data correctly, but also adaptively learns the optimal format per layer to minimize truncation errors. Meanwhile, a weight-oriented reordering method is developed to reduce the transfer length of the carry. CANET is evaluated on three networks in the ImageNet classification task, where comparable performance with state-of-the-art methods is realized. Finally, we implement the proposed architecture on a custom hardware platform, demonstrating a reduction of 40% in power and 49% in area compared with the MAC unit with 32-bit accumulators.
I. INTRODUCTION
Neural network quantization facilitates the efficient deployment of deep neural networks (DNNs) on resource-constrained platforms, such as drones and Internet-of-Things (IoT) devices [1], [2]. During inference, MAC operations consisting of multiplications and accumulations dominate the arithmetic cost. Quantization successfully maps floating-point parameters to low-precision fixed-point numbers, drastically reducing the cost of multiplications [3]. However, high-precision accumulators are necessary for deep feature computation, which requires the accumulation of thousands of products. The resulting computational overhead greatly limits the efficacy of quantization and prompts active exploration of inference based on low-precision accumulators [4].
Overflow is problematic when high-precision accumulators are replaced with low-precision counterparts. Because of overflow, out-of-range values are wrapped around to the minimum representable value, resulting in severe data distortion [6]. Some impressive methods have been proposed to reduce or eliminate the risk of overflow associated with low-precision accumulators [4], [6], [13], [29], [30]. Xie et al. [4] reduced the risk of overflow in low-precision accumulators by adaptively scaling the bit widths of weights and activations. Ni et al. [6] mitigated the impact of overflow on accuracy by cyclic activation layers. De Bruin et al. [13] employed a heuristic method to determine the appropriate parameter precision per layer for fixed-precision accumulators. However, frequent overflow imposes tight constraints on the parameter bit width, impairing model accuracy in such methods.
This paper introduces a novel framework named CANET, which aims to employ 8-bit accumulators within the 8-bit quantization framework without compressing the bit width of weights and activations. CANET incorporates a carry-aware accumulator (C2A) with a fixed-point format to rectify wrap-around data. The error introduced by C2A is decomposed into truncation and swamping errors. Additionally, an adaptive algorithm is designed for learning optimal fixed-point formats to minimize the truncation error. Meanwhile, a weight-oriented accumulation order is adopted to reduce the transfer length of the carry and mitigate the swamping error. Furthermore, an integer-only method is presented to eliminate other high-precision operations. CANET is trained and validated across three neural networks on the ImageNet dataset, demonstrating comparable performance with state-of-the-art quantization methods. Finally, we evaluate the effectiveness of CANET in terms of power and area on a 55nm CMOS custom hardware platform.
In summary, the main advantages of this work are as follows:
• As a novel quantization method, CANET uses the carry-aware algorithm to avoid overflow, enabling efficient inference using 8-bit accumulators in an 8-bit quantization framework.
• Based on the error analysis, CANET utilizes an adaptive format learning algorithm and a weight-oriented accumulation order to improve the accuracy of low-precision accumulation.
• Experiments conducted on three networks for ImageNet datasets demonstrate that CANET achieves comparable performance to state-of-the-art quantization methods while using 8-bit accumulators.
• We implement a MAC unit with the proposed architecture on a 55nm CMOS custom hardware platform and demonstrate the efficacy of CANET in terms of power and area.
The rest of the paper is organized as follows. Section II reviews and analyzes the related works. In Section III, the background of quantization and accumulators is briefly introduced. A detailed demonstration of CANET is presented in Section IV. Experimental results are demonstrated in Section V. Finally, Section VI presents the conclusion of this paper.
II. RELATED WORK
A. NETWORK QUANTIZATION
Quantization has been widely used in network compression [31], [32] with two types of quantizers: uniform and nonuniform. The uniform quantizer linearly maps floating-point numbers to fixed-point domains through a fixed quantization resolution. The nonuniform quantizer allocates more quantization levels to critical value regions, leading to a nonlinear projection [28]. Although nonuniform quantization usually achieves better performance than the uniform strategy, it incurs extra hardware overhead, such as look-up tables (LUT), owing to the complicated projection operations. Uniform quantization methods, on the other hand, have been explored for efficient inference in the following directions.
Some researchers focused on extreme quantization, in which only binary or ternary weights and activations are involved [22], [23], [27]. These methods used bit-shift logic instead of high-precision multiplications to achieve a significant acceleration, but often result in substantial performance degradation. Another line of effort used integer-only quantization to accomplish inference by integer arithmetic operations [9], [14], [15], [16], [17]. In integer-only methods, the full-precision multiplications caused by the scale factor were emulated with int32 multiplications [15], int8 multiplications [9], or bit-shift logic [14]. However, more effort is required to eliminate other high-precision operations, such as 32-bit accumulations, for a lower arithmetic cost.
B. LOW-PRECISION ACCUMULATOR
Several methods have been proposed to use low-precision accumulators in network training or inference. Some of the research [4], [6], [13], [30] sought to reduce or avoid the risk of overflow in low-precision accumulators by limiting the range of the parameters. However, the limited parameter range impairs model performance. On the other hand, variable bit widths may bring about inefficient utilization of hardware resources (e.g., 4-bit weights still need 8-bit MAC units in the 8-bit inference framework). Another line of work [5], [8] accelerated network training and inference with low-precision floating-point accumulators, which are challenging to deploy on resource-constrained devices.
In summary, the feasibility and gains of the low-precision accumulator have been extensively demonstrated. However, few methods have explored low-precision-accumulator-based inference without constraining the range of the parameters, leading to a limited design space.
III. BACKGROUND
A. QUANTIZED MODEL INFERENCE
In fixed-precision quantization, features are transferred through layers with a constant bit width via quantization and dequantization. Quantization maps floating-point values to integers, as shown in (1); in contrast, dequantization recovers real values from the quantized ones using (2). This paper focuses on 8-bit integer-only quantization to achieve a better trade-off between accuracy and acceleration by fixing weights and activations at 8 bits. Consider a convolution layer with weight tensor W ∈ R^{C_o×C_i×k×k}, where C_o and C_i are the output and input channels and k denotes the kernel size. During inference, the MAC operations, involving N (N = C_i × k^2) multiplications and additions, need accumulators with a bit width of 16 + log2(N) to prevent overflow. Thus, given the layer shapes in typical network architectures, such as W ∈ R^{512×512×3×3}, 32-bit accumulators are necessary.
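The accumulator-width argument can be checked numerically; the short sketch below reproduces the 16 + log2(N) estimate for the layer shape quoted above.

```python
import math

def required_accumulator_bits(c_in: int, kernel: int) -> int:
    """Bits needed to accumulate N = c_in * kernel^2 signed 16-bit products."""
    n = c_in * kernel * kernel
    return 16 + math.ceil(math.log2(n))

print(required_accumulator_bits(512, 3))   # 29 -> a 32-bit accumulator in practice
```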
However, the robustness of the network to reduced arithmetic precision facilitates the deployment of low-precision accumulators [5], [8]. Recalling the specific process of inference, as illustrated in Figure 1(a), the 16-bit partial sum p_i resulting from the multiplication of 8-bit quantized (q8) weights and activations is accumulated into a 32-bit intermediate result, denoted by Acc. Acc is then converted to the fixed-point (fp) domain by dequantization and subsequently quantized to the q8 output a_o. The corresponding arithmetic operations are written in (3), where s_a, s_w, and s_o are the scale factors of a_i, w_i, and a_o, and b_i denotes the bias.
Assuming ReLU is adopted as the nonlinear activation function and p_i (p_i = a_i·w_i) represents the partial sum, (3) can be simplified to (4). The MAC operations Σ_{i=1}^{N} p_i yield high-precision intermediate results, which are quantized to the 8-bit a_o by S (S = s_a·s_w/s_o). The quantizer preserves only eight bits of the intermediate results as the quantized output a_o, leading to a partial waste of computational and storage resources. This motivates us to execute the accumulation process in 8-bit accumulators to minimize the bit-width redundancy. A scheme is depicted in Figure 1(b), where dequantization maps the 16-bit p_i to an 8-bit fixed-point value before accumulation; the corresponding arithmetic operations are given in (5). However, Σ_{i=1}^{N} (s_a·s_w·p_i) still leads to intermediate results with a high dynamic range, which are prone to incur the overflow of 8-bit accumulators. Overflowed results are affected by modulo operations, which discard the overflow bits and wrap the remaining bits to the minimum representable value, resulting in severe distortion [6]. Therefore, correctly expressing the overflow value is crucial for inference with low-precision accumulators.
IV. METHODS
In this section, we provide a detailed description of CANET, comprising three main components: (1) a carry-aware accumulator with fixed-point formats to rectify wraparound data, (2) an adaptive learning algorithm for optimal fixed-point formats, and (3) a weight-oriented accumulation order to reduce the transfer length of the carry.
A. CHUNK-BASED CARRY-AWARE ACCUMULATION WITH FIXED-POINT FORMAT
In carry-aware accumulation, a fixed-point format (IL, FL) is employed to represent the accumulated data Acc. Specifically, an 8-bit fixed-point accumulated value consists of a 1-bit sign, followed by an n-bit integer length (IL) and the remaining (7 − n) bits for the fractional length (FL). The arithmetic operations involved in the accumulation chain are depicted in (6), where G represents the chunk size, corresponding to the parallelism of the Processing Element (PE) in hardware, and Σ_{i=1}^{G} p_i denotes the high-precision partial sum of a chunk.
Consider the initial format of the accumulators as (IL, FL). Before accumulation, P_j (P_j = s_a × s_w × Σ_{i=1}^{G} p_i) is mapped to an 8-bit fixed-point number via (IL, FL), introducing a truncation error. IL and FL are dynamically updated based on the value of c, which is the number of carries in the accumulation chain. During accumulation, if Acc exceeds the maximum representable value of the current format, c increases by one. Subsequently, the format (IL, FL) is updated to (IL+c, FL−c), partially swamping the fractional parts of P_j and Acc. When c > FL, Acc surpasses the upper bound and saturates at the maximum representable value. The proposed accumulation process is illustrated in Figure 2. In an accumulation chain of length N, the full-precision accumulated result S_N is given by (7). We consider the case in which Acc is positive. When Acc ≤ 2^{7−FL} − 2^{−FL}, c = 0 and P_j = P'_j + ΔP_j, so the error is the summation of ΔP_j, where P'_j is the 8-bit fixed-point partial sum and the truncation error ΔP_j is indicated by the red dashed line in Figure 2. When Acc ∈ (2^{7−FL} − 2^{−FL}, 2^7 − 1] and c ∈ (0, FL], the swamping error ΔP_j^c is introduced, as labeled by the gray dashed box in Figure 2; it increases exponentially with the value of c. When c > FL, Acc saturates at 2^7 − 1. The error in carry-aware accumulation is related to the value domain of Acc. To quantify the error, we assume that the maximum value of c is K (K ≤ FL) in an N-length accumulation chain. Consequently, S_N is divided into K sub-blocks, each maintaining the same value of c, where n_k corresponds to the length of the k-th sub-block. The arithmetic operations in each sub-block are expressed in (8), and S_N is then given by (9), where Σ_{m=1}^{M} P''_m corresponds to the swamping error of Acc. In most cases, M ≪ N, so (9) can be simplified to (10). In summary, errors in carry-aware accumulation can be categorized into swamping and truncation errors.
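A simplified software model of this scheme is sketched below; it reflects my reading of the description above (positive values only, truncation toward zero), not the authors' implementation. The format starts at (IL, FL) with IL + FL = 7, each chunk sum is truncated to the current resolution, and every carry trades one fractional bit for one integer bit until the format is exhausted and the value saturates.

```python
import math

def carry_aware_accumulate(chunk_sums, il=2, fl=5):
    """Accumulate positive chunk sums P_j in an 8-bit (IL, FL) fixed-point format."""
    acc, c = 0.0, 0
    for p in chunk_sums:
        res = 2.0 ** -(fl - c)                    # current fixed-point resolution
        acc += math.floor(p / res) * res          # truncate P_j, then accumulate
        # Spend carry bits while the value exceeds the current representable range.
        while c < fl and acc > 2.0 ** (il + c) - res:
            c += 1
            res = 2.0 ** -(fl - c)
            acc = math.floor(acc / res) * res     # swamp the lost fractional bits
        if c >= fl:                               # format exhausted: saturate
            acc = min(acc, 2.0 ** 7 - 1)
    return acc, c

print(carry_aware_accumulate([3.7, 1.2, 0.6, 5.9, 4.4]))  # close to the exact sum 15.8
```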
B. ADAPTIVE FIXED-POINT FORMAT LEARNING
Truncation from high to low precision leads to irreversible information loss, so preserving critical information of the high-precision values is important to minimize this truncation error. One intuitive method is to represent P_j in (6) with an appropriate format. The 8-bit fixed-point format takes Acc in the range [−128, 127], beyond which saturation errors occur. In contrast, the lower-bound format (0, 7) offers a minimum resolution of 2^{−7}, causing the full truncation of smaller values. Meanwhile, a fixed data format cannot adapt to partial sums with varying statistical characteristics, and an improper format leads to exponential growth in error, affecting the numerical accuracy of S_N. Numerical tests on quantized networks show that (IL, FL) alone cannot provide sufficient resolution. In Figure 3, for a convolution layer in ResNet18, the format (IL, FL) yields a minimum relative accumulation error of 45%, whereas scaling the original data by a factor of 2^4 reduces this error to 2.8%.
Therefore, an adaptive fixed-point format learning method is proposed, incorporating an additional scale factor β to enhance the ability to represent small values. The scale factor ensures a minimum resolution of 2^{−7−β} by scaling P_j by 2^β before accumulation and restoring S_N afterwards. With β, the fixed-point format is rewritten as (IL, FL, β). In our framework, a coefficient α is introduced to update the fixed-point format based on the relative error of P_j. Specifically, α is learned by

α += sign × lr_α × log2(1 + max(e_r/e_m − 1, 0)),   (11)

where lr_α is the learning rate, e_r corresponds to the relative error of P_j, and e_m is the target value, which is initialized to 2^{−6}. sign is a customized symbol function, as shown in (12). Equation (11) updates α based on the relative error in low-precision accumulation. If e_r > e_m, (11) reduces to sign × lr_α × log2(e_r/e_m). sign indicates the error composition, where e_a represents the absolute error of P_j: e_a with |e_a| ≥ 1/β denotes a saturation error in P_j, and e_a with |e_a| < 1/β corresponds to a truncation error. sign = −1 indicates saturation dominance, prompting a reduction of α to widen the integer representation range. If sign = 1, truncation is considered the major part, and α should increase for a greater fractional length. When e_r ≤ e_m, α remains constant to enable stable training. Finally, if FL < 7, α updates FL to ⌊FL × α⌋; when FL ≥ 7, α modifies β to ⌊FL × α⌋ − 7.
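The update rule in (11) and the subsequent format adjustment can be sketched as follows; this is an illustrative reading of the text (in particular, mapping IL = 7 − FL after each update is an assumption), not the authors' training code.

```python
import math

def update_alpha(alpha, e_r, saturation_dominates, lr_alpha=0.001, e_m=2 ** -6):
    """One step of (11): adjust alpha according to the relative error e_r."""
    if e_r <= e_m:
        return alpha                                  # error below target: keep alpha
    sign = -1.0 if saturation_dominates else 1.0      # (12): widen range vs. add fraction bits
    return alpha + sign * lr_alpha * math.log2(1.0 + max(e_r / e_m - 1.0, 0.0))

def apply_alpha(il, fl, beta, alpha):
    """Map the learned alpha back onto the fixed-point format (IL, FL, beta)."""
    if fl < 7:
        fl = math.floor(fl * alpha)
        il = 7 - fl                                   # assumed: IL + FL = 7
    else:
        beta = math.floor(fl * alpha) - 7
    return il, fl, beta
```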
C. WEIGHT-ORIENTED INCREMENTAL ACCUMULATION ORDER
We designate a single accumulation within the k-th sub-block as a carry transfer of length k. As shown in (10), a reduction of the total carry transfer length diminishes the swamping error ΔP_i^k. This reduction can be naturally achieved by an incremental accumulation order [7], [12]. However, the chronological generation of partial sums in hardware inference makes it challenging to implement an exactly incremental accumulation order. Considering the trade-off between accuracy and hardware overhead, we propose a weight-oriented sorting method.
In MAC operations, the magnitude of the weights plays a role in determining their contribution to the outputs, as weights with small magnitudes tend to produce small partial sums [21], [25], [26]. Based on this insight, (13) is employed to derive the magnitude Mg(w_g) of each weight group in W_l, where W_l denotes the weights in the l-th layer, and the accumulation order is determined according to this magnitude. Specifically, a group with a smaller Mg(w_g) tends to yield a smaller partial sum and should be accumulated first; conversely, a group with a larger Mg(w_g) should be accumulated later. Importantly, during inference the order remains the same owing to the invariance of the weights. Since the chunk size G corresponds to the hardware parallelism, the reindexing does not affect the computational efficiency of the MAC operations. A software sketch of this ordering is given below.
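The sketch below illustrates the ordering step; the mean absolute value is used as a stand-in for Mg(·), since (13) is not reproduced here, and the resulting index order is computed once offline and reused for every inference.

```python
import numpy as np

def weight_oriented_order(weights_flat: np.ndarray, group_size: int) -> np.ndarray:
    """Order weight groups of chunk size G by ascending (assumed) magnitude."""
    groups = weights_flat.reshape(-1, group_size)
    magnitudes = np.abs(groups).mean(axis=1)      # stand-in for Mg(w_g)
    return np.argsort(magnitudes)                 # small-magnitude groups accumulate first

w = np.random.randn(72)                           # toy flattened layer weights, G = 8
order = weight_oriented_order(w, group_size=8)
```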
D. INTEGER-ONLY INFERENCE
Full-precision scale factors and BN layers need to be handled in order to save computational resources [14], [18]. Specifically, CANET adopts a double-forward method for BN fusion. BN layers are first fused with the convolutional layers to compute the effective weights and biases, as expressed in (14) and (15), respectively. Subsequently, a first inference pass is performed to update the parameters in BN, such as µ and σ. During the second pass, BN remains invariant and the output of the fused layer engages in back-propagation. The fused layers avoid the additional hardware overhead of BN.
Additionally, power-of-two scale factors are employed to eliminate the 32-bit multiplication, as depicted in (16). An Exponential Moving Average (EMA) is employed to update rmax and rmin. As a result, during inference, full-precision multiplication is substituted with more efficient bit-shift logic.
where γ, µ, β, and σ² are learnable parameters in the BN layers, ϵ is a tiny constant, and W_conv denotes the weights of the convolutional layer. W_fuse and b_fuse correspond to the effective weights and biases, and rmax and rmin represent the maximum and minimum values recorded in the calibrator, respectively.
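A sketch of the two steps described above is given below; the fused-weight and fused-bias expressions follow standard BN folding and are therefore an assumption about the exact form of (14)-(16), not a transcription of them.

```python
import math
import numpy as np

def fuse_bn(w_conv, b_conv, gamma, beta, mu, sigma2, eps=1e-5):
    """Fold a BatchNorm layer into the preceding convolution (per output channel)."""
    scale = gamma / np.sqrt(sigma2 + eps)
    w_fuse = w_conv * scale[:, None, None, None]      # w_conv shape: (C_o, C_i, k, k)
    b_fuse = beta + (b_conv - mu) * scale
    return w_fuse, b_fuse

def power_of_two_scale(rmax, rmin, n_bits=8):
    """Round the min-max scale factor to a power of two so it becomes a bit shift."""
    s = (rmax - rmin) / (2 ** n_bits - 1)
    shift = round(math.log2(s))
    return 2.0 ** shift, shift
```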
V. EXPERIMENTS
In this section, we first present the details of the experiment settings. Subsequently, numerical experiments are designed to validate the benefits of adaptive format learning and weight-oriented accumulation order for low-precision accumulation. Following this, CANET is compared against state-of-the-art methods that maintain 32-bit accumulators and other high-precision arithmetic operations. Remarkably, CANET is capable of achieving comparable performance. Finally, we demonstrate the effectiveness of CANET in terms of power and area on a 55nm CMOS custom hardware platform.
A. EXPERIMENTAL SETUP
CANET is evaluated on the image recognition task using the ImageNet dataset [33]. All images are resized to 256 × 256, then randomly cropped to 224 × 224, with random horizontal flipping and normalization. The evaluation is conducted on three different NNs: VGG-16bn [20], ResNet18 [19], and ResNet34 [19], using the Top-1 and Top-5 accuracy on the validation set as performance metrics.
We begin by training the full-precision models from scratch and then use the quantization-aware training (QAT) framework to train the quantized models [24]. Afterward, CANET is adopted to fine-tune the quantized models. During pre-training, the SGD optimizer is used with an initial learning rate of 0.1, a momentum of 0.9, and a weight decay of 0.0005. In the QAT phase, the initial learning rate is set to 0.001, and the min-max calibrator is applied for weights and activations. We use per-tensor and per-channel granularity scale factors for activations and weights, respectively.
The fixed-point formats for all layers are initialized to (2, 5, 0). Additionally, the learning rate lr_α is set to 0.001 to update α, and the weight-oriented accumulation order is only used in the inference phase. For the residual blocks in ResNet, inconsistent formats in the shortcut connection operations incur numerical errors; to this end, we conduct format unification before accumulation. Specifically, we select the largest IL and the smallest β between sub-layers as the unified IL and β, and then determine FL based on the unified IL.

We first examine the truncation error introduced by the final fixed-point format. The experimental results show that, through adaptive learning, the relative truncation errors from the fixed-point formats are essentially close to the ideal relative error e_m.
Subsequently, Figure 5(a) illustrates the adaptive learning process of conv11 in VGG16-bn. Test nodes capture the iterative evolution of α and the truncation error for analysis. The initialized format (5, 2, 0) results in an average truncation error of 23%. Then, α is updated based on the truncation error, using varying step sizes. Eventually, α remains stable once the error is reduced below the set value. For conv11, the final format is determined to be (0, 7, 2), and the error eventually converges to 1.5%. The numerical results demonstrate the effectiveness of adaptive learning; the proposed method requires only two epochs to fine-tune the quantized networks.
C. CARRY TRANSFER LENGTH COMPARISON
As shown in (10), the accumulation order affects the transfer length of the carry, resulting in different swamping errors. Intuitively, a longer transfer distance results in a larger swamping error. The formula for the total carry transfer length (CTL) is given in (17). Figure 6 illustrates the CTL in each layer of VGG16-bn under the two accumulation orders, highlighting varying degrees of CTL suppression with the weight-oriented accumulation order (W2O). In comparison to the random order [7], the proposed method demonstrates improvements on all network layers with various accumulation lengths, achieving an average reduction of 12% in CTL.
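Since the exact form of (17) is not reproduced above, the following sketch uses a simple proxy for the carry transfer length — the bit-position gap between the running sum and each incoming addend — only to illustrate why accumulating the partial products in a weight-sorted (W2O-like) order tends to shorten the total transfer distance compared to a random order.

```python
import numpy as np

# Illustrative proxy, not the CTL formula from (17): at each accumulation step,
# count how many bit positions separate the running sum from the next addend,
# and sum these distances over the whole dot product.

def ctl_proxy(addends):
    total, acc = 0, 0
    for a in addends:
        if acc != 0 and a != 0:
            total += abs(int(acc).bit_length() - int(a).bit_length())
        acc += a
    return total

rng = np.random.default_rng(0)
products = rng.integers(-128, 128, size=256)          # 8-bit partial products
random_order = ctl_proxy(products)
w2o_order = ctl_proxy(sorted(products, key=abs))      # accumulate small magnitudes first
print(random_order, w2o_order)
```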
D. BENCHMARK RESULT
CANET is compared with other high-precision baselines that retain 32-bit accumulators and other high-precision arithmetic operations within the 8-bit quantization framework. The experimental results for the three NNs are presented in Table 1 to Table 3. BL denotes the full-precision baseline. Bits represents the bit widths of activations and weights, which are fixed at 8 bits in 8-bit quantization methods. Mul corresponds to the precision of multiplication, where 32 indicates 32-bit multiplication and shift denotes the use of bit-shift logic instead of multiplication. Acc represents the precision of the accumulators, and 32 corresponds to full-precision accumulators.
In the VGG16-bn test, with only 8-bit accumulation and bit-shift logic, CANET experiences only a 0.1% accuracy degradation compared to the full-precision model. On ResNet18 and ResNet34, CANET achieves performance 0.5% and 0.2% above the full-precision baseline, respectively. The experimental results show that, with only 8-bit accumulators, CANET can obtain performance comparable to the other high-precision baselines.
E. HARDWARE ANALYSIS
A MAC unit structure is implemented in SMIC 55nm CMOS to evaluate the effectiveness of CANET. The structure, which is illustrated in Figure 7, comprises the following three parts: (1) an 8-bit multiply-accumulate arithmetic unit, primarily responsible for generating the partial sums G_i p_i; (2) an 8-bit accumulator, consisting of 8-bit full adders and 8-bit registers (in the regular structure, this part requires 32-bit full adders and 32-bit registers); and (3) additional control logic for the carry-aware logic, and the bit-shift circuits for scaling and rescaling.
We implement a MAC unit with the proposed architecture and a conventional MAC unit with 32-bit accumulators, and compare their power and area overhead using Design Compiler. The hardware analysis results are listed in Table 4. The proposed 8-bit-accumulator-based MAC reduces power consumption by 40% and area by 49%, demonstrating a significant improvement in power efficiency.
Further, the proposed inference framework effectively optimizes away the redundant intermediate information generated by accumulation. The conventional architecture requires a combination of 32-bit and 8-bit memory rather than a single 8-bit memory to overcome the mismatch of the bit widths between the intermediate results and the quantizer. In contrast, CANET eliminates the mismatch and motivates the removal of the 32-bit intermediate buffer, allowing accumulation in the feature memory directly. The proposed framework enables a more flexible design boundary for efficient inference of neural networks.
VI. CONCLUSION
This paper presents a novel 8-bit network quantization methodology that aims to achieve an efficient inference framework using 8-bit accumulators. Unlike previous approaches that sought to reduce the risk of overflow by compressing the parameter range or precision, the proposed framework uses 8-bit carry-aware accumulators to avoid overflow. Additionally, we employ adaptive fixed-point format learning and a weight-oriented accumulation order to improve accumulation accuracy. The experimental results demonstrate that CANET achieves performance comparable to other high-precision methods on three neural networks while using 8-bit accumulators. Furthermore, the evaluation on the customized hardware platform confirms the effectiveness of CANET in power efficiency.
where x represents the input feature map: during quantization, x corresponds to the floating-point features, while in dequantization x denotes the quantized integers. The scale factor, denoted by s, determines the quantization resolution. Z represents the zero point. n and p are the lower and upper boundaries of the quantized integers. For a b-bit unsigned quantizer, n = 0 and p = 2^b − 1; for the signed quantizer, n = −2^(b−1) and p = 2^(b−1) − 1. Furthermore, ⌊·⌉ rounds scaled values to the nearest integers, and clip limits the quantized values to the range [n, p].
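A minimal sketch of this quantize/dequantize pair, with the signed and unsigned boundaries as described, is given below (illustrative only):

```python
import numpy as np

# Uniform quantizer as described: scale, shift by the zero point, round to the
# nearest integer, clip to [n, p]; dequantization reverses the mapping.

def quantize(x, s, Z, bits=8, signed=True):
    if signed:
        n, p = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    else:
        n, p = 0, 2 ** bits - 1
    return np.clip(np.rint(x / s) + Z, n, p).astype(np.int32)

def dequantize(xq, s, Z):
    return s * (xq.astype(np.float32) - Z)

x = np.random.randn(8).astype(np.float32)
xq = quantize(x, s=0.02, Z=0)
print(xq, dequantize(xq, s=0.02, Z=0))
```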
B. ADAPTIVE FIXED-POINT FORMAT LEARNING
Adaptive fixed-point format learning aims to automatically select the appropriate format for different distributions of partial sums. The experiments are conducted on VGG16-bn, ResNet18, and ResNet34. The fixed-point formats determined by the adaptive learning algorithm are shown in Figure 4 (a), Figure 4 (c), and Figure 4 (e). The formats vary across layers, demonstrating the design of adaptive learning. Furthermore, β plays a role in deep layers, making the fixed-point formats adaptable to variable data distributions. Figure 4 (b), (d), and (f) present the distribution of the average truncation error introduced by the final fixed-point format. The experimental results show that, through adaptive learning, the relative truncation errors from the fixed-point format are essentially close to the ideal relative error e_m.
FIGURE 4. (a) Fixed-point format for layers of VGG16-bn. (b) Relative error of each layer in VGG16-bn. (c) Fixed-point format for layers of ResNet18. (d) Relative error of each layer in ResNet18. (e) Fixed-point format for layers of ResNet34. (f) Relative error of each layer in ResNet34.
FIGURE 5. (a) Iterative process of the learnable α and relative errors during adaptive learning in VGG16-bn conv11. (b) Adaptive tuning process of the fixed-point format in VGG16-bn conv11.
FIGURE 6. Comparison of the carry transfer length (CTL) for random and weight-oriented accumulation order (W2O) across layers in VGG16-bn.
TABLE 4. Hardware implementation results for two architectures of the MAC unit in the 55nm CMOS process. | 5,775 | 2024-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Enhanced TOA Estimation Using OFDM over Wide-Band Transmission Based on a Simulated Model
This paper presents the advantages of using a wideband spectrum adopting multi-carrier transmission to improve target localization within a simulated indoor environment using the Time of Arrival (TOA) technique. The study investigates the effect of using various spectrum bandwidths and different numbers of carriers on localization accuracy. The paper also considers the influence of the transmitters' positions in line-of-sight (LOS) and non-LOS propagation scenarios. It was found that the accuracy of the proposed method depends on the number of sub-carriers, the allocated bandwidth (BW), and the number of access points (AP). When a large BW with a large number of subcarriers is used, the algorithm is effective in reducing localization errors compared to the conventional TOA technique. The performance degrades and becomes similar to that of the conventional TOA technique when a small BW and a low number of subcarriers are used.
Introduction
Orthogonal Frequency Division Multiplexing (OFDM) has emerged in recent years as an effective digital modulation technique that is at the heart of all major wireless standards used or in development today, such as the 4G Long-Term Evolution (LTE), Wi-Fi IEEE 802.11a, WiMAX and in Digital Video Broadcasting (DVB) systems [1][2][3].
OFDM is utilized in several communication systems due to its robustness to multipath channels and high transmission rate in wireless communications networks. However, for reliable signal transmission, the OFDM technique requires precise timing and accurate frequency synchronization [4,5]. Other key disadvantages of OFDM are Inter-Symbol Interference (ISI) and Inter-Carrier Interference (ICI) [6].
ICI often occurs as a result of the loss of synchronization caused by frequency offset and time offset between the oscillators at the transmitter and receiver ends, which in effect results in the loss of orthogonality, so the transmitted signal cannot be completely separated at the receiver. Thus, ICI decreases the signal-to-noise ratio (SNR) and increases the error probability [7,8].
Moreover, a large peak-to-average power ratio caused by the superposition of all subcarrier signals becomes a distortion problem [9]. The desirable advantages of OFDM have motivated its application for improved localization of targets, to provide accurate mobile station (MS) positioning within both outdoor and indoor environments.
Authors in [10] used the OFDM-Time of Arrival (TOA) algorithm for position estimation of a Long Term Evolution (LTE) signal. A similar algorithm was proposed to detect the arrival times of Wi-Fi signals [11,12]. In [13], a simple OFDM-TOA-based algorithm is proposed to estimate the Direct Path (DP). Similar research was conducted in [14], where combining a path detector with interference cancellation makes it possible to distinguish a low-power DP from nearby interference.
TOA localization can be done using lateration [15]. Each time measurement between the target and an access point (AP) leads to a distance estimation; by having three or more APs, the target's location can be inferred [16]. Localization using TOA has the advantage of being highly accurate; however, it is sensitive to multipath and to the existence of a direct path [17]. Also, precise synchronization between APs is required, and the availability of a large bandwidth (BW) is important for accurate estimations [18]. TOA detection techniques can be classified into four main categories: correlation-based techniques [19], maximum likelihood techniques [20], subspace techniques [21], and inverse Fourier transform techniques [19].
This paper introduces a coded OFDM system to be used in localization systems and to enhance TOA estimation over wideband transmission. The localization of objects within the indoor environment is investigated over a wide-band spectrum using the multi-carrier OFDM modulation technique. It is anticipated that the large bandwidth can support the time resolution required to estimate the positions of the objects. With the advantages added by OFDM, the whole process can improve the estimation accuracy compared to conventional methods applied to similar applications [22][23][24][25]. In this study, a fractional spectrum of the lower Ultra-Wideband (UWB) was utilized to mitigate the effects of indoor channel propagation due to the high number of multipath components; several carriers are spread over the bandwidth based on the impulse response of the channels obtained using Wireless InSite (WI), suitable estimations of the arrival times are then applied, and a numerical technique is finally used to estimate the position of the hidden objects.
The organization of the paper is as follows: the second section presents a mathematical model for TOA estimation, while in the third section, the simulated model is illustrated. Results and discussion are introduced in Sect. 4, and finally, conclusions are drawn in Sect. 5.
Mathematical Model of OFDM for Time of Arrival Estimation
The idea is to use simple lateration localization based on TOA; however, the time is estimated by analyzing the OFDM signal. Starting with the OFDM system, the input signal g(t, f) is given by [6]: where f and t are frequency and time, respectively. Assuming a linear time-invariant channel, the output can be simplified as: where a_k, φ_k, τ_k, and n are the kth multipath attenuation, phase shift, propagation delay, and the number of multipath rays, respectively. The spectrum bandwidth is divided into N subcarriers, and the jth frequency sample can be expressed by: The jth uniform frequency sample of U(2πf) is given by: The sampling time is given by: The challenge is how to utilize the benefits of the OFDM system in localization. Before proceeding to this point, it is worth mentioning that we extracted the channel information through simulations conducted in WI software at 5 GHz. We first extracted the arrival rays' corresponding information, including phase, time delay, and received signal strength. This information is taken into a MATLAB code which has three functions.
Firstly, the code constructs the OFDM system based on Eqs. 1-5. The code allows the user to select N and BW. After that, the N-point inverse transform of the received signal is estimated. The inverse fast Fourier transform (IFFT) translates frequency-domain data to time-domain samples, i.e., from the discrete frequency domain we obtain discrete time-domain samples.
By choosing a sufficient number of subcarriers and a wider BW, it is possible to have more resolution in time (Δt is less than the time between multipath components). We then mapped the high-resolution time to the output of the IFFT and chose the time with the highest IFFT value as the TOA, which is the second task of the code; in other words, the TOA is estimated by mapping the discrete time-domain samples to the highest output of the IFFT.
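The following is a hedged Python sketch of that procedure (the paper's implementation is in MATLAB, and the ray parameters and function names here are illustrative assumptions): the channel frequency response is assembled from the exported rays, an N-point IFFT maps it to time-domain samples, and the TOA is read off at the sample with the largest magnitude.

```python
import numpy as np

# Illustrative sketch: build the received OFDM spectrum from ray parameters
# (attenuation a_k, phase phi_k, delay tau_k), take the IFFT, and read the TOA
# as the time sample with the largest magnitude.

def estimate_toa(a, phi, tau, bw=2e9, n_sub=2048):
    f = np.arange(n_sub) * bw / n_sub                      # sub-carrier frequencies
    H = sum(ak * np.exp(1j * (ph - 2 * np.pi * f * tk))    # channel frequency response
            for ak, ph, tk in zip(a, phi, tau))
    h = np.fft.ifft(H)                                     # discrete time-domain samples
    t = np.arange(n_sub) / bw                              # time resolution dt = 1/BW
    return t[np.argmax(np.abs(h))]                         # TOA = time of the IFFT peak

# Example with three synthetic rays; the direct ray arrives at 20 ns.
toa = estimate_toa(a=[1.0, 0.5, 0.3], phi=[0.0, 1.2, -0.7], tau=[20e-9, 35e-9, 60e-9])
print(toa)
```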
In the third task, the code performs localization using the lateration technique based on Eqs. 6-11. For an electromagnetic wave, the elapsed time and travel distance are related by the speed of light; given the TOA, the distance between the transmitter and receiver is estimated.
Assuming we have several receiver points with known locations and a transmitter with an unknown location, the TOA readings can be used to find the relative distances between the transmitter and each receiver; thus, by using lateration, it is possible to find the transmitter's location [20].
Lateration can be solved using the linear least squares technique, which is helpful especially for non-line-of-sight (NLOS) propagation [20]: where d_i and R are the distance between the ith receiver and the transmitter and the number of receivers, respectively, (x_i, y_i) are the ith receiver coordinates, and (x̂, ŷ) are the estimated transmitter coordinates. Rearranging Eq. 6 for the lth receiver: For all receivers, this can be represented in matrix form as: Then, the transmitter location can be estimated as: The position error is calculated as the Euclidean distance [26]: where (x, y) are the transmitter's true coordinates.
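Because Eqs. 6-11 are not reproduced above, the sketch below uses the standard linear least-squares lateration formulation (subtracting the last receiver's circle equation to linearize); it is an illustration consistent with the description rather than the paper's exact equations.

```python
import numpy as np

# Lateration by linear least squares: linearize the circle equations by
# subtracting the reference (last) receiver's equation, then solve A x = b.

def lateration_lls(rx, d):
    """rx: (R, 2) receiver coordinates, d: (R,) TOA-derived distances."""
    xr, yr = rx[-1]                                   # reference receiver
    A = 2 * (rx[:-1] - rx[-1])                        # [2(x_i - x_R), 2(y_i - y_R)]
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(rx[:-1] ** 2, axis=1) - (xr ** 2 + yr ** 2))
    est, *_ = np.linalg.lstsq(A, b, rcond=None)       # least-squares solution
    return est                                        # estimated (x, y)

rx = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
tx = np.array([3.0, 4.0])
d = np.linalg.norm(rx - tx, axis=1)                   # ideal distances = TOA * c
print(lateration_lls(rx, d))                          # approximately [3, 4]
```

The position error in the last equation above corresponds to np.linalg.norm(estimate - true) in this notation.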
Simulation Model
The scenario environments were created using WI software. WI is a site-specific radio propagation software that provides user-requested outputs for different indoor and outdoor environments and has the ability to provide efficient and accurate predictions of channel propagation characteristics like received signal strength, propagation paths, complex impulse response, delay spread, and time of arrival/departure, etc. [27]. The software has been validated over the WLAN frequencies [28,28]. The obtained channel response is to be used with the OFDM multi-subcarrier system for our proposed localization method. The simulated model represents the 3rd floor of the Chesham building at the University of Bradford. The transmitter was placed in three locations.
Four receivers were distributed at fixed known locations and mounted 1.5 m above the floor. The transmitter was placed in both line-of-sight and NLOS positions with respect to the receivers, as seen in Fig. 1. The coordinates of all transmitters and receivers are listed in Table 1. The transmitter and receiver antennas were omnidirectional, and the operating frequency was 5 GHz with 2 GHz bandwidth. Settings for the Wireless InSite model are given in Table 2. Table 3 shows the complex impulse response received at Rx-1 with 10 multipath components over the desired band, including the LOS path. By convolving the channel impulse response with the OFDM multi-subcarriers spread over the bandwidth, the proposed TOA is estimated by taking the inverse fast Fourier transform (IFFT) of the convolution. The TOA corresponding to the maximum IFFT output is used to infer the transmitter's location.
The conventional TOA simply uses the arrival times of the path which are the values in the "Time" column in Table 3 for distance estimations.
Results and Discussion
In general, the accuracy of TOA localization depends greatly on the available BW and the number of receivers. The accuracy of the proposed method depends on the number of sub-carriers and the allocated BW. Compared to the conventional TOA (the first-path arrival time), this approach minimized the error to centimeters. As seen in Table 4, when we used 4 receivers with 2 GHz BW and 2048 sub-carriers, the localization error (LE) was 0.97 m. Using the same settings with 3 receivers, the LE was 3.32 m. These results are still better than the conventional TOA results, where the LE was 6.67 m using four receivers and 7.2 m using three receivers. As seen, the LE of the proposed algorithm was reduced compared to the conventional TOA approach.
In most cases, the average localization error using the proposed method decreased to around 2 m for Tx-1, 3.32 m for Tx-2, and 4.03 m for Tx-3 while utilizing different sets of bandwidths and subcarriers. However, at low bandwidth and a low number of subcarriers, the error increases to approximately the same value as the conventional TOA approach, as presented in Table 4. Figure 2 shows the estimated position of Tx-1 using 4 receivers with 2 GHz BW and 256 subcarriers. In this case, the LE was 0.354 m, which is 2 m less than the conventional TOA LE. Figure 3 presents Tx-2 localization using 3 receivers (Rx-1, Rx-2, Rx-3) with 1 GHz BW and 512 subcarriers, where the LE was 0.1555 m. Figure 5 shows the LE values when estimating the transmitter at location 1 using 2 GHz BW and various numbers of subcarriers. As seen, most LEs of the proposed algorithm are less than the minimum conventional TOA LE except for the 16-carrier case. The maximum LE values of the proposed method occurred when the number of subcarriers was 16; however, they remain less than the maximum LE of the traditional method.
Conclusions
This paper describes the OFDM technique for wireless communications and the desirable advantages of OFDM for improving the localization of targets, in order to provide accurate mobile station (MS) positioning within indoor environments, and provides an overview of the different limiting factors that affect the performance and capability of OFDM. The causes and effects of these drawbacks and the various methods to deal with these problems for improved performance of OFDM systems are covered in the paper.
The localization of objects within the indoor environment over a wideband spectrum adopting multi-carrier transmission has been presented. The proposed algorithm used to estimate the position of several transmitter positions was also presented. The method was based on the TOA for various spectrum bandwidths and numbers of carriers, covering both the LOS and NLOS positions of the transmitter. The method adopted the impulse response of the channel to estimate the transmitter position. It was concluded that the TOA approach over several wide bandwidths and different numbers of subcarriers was effective in reducing the localization errors compared to the typical TOA technique.
Data Availability Data are available upon request from the author.
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Raed Abd-Alhameed (M'02, SM'13) is currently a Professor of electromagnetic and radio-frequency engineering with the University of Bradford, U.K. He is also the Leader of radio-frequency, propagation, sensor design, and signal processing, in addition to leading the Communications Research Group for years within the School of Engineering and Informatics, University of Bradford. He has many years' research experience in the areas of radio frequency, signal processing, propagation, antennas, and electromagnetic computational techniques. He has published over 600 academic journal and conference papers; in addition, he has co-authored four books and several book chapters. He is a principal investigator for several funded applications to EPSRC and the leader of several successful Knowledge Transfer Programmes, such as with Arris (previously known as Pace plc), Yorkshire Water plc, Harvard Engineering plc, IETG Ltd., Seven Technologies Group, Emkay Ltd., and TwoWorld Ltd. He was a recipient of the Business Innovation Award for his successful KTP with Pace and Datong companies on the design and implementation of MIMO sensor systems and antenna array design for service localizations. He is the chair of several successful workshops on energy-efficient and reconfigurable transceivers: Approach toward Energy Conservation and CO2 Reduction, which addresses the biggest challenges for future wireless systems. He has been a Guest Editor of the IET Science, Measurements and Technology Journal since 2009. His interests include computational methods and optimizations, wireless and mobile communications, sensor design, EMC, beam-steering antennas, energy-efficient PAs, and RF predistorter design applications. He is a fellow of the Institution of Engineering and Technology, a fellow of the Higher Education Academy, and a Chartered Engineer.
"Engineering",
"Computer Science"
] |
CPANNatNIC software for counter-propagation neural network to assist in read-across
Background CPANNatNIC is software for development of counter-propagation artificial neural network models. Besides the interface for training of a new neural network it also provides an interface for visualisation of the results which was developed to aid in interpretation of the results and to use the program as a tool for read-across. Results The work presents the details of the program’s interface. Parts of the interface are presented and how they can be used. The examples provided show how the user can build a new model and view the results of predictions using the interface. Examples are given to show how the software may be used in read-across. Conclusions CPANNatNIC provides a simple user interface for model development and visualisation. The interface implements options which may simplify read-across procedure. Statistical results show better prediction accuracy of read-across predictions than model predictions where similar compounds could be identified, which indicates the importance of using read-across and usefulness of the program. Electronic supplementary material The online version of this article (doi:10.1186/s13321-017-0218-y) contains supplementary material, which is available to authorized users.
Background
In the past several years, there has been an increasing interest in using in silico tools for risk assessment of chemicals. The reasons for this higher interest can be found in the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) legislation in the European Union, which requires registration of a large number of chemicals in use. The legislation allows using read-across for toxicity assessment under certain conditions written in the regulation. The definition of read-across and its correct use are still rather unclear. Patlewicz et al. [1] gathered several definitions of read-across from different sources [e.g. the United States Environmental Protection Agency (US EPA), the European Chemicals Agency (ECHA), the Organisation for Economic Co-operation and Development (OECD)]. Concisely, we may understand the definitions of read-across as an approach to predict a property of a chemical based on the same property of one or more similar chemicals. Different tools already exist which can be used for read-across, for example the OECD QSAR Toolbox [2], ToxRead [3], TEST [4] and VEGA [5].
In this paper we present a new tool which can be used for the development of counter-propagation artificial neural network (CPANN) models. The models can later be used either for direct prediction of the endpoint under consideration for new, i.e. untested, compounds, or for a read-across approach. The software provides a graphical user interface which was designed to facilitate read-across based on an analogue or category approach using CPANN models. CPANNs are particularly suitable for these approaches because of their ability to group compounds according to their structural similarity. Although the software was initially built to facilitate read-across for toxicity assessment of substances, its usage is not limited to toxicity-related endpoints since the user describes compounds in the input data file(s), which may include numerical values of any property.
Basis for read-across
As mentioned above, the software uses CPANN models. The results of the predictions can be viewed in a simple graphical user interface with compounds placed on the map, called a "top-map", according to their similarity, which can be used as the basis for read-across predictions. The learning principles of Kohonen and CPANNs are well established and can be found in detail elsewhere [6][7][8]. Some definitions are given below so that the user can better understand the results produced by the software.
Schematic representation of a CPANN is shown in Fig. 1. It is composed of a Kohonen layer and an output (Grossberg) layer. It can be visualized as a 3D matrix of values called weights (W). One column (vector) of weights is called a neuron. The figure schematically shows how the results of predictions (R1-R3) are obtained. First, the Euclidean distance between each neuron and the object is calculated using the descriptor values and the weights in the Kohonen layer. Then the neuron most similar to the object is identified as the neuron with the shortest Euclidean distance to the object, which is indicated in Fig. 1 with red colour. This neuron, excited by the object, is called the "central neuron". To get the predictions from the output layer, the position of the central neuron is projected onto the output layer and the results are read from the corresponding position for each target (property/endpoint). For each descriptor and target, a 2D surface plot can be obtained from the weights, which is called a "level plot".
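A minimal sketch of this prediction step (illustrative Python, not the CPANNatNIC Java code; the array shapes are assumptions) is:

```python
import numpy as np

# Find the central neuron as the Kohonen-layer neuron with the smallest
# Euclidean distance to the object's descriptors, then read the prediction
# from the same position in the output (Grossberg) layer.

def cpann_predict(x, kohonen_w, output_w):
    """kohonen_w: (nx, ny, n_descriptors) weights, output_w: (nx, ny, n_targets)."""
    dist = np.linalg.norm(kohonen_w - x, axis=-1)            # distance to every neuron
    ix, iy = np.unravel_index(np.argmin(dist), dist.shape)   # central neuron position
    return output_w[ix, iy], (ix, iy), dist[ix, iy]

rng = np.random.default_rng(1)
kohonen = rng.random((9, 9, 10))      # 9x9 map, 10 descriptors
grossberg = rng.random((9, 9, 1))     # one target level (e.g. a toxicity endpoint)
pred, pos, d = cpann_predict(rng.random(10), kohonen, grossberg)
print(pred, pos, d)
```

The reported Euclidean distance between the object and its central neuron (the last return value here) is the quantity later used for the applicability-domain check.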
When all the training set and external set objects are tested, one can obtain a top-map showing which neurons the objects excited. Objects which are closer to each other are more structurally similar, and vice versa. This offers a method which can be used for read-across: first, compounds similar to our object are found, and then the experimental values of the similar compounds can be used to predict the property value of the selected compound. The neurons shown in Fig. 1 are represented in the graphical user interface of the software as squares containing the compounds which excited the neurons (i.e. a "top-map" is shown). When "level plots" are shown, each square corresponds to one weight in the selected level, which corresponds to a descriptor or target (2D surface plot). The compounds can be represented by an identification number, a class label, or as a 2D structure of the compound. The Euclidean distances which are reported together with other information related to the predictions for objects are those Euclidean distances calculated between the object and the neuron.
Implementation
The counter-propagation artificial neural network learning method presented in the article by Zupan et al. [8] was used for implementation. CPANNatNIC is entirely written in Java programming language. The program uses The Chemistry Development Kit (CDK) library (version 1.5.4) [9] for displaying 2D structures of compounds from SMILES strings. The program was written using NetBeans IDE 8.1 and Java JDK version 1.8 (64-bit).
Installation
Java version 8 is needed to run the CPANNatNIC software. The software can be freely downloaded from http://www.ki.si/fileadmin/user_upload/datoteke-L03/SOM_ver/v1_01/. The software is also available in Additional file 1 and its source files in Additional file 2. To install CPANNatNIC, unzip the downloaded file to a new folder. The folder will now contain two files. The file "CPANNatNIC.zip" contains all the necessary files to run the CPANNatNIC application, and the file "example_input_data.zip" contains example input files. Unzip the CPANNatNIC.zip file. The application "CPANNatNIC.jar" will be located in the CPANNatNIC folder. To run the application, use a command prompt, change the current directory to the directory with the application, and type java -jar "CPANNatNIC.jar". Alternatively, you can double-click "CPANNatNIC.jar" in case your operating system can execute "jar" files in this way.
Limitations
The program was tested using Windows 7, 64-bit. Java 1.8 should be installed prior to using the program. Successful execution of the CPANNatNIC software depends on the available Java heap memory. It is recommended that you have at least 8 GB of RAM installed on your computer. For example, you can allocate Java heap memory by executing the command java -Xmx4096m -jar "CPANNatNIC.jar" to allocate 4 GB of Java heap memory for the application. There may be high memory requirements when saving large "top-maps" to PNG files, thus using smaller neuron sizes is preferred. A higher number of available processor cores may decrease the time needed to display 2D structures of compounds on the "top-map". The recommended screen resolution is at least 1280 × 1024 pixels. The description given within this text presumes that the user uses a standard "right-handed mouse", where the left mouse button is used for the primary click (a "click") and the right mouse button is used for the secondary click.
CPANN models are stored in text files where each column corresponds to a specific variable. When the user is using an existing model he/she should prepare an input file where the variables are stored in the same column order to obtain correct results. The software will produce warnings when the variable names in the input file are not the same as in the model file but will not stop the calculation.
The program structure
The main parts of the program were individually developed as Java classes. Figure 2 schematically shows hierarchy of these classes. The classes shown in Fig. 2 represent visible objects, such as frames, dialogs or panels. An exception is "MyInputData" class which is used for storing different variables used during program execution (e.g.: descriptor values, weights of CPANN model, variable names, predicted values for objects, position of excited neurons).
As shown in Fig. 2, the main class used is "Mainframe", which represents the main window of the application and is used mainly for model development. "AboutDialog" is used to show basic information about the program. "DialogTrainNN" is used for training of the CPANN, and "Dialogselectdescriptors" is used to select descriptors when making predictions. The main window of the interface which is used for displaying the results is represented by the "DrawingFrame" class. The "DrawingFrame" class uses "DrawingPanel" for displaying the neurons of the top-map. "DisplaySelectedNeuronDialog" is used within "DrawingPanel" for displaying individual neurons, and "DisplaySelectedObjectDialog" is used when displaying an individual compound.
Graphical user interface
Before using the software, the data should be prepared in an appropriate format. The data which are required for each object are the values for independent variables (descriptors), dependent variables (targets), class and object identification number (object's ID). A detailed description of input dataset files is given in the user guide provided with the application so that the user can manually prepare input files in the required format. An example of Excel file which can be exported to tab-delimited text file (Additional file 3) used as a data input file is included within the article as Additional file 4.
Graphical user interface consists of the main window which opens when the application starts and a window which is used for graphical representation of the results and becomes available when the results of predictions are available from the main window. The main window provides functionality of the software which can be used for the development of new CPANN models and provides access to the interface for graphical representation of results. The options available in both windows are described in the following sections.
The main window is shown in Fig. 3. The central area of the main window is a text area which is used for displaying relevant information generated during program execution. The data which are displayed in the text area are related to the datasets and models read by the program, the results of the predictions made by the program, and information regarding certain errors which may occur during program execution. When the program is started from the command prompt, some additional information may be displayed in the command prompt or in a file in case the output is redirected to a file, which can then be used as a log file (for example by using the command java -jar "CPANNatNIC.jar" > logfile.txt).
Below the text area, there are several options which become accessible when there are certain conditions fulfilled during program execution. For example, "Train CPANN" button will not be available until appropriate data are read from a dataset file. Importing data from an input file should be the first step after an appropriate delimiter, used in the file, is selected from drop-down menu.
When the data are available in the program they can be viewed by clicking button "Check data values". This will show a table similar to the one shown in Fig. 4. When a CPANN model is available, a similar table will appear also for the model that will display values of CPANN weights. Each line in the table represents an object in the same order as it appears in the input file and each column represents one variable (the names of the variables are written as column labels). If the dataset is training set, it can now be used to build a new model. If the dataset has not been normalized, the program can be used to normalize descriptor data. This is convenient if we have several datasets and they should all be normalized using the same normalization factors. When a new model is generated using training set data which were normalized using the software, the normalization factors are automatically saved into the model file and can be later used for normalization of new datasets. The normalization is done only for independent variables (descriptors) using Eq. (1).
In Eq. (1), X_normalized represents the normalized value of X_i, which is the value of descriptor X for object i. X_average represents the average of all descriptor X values in the dataset used for training the CPANN, and s is the standard deviation of these values.
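The equation itself is not reproduced above; from this description, Eq. (1) is the usual standardization of descriptor X for object i, which can be written as:

```latex
X_{\text{normalized}} = \frac{X_i - X_{\text{average}}}{s}
```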
A new model can be developed using imported dataset by pressing the button "Train CPANN". A window, such as shown in Fig. 5, will appear with default values of the required parameters shown in the window. After the "Train" button is pressed CPANN training will start. When a model has been successfully generated model validation can be performed using button "Model validation" or predictions can be made for currently imported data using button "Make predictions for the objects". In both cases, the results will be displayed in the text area of the main window. The results of the predictions will show for each object its identification number (ID), the neuron excited by the object, experimental value (the value written in the dataset file) and Euclidean distance of the object to the neuron. Additionally, information regarding root-mean-square error (RMSE) and correlation coefficient between experimental and predicted values will be given. Also, a textual representation of the top-map that is showing IDs or classes of the objects will be written. In the case of model validation, experimental and predicted property values, root-mean-square error of cross-validation (RMSEcv) and correlation coefficient of cross-validation (Rcv) will be reported. When the button "Model validation" is pressed a dialog, shown in Fig. 6, will open and the user may select between different options for model validation, such as: leave-one-out cross-validation, leave-many-out cross-validation, Y-scrambling, and repeated leave-many-out cross-validation. The procedures implemented for leave-one-out cross-validation and leave-many-out cross-validation keep the initial order of the training set object while the procedure for repeated leave-many-out cross-validation first shuffles the objects before each repetition and then performs leave-many-out cross-validation. When "Make predictions for the objects" button is clicked, a dialog box, such as the one in Fig. 7, will appear where the user may select the descriptors which are used to determine the position of the central neuron for all objects when making predictions. Usually, all the descriptors used during the training are selected. The user may change the selection to observe how different selection affects the grouping of objects. The button "Define applicability domain" becomes available after the predictions are made. When the user presses the button a dialog shown in Fig. 8 will appear where the user can select one or more datasets which can be used to define applicability domain. The applicability domain is defined according to the method [10]. The objects with the Euclidean distance to the central neuron which is smaller or equal to the limiting Euclidean distance are within the applicability domain. The user may also manually enter the value which he/she considers as appropriate for the limiting Euclidean distance. When new predictions are made after the applicability domain is defined, then in the prediction results in the text area of the main window it will be also written whether the object is in applicability domain or not.
Some results of the predictions made using CPANN model can be viewed in a graphical user interface which is shown in Fig. 9 and can be accessed using button "Draw results" from the main window shown in Fig. 3.
A top-map will be graphically displayed when the button "Draw" is pressed. The options that affect the appearance of the results and the content shown are accessible from the blue panel in Fig. 9. Some functions are available using left and right mouse clicks on the neurons shown on the map.
The top-map will initially show ID numbers of the objects (compounds) that excited the neurons on the map. The datasets which are used to build the map can be selected from the list of datasets labelled as "Select datasets to be used for the graph". Different colours can be defined for objects from different datasets or for objects belonging to different classes. This can be done using appropriate selection at the bottom of the blue panel which will open a new window where the user can define colours for datasets or classes. This may help to visually assess distribution of objects belonging to different classes or datasets.
Besides the presentation of ID numbers or classes the interface also supports displaying 2D structures of compounds on the map. To display 2D structures of compounds, "Show structures" check-box should be checked and a file containing a list of compounds' ID numbers and corresponding smiles should be opened. The content shown on the map can be changed using drop-down menu labelled as "Select content for the map". From the drop-down menu each descriptor and target level can be shown on a map as 2D surface which is coloured according to the weight values corresponding to the selected variable of the CPANN model. Classes or IDs of the compounds can be seen when "Show structures" check-box is not selected and the item "top-map (classes)" or "top-map (IDs)" is selected, respectively.
When there are many objects shown on the map, finding one particular object can be a tedious task. Thus, an option for locating an object on the map has been added. An object can be located by selecting the object's ID from the drop-down menu labelled as "Select object ID" and then pressing the button "Find selected object". The position of the neuron with the object will be shown in the text area below the button. A new window will appear that is showing the neuron which was excited by the object. The selected object shown on the neuron will be marked by a red rectangle. Any other neuron can also be shown in a new window by "double-clicking" on the desired neuron. Right-hand mouse button click on the object can be used to view any object shown on the neuron.
As mentioned before, CPANN training produces models which group similar objects close together on the top-map. This can be useful for the assessment of the reliability of the prediction made for an object and also makes it possible to use the objects from neighbouring neurons for read-across. The interface gives the possibility to visually identify similar neurons using the Euclidean distance or the Tanimoto coefficient. The Tanimoto coefficient is calculated using the formula for continuous variables as reported in the literature [11,12]. To visualize the Euclidean distances or Tanimoto coefficients between neurons, the user should right-click on the neuron which should be compared to the other neurons. A menu will appear with a few options on the list. The user can select "Show map of Euclidean distances to the selected neuron" or "Show map of Tanimoto similarity coefficients to the selected neuron", which will show a map of Euclidean distances or Tanimoto similarity coefficients between the selected neuron and the other neurons.
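As a small illustration (assuming the standard continuous-variable Tanimoto formula from the cited literature, which is not written out above), the two similarity measures between a selected neuron's weight vector and another neuron's weight vector could be computed as:

```python
import numpy as np

# Neuron-to-neuron similarity: continuous-variable Tanimoto coefficient
# T(a, b) = a.b / (|a|^2 + |b|^2 - a.b) and the Euclidean distance.

def tanimoto(a, b):
    ab = np.dot(a, b)
    return ab / (np.dot(a, a) + np.dot(b, b) - ab)

def euclidean(a, b):
    return np.linalg.norm(a - b)

w1 = np.array([0.2, 0.8, 0.5])    # weights of the selected neuron
w2 = np.array([0.25, 0.7, 0.4])   # weights of another neuron on the map
print(tanimoto(w1, w2), euclidean(w1, w2))
```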
Fig. 4 Table showing descriptor and target values
The map showing the Euclidean distances or Tanimoto coefficient can be saved by selecting "Save map of distances/similarities between neurons" from the menu, while the map showing the content selected from the dialog box can be saved using "Save to file" button below the map. When "Save to file" button is used, the program will also generate images of neurons in folder "resultingimages" representing neurons and "graphview.html" file for viewing the map in a web-browser. The files will be saved in the folder where the last input file was selected.
Results and discussion
The functionality of the program described in the previous section can assist in read-across process. Two datasets will be used below to show how the program may be used for read-across. Here, it should be stressed that the models used for read-across are the same as the ones used to obtain model predictions. The examples will be shown using one pre-built model for prediction of acute toxicity towards rainbow trout (Oncorhynchus mykiss) and one example will show how a model can be built using bio-concentration factor. The models and datasets supporting the conclusions of this article are included within the article as additional files.
As the first example, we show an example which requires the smallest number of steps to obtain a CPANN top-map that can be used for read-across assessment. In this example, we will use an existing model for acute toxicity which is available in Additional file 5. The data used for the development and testing of the model are in Additional file 6, Additional file 7 and Additional file 8, which correspond to the training, internal test and external validation set, respectively. The data in the files are normalized and can thus be directly used to obtain predictions using the model. After selecting and importing the training set (using the button "Select file with objects") and the model (using the button "Select CPANN model"), the predictions for the training set can be made using the button "Make predictions for the objects". After the predictions are obtained, the checkbox "Save existing dataset data" is selected, and the results can be viewed in the interface shown in Fig. 9. The interface can now be used as mentioned in the previous section. The interface in Fig. 9 shows a part of the top-map which was obtained using the data for acute toxicity. To show 2D structures of the compounds, the "Show structures" checkbox was selected and the smiles from Additional file 9 were imported. Figure 10a shows the neuron which was excited by the external set object with ID = 7, which will be used here for demonstration purposes. The same neuron is visible also on the top-map shown in Fig. 9. To show the neuron in Fig. 10a, the user should select 7 from the drop-down list available under "Select object ID" and then press the button "Find selected object". After the button is pressed, a window showing the neuron will appear and the visible area of the top-map will change so that the region of the top-map with the neuron will be visible. Figure 10b shows each of the compounds in its own window with the information regarding the compound.
The predicted value of −log(LC50) [log – common logarithm, LC50 – concentration of the compound which kills 50% of organisms (rainbow trout in our case)] for all the compounds was 1.99 ("pred.1" indicates the prediction for the first target), which in this case matches the arithmetic mean of the −log(LC50) values of the two training set compounds that excited the neuron. The compound with ID = 7 is the only compound from the external set that excited this neuron; therefore we may use the other three compounds for read-across. When we look at the experimental values ("exp.1" indicates the experimental value for the first target) of the compounds we can observe that the values are not the same. The bottom-left compound (ID = 145) has the highest experimental value of 2.64, the upper-right compound (ID = 126), which has one methyl group less, has an experimental value of 2.19, and the compound with two chlorines (ID = 26) has an experimental value of 1.79. For read-across, we selected compound 126 as the most similar to compound 7. Thus, we may say that the −log(LC50) value predicted by read-across for compound 7 is 2.19. Further, it can be observed that the −log(LC50) value is smaller for the compound with one methyl group than for the compound with two methyl groups. Thus we could expect a lower experimental value for compound 7. The actual experimental value for compound 7 is 1.83. If we knew in this particular case that a linear relationship exists, we could use compounds 126 and 145 for a linear regression where −log(LC50) depends on the number of methyl groups. The calculated linear regression would predict 1.74 for compound 7. An inspection of the whole top-map shows that the compounds in the dataset used are structurally very different, which makes the read-across method difficult to apply. From 69 compounds in the external set we could perform read-across for only 24 compounds. In the case of compound 7, the read-across value was slightly higher than the predicted one. However, when we performed an analysis of the RMSE of the −log(LC50) values predicted by the model and by read-across for the 24 compounds, we observed that the RMSE was lower for the read-across predictions. The highest error in read-across was made for acetaldehyde (ID = 80) based on acetone (ID = 74) data. In this case, the model made a larger error. When the predictions for acetaldehyde were not considered, the RMSE calculated from the predictions for 23 compounds was 0.80 for the model predictions and 0.49 for the read-across predictions. Among the 24 read-across predictions, 18 predictions were made using a compound that excited the same neuron as the compound under consideration.
In the second example, the model will be built using bio-concentration factor (BCF) data obtained from the article by Gissi et al. [13]. The descriptors used were the same as those reported in the supplementary material of the article for the MLR method with 10 descriptors. Descriptors were calculated using the Dragon 7.0 software for molecular descriptor calculation [14]. The data used can be found in the supplementary material. Additional file 11, Additional file 12 and Additional file 13 contain the training set, internal validation (test) set and blind (external) set data, respectively. The smiles of the structures are available in Additional file 15.
In this example, the number of neurons used will be small in comparison to the number of objects used in the training set. This will cause the top-map to be densely populated, while similar compounds will still be grouped together and will excite the same or similar neurons. In the input files "tab" is used as a delimiter; therefore the item "tab delimited" should be selected in the main window from the list used to define the delimiter for the file with object data. The training set should be imported as the first set. Then the check box "Use as normalization set" should be selected and the button "Normalize current descriptor data" should be pressed. In this way, the normalization factors are calculated from the training set data and the training set is normalized. Using these normalized data, a new model can be built using the "Train CPANN" button. The training parameters required and their values for this example (in brackets) are: random seed (1234), number of neurons in the x direction (9), number of neurons in the y direction (9), toroid boundary conditions (Non-toroid NN), type of neighbourhood correction (Triangular), furthest neuron for correction (9), maximal learning rate (0.47), minimal learning rate (0.04), type of the best match (neuron with the weights most similar to the input) and number of epochs (161). The same parameters can also be found in Fig. 5, which shows the dialog box that is used to enter CPANN training parameters. The resulting model will be saved in the file "modelweights.unw". For this example, the resulting model file is given as Additional file 14. After the training, we can perform model validation by pressing the button "Model validation". Then the predictions can be made and the dataset data can be saved for further use by the software. After importing each of the other sets, the normalization of the descriptor data should be done using the training set data, and then predictions can be made.
When the predictions are obtained for all the sets, a CPANN top-map can be shown. Additional file 15 should be selected when asked for the file with IDs and smiles. Using the model, we tried to perform read-across for the structures in the blind set. For approximately half of the compounds in the blind set we made read-across predictions. The RMSE of the 37 model predictions was 0.79, while the RMSE of the read-across predictions for the same compounds was 0.55. Among the 37 read-across predictions, 30 predictions were made using a compound that excited the same neuron as the compound under consideration. The results of the read-across predictions are included within the article in Additional file 16.
The interface can also be used to identify neurons which have, for example, a large or small weight value for a certain descriptor or response. Subsequently, compounds with similar descriptor or target values can be identified. For example, if we wish to identify compounds with high log(BCF), then we first draw the response by selecting "tar.1 = logBCF" from the drop-down menu on the blue panel and redraw the map. The neuron with the highest response can be found based on the available colour scale. In the same way as before we can now display the neuron in a new window and identify the structures which excite the neuron. As can be found from the response surface, the compounds which have the highest log(BCF) and are of the highest concern in this dataset are polychlorinated biphenyls, which are commonly abbreviated as PCBs. The response surface and the neuron corresponding to the highest response are shown in Fig. 11.
Similarity between the selected neuron and other neurons on the map can be evaluated using Tanimoto similarity coefficient or Euclidean distance between the neurons. This can be done using a right-click on the neuron and selecting the preferred similarity measure from the pop-up menu. An example of the resulting surface plot corresponding to the selected similarity measure is given in Fig. 12. The second item "Show map of Tanimoto similarity coefficients to the selected neuron" was selected from the pop-up menu, as shown in Fig. 12. The same neuron as before (i.e. the neuron with the highest response at the position [1,7]) was selected to calculate Tanimoto similarity coefficients to all other neurons. In Fig. 12, the most similar neurons to the selected neuron are shown in red colours which correspond to relatively high values of Tanimoto coefficients.
Fig. 11 Response surface of the model and the neuron corresponding to the highest response value. The highest response value is at position [1,7]. The structures that excited the neuron are polychlorinated biphenyls with log(BCF) above 4
The two examples shown above were described in detail. Some additional tests were also performed using other datasets. For that purpose, Sutherland's eight datasets [15] were used, and the QuBiLS-MIDAS 3D-indices provided in the paper by García-Jacas et al. [16] were used to build CPANN models for the datasets. The eight datasets included datasets for angiotensin converting enzyme inhibitors (ACE), acetylcholinesterase inhibitors (ACHE), ligands for the benzodiazepine receptor (BZR), cyclooxygenase-2 inhibitors (COX2), dihydrofolate reductase inhibitors (DHFR), glycogen phosphorylase b inhibitors (GPB), thermolysin inhibitors (THER), and thrombin inhibitors (THR). The same splitting of the data into training and external sets was used as in the previous publications. The models were evaluated by repeated leave-many-out cross-validation and Y-scrambling. Y-scrambling validation was decisive for the selection of the models' size since the correlation coefficient became higher when a larger number of neurons was used in the model. The results obtained for the eight models and their use in read-across are available in Additional file 17. The models found did not show very good performance for the external set objects. One of the possible reasons could be the splitting of the objects. It was also found that the maximal and/or minimal values for the set under consideration were in most cases not included in the training set. Using the developed models, read-across was performed for the external set objects and a comparison was made between the model predictions and the read-across predictions for the objects where read-across could be performed. For six datasets read-across showed better prediction performance, and for two datasets better prediction performance was obtained using the model predictions.
Comparison with the Kohonen and CP-ANN toolbox
The software described within this paper is not the only one existing for the development of CPANN models; nevertheless, it offers unique possibilities for effective read-across on training/test data. The Kohonen and CP-ANN toolbox with similar functionality was recently developed by the Milano Chemometrics and QSAR Research Group [17]. The software was developed as a toolbox to be used in Matlab. The learning algorithm used in the toolbox is essentially based on the same algorithm for Kohonen and counter-propagation artificial neural networks [18] as in this manuscript. One of the valuable properties of the toolbox is that its methods can be directly used through the command prompt in Matlab apart from the provided GUI. This gives the user the possibility to use the methods in new Matlab applications. For the preparation of the data, the toolbox range-scales the data and offers some additional options for data scaling. On the other hand, the CPANNatNIC software accepts the data "as is" or offers standardization of the independent variables based on the training set data. Both applications provide model weights and the possibility to visualize the results. The toolbox additionally gives the user an opportunity to analyse the weights of the model by using principal component analysis (PCA) to investigate the relationship between the variables used in the model. Such PCA analysis is not available in the CPANNatNIC software. While both applications provide similar visualization of the results, the CPANNatNIC software has different visualisation features and can also visualize 2D chemical structures from SMILES on the Kohonen map to help in the interpretation of the results and to facilitate read-across. Additionally, CPANNatNIC provides an option for locating an object on a top-map, which may be needed when there are many objects on the top-map or the map has a large number of neurons. While the Kohonen and CP-ANN toolbox and CPANNatNIC are both freely available, the Matlab toolbox requires access to Matlab, which is not freely available, whereas CPANNatNIC requires the freely available Java environment and CDK library.
Conclusions
We present a program for building counter-propagation neural network models with an interface for viewing top-maps, descriptor levels and the response surface. 2D representations of compounds can be shown on the top-map. This is useful when performing read-across for the identification of similar compounds. The program provides a simple interface which can be used to quickly find the neuron excited by the compound under consideration. Thus, similar structures can be quickly identified and also used for read-across. Since the user both provides the dataset for the modelling and can develop new models, the model predictions as well as read-across predictions are not limited to any specific endpoint.
CPANNatNIC will be further developed in the future. We are planning to add features, such as descriptor selection and optimization, which will simplify the model development process. Also, the representation of the objects within the software will be modified so that new information regarding the objects can be added and displayed within the software.
"Computer Science"
] |
Effect of service differentiation on QoS in IEEE 802.11e enhanced distributed channel access: a simulation approach
The enhanced distributed channel access (EDCA) protocol is a supplement to the IEEE 802.11 medium access control (MAC), ratified by the IEEE 802.11e task group to support quality of service (QoS) requirements of both data and real-time applications. Previous research shows that it supports a priority scheme for multimedia traffic but strict QoS is not guaranteed. This can be attributed to inappropriate tuning of the medium access parameters. Thus, an in-depth analysis of the EDCA protocol and ways of tuning medium access parameters to improve QoS requirements for multimedia traffic is presented in this work. An EDCA model was developed and simulated using MATLAB to assess the effect of differentiating the contention window (CW) and arbitration inter-frame space (AIFS) of different traffic on QoS parameters. The optimal performance, delay, and maximum sustainable throughput for each traffic type were computed under saturation load. Insight shows that traffic with higher priority values acquired most of the available channels and starved traffic with lower priority values. The AIFS has more influence on the QoS of the EDCA protocol. It was also observed that small CW values generate higher packet-drop rates and collision probability. Thus, the EDCA protocol provides a mechanism for service differentiation which strongly depends on the channel access parameters: CW sizes and AIFS.
coordination channel access (HCCA) [6][7][8]. The DCF protocol is a legacy channel access control protocol. Its distributed coordination mechanism is based on carrier sense multiple access with collision avoidance (CSMA/CA) [7,[9][10][11]. The DCF provides fair access to the contending terminal devices with no room for prioritization. Thus, the DCF does not guarantee QoS requirements for real-time services. This is because all the traffic is allowed to go through the same queueing and transmission processes. The EDCA is a state-of-the-art access control protocol that was designed to support differentiated services through prioritization. Nevertheless, the EDCA protocol cannot guarantee strict QoS to real-time applications. In contrast, by using an appropriate scheduler, the HCCA protocol can provide a soft QoS guarantee to real-time applications [8].
The EDCA protocol of IEEE 802.11 supports prioritization of different classes of traffic, which include background, data, video, and voice, using differentiated parameters. The intention of prioritizing this traffic is geared toward meeting the QoS requirements of each traffic type. In a quality enhanced network supporting the EDCA protocol, traffic is categorized into four access categories (AC). Every AC has unique access parameters, which include arbitration inter-frame space (AIFS), contention window minimum (CWmin), and contention window maximum (CWmax) [12]. The wireless stations use these parameters while contending for access to the medium. The EDCA protocol allocates the highest priority to voice traffic while background traffic is allotted the least priority. The effect is that, in priority queueing, packets that occupy the highest prioritized queue are processed first, whereas the packets that occupy the lowest prioritized queue are last to be processed. With this protocol, higher priority traffic can reach the destination with less delay. Collision in this medium is resolved by granting the higher priority traffic an opportunity to transmit in all collisions. In other words, as a result of the differentiation which the EDCA protocol supports, real-time traffic gets higher priority to win channel access during contention [12].
Despite the enormous advantages of the EDCA protocol, strict QoS support is not guaranteed. First, best effort (BE) traffic is starved if the influx of real-time traffic is high [13]. Secondly, the number of collisions in the channel rises as the number of stations contending to access the channel increases. At this point, access delay, traffic congestion, and frame loss rate are high. Although the protocol implements a binary exponential backoff algorithm to reduce collisions, it ends up creating unnecessary idle states in most cases, leading to inefficient utilization of the available channel. As a consequence, throughput performance is highly degraded.
Several scholarly works have been proposed to support and enhance the performance of service differentiation in WLANs. In [6], the OPNET simulator was used in the analysis and evaluation of IEEE 802.11 using the DCF access protocol. The simulation was carried out under different values of load condition, data rate, fragmentation, and number of nodes. Results show that under a heavily loaded network, the DCF access mechanism performs below optimum. This could be attributed to the inability of the DCF access mechanism to support service prioritization.
Fairness and QoS issues have been analyzed in IEEE 802.11e WLANs. The IEEE 802.11e was compared with the legacy IEEE 802.11 on the basis of QoS and fairness [14]. The results of the simulation show that IEEE 802.11e outperforms IEEE 802.11 with regard to QoS. The reason is that IEEE 802.11e supports service differentiation and has an improved architecture for queue management.
In the IEEE 802.11e WLAN standard, a two-tier protection and guarantee mechanism is proposed for video and voice traffic [15]. This method performs better in channel utilization and in guaranteeing QoS for video and voice traffic. However, under high network load, the technique leads to the starvation of best effort traffic.
A differentiated service EDCA model that provides strict priority and weighted fair service was proposed for IEEE 802.11e [16]. The high priority traffic was strictly prioritized over the lower priority by adjusting the backoff interval of the lower priority traffic. This adjustment is in accordance with a distributed scheduling discipline. The same work also proposed a tiered allocation method for IEEE 802.11e wireless LANs, in which the access point and the mobile terminals are assigned different amounts of the channel. Simulation results with NS-2 show that the proposed differentiated service EDCA model outperformed IEEE 802.11e with EDCA. However, the work only adjusted the backoff interval of the lower priority traffic without considering the impact of other EDCA parameters such as AIFS and CW on the QoS requirements of the entire set of differentiated services.
Though these works have made significant progress in supporting service differentiation, strict QoS could not be addressed due to the lack of an in-depth analysis of the EDCA protocol and inappropriate tuning of the medium access parameters. Thus, the EDCA physical model was converted to a simulation model in MATLAB to assess the impact of differentiating channel access parameters (AIFSN and CW size) on the QoS of different traffic in the IEEE 802.11e WLAN. This work has demonstrated how the QoS performance of a specific traffic class in the network can be enhanced by varying the aforementioned access parameters. The study will assist WLAN vendors to meet the QoS requirements of multimedia traffic. It will also remedy the channel access starvation of lower priority traffic resulting from poor management of channel access parameters.
Methods
The WLAN architecture presented in Fig. 1 was adopted for this work and the simulation model was based on this topology. The WLAN implemented four work stations (WSTA) transmitting video (VI), voice (VO), background (BK), and best effort (BE) traffic. These stations are wirelessly connected to a central server interface, the WLAN AP [14,17]. The central server system connects the WSTA to the ethernet switch interfaced to the VSAT via a router. The AP uses the EDCA protocol to control access to the channel. It allows a WSTA to transmit packets when it senses an idle channel [18]. Once any WSTA starts transmitting, other stations having packets to transmit wait for the medium to return to the idle state before contention [19]. Packet generation is based on different distributions, while the arrival pattern follows an exponential law. The inter-arrival time and service time patterns follow the exponential service distribution. Each WSTA intending to transmit decrements its binary exponential counter to zero before sending packets, after sensing the WLAN AP to be idle. This work is based on the activities that take place at the WLAN WSTA and AP; therefore, the model is based on a WLAN AP which operates on CSMA/CA principles.
Enhanced distributed channel access (EDCA)
The EDCA supports service differentiation by traffic prioritization. The access categories and the EDCA parameters, which include AIFS, CWmin, CWmax, and TXOP, are discussed next.
Access categories (ACs)
The EDCA extends the DCF protocol by differentiating traffic into 4 access categories to support 8 user priorities as specified in Table 1. Packets from distinguishable traffic types are mapped onto distinct ACs based on the traffic QoS requirement. The 4 ACs of the traffic are AC_BK, AC_BE, AC_VI, and AC_VO, for BK, BE, VI, and VO, respectively. The AC_BK possesses the lowest priority while AC_VO possesses the highest priority to access channel. A station accesses the medium on the basis of the AC of the packet that is to be transmitted.
EDCA parameters
The prioritization is a function of the channel access parameters, which include AIFS, CWmin, CWmax, and TXOP. An EDCA function contends for the medium based on the AIFS, CWmin, CWmax, and TXOP associated with an AC [12]. The default EDCA values for all the ACs are presented in Table 2.
Arbitration inter-frame space
The arbitration inter-frame space (AIFS) is the period the medium is sensed to be idle before the backoff or transmission is initiated. AIFS is obtained from the expression presented in Equation (1), AIFS[AC] = SIFS + AIFSN[AC] × SlotTime, where AIFSN depends on the access category and the SlotTime value relies on the physical layer of the 802.11e standard employed [23]. The value of SIFS is specified as one of the simulation parameters in Table 3.
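As a minimal illustration of Equation (1), the snippet below computes AIFS for the four access categories using the commonly quoted default AIFSN values; the SIFS and SlotTime values here are assumptions chosen for illustration only, whereas the actual values used in this work are taken from Table 3.

```python
# AIFS[AC] = SIFS + AIFSN[AC] * SlotTime (Equation (1)); values in microseconds.
SIFS_US = 10          # assumed SIFS, for illustration only
SLOT_TIME_US = 20     # assumed slot time, for illustration only

DEFAULT_AIFSN = {"AC_VO": 2, "AC_VI": 2, "AC_BE": 3, "AC_BK": 7}

def aifs_us(aifsn, sifs=SIFS_US, slot=SLOT_TIME_US):
    """Return the AIFS duration in microseconds for a given AIFSN."""
    return sifs + aifsn * slot

for ac, n in DEFAULT_AIFSN.items():
    print(f"{ac}: AIFSN={n} -> AIFS={aifs_us(n)} us")
```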
Contention window minimum and maximum (CWmin and CWmax)
The CWmin and CWmax limits of EDCA can be varied. The variability depends on the ACs. Thus, ACs with a higher priority value have smaller CWmin and CWmax values than the ACs with lower priority. The CWmin and CWmax default values for each of the four considered ACs are presented in Table 2 and among the simulation parameters in Table 3.
Transmission opportunity limit (TXOP)
The TXOP is the maximum duration for which a WSTA is granted an opportunity to transmit packets after winning access to the available medium. The allowable duration for transmission covers the whole frame exchange sequence, which includes intermediate SIFS periods, request to send (RTS), ACKs, and clear to send (CTS). The different ACs with their default TXOP limits are presented in Table 2. A non-zero value of the TXOP limit specifies that multiple frames can be transmitted by an EDCA function in a TXOP, as long as the transmission period is less than or equal to the TXOP limit and the frames belong to one AC [25].
EDCA physical model
Figure 2 is the IEEE 802.11e contention-based physical model. The model consists of four ACs that represent a virtual station, an EDCA MAC access controller that uses the CSMA/CA protocol, and a destination sink. The protocol differentiates traffic into 4 ACs to support 8 user priorities as specified in Table 1. Each of the traffic types uses the parameters of the AC, which are periodically advertised by the AP, to access the medium. Every AP maintains four transmission queues, one per AC, as shown in Fig. 2. The EDCA protocol implements an independent back-off entity for each AC. Each queue works independently and uses its own parameter set (AIFS, CWmin, CWmax, and TXOP limit). An AC with a packet to transmit waits for a period of AIFS before accessing the medium [14,26]. If at the end of AIFS the medium is still busy, the station initiates the back-off algorithm. It computes the exponential value of its back-off counter and keeps decreasing the value of the counter while the medium is sensed to be idle, but freezes it once the medium is sensed busy again. The AC starts transmission once the back-off timer reaches zero. If the sink receives a correct packet, after a short inter-frame space (SIFS), it sends a positive acknowledgment (ACK). The CW size is initially set to its minimum value before an attempt to transmit the first packet is made. Each time the packets collide, the size of the CW is doubled before the next transmission attempt. However, the adjusted CW size cannot exceed CWmax. The number of packets transmitted from a queue is a function of the TXOP limit. The time relationship for the EDCA function is presented in Fig. 3. The probability of gaining access to the channel is determined by the CW size and AIFS, while the TXOP determines the duration of channel occupancy [25]. If two or more ACs compete for access to the medium, a virtual collision occurs. This collision is resolved by allowing the higher priority packet to win the contention while lower priority packets make an additional attempt after a waiting period elapses, using the exponential back-off algorithm [18]. In the EDCA function, the backoff parameters of the 4 ACs provide prioritized and differentiated channel access to each type of traffic in transit. The AIFS duration of ACs with a higher prioritization value is shorter, and such ACs thus have a higher probability of accessing the channel than the lower ACs. That is, an AC with higher priority is assigned a shorter CW in order to ensure that, in most cases, a higher-priority AC will be able to transmit before a lower-priority AC. In addition, there exist two types of contention: internal contention among the various EDCAFs/ACs inside the same station and external contention among the various stations. The model developed in this research experiences only the internal collision, as the analysis focused on what happens in a virtual station. The internal collision in the EDCA access mechanism is presented in Fig. 4.
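The following sketch summarizes the per-AC backoff behaviour just described: the CW starts at CWmin, doubles on each collision up to CWmax, and resets after a successful transmission. It is a simplified illustration, not the MATLAB model used in this work, and the class and method names are hypothetical.

```python
# Simplified per-AC contention window / backoff behaviour of an EDCA queue.
import random

class EdcaBackoff:
    def __init__(self, cw_min, cw_max):
        self.cw_min, self.cw_max = cw_min, cw_max
        self.cw = cw_min
        self.counter = random.randint(0, self.cw)

    def idle_slot(self):
        """Decrement the backoff counter during an idle slot; transmit when it hits zero."""
        if self.counter > 0:
            self.counter -= 1
        return self.counter == 0

    def on_collision(self):
        """Double the contention window (capped at CWmax) and draw a new counter."""
        self.cw = min(2 * self.cw + 1, self.cw_max)
        self.counter = random.randint(0, self.cw)

    def on_success(self):
        """Reset the contention window after a successful transmission."""
        self.cw = self.cw_min
        self.counter = random.randint(0, self.cw)
```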
MATLAB SimEvents EDCA simulation model
This work favored a simulation model because no single analytical model is tractable while still handling all the QoS parameters considered across the models reviewed. As a result of the advantages offered by computer simulation techniques, an EDCA network model was designed and implemented in the MATLAB SimEvents environment. The network model in Fig. 2 was converted to a simulation model in the MATLAB environment as shown in Fig. 5. It is divided into three blocks: the sources, the access point, and the sink. The sources are made up of packet generator arrival rate, packet length, set attribute, and first-in first-out (FIFO) queue blocks. The AP consists of input switch, get attribute, FIFO queue, server, and output switch blocks. It also contains a signaling loop that informs the sources about the busy and idle state of the input switch. These blocks are replicated in four places, each representing a different AC. For clarity, Fig. 5 is further presented in subunits as shown in Figs. 6, 7, 8, and 9. The packet generation and arrival pattern depict a bursty ON-OFF process: at some times arrivals are sparse, while at others they are intense. This reflects the random nature of packet arrival patterns, which follow a Poisson process. The simulation was run for 3600 s in order to achieve normalization. During the simulation, the source rate was varied in steps of 100 kbps from 0 to 1000 kbps. The simulation model is applied in this research to examine how access parameters affect QoS parameters in the IEEE 802.11 wireless LAN EDCA protocol. The IEEE 802.11e simulation parameters used are shown in Table 3. The model performance was evaluated using throughput and delay. Probes were strategically placed to collect data for calculations. The data collected were analyzed by relating the responses to the objectives of the study.
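As a small illustration of the arrival process assumed above (exponential inter-arrival times whose mean follows from the source rate), the snippet below generates packet arrival instants for one source; the packet length and all other values are assumptions for illustration, not the settings of Table 3.

```python
# Exponential (Poisson-process) packet arrivals for a given source rate.
import numpy as np

def arrival_times(source_rate_kbps, packet_len_bits=8184, duration_s=3600, seed=1):
    if source_rate_kbps <= 0:
        return np.array([])
    rng = np.random.default_rng(seed)
    pkt_rate = source_rate_kbps * 1000 / packet_len_bits   # packets per second
    times, t = [], 0.0
    while t < duration_s:
        t += rng.exponential(1.0 / pkt_rate)               # exponential inter-arrival
        times.append(t)
    return np.array(times)

# e.g. sweep the source rate from 100 to 1000 kbps in 100 kbps steps
rates = range(100, 1001, 100)
```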
Results and discussion
In the first simulation scenario, four ACs were modeled using the default EDCA access parameters. The result was used to test the service differentiation ability of the developed model. Because important factors in research methodology include the validity of research data, ethics, and the reliability of the design, this result was validated using Bahi Hour et al.'s simulation parameters [27]. In the second scenario, only AC1 (BE) and AC3 (VO) were modeled using default EDCA access parameters. They were configured with the same AIFS value but different CW sizes and vice versa to analyze the impact of AIFS number and CW size on QoS parameters. At the end of each scenario, results were obtained and presented in graphical form. The graphs present the relationship between QoS parameters (throughput and delay) and source rate for the different traffic classes.
First simulation scenario
Figure 10 shows the throughput graphs of the simulation model and validation. It illustrates the effect of network load on the throughput of the four ACs. As observed, VO and VI traffic recorded 38% and 29% while BE and BK traffic recorded 19% and 14% of the mean throughput. The EDCA parameters provided optimal performance at a 600 kbps source rate. The validation result shows that VO and VI traffic recorded 36% and 28% while BE and BK traffic recorded 20% and 15% of the mean throughput. The validation parameters provided optimal performance at a 500 kbps source rate. In comparison, the validation parameters provide almost the same throughput as the simulation model, recording 0.01 Mbps higher throughput than the simulation model. Figure 11 presents the delay results of the simulation model and validation. The VO and VI traffic recorded 4.2% and 16.3% of the mean delay while BE and BK traffic recorded 33.6% and 45.9% of the mean delay. The validation result shows that VO and VI traffic experienced 8% and 20% of the mean delay while BE and BK traffic recorded 30% and 42% of the mean delay.
Second simulation scenario
Impact of AIFS number on throughput
Figure 12 shows the throughput graph of three different cases obtained by differentiating the AIFS number of VO and BE traffic at a fixed CW size. The combined effect of these three cases enables us to examine closely the contribution of each of the arbitration values to service differentiation. As identified, VO achieved 61% while BE recorded 39% of the mean throughput. However, the effect of an increase in the BE traffic AIFSN from 3 in the first case to 4 in the second case decreased the throughput of BE traffic by 4% and increased VO throughput by the same percentage. In the third case, VO throughput further increased by 3% while BE throughput decreased by the same percentage. This result shows that a high AIFSN for BE traffic improves the throughput of VO traffic and vice versa. It also shows that these changes have a negligible effect on the optimum performance of the communication system, as it remained stable at a 700 kbps source rate. Table 4 presents the AIFS tuning numbers used to show the impact of AIFS numbers on the throughput of both VO and BE traffic at a fixed CW size.
Impact of CW size on throughput
Figure 13 shows the throughput graph of three different cases obtained by differentiating the CW size of VO and BE traffic at a fixed AIFS number. The mean throughputs for the three CW variations are compared. The results show that VO traffic at CWmin1,3 = 15,3 and CWmax1,3 = 63,7 recorded the highest throughput, followed by CWmin1,3 = 23,5 and CWmax1,3 = 363,11, and the lowest at CWmin1,3 = 31,7 and CWmax1,3 = 1023,15. On the other hand, BE throughput was lowest in the first case and highest in the last case. In other words, an increase in CW size decreases VO throughput and favors BE throughput as a result of reduced backoff and collision in the network. It was also observed that, as the CW size of the first case increased, the throughput showed just a 2% reduction for VO traffic and the same percentage increase for BE traffic. When the CW size of the second case increased to CWmin1,3 = 31,7 and CWmax1,3 = 1023,15, a significant increase in the throughput of BE traffic and decrease in that of VO traffic were observed, respectively. This result is the effect of the significant change in the CWmax of BE traffic from 363 to 1023. Table 5 presents the CW tuning numbers used to show the impact of varying CW on the throughput of both VO and BE traffic at a fixed AIFS number. A comparison of Figures 12 and 13 reveals the relationship between AIFSN and CW size and shows that the effect of AIFS on service differentiation is more pronounced. Increasing the AIFSN of AC3 by one has more effect on optimal throughput performance than the CW size in Fig. 13, because throughput remained stable at a 700 kbps source rate against the 600 kbps source rate recorded for the latter parameter.
Impact of AIFS number on delay
Figure 14 shows the delay graph of three different cases obtained by differentiating the AIFS number of VO and BE traffic at a fixed CW size. The graph highlights the impact of the AIFS number on delay. A comparison of the AIFSN variations shows that, first, an increase in source rate results in an increase in the delays of both classes of traffic. This can be justified on the basis that, under unsaturated conditions, the level of collision is sufficiently low, resulting in a collision probability that is below 0.1, and the queue does not gradually increase. Consequently, the queuing delay is small and the MAC layer service time dominates the delay.
When the number of competing traffic flows increases, collisions increase and so does the MAC layer service time. Secondly, the delay experienced by the VO traffic type is much smaller compared to the BE traffic. This is because VO traffic has been prioritized over BE traffic and their access to the channel is dependent on the priority value of the individual traffic. Thirdly, when the value of the AIFSN of AC1 changed from 3 to 4 with AC3 fixed at 2, the total delay of VO traffic decreased by 10 ms (approximately 9%) while that of BE increased by 14.5 ms, a 9% increment. With the same AC3 parameters, the AC1 AIFSN was once again changed from 4 to 5. The mean delay of the BE traffic increased by 13 ms (8%) while the VO traffic delay reduced by 11.8 ms, which also amounts to an 8% reduction. It is worth noting that when the network is working under unsaturated conditions, the delays experienced by BE and VO are sufficiently small to satisfy their specified QoS [5,22,23,25]; the transmission delay for VoIP and VI must be less than 400 ms, and should, if possible, be less than 150 ms. The AIFS tuning numbers used to show the impact of AIFS numbers on the delay of both VO and BE traffic at a fixed CW size are presented in Table 4.
Impact of CW size on delay
The foregoing simulation considered only the impact of AIFSN on the differentiation of VO and BE in terms of delay. The IEEE 802.11e standard also defines service differentiation by using different CW sizes. As shown in Fig. 15, a larger CW size, especially under saturation conditions, improves the performance of lower priority traffic as it decreases collisions in the network. At a low CW size, VO traffic experienced the least delay due to the collision effect. We also noted the 3% impact introduced by the large reduction of the CWmax of BE traffic from 1023 in the first case to 63 in the third case. This impact is advantageous to traffic with higher priority and against the traffic with lower priority. A comparison of the graphs shown in Figs. 12 and 13, however, reveals the relationship between the impacts of AIFS number and CW size on mean delay. Table 6 presents the CW tuning numbers used to show the impact of varying CW on the delay of both VO and BE traffic at a fixed AIFSN.
Conclusions
This work presented a MATLAB simulation model that can be used to evaluate the effect of service differentiation on QoS parameters in the IEEE 802.11 EDCA protocol. The model implemented four WSTAs accessing an isolated AP based on the EDCA MAC protocol that employs the CSMA/CA mechanism. Throughput and delay were used as metrics for performance evaluation of the EDCA protocol. The simulation model was validated and analyzed graphically. The result shows that the EDCA protocol provides a mechanism for service differentiation which strongly depends on the channel access parameters (AIFS and CW sizes). The improvement comes at the cost of reducing the performance of traffic with lower priority, up to the starvation point. The protocol is, therefore, not considered efficient for networks that record a high volume of BE traffic. The default setting of the channel access parameters was varied and simulated at different intervals to examine the impact of CW size and AIFS number. The result of the study shows that AIFSN has more effect on the QoS performance of the protocol. Relatively, this indicates that a remedy for the protocol's differentiation shortcomings can be achieved through appropriate parameter tuning.
It is recommended that tuning the AIFS to a small value be done cautiously, so as not to starve best-effort traffic. The CW size has to be tuned dynamically in response to varying load. For a network that involves a high influx of real-time or BE traffic, a smaller AIFSN is advised for such traffic while a larger CW size is advised to reduce collisions. At a very high network load, admission control or an appropriate scheduling scheme is needed to guarantee channel access to real-time traffic while at the same time maintaining some level of fairness with which data traffic accesses the same channel.
Authors' contributions
GOU designed, performed the experiment, and wrote the paper. UNN analyzed the generated data and edited the paper. MAA contributed significantly to editing the paper. CIA supervised the entire work. All the authors read and unanimously approved the final manuscript.
Availability of data and materials
Data used during simulation were obtained from published works and the data sources are referenced appropriately.
"Computer Science"
] |
Employees’ Acceptance of AI Integrated CRM System: Development of a Conceptual Model
Artificial Intelligence (AI) integrated Customer Relationship Management (CRM) systems can maximize firms' value by identifying and retaining the best customers. The success of such advanced technologies depends on employees' adoption. However, research examining employees' acceptance of AI integrated CRM systems is scarce. Therefore, this study attempts to propose a conceptual model to predict the use behaviour of employees towards AI integrated CRM systems in organizations. This study adapted the meta-UTAUT model as a theoretical lens and extended the model with constructs such as compatibility, CRM quality, and CRM satisfaction specific to the organizational context. Future researchers can empirically test the proposed model with data gathered from employees using an AI integrated CRM system.
Introduction
Customer Relationship Management (CRM) is considered an effective tool that can help organizations to understand customers in a more systematic way by "identifying a company's best customers and maximizing the value from them by satisfying and retaining them" [1]. CRM can achieve customers' satisfaction and organizational performance [2,3]. CRM ability is measured by the capability of this tool to analyse customers' huge amount of data accurately and to proceed accordingly [4]. However, analysing such a huge volume of customers' data by human endeavour is difficult, and here comes the need for the application of modern Information and Communication Technology (ICT), which calls for Artificial Intelligence (AI) application in CRM, known as AI integrated CRM [5][6][7]. It is thus perceived that business organizations would emphasize using AI integrated CRM to achieve the best results. Reports indicate that AI integrated CRM systems would generate revenue of $1.1 trillion from 2017 to 2021 [8]. Moreover, with the help of an AI integrated CRM system, organizations can analyse a huge volume of customers' data with less cost and ease [9]. Analysis of customers' data by the organizations provides effective inputs to strengthen their CRM quality [10]. Since such data is huge in volume, accurate analysis is ensured quickly through AI, as observed in other studies [11,12]. With the help of AI, it is possible for organizations to arrive at an accurate decision by analysing a huge volume of customers' data easily [13,14].
An organization's successful AI integration with its CRM system depends on employees' motivation to use such systems. The employees responsible for analysing customers' data using the AI integrated CRM system need to be sincere. This will help the organizations to accurately realize the likes, habits, and dislikes of the customers [15,16]. The users of an AI integrated CRM system in organizations will exhibit their attitude and intention towards using the system if they feel the technology is compatible with existing technology and helps them to use the new system [17,18]. However, there is limited understanding of the various factors influencing employees' use behaviour towards the integration of AI with CRM systems in the organizational context. To this end, this research proposes a conceptual model based on a review of dominant technology acceptance theories/models to provide a holistic understanding of the factors determining employees' acceptance of an AI integrated CRM system.
The remaining parts of the paper are arranged as follows. Section 2 provides an overview of dominant technology acceptance theories/models. After that, Section 3 provides background on the meta-UTAUT model and how it is extended to propose the conceptual model. The subsequent Section 4 provides an overview of the proposed research methodology and data analysis for empirical validation of the proposed model. The paper ends with the conclusion in Section 5.
Overview of Technology Acceptance Theories and Models
Understanding individual acceptance of information technology (IT) is considered one of the mature streams within the information systems (IS) research arena [19,20]. Efficient implementation of any information system principally depends on the acceptance of its users [21]. In recent times, in the domains of IS, psychology, and sociology, a plethora of theoretical models have been developed for exploring and predicting users' acceptance of IS. Among these models, many researchers advocated in favour of the Technology Acceptance Model (TAM) [22][23][24][25]. On the contrary, some scholars observed that TAM has some specific drawbacks [26]: it does not provide sufficient insight into individuals' perspective concerning a new system, it directly investigates the external variables, like perceived usefulness and perceived ease of use, while neglecting their indicators, and it is found to have ignored the linkage between use and attitude as well as between use and intention [27,28].
UTAUT Theories
In the quest to address the limitations of existing technology acceptance models such as TAM, many competing theories emerged towards the end of the 20th century, such as the diffusion of innovation (DoI) theory, the Innovation Diffusion Theory (IDT), and the model of personal computer utilization, to explain individual adoption of IS/IT. This multitude of contexts and theories presented a new challenge of plurality to IS researchers [29]. Venkatesh, Morris, Davis and Davis [17] developed a comprehensive model, the Unified Theory of Acceptance and Use of Technology (UTAUT), based on a thorough review of eight dominant technology adoption models to overcome the limitations of existing theories [see 17]. The UTAUT model postulates performance expectancy, effort expectancy, and social influence as direct determinants of individuals' behavioural intention towards using a focal technology, which together with facilitating conditions affects their use behaviour. The focal phenomenon of UTAUT was organizational users of technology primarily driven by their extrinsic motivation, emphasizing utilitarian value. Since then, the UTAUT model has been extensively used in different contexts including field communication technology [30], home-health services [22], mobile-health [31], and so on. The UTAUT model has effectively contributed to the exploration of technology acceptance and usage. Despite its comprehensiveness and popularity, many researchers were doubtful about the UTAUT model's ability to analyse individuals' technology acceptance behaviour [18,32]. It has been criticized by many scholars on different grounds [33,34]. Recently, Li [35] observed that, to gain high variance (R2), the UTAUT model considered four moderators which are impractical and not necessary, and that good predictive power would have been achieved using a simpler model by applying an appropriate initial scoring procedure. Besides, many researchers felt the necessity to extend the UTAUT model by dropping some factors and including others according to the contextualization [32,[36][37][38][39]].
Meta-UTAUT model
Researchers have acknowledged the inherent limitations of UTAUT both explicitly and implicitly during their empirical investigations. Dwivedi, Rana, Jeyaraj, Clement and Williams [18] re-examined the model using a combination of meta-analysis and structural equation modelling (MASEM) techniques to address some of those limitations. Henceforth, this study will refer to the re-examined model as meta-UTAUT. The findings revealed that the UTAUT model lacked the individual differences variable attitude, which could be influential in explaining individuals' dispositions towards the use of a focal technology. In the meta-UTAUT model, attitude was found to partially mediate the effects of all four UTAUT exogenous variables (i.e. performance expectancy, effort expectancy, social influence, and facilitating conditions) on behavioural intention and had a direct effect on use behaviour. In addition, the study found a significant association between facilitating conditions and behavioural intention that was not part of the original UTAUT model [see 18 for model]. Finally, meta-UTAUT excluded moderators as they are relevant only if significant variation exists among individuals examined in the same context, making the model more parsimonious and easier to use [39]. The meta-UTAUT model based on MASEM is a robust alternative to examine individual technology adoption and use as it addresses the shortcomings of UTAUT [18].
Proposed extension to meta-UTAUT
Attitude plays a significant role in individual intentions towards performing the underlying behaviour, especially during early stages of technology adoption [40]. Employees' adoption of AI integrated CRM systems in organizations is still at an early stage. Therefore, this study deemed the meta-UTAUT model an appropriate theoretical lens to evaluate antecedents in relation to employees' use of an AI integrated CRM system. The constructs of the proposed model, their definitions, and their sources (Table 1) are as follows:
Behavioural Intention (BI): The strength of an individual's intention in the context of performing a specific behaviour (Fishbein and Ajzen [41]).
Attitude (ATT): It is associated with a conception that people can be ambivalent to an object through jointly exhibiting positive or negative feelings towards the same object (Wood [42]).
Compatibility (COM): The extent to which an innovation is perceived to be consistent with existing values and accessible with the help of previous experience (Rogers [43]; Wang, Cho and Denton [44]).
CRM Quality (CRQ): Refers to how valuable the information that employees get from the CRM is. AI CRM should help the decision-making process by automating the user recommendation field. To get accurate and good-quality CRM output, the data input to the AI CRM tool must be of good quality (Battor and Battor [45]; Chatterjee, Ghosh and Chaudhuri [12]; Nyadzayo and Khajehzadeh [46]).
CRM Satisfaction (CRS): Refers to the employees' delight that they are expected to experience once they start using the AI integrated CRM system in their organization (Chatterjee, Ghosh, Chaudhuri and Nguyen [6]; Kalaignanam and Varadarajan [47]; Phan and Vogel [48]; Winer [49]).
Prior research suggests researchers should focus on including attributes specific to the context rather than having the urge to replicate the entire baseline model [50]. It is argued that in the context of this study, since the organizations would adopt the AI integrated CRM system, the question of the employees being influenced by society and the question of the voluntariness of the employees become redundant. As such, it is thought cogent to drop social influence. Besides, this study added three new exogenous variables to the meta-UTAUT model: compatibility, CRM quality, and CRM satisfaction. This idea is supported by another study where compatibility was included as a factor while dealing with the UTAUT model [51]. The inclusion of the other two exogenous contextual variables, CRM quality and CRM satisfaction, was based on the premise that they would better explain adoption and use behaviour. This is in consonance with the observation that UTAUT-based models can be extended with other contextual constructs which may be deemed to better explain the adoption and usage behaviour of individuals [18]. The synopsis of all the constructs is shown in tabular form in Table 1. With all this information and discussion, the proposed conceptual model to examine employee use behaviour towards AI-CRM in the organization is shown in Figure 1.
Research methodology
Researchers can employ a quantitative survey methodology to empirically validate the proposed conceptual model, as validated scales are readily available to measure the latent constructs [52,53]. Partial Least Squares (PLS) Structural Equation Modelling (SEM) can be employed for data analysis once the data are collected. The PLS-SEM approach is helpful to analyse an exploratory study like this [54]. In addition, a complex model with a comparatively small sample size (as it involves organizational users) can be best analysed by the PLS-SEM approach [55]. Besides, the PLS-SEM approach is known to have yielded better results for studies that cover marketing issues [56,57].
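Purely as an illustration of how the structural part of the proposed model could be written down for estimation, the sketch below uses the semopy package (a covariance-based SEM library) as a stand-in, since a dedicated PLS-SEM tool would normally be used; the construct abbreviations and the exact set of paths are one plausible reading of the conceptual model and should be aligned with Figure 1 before any empirical test.

```python
# Hypothetical structural specification of the extended meta-UTAUT model.
# PE: performance expectancy, EE: effort expectancy, FC: facilitating conditions,
# COM: compatibility, CRQ: CRM quality, CRS: CRM satisfaction,
# ATT: attitude, BI: behavioural intention, USE: use behaviour.
import pandas as pd
from semopy import Model

MODEL_DESC = """
ATT ~ PE + EE + FC + COM + CRQ + CRS
BI ~ ATT + PE + EE + FC
USE ~ BI + ATT + FC
"""

def fit_model(df: pd.DataFrame):
    """df holds one column of composite scores per construct (PE, EE, ..., USE)."""
    model = Model(MODEL_DESC)
    model.fit(df)
    return model.inspect()   # estimated path coefficients and p-values
```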
Conclusion
This study offers several inputs to the extant technology acceptance literature. The proposed model extended the meta-UTAUT model with context-specific variables such as compatibility, CRM quality, and CRM satisfaction to analyse the use behaviour of the employees of organizations towards an AI integrated CRM system. Context effects can be broadly defined as the set of factors surrounding the focal phenomenon that exert direct or indirect influence on it [58]. The proposed new endogenous mechanism, which refers to new associations between external variables (compatibility, CRM quality, CRM satisfaction) and any of the three meta-UTAUT endogenous variables (attitudes, intentions, and usage), offers better adaptation of meta-UTAUT in the context of an AI integrated CRM system [39]. The proposed model reveals that attitude could directly impact the intention as well as the use behaviour of employees towards an AI integrated CRM system in organizations. This implies that the managers of organizations have a bounden duty to shape the attitude of the employees towards the intention and use behaviour to use the AI integrated CRM system. However, such assumptions require empirical validation of the proposed conceptual model. Therefore, future research in the technology acceptance area can further examine meta-UTAUT alongside other variables to further contribute to employees' adoption of AI integrated CRM systems. The proposed model could also be empirically validated to satisfy the conditions of generalisability of the model for varied samples [59,60,61].
"Computer Science",
"Business"
] |
Cryopreservation of mouse resources
The cryopreservation of sperm and embryos is useful to efficiently archive valuable resources of genetically engineered mice. Till date, more than 60,000 strains of genetically engineered mice have been archived in mouse banks worldwide. Researchers can request the archived mouse strains for their research projects. The research infrastructure of mouse banks improves the availability of mouse resources, the productivity of research projects, and the reproducibility of animal experiments. Our research team manages the mouse bank at the Center for Animal Resources and Development in Kumamoto University and continuously develops new techniques in mouse reproductive technology to efficiently improve the system of mouse banking. In this review, we introduce the activities of mouse banks and the latest techniques used in mouse reproductive technology.
Introduction
A genetically engineered mouse is a powerful tool to elucidate the complex communications between genes or organs in health and diseases [1]. Moreover, humanized mouse models derived from immunosuppressed mice are helpful to bridge the gap of discovery and the development of new medicines between human and animal experiments [2]. Therefore, it is important to enhance the availability and accessibility of mouse resources to conduct research projects efficiently using valuable mouse models.
A mouse bank plays a vital role in archiving and supplying mouse resources [3]. In Kumamoto University, the Center for Animal Resources and Development (CARD) was established as a research center for genetics and biomedical science using genetically engineered mice and as a quality center of mouse resources as a mouse bank in 1998 [4,5]. The CARD provides services of production, cryopreservation, and supply of genetically engineered mice and established a searchable database of the archived mouse strains known as the CARD Resource Database (CARD R-BASE, http://cardb.cc.kumamoto-u.ac.jp/transgenic/). Till date, the international collaboration of mouse banks (International Mouse Strain Resource: IMSR) has successfully collected more than 60,000 strains of genetically engineered mice ( Table 1). The archived mouse strains can be browsed through the IMSR website (http://www.findmice.org/) [6]. Researchers can obtain live mice, cryopreserved embryos, or the sperm of choice from those mouse banks.
In Asia, an international association of mouse research centers and mouse banks known as the Asian Mouse Mutagenesis and Resource Association (AMMRA, http://ammra.info/) was organized and has been functioning since 2006 [7]. The AMMRA aims at producing original mouse resources and promoting international collaboration in Asia. At the AMMRA conference, strategies are discussed to improve science using our resources, technology, and network, and workshops are held to educate students, technicians, and young researchers. Furthermore, the AMMRA participates in the Global Mouse Models for COVID-19 Consortium to support research fighting the coronavirus pandemic.
In a mouse bank, reproductive technology plays key roles in the efficient production, preservation, and transport of genetically engineered mice. Our center continuously refines the mouse reproductive technology to enhance the function of the mouse bank system. Till date, we have overcome several problems in mouse reproductive technology and have efficiently archived mouse resources by sperm and embryo cryopreservation to produce eggs and embryos using the techniques of ultrasuperovulation and in vitro fertilization and to establish the worldwide shipment of cryopreserved or cold-stored embryos and sperm [5]. Our techniques are used widely in mouse repositories and transgenic facilities [8][9][10][11]. In this review, we introduce the latest techniques used in the CARD mouse bank.
Sperm cryopreservation
Sperm cryopreservation is the most cost-effective method to preserve mouse strains [12,13]. Cryopreserved sperm can be preserved permanently in a liquid nitrogen tank and animals can be reproduced using in vitro fertilization and embryo transfer techniques. Potentially, more than 2000 pups can be produced from the cryopreserved sperm collected from a male mouse. Cryopreserved sperm can be transported in a dry shipper at − 196°C or a shipment box containing dry ice at − 79°C [14]. Prof. Nakagata developed the fundamental system of mouse sperm cryopreservation using a cryoprotectant composed of 18% raffinose pentahydrate and 3% skim milk (Nakagata method) [15].
However, there was a critical problem concerning the low fertility (0-20%) of cryopreserved sperm in C57BL/6 mice [16,17]. To overcome this problem, we improved the raffinose-and skim-milk-based cryoprotectant by adding 100 mM L-glutamine (modified R18S3) [18]. We also developed a system of in vitro fertilization using frozen-thawed sperm to enhance the fertilization rate by treating with methyl-β-cyclodextrin (MBCD) and reduced glutathione (GSH). During sperm preincubation, MBCD (0.75 mM) increased the fertilization rate of frozen-thawed mouse sperm by stimulating cholesterol efflux from the sperm membrane [19]. In the fertilization medium, 1.0 mM of GSH or cysteine analogs supported sperm penetration through the zona pellucida and increased the fertilization rate by dissecting the disulfide bonds of the zona pellucida [20,21]. Combining these techniques, we developed an optimized protocol for the cryopreservation of mouse sperm and in vitro fertilization using the frozen-thawed sperm [22]. A review describing the history of technology development in mouse sperm cryopreservation was written by Prof. Sztein [23].
Embryo and oocyte vitrification
The vitrification of mouse embryos is useful to preserve mouse resources and readily reanimate homozygote mutant mice [24,25]. Vitrified embryos can be preserved permanently in a liquid nitrogen tank at − 196°C [26]. A standardized protocol in mice consists of a simple vitrification method using 1 M dimethyl sulfoxide (DMSO) and a mixture of 2 M DMSO, 1 M acetoamide, and 3 M propanediol (DAP213) used as the vitrification solution [27]. More than 90% of vitrified-warmed embryos can survive and 30-50% of the survived embryos can develop into pups via embryo transfer. The vitrification of mouse oocytes is helpful for the emergent use of in vitro fertilization when there is a shortage of oocytes owing to superovulation failure or the delayed transport of cold-stored sperm. The simple vitrification method is also applicable to the cryopreservation of mouse oocytes [28,29]. However, it has been observed that prolonged exposure to hyaluronidase to remove cumulus cells from oocytes decreased the fertilization rate of cryopreserved mouse oocytes [30]. Treatment with N-acetyl cysteine (NAC) was found to recover the fertilizing ability of vitrified-warmed mouse oocytes by alleviating zona hardening [31].
The vitrification of mouse oocytes in the pronuclear stage was found to be useful for the production of genetically modified mice by genome editing techniques. Fertilized oocytes were produced by in vitro fertilization. At 6.5 h after insemination, the fertilized oocytes were cryopreserved by the simple vitrification method [32]. After warming, the oocytes can be readily used for microinjection or electroporation to edit the target gene using the TALEN or CRISPR-Cas9 system [32][33][34][35][36].
Superovulation
Superovulation is a useful technique to obtain a large number of oocytes via the administration of hormones [37]. Ovulated oocytes are used for cryopreservation, in vitro fertilization, or mating to obtain fertilized oocytes in vivo. To induce superovulation, equine chorionic gonadotropin (eCG) and human chorionic gonadotropin (hCG) are administered routinely to female mice [38]. The average yield using the eCG and hCG method is 25 oocytes/female mouse [39]. In 2015, we refined the superovulation technique by the coadministration of inhibin antiserum (IAS) and eCG (IASe or ultrasuperovulation), which was able to produce more than 100 oocytes/female mouse [40]. IAS blocked the negative feedback of inhibin on the secretion of follicle-stimulating hormone (FSH), resulting in the production of excess levels of FSH and the promotion of follicular development [41,42]. The coadministration of IAS and eCG was found to be effective in stimulating follicular development by endogenous and exogenous FSH. The ultrasuperovulation technique was helpful in reducing the number of oocyte donors and achieved a rapid and mass production of genetically engineered mice. With IASe treatment, among C57BL/6J female mice aged between 3 and 50 weeks, 4-week-old females produced the largest number of oocytes [43]. The yield of ovulated oocytes using the IASe treatment was different between inbred mice (A/J: 24.9 oocytes/female; BALB/cByJ: 90.3 oocytes/female; C3HeJ: 52.0 oocytes/female; DBA/2J: 68.8 oocytes/female; and FVB/NJ: 25.6 oocytes/female) and outbred mice (CD1: 33.7 oocytes/female) [44]. In the highly immunosuppressed mouse (nonobese diabetic/Shi-scid IL2rγnull mouse), females aged 12 weeks produced the largest number of oocytes (70.0 oocytes/female) [45]. Therefore, the optimal age of female mice to induce ultrasuperovulation using IASe treatment also depends on the mouse strain.
Cold storage of sperm
The cold storage of sperm is applicable to the shipment of genetically engineered mice as an alternative to the shipment of live animals [46]. The shipment of cold-stored sperm can be done easily using inexpensive shipping and avoids the risks of spreading infectious diseases and the escape or death of live animals during the shipment. Regarding the shipment of sperm, we collected the cauda epididymis in a preservation solution and shipped it in a cold-transport kit [46]. We observed that the fertilizing ability of cold-stored mouse sperm decreased in a time-dependent manner [47]. However, the preservation solution of Lifor perfusion medium and in vitro fertilization using MBCD and GSH were found to be effective in preventing the reduction of the fertilizing ability of cold-stored sperm [46,48]. Furthermore, the addition of DMSO and quercetin to the preservation medium prolonged the storage period of cold-stored sperm to 10 days [49]. The cold-stored sperm could be cryopreserved and later used to recover animals by in vitro fertilization and by embryo transfer [50]. Today, we generally receive cold-stored sperm to produce embryos or live animals or to archive cryopreserved sperm in the CARD mouse bank. The new transport system using the cold-stored sperm facilitated the domestic and international transportation of mouse resources.
Cold storage of two-cell embryos
The cold storage of two-cell embryos has also been found to be useful for the shipment of genetically engineered mice [51]. The transported two-cell embryos can be used to produce animals by embryo transfer at the receiving facility. An advantage of the cold transport of embryos is that it is a simple procedure without the need for cryopreservation and avoids the potential risks involved in the shipment of live animals. The developmental ability of cold-stored embryos can decline in a time-dependent manner. Preservation in M2 medium containing 1.5 mM NAC was found to prolong the storage period of cold-stored embryos to 4 days [51][52][53]. Cold-stored embryos can also be cryopreserved using the simple vitrification method [54]. The shipment of cold-stored embryos is practical for performing embryo transfer beyond the facility or when there is a lack of recipient mice on the date of embryo transfer.
Mouse reproductive technology workshop
To share our knowledge and techniques of mouse reproductive technology with our community, we have been organizing the CARD Mouse Reproductive Technology Workshop in Japan and abroad since 2000 (Table 2). In this workshop, we provide lectures, demonstrate the latest techniques, and perform hands-on training on the preparation of glass pipettes, oocyte handling, sperm cryopreservation, cold storage of sperm, in vitro fertilization using fresh, frozen-thawed, and cold-stored sperm, oocyte washing and observation, cryopreservation of oocytes and in vitro fertilization using vitrified-warmed oocytes, two-cell embryo collection, embryo cryopreservation, cold storage of embryos, operation of vasectomized mice, surgery of embryo transfer, and nonsurgical transportation of embryos. More than 700 students have participated in our workshops. Owing to the prevailing coronavirus pandemic, we have postponed the hands-on training and plan to set up an online course to overcome the limitations of international travel. Moreover, we intend to update new techniques on our website regarding the online manual of mouse reproductive technology (http://card.medic.kumamoto-u.ac.jp/ card/english/sigen/manual/onlinemanual.html).
Conclusions
The cryopreservation of mouse resources is an important strategy to accumulate valuable mouse characteristics useful for the scientific community. The optimal combination of reproductive technology will provide the best standards of cryopreserved mouse resources to researchers. We have provided a picture of the mouse bank system in Fig. 1. The advanced mouse bank system will provide a seamless archive and supply of mouse resources beyond facilities and countries. In addition, an international resource network will provide a robust research infrastructure to facilitate international collaborations. We have described the latest techniques of mouse reproductive technology in this review article. Details of the techniques can be mastered via the hands-on workshop or our online manuals. We hope that this review article will be helpful in improving the management and availability of mouse resources at your facility.
"Biology",
"Medicine"
] |
Image Dehazing Based on Improved Color Channel Transfer and Multiexposure Fusion
Introduction
With the popularization and rapid development of computer technology, computer vision is widely used in various fields such as object detection [1][2][3][4], image segmentation [5,6], and face recognition. Affected by smoggy weather, the images acquired by camera equipment usually show color shift, low visibility, and decreased contrast and saturation, which seriously affect subsequent computer vision tasks. Therefore, dehazing images is an important research direction in computer vision. In recent years, many researchers have studied image dehazing algorithms from multiple directions. These are mainly divided into three categories: dehazing algorithms based on image enhancement, on physical models, and on deep learning.
The dehazing algorithm based on image enhancement improves the quality of the image by enhancing the contrast and strengthening the edge and detail information of the image, but image information can be lost due to excessive enhancement. This kind of method is mainly divided into two categories: global enhancement and local enhancement. Among the global enhancement methods, there are algorithms based on histogram equalization, homomorphic filtering, and Retinex theory. In the local enhancement methods, the wavelet transform algorithm decomposes the image, and the image is processed through local features so that it is enhanced at multiple scales, amplifying the useful information [7].
The dehazing algorithm based on the physical model often relies on the atmospheric scattering model [8], which mainly focuses on the solution of the parameters in the model; through the mapping relationship, the inverse operation is performed according to the formation process of the foggy image to restore the clear image. The atmospheric scattering model is the cornerstone of subsequent physical model-based dehazing algorithms, and many researchers have carried out extensive and in-depth research on the basis of the atmospheric scattering model to continuously improve the level of image dehazing.
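For reference, the atmospheric scattering model referred to above is usually written as I(x) = J(x)t(x) + A(1 − t(x)), where I is the observed hazy image, J the scene radiance, t the transmission, and A the atmospheric light; the inverse operation mentioned in the text then recovers J. The sketch below shows this inversion under the assumption that t and A have already been estimated.

```python
# Invert the atmospheric scattering model: J = (I - A) / max(t, t0) + A.
import numpy as np

def recover_radiance(hazy, transmission, atmos_light, t_min=0.1):
    """hazy: HxWx3 float image in [0,1]; transmission: HxW; atmos_light: length-3 array."""
    t = np.clip(transmission, t_min, 1.0)[..., None]   # lower bound avoids division by ~0
    return np.clip((hazy - atmos_light) / t + atmos_light, 0.0, 1.0)
```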
In recent years, dehazing algorithms based on deep learning have shown better performance. At present, two types of deep-learning-based dehazing algorithms are widely studied: one uses deep learning to estimate some parameters of the atmospheric physical model and then restores the image [9], and the other uses a neural network to restore the input foggy image directly, which is often referred to as end-to-end dehazing [10,11]. Different from existing dehazing methods based on atmospheric scattering models, the proposed method adopts Laplace pyramid decomposition to retain the structural information of the image. In order to obtain a fog-free image, the area with the best visual quality is collected from each image for image fusion, and the color channel transfer algorithm is used to effectively retain the color information in the image.
The main contributions of this paper are as follows:
(a) To prevent the dull colors and distortion that may occur in the image after dehazing, we propose a color transfer module to compensate for the color loss of the dehazed image. The color transfer module converts the image data from RGB space to lαβ space and then uses color channel transfer between images to restore the color information of the dehazed image.
(b) An image dehazing algorithm based on a Laplace pyramid fusion scheme with local similarity and adaptive weights is proposed, which first artificially underexposes hazy images through a series of gamma correction operations. With a multiscale Laplace fusion scheme, the multiple exposure images are combined into a fog-free result, extracting the best-quality areas from each image and merging them into a single fog-free output.
(c) To demonstrate the dehazing performance of the proposed method, extensive experiments were carried out on datasets of indoor/outdoor synthetic foggy images and natural foggy images, and better results were achieved in both subjective and objective terms.
Related Work
Foggy images lead to blurry image details, low contrast, and loss of important image information, and preprocessing of foggy images can often improve dehazing performance. The literature [12] proposes color channel transfer, which builds a reference image from the source image in order to transfer information from an important color channel to an attenuated color channel and thus compensate for the loss of information. However, this method needs to be combined with other dehazing methods to improve their performance in special color scenes.
The establishment of the atmospheric scattering model [13] explains the formation process of images in foggy weather and lays a foundation for subsequent defogging work [14,15]. He et al. [8] proposed the dark channel prior (DCP) based on the atmospheric scattering model and prior knowledge. In general, DCP works well for dehazing natural scene images, but the theory fails in bright areas such as the sky, water, and the surfaces of white objects, resulting in inaccurately calculated transmission, over-enhancement of the recovered image, and a darker result. Later, He et al. [16] proposed the guided filtering algorithm, which relies on simple box blurring and is not affected by the degree and radius of blurring, so its real-time performance is greatly improved; it is a conformal filtering algorithm. In the fields of image deraining and denoising, guided filtering can also achieve good results. Raanan [17] proposed a dehazing method based on image color lines, assuming that the transmission in a local area is consistent and that the color lines in the non-foggy area pass through the origin and move along the ambient light; this characteristic is used to estimate the local transmission and the global ambient light.
Compared with traditional methods, deep learning methods mainly learn the transmission by training on labeled datasets or directly learn the mapping from foggy images to the corresponding fog-free images. For example, Proximal Dehaze-Net [18] first designed an iterative optimization algorithm for two priors using proximal operators and then unfolded the iterative algorithm into a dehazing network, using convolutional neural networks to learn the proximal operators. DehazeNet [9] uses a deep architecture based on convolutional neural networks to estimate the transmission in the atmospheric scattering model. Ren et al. [19] proposed a multiscale deep convolutional neural network for recovering foggy images. This process often wastes a lot of computation time; if the depth estimation of the scene is not accurate, the dehazed image is prone to artifacts in edge areas or to color distortion, affecting the visual effect. Zhang and Tao [20] proposed FAMED-Net, a multiscale convolutional neural dehazing network with a global edge, which can quickly and accurately compute haze-free images end-to-end at multiple scales. FFA-Net [21] is an end-to-end feature fusion attention network in which attention mechanisms focus on more efficient information. Hong et al. [22] proposed a knowledge distillation network (KDDN) that uses the teacher network for an image reconstruction task and lets the student network imitate this process. LKD-Net [23] improves performance by increasing the size of the convolution kernel to use a larger receptive field, thereby enhancing the dehazing effect of the network. Deep-learning-based dehazing methods have shown excellent performance and achieved great success. However, training deep learning models to good performance is cumbersome: not only is a large labeled dataset required, but the training process is also time-consuming. Moreover, debugging deep learning models is relatively difficult, which increases the workload.
Proposed Method
In this paper, an image dehazing algorithm based on color channel transfer and multiexposure fusion is proposed, as shown in Figure 1, which effectively restores the saturation and contrast information of the image while retaining its color characteristics. The algorithm first uses k-means to cluster and color-transfer the pixel intensities of the image in the lαβ color space. Second, guided filtering is introduced into the multiexposure images obtained by gamma correction, and the dehazed image is obtained by Laplace pyramid fusion. Finally, contrast and saturation are corrected by an improved adaptive histogram equalization and a spatial linear saturation adjustment, respectively.
Improved Color Channel Transfer Method.
In the process of image dehazing, in order to avoid the interference of a certain spectrum, the proposed method establishes a reference image by transferring the color channels of the input image; here G(x) is a uniform grayscale image (50% gray) and D(x) is the detail layer of the input image, which are used to calculate the significance map of the input image. We employ an effective technique proposed by Aganta [24], in which the significance map is computed as ‖I_μ − I_whc(x)‖, to introduce a bias against the dominant color between the feature map and the initial image, helping to restore the initial color. The detail layer is obtained by subtracting the Gaussian-blurred image from the input image, D(x) = I(x) − I_whc(x), where I_whc(x) is the original image processed with a 5 × 5 Gaussian kernel, I_μ is the mean vector of the initial image, and ‖ ‖ denotes the L2 norm. Color channel transfer is used for dehazing preprocessing, with the most pronounced effect in extreme conditions such as multiple light sources, underwater images, and night images. In order to improve the effect of color channel transfer preprocessing on daytime images, this paper introduces the k-means algorithm: the standard deviations of the source image and the reference image are adjusted in the color channel transfer, the pixel intensities of each image are clustered in the color space, and finally the Euclidean distance is used to determine the most similar pair of cluster centroids between the two images, so that the statistics are computed only within each region.
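As a rough illustration of the clustered statistics matching described above, the sketch below (Python/NumPy) pairs k-means clusters of a source and a reference image by centroid distance and matches per-cluster channel means and standard deviations. The helper names, the scalar-intensity clustering, and the exact matching rule are assumptions for illustration only; the authors' implementation additionally operates in the lαβ space and builds the reference image from G(x) and D(x).

import numpy as np

def kmeans_1d(values, k=5, iters=20, seed=0):
    # tiny k-means on scalar intensities; returns labels and centroids
    rng = np.random.default_rng(seed)
    centroids = rng.choice(values, size=k, replace=False).astype(np.float64)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            sel = values[labels == j]
            if sel.size:
                centroids[j] = sel.mean()
    return labels, centroids

def clustered_channel_transfer(src, ref, k=5):
    """Match per-cluster mean/std of each channel of `src` to the statistics of the
    nearest cluster of `ref` (both float arrays of shape (H, W, 3), e.g. in lαβ)."""
    out = src.copy()
    src_lab, src_cen = kmeans_1d(src.mean(axis=2).ravel(), k)
    ref_lab, ref_cen = kmeans_1d(ref.mean(axis=2).ravel(), k)
    pairing = np.argmin(np.abs(src_cen[:, None] - ref_cen[None, :]), axis=1)
    flat_src, flat_ref, flat_out = src.reshape(-1, 3), ref.reshape(-1, 3), out.reshape(-1, 3)
    for j in range(k):
        s_idx, r_idx = src_lab == j, ref_lab == pairing[j]
        if not s_idx.any() or not r_idx.any():
            continue
        s_mean, s_std = flat_src[s_idx].mean(0), flat_src[s_idx].std(0) + 1e-6
        r_mean, r_std = flat_ref[r_idx].mean(0), flat_ref[r_idx].std(0) + 1e-6
        flat_out[s_idx] = (flat_src[s_idx] - s_mean) / s_std * r_std + r_mean
    return out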
Gamma Correction.
In computer vision, pixel intensity values are proportional to the exposure level, so gamma correction can adjust the image exposure level by using different coefficients c [25]. The gamma correction takes the power-law form O(x) = ε · I(x)^c, where ε and c are the coefficients of the gamma correction. When the coefficient c < 1, as shown in Figure 2, overexposure makes the hue of high-brightness objects in the image too bright, and the smoothness of object edges tends to degrade. When c > 1, as shown in Figure 3, the contrast of the underexposed image is enhanced, and more detail can be obtained in the image. Therefore, we choose c values of 2, 3, 4, and 5 to artificially generate underexposed images.
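A minimal sketch of the artificial exposure stack implied by this step; setting ε = 1 and clipping to [0, 1] are assumptions made for illustration.

import numpy as np

def gamma_exposures(img, gammas=(2, 3, 4, 5), eps=1.0):
    """Generate artificially under-exposed versions of `img` (float array in [0, 1])
    via the power law O = eps * I**c used above; c > 1 darkens the image."""
    img = np.clip(img.astype(np.float64), 0.0, 1.0)
    return [np.clip(eps * img ** c, 0.0, 1.0) for c in gammas]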
Laplace Pyramid Decomposition and Local Energy Features.
The Laplace pyramid is a simple, effective, multiscale, multiresolution image processing method; it is based on the Gaussian decomposition of the image and contains the difference information between adjacent layers of two Gaussian pyramids. A dehazing algorithm using Laplace pyramid fusion can better improve the dehazing effect [26] and retain higher spatial resolution and image detail. The fusion is written as J(x) = Σ_k W̄_k(x) E_k(x), where K is the number of available images with different exposures E_k(x), W̄_k(x) are the normalized fusion weights, and J(x) is a well-exposed image produced by combining the correctly exposed regions of the different E_k(x).
In this paper, a fusion method based on local energy features is used to assign the weight values in the Laplace pyramid. The local energy feature is defined as follows: for a position (i, j) in the image, the local energy of that point is the sum of the squared pixel values in the m × m window centered on it, E(i, j) = Σ_{(p,q) in the m × m window} I(p, q)². Local energy features can effectively represent areas of an image with rich detail; in general, areas of the image that contain fine details have high energy. In the process of regional fusion, if the energy difference between two regions is too large, the matching degree is small, so we only choose the part with the larger energy (see the sketch below).
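One possible realisation of the energy-guided Laplacian fusion just described, using OpenCV pyramids; the window size m, the number of levels, and the hard choose-the-larger rule applied per level are illustrative assumptions rather than the authors' exact settings.

import numpy as np
import cv2

def local_energy(img, m=5):
    # sum of squared pixel values in an m x m window centred on each pixel
    return cv2.boxFilter(img * img, -1, (m, m), normalize=False)

def laplacian_pyramid(img, levels=4):
    gauss = [img]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        h, w = gauss[i].shape[:2]
        lap.append(gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=(w, h)))
    lap.append(gauss[-1])  # keep the coarsest Gaussian level
    return lap

def fuse_exposures(exposures, levels=4, m=5):
    """Fuse gray-scale exposures by keeping, per pixel and per pyramid level,
    the coefficient with the larger local energy."""
    pyramids = [laplacian_pyramid(e.astype(np.float32), levels) for e in exposures]
    fused = []
    for lvl in range(levels + 1):
        coeffs = np.stack([p[lvl] for p in pyramids], axis=0)
        energies = np.stack([local_energy(p[lvl], m) for p in pyramids], axis=0)
        best = np.argmax(energies, axis=0)
        fused.append(np.take_along_axis(coeffs, best[None, ...], axis=0)[0])
    out = fused[-1]
    for lvl in range(levels - 1, -1, -1):  # collapse the pyramid
        h, w = fused[lvl].shape[:2]
        out = cv2.pyrUp(out, dstsize=(w, h)) + fused[lvl]
    return out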
Local Similarity Fusion Method Based on Adaptive Weights.
The fusion method based on local similarity is a multiexposure image fusion method. According to this method, if two pixels have similar local neighborhoods in different images, they can be regarded as the same pixel and can be fused into a high dynamic range (HDR) image [27,28]. In this paper, a local similarity fusion method based on adaptive weights is adopted. By adding adaptive weights to the local similarity fusion, the weights can be adjusted adaptively according to the gradient information of different pixels, so as to better balance the contributions of different images and make the HDR image more balanced and natural. The details of the algorithm are as follows (see the sketch after this list):
(1) For each pixel, the Manhattan distance metric is used to determine its local neighborhood in the multiple images, and the gradient information of the pixel values is used to calculate the weight, so as to better preserve image details.
(2) The mean square error is used to measure the similarity of each pixel's local neighborhood across the different images.
(3) The pixels with the highest similarity across the images are selected for fusion, and a weighted average is used to obtain the final pixel value.
The adaptive weighting method takes the gradient information of each pixel into account when calculating the weight. Suppose the gradient value of the i-th image at position j is G_{i,j}, with N images in total; ϵ is a small positive number used to avoid division by zero, and α is a hyperparameter that controls the degree of nonlinearity of the weight. The greater the final weight w_{i,j}, the greater the contribution of the i-th image at position j.
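The following sketch shows one consistent reading of the adaptive-weight rule described above for gray-scale exposure stacks: normalised weights driven by the gradient magnitude, with the small constant ϵ and the nonlinearity exponent α. Since the original formula is not reproduced in the text, the exact functional form is an assumption.

import numpy as np

def gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def adaptive_weights(images, alpha=2.0, eps=1e-6):
    """Normalised, gradient-driven weights: w_i = (G_i + eps)**alpha / sum_k (G_k + eps)**alpha."""
    grads = np.stack([gradient_magnitude(im) for im in images], axis=0)
    raised = (grads + eps) ** alpha
    return raised / raised.sum(axis=0, keepdims=True)

def weighted_fusion(images, weights):
    # per-pixel weighted average of the exposure stack
    stack = np.stack(images, axis=0).astype(np.float64)
    return (weights * stack).sum(axis=0)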
On the basis of local similarity fusion, the adaptive weight method can be introduced to further improve the fusion effect. In this method, gradient information is used to calculate the weights in order to better preserve image details. In addition, the fusion method based on local similarity can be combined with other extensions, such as multiscale fusion and local tone mapping, to further improve the fusion effect.
Multiscale CLAHE Method.
In order to further retain more detailed information of the dehazed image, this paper uses the CLAHE algorithm to process the dehazed image. In CLAHE, multiscale processing can further improve the enhancement effect. By analyzing the image at different scales and extracting feature information at different levels, the contrast and details of the image after defogging can be effectively improved, while avoiding excessive noise enhancement [29].
The details of CLAHE are as follows:
(1) The original image is divided into multiple scales, which can be layered using methods such as the Gaussian or Laplacian pyramid. At the bottom of the pyramid the image is largest and more detail is available, while as the number of layers increases the image size gradually decreases and the details gradually become blurred. Here F is the original image, F_{i,j} is the i-th subimage at the j-th scale, and h_{i,j} is the Gaussian kernel of scale i and subimage j.
(2) CLAHE is applied to the image at each scale. First, the image is divided into small blocks; then the pixels within each block are histogram-equalized; finally, the pixel values within the blocks are interpolated. Here C_{i,j} is the cumulative distribution function of pixel values in the subimage and K is the maximum value of the histogram.
(3) In the locally enhanced subimages, the boundaries of each subimage are smoothed using interpolation, where F′_{i,j} is the smoothed subimage, w_{i,j} is the interpolation weight, and S_{i,j} is the interpolation window.
(4) The final enhanced image is obtained by combining the enhanced results of all scales, where F′ is the final enhanced image, n is the number of scales, and m is the number of subimages at each scale.
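A compact sketch of the multiscale CLAHE idea using OpenCV's CLAHE implementation; the pyramid depth, clip limit, tile grid, and the simple averaging used to recombine the scales are illustrative assumptions, since the paper's recombination equations are not reproduced above.

import numpy as np
import cv2

def multiscale_clahe(gray_u8, levels=3, clip=2.0, grid=(8, 8)):
    """Apply CLAHE to each Gaussian-pyramid level of an 8-bit gray image,
    upsample the results to full size, and average them."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
    h, w = gray_u8.shape[:2]
    pyramid = [gray_u8]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    enhanced = []
    for level in pyramid:
        e = clahe.apply(level)
        if e.shape[:2] != (h, w):
            e = cv2.resize(e, (w, h), interpolation=cv2.INTER_LINEAR)
        enhanced.append(e.astype(np.float32))
    return np.clip(np.mean(enhanced, axis=0), 0, 255).astype(np.uint8)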
3.6. Spatial Linear Saturation Adjustment. The multiscale CLAHE method can take the detailed information of images at different scales into account, making the contrast enhancement more balanced and natural. At the same time, multiscale processing also avoids problems such as over-enhancement or distortion that may occur with the plain CLAHE algorithm. However, multiscale processing increases the computational complexity and storage requirements, which is what we address next. According to the CAP dehazing algorithm [15], as the fog concentration changes, the difference between the brightness and the saturation of the image also changes. Based on this theory, Zhu [30] proposed a method to enhance dehazing performance and robustness and to balance color saturation during the dehazing process. Here V_F = Σ_{i=1}^{m} Σ_{j=1}^{n} v^F_{ij} and V_I = Σ_{i=1}^{m} Σ_{j=1}^{n} v^I_{ij}, where v^F_{ij} and v^I_{ij} are the brightness of pixel (i, j) in the fused image F and in the foggy image I, respectively; S_F = Σ_{i=1}^{m} Σ_{j=1}^{n} s^F_{ij} and S_I = Σ_{i=1}^{m} Σ_{j=1}^{n} s^I_{ij}, where s^F_{ij} and s^I_{ij} are the saturation of pixel (i, j) in the fused image F and in the foggy image I, respectively; ω_F is the difference between the brightness and saturation of the fused image F, and ω_I is the difference between the brightness and saturation of the foggy image I.
Parameter Settings and Datasets.
The experimental computer is configured with an Intel(R) Core(TM) processor and 16.00 GB RAM. In the improved color channel transfer algorithm, equation (5) uses a Gaussian kernel, and a k value of 5 is taken to initialize the cluster centers so that they lie in the data space. During the gamma correction phase, the artificial exposure values are fixed at c ∈ {2, 3, 4, 5}. It is difficult to collect matched real fog-free and foggy images for dehazing research, so artificially synthesized fog images are usually required. In this paper, the D-HAZY synthetic fog dataset [31] and fog images collected in real scenes are mainly used to test and compare the performance of the algorithm on outdoor images. D-HAZY contains 35 pairs of foggy images and corresponding fog-free outdoor images (ground truth). The variation range of the atmospheric light is 0.8–1.0, and the variation range of the scattering parameter is 0.04–0.2. To compare with previous state-of-the-art methods, we used the PSNR, SSIM, GMSD, and FSIM indicators for comparison tests on a dataset containing 500 indoor images and 500 outdoor images.
Comparing rows 2 and 7 of Figure 4, it can be seen that the CAP method shows good dehazing performance in thin-mist areas, but in rows 1, 10, and 11 of Figure 4, as the fog concentration increases, the dehazing performance of the CAP method gradually decreases, the texture details of white objects (row 9) become blurred, and some details in the image are difficult to make out, such as the texture of branches (rows 3 and 4). From the second and ninth rows of Figure 4, it can be seen that the FADE method is accompanied by color distortion and loss of detail while dehazing, which reduces the visual quality of the image. The AMEF and CODHWT methods can effectively reconstruct sharp images from foggy images. In the sky areas in rows 6 and 8 of Figure 4, the background color of the image after dehazing with the AMEF method is closer to the original image than that of the CODHWT method. Both the MAME and DePAMEF methods achieve good detail visibility and preservation of fog-free areas, but the image after DePAMEF dehazing retains some residual haze, resulting in increased color artifacts in the area where the house meets the sky.
The algorithm proposed in this paper compensates for the loss between channels through the color channel transfer method before dehazing and effectively reduces the interference between channels; the content of the image is clearly restored after dehazing, with buildings and vehicles in the distance clearly visible and details evident. Spatial linear saturation adjustment and contrast correction are applied to the multiexposure image fusion, and the dehazed image is more consistent with human visual observation.
Zhang et al. [38] proposed FSIM, arguing that the human visual system mainly understands images based on low-level features, and combined phase consistency, color features, gradient features, and chromaticity features to measure the local structural information of images. GMSD was proposed by Xue [39] in 2014, based on the observation that gradient maps are sensitive to image distortion and that distorted images with different structures suffer different degrees of quality degradation; it is a full-reference image evaluation method characterized by high accuracy and low computational cost.
PSNR evaluates image quality by calculating the pixel-wise error between the original image and the dehazed image. The PSNR value is higher when the error between the dehazed image and the original image is smaller. PSNR is calculated as PSNR = 10 · log10(MAX^2 / MSE), where MSE is the mean squared error and MAX is the maximum pixel value of the original image. SSIM is used to measure the similarity between the original image and the dehazed image. SSIM uses the mean to estimate brightness, the standard deviation to estimate contrast, and the covariance to measure structural similarity, as SSIM(x, y) = ((2 μ_x μ_y + C1)(2 σ_xy + C2)) / ((μ_x^2 + μ_y^2 + C1)(σ_x^2 + σ_y^2 + C2)). The higher the SSIM value, the less distorted the image, indicating a better dehazing result.
Here μ_x and μ_y are the means of x and y, σ_x^2 and σ_y^2 are the variances of x and y, σ_xy is the covariance between x and y, and C1 and C2 are constant coefficients.
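For reference, both metrics can be computed directly from the formulas above. The sketch below uses the common choice C1 = (0.01·MAX)^2 and C2 = (0.03·MAX)^2 and evaluates SSIM from global statistics rather than over local windows; both choices are simplifying assumptions.

import numpy as np

def psnr(ref, test, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE)
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Single-window SSIM following the equation above."""
    x = x.astype(np.float64); y = y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))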
FSIM is based on phase consistency and gradient amplitude; the larger the value, the closer the dehazed image is to the original image. GMSD is designed primarily to provide credible evaluation capability while minimizing computational overhead.
We calculate the PSNR of the images processed by the different methods, reported in Table 1. It can be seen from Figure 5 that both MAME and the proposed method achieve good results in removing dense fog, and compared with MAME, our method can effectively remove dense fog while restoring the color information of the sky area. For the other images, the method proposed in this paper also achieves better results.
The SSIM values for the images in Figure 5 are shown in Table 2. As can be seen from the table, AMEF, CAP, and the proposed method obtain higher SSIM values; the SSIM value of the proposed method reaches 0.9073, the best performance. For the Tiananmen image in Figure 5, the SSIM value of our method is 0.9192, second only to CAP. As shown in Table 3, the method proposed in this paper is superior to the other dehazing methods in recovering image structure. This is because the multiexposure fusion dehazing method fuses images with different exposure levels and better preserves the structural features of the image. Table 4 shows the FSIM values. It can be seen from the table that the dehazed images produced by this method have a high similarity with the original haze-free images, with FSIM scores greater than 0.90. This is because we use gamma correction to acquire images with different exposure levels and perform multiscale fusion using the classical Laplace pyramid method. The method proposed in this paper attempts to obtain the best exposure for each area, so the FSIM score of the image is high.
Conclusion
In this paper, an artificial multiexposure image fusion algorithm for single image dehazing is proposed. First, the color channel transfer method based on k-means is used to compensate for the channels with serious information loss. Then, artificial gamma correction produces a series of underexposed images, which are fused into a dehazed image with the improved Laplace pyramid fusion scheme; finally, in order to obtain a better visual effect after dehazing, contrast and saturation corrections are applied to enhance the dehazed image and retain more image detail. Comparative experiments with other mainstream dehazing methods show that the proposed method obtains a good dehazing effect on both light-fog and dense-fog images and achieves good results on the various evaluation indicators. In future work, the complexity of the algorithm needs to be further optimized to improve its practicability. In addition, fog and haze images from various scenarios could be defogged in a targeted way to obtain better effects.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
Shaojin Ma and WeiGuo Pan conceptualized the study; Huaiguang Guan performed data curation; Songyin Dai and Bingxin Xu performed formal analysis; WeiGuo Pan and | 5,975.2 | 2023-05-15T00:00:00.000 | [
"Computer Science"
] |
Building a Hydrodynamics Code with Kinetic Theory
We report on the development of a test-particle based kinetic Monte Carlo code for large systems and its application to simulate matter in the continuum regime. Our code combines advantages of the Direct Simulation Monte Carlo and the Point-of-Closest-Approach methods to solve the collision integral of the Boltzmann equation. With that, we achieve a high spatial accuracy in simulations while maintaining computational feasibility when applying a large number of test-particles. The hybrid setup of our approach allows us to study systems which move in and out of the hydrodynamic regime, with low and high particle densities. To demonstrate our code's ability to reproduce hydrodynamic behavior we perform shock wave simulations and focus here on the Sedov blast wave test. The blast wave problem describes the evolution of a spherical expanding shock front and is an important verification problem for codes which are applied in astrophysical simulation, especially for approaches which aim to study core-collapse supernovae.
Introduction
Hydrodynamics is a frequent approach to describe the evolution of matter via its thermodynamic properties. Hereby, the physical particles the medium is comprised of have mean-free-paths λ which are much smaller than a characteristic length scale L of the studied system. If the so-called Knudsen number K_n is small, with K_n = λ/L ≪ 1, the continuum approximation can be applied and the evolution of matter is described by the Navier-Stokes equations. The latter are a set of coupled differential equations and are derived from the conservation of mass, momentum, and energy density as well as the assumption of local thermal equilibrium [1]. The Navier-Stokes equations can be solved numerically either by the Eulerian, i.e. grid-based, method or the Lagrangian approach, also known as Smooth Particle Hydrodynamics [2]. Numerical hydrodynamics is a widely applied tool in science and engineering. However, if the mean-free-path of physical particles becomes large, resulting in K_n ≳ 1, the continuum approximation breaks down. As a consequence, the Navier-Stokes equations cannot be applied anymore to evolve the system and transport equations have to be solved instead.
Kinetic Theory
In transport theory the many-body problem is solved via modeling the evolution of the particle phase-space density function f(x, p, t) with position x, momentum p, and time t. Hereby, the equation of motion of the n-body density matrix is rewritten into a one-particle evolution, coupled to an equation of motion for the n-body correlation function. Truncation of the latter at the three-body level leaves a system of coupled non-linear equations of motion for the one-particle density function and the two-body correlation function. To derive the transport equations, the Wigner transform is applied to the particle density function and is then expanded into sums over all possible single-particle states. Rearranging the equation of motion into a collisionless one-body propagation and collision terms results in the so-called Boltzmann transport equation [3],
∂f/∂t + v · ∇_x f + F · ∇_p f = I_coll[f] .    (1)
The left-hand side of eq. (1) represents the collisionless motion of an ensemble of particles with velocity v in an external force field F, while the change in the particles' phase space distribution function f = f(x, p, t) due to binary collisions is given by the collision integral on the right-hand side. The probabilities of particle interaction depend hereby on their interaction cross-sections σ, which are connected to the particle mean-free-paths via σ ∝ 1/λ. Numerically, the Boltzmann transport equation can be solved by the test-particle method. Hereby, the phase space distribution function is represented by a sum of delta functions [4],
f(x, p, t) = Σ_{i=1}^{N} δ(x − x_i(t)) δ(p − p_i(t)) ,    (2)
where N is the number of test-particles in the simulation. Inserting f(x, p, t) into eq. (1) results in 2N coupled first-order differential equations of motion,
dx_i/dt = v_i ,   dp_i/dt = F + C(p_i) ,    (3)
where C(p_i) represents the effects of two-body collisions on the i-th test-particle's momentum. Kinetic methods are typically applied for rarefied gases with large Knudsen numbers. Examples of transport model applications are studies of hypersonic flow [5], nano-scale devices [6], particle production in heavy-ion collisions [7,8,9,10], the dynamics of inertial confinement fusion (ICF) capsules [11], and astrophysics [12]. However, the application areas can be extended to describe systems that move in and out of the continuum limit. Especially heavy-ion collisions and core-collapse supernovae contain baryons which are in hydrodynamic equilibrium, as well as particles with mean-free-paths which can be very large. In heavy-ion collisions the latter are e.g. π mesons, while for core-collapse supernovae, neutrinos can be trapped, free-streaming, or diffusing in the baryonic matter. In core-collapse supernovae, neutrinos are typically evolved by coupling solvers for the neutrino transport equations to the evolution of the hydrodynamics equations. However, while for one-dimensional core-collapse supernova simulations the Boltzmann transport equation can be solved exactly [13], multi-dimensional calculations are currently computationally not feasible and can only be performed with approximations to the neutrino transport [14]. On the other hand, Monte Carlo neutrino transport has been suggested as an alternative approach since it is expected to scale better for multi-dimensional simulations [15]. Transport codes are able to handle matter in different states, from low density rarefied gases to high density matter in the continuum regime [16]. However, since the simulation of matter in the continuum regime requires the frequent interaction of particles with each other, large values of N are computationally challenging.
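A minimal sketch of the collisionless part of the test-particle evolution implied by the equations of motion above, assuming non-relativistic kinematics; the collision term C(p_i) is applied in a separate step.

import numpy as np

def stream(x, p, force, mass, dt):
    """Collisionless drift step for the test-particle ensemble:
    dp/dt = F and dx/dt = v = p/m (non-relativistic assumption)."""
    p_new = p + force * dt
    x_new = x + (p_new / mass) * dt
    return x_new, p_new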
One of the most efficient approaches to solve the collision integral is the Direct Simulation Monte Carlo (DSMC) method [17]. Hereby, the simulation space is partitioned into a scattering grid and interaction partners are chosen randomly amongst the particles within a cell. The resulting simulation times scale as O(N log N) and can be reduced even further by performing the calculations in parallel on multiple processors. However, a consequence of the probabilistic choice of interaction partners is that spatial details cannot be resolved to smaller scales than the size of a grid cell. Furthermore, the finite separation between collision partners and the instantaneous character of interactions can lead to causality violations in a relativistic regime [10]. An approach which minimizes the distance between interaction partners is the Point-of-Closest-Approach (PoCA) method. Hereby, scattering partners are determined as test-particles whose paths cross within one timestep. The PoCA method has been successfully applied in modeling physical systems with a limited number of constituents, for example the simulation of heavy-ion collisions [18,19,20]. While such simulations have very good spatial resolution, the PoCA method typically requires the comparison of each particle with every other particle of the simulation and has a computational scaling of O(N²). The time to search for interaction partners can be reduced by a spatial sort of particles. Nevertheless, for large values of N, PoCA methods are computationally very expensive.
Combination of the DSMC and PoCA methods
In our approach we combine the advantages of the DSMC and the PoCA techniques to solve the collision integral of the transport equations. Our final goal is to study astrophysical systems such as core-collapse supernovae and matter in ICF capsules via the kinetic scheme. Similar to the DSMC method we divide the simulation space into a grid. However, interaction partners are not chosen randomly but are determined from the test-particles in a grid cell and its neighboring cells via the PoCA method. With that, we ensure a spatial accuracy which is higher than in usual DSMC simulations. To avoid a computationally expensive spatial sorting we connect and propagate particles in their corresponding cell via linked lists. Furthermore, the setup of the scattering grid allows us to use multiple processors and determine interaction partners for different cells in parallel. With that, though computationally more expensive than DSMC methods, our algorithm has a much smaller computational time than traditional PoCA methods and can thereby describe systems with a large number of interacting particles. Collision partners are determined via three steps [21]. First, possible interaction partners are selected as particles reaching their point of closest approach during the timestep ∆t. For that we determine the sign of the crossing number χ, computed from r_rel and v_rel, the relative position and velocity vectors of particles A and B at the current as well as the next timestep. A negative value of χ indicates that within ∆t, particles A and B reach their distance of closest approach and their paths cross. As a next step we determine the collision time t_c via the closest-approach expression of [21]. If the distance of closest approach between particles A and B is larger than the sum of their effective radii r_eff, the collision time t_c takes imaginary values and the interaction does not take place. These first two steps typically leave several potential interaction partners for one particle of interest. The final collision partner is then determined from this sample as the particle with the shortest distance to the particle of interest. After collision partners have been assigned to each other, we perform the scattering in the center-of-mass frame of each collision pair. Previous approaches showed that a limitation of interactions to unambiguously determined scattering partners can lead to a significant underestimate of the collision rate [10]. In our work, we allow different particles to have the same collision partner and perform the corresponding interactions sequentially during one timestep ∆t [21]. We find that typically all test-particles interact with each other when matter is in the hydrodynamic regime. Once all collisions have been performed, each particle's position is updated using t_min, the time at which the particles reach their distance of closest approach. This is an improvement over the approach which was used in [21]. Large particle densities and small mean-free-paths can lead to negative collision times t_c (see eq. (6)). In [21] this case was treated as an instantaneous collision. However, by allowing particles to reach their point of closest approach during ∆t, we achieve a better localization of shock fronts, which is important in e.g. astrophysical simulations of core-collapse supernovae.
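The sketch below illustrates a standard non-relativistic closest-approach test for a candidate pair within one timestep; it is meant only to convey the idea and does not reproduce the exact crossing-number and collision-time expressions of Ref. [21] used in the code.

import numpy as np

def closest_approach(xA, xB, vA, vB, r_eff_sum, dt):
    """Return (collides, t_ca): whether A and B pass within the sum of their
    effective radii during dt, and the time of closest approach (illustrative)."""
    r_rel = xA - xB
    v_rel = vA - vB
    v2 = np.dot(v_rel, v_rel)
    if v2 == 0.0:
        return False, None
    t_ca = -np.dot(r_rel, v_rel) / v2                    # time of closest approach
    d2_min = np.dot(r_rel, r_rel) - np.dot(r_rel, v_rel) ** 2 / v2
    collides = (0.0 <= t_ca <= dt) and (d2_min <= r_eff_sum ** 2)
    return collides, t_ca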
Sedov shock test
Shock wave studies are very well suited to demonstrate a code's ability to model hydrodynamic behavior. To reach the continuum regime, i.e. K_n ≪ 1, we set the particle mean-free-path to be very small with respect to the width of a scattering grid cell. Since analytic shock wave solutions are generally obtained for an ideal gas equation-of-state, they require a simple description of particle interactions in the form of elastic scattering. Hereby, to facilitate a simple treatment of particle interactions we assign the particle degrees of freedom f via the number of dimensions in the simulation. This affects the equation-of-state of the studied matter via its heat capacity ratio γ = 1 + 2/f, which becomes γ = 2 for a two-dimensional simulation and γ ∼ 1.67 in the case of three dimensions. It has already been demonstrated that the kinetic approach can reproduce the Sod and Noh shock problems [21]. The current work is focused on the Sedov blast wave test. While the Sedov test is a standard test for hydrodynamic codes, it has not been discussed for kinetic schemes. The test describes an expanding spherical shock front and is therefore of special interest for codes which aim to simulate astrophysical problems, such as core-collapse supernovae. The initial setup of the Sedov test is a simulation space filled with matter at uniform density n_in and vanishing pressure. An explosion is set up via a point-like deposition of energy E_blast [22,23]. The result is a spherical shock wave that moves outwards, leaving behind matter with decreasing densities towards the center. While the bulk velocity of matter has a radial dependence v_b ∼ r/t, with t being the time and r the radial distance from the center of the simulation space, the position of the blast wave follows the self-similar scaling r_blast(t) = α (E_blast t^2 / n_in)^(1/(d+2)), with a peak density of n(r) = n_in (γ + 1)/(γ − 1). Here, d is the number of dimensions in the simulation and α is a constant of the order one. As the shock wave moves away from the center it leaves behind matter at vanishingly low densities. This can be a challenging problem for simulations operating with a finite number of particles since densities cannot become arbitrarily small [24]. Furthermore, as in all numerical studies, a general problem is the finite spatial resolution of a simulation which typically leads to a broadening of shock fronts. For the Sedov test, the latter impacts the density profile, reducing its peak values. We perform the Sedov test in two dimensions with N = 3.5 × 10^7 test-particles distributed randomly in 250 × 250 bins over −0.04375 ≤ x, y ≤ 0.04375. The blast wave is initialized around x = y = 0 with a radius of r_blast = ∆x. The mean-free-path is chosen as λ = 0.001 ∆x. While the particle velocities outside the blast region are initialized with absolute values of v_in = 3, we deposit E_blast by setting v_in = 37180 for particles with radial distance r ≤ r_blast. For this study we chose an adaptive timestep with ∆t = ∆x/v_max, where v_max is the maximum absolute particle velocity at the current timestep. With 1840 particles in the blast region, the resulting explosion energy is E_blast ∼ 1.27176 × 10^12. Figures 1(a)-(c) show the density, radial velocity, and pressure profiles at timestep 420 with t = 2.57877 × 10^−5, averaged over r, together with the analytic solution. The latter is obtained with the publicly available sedov3.f code [25]. Figures 2(a)-(c) show the particle numbers, bulk velocity, and pressure per bin in the simulation space at the same timestep.
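For reference, the self-similar scaling and the quoted initial conditions can be written down directly; setting α = 1, assuming isotropic velocity directions, and fixing the random seed are choices made here purely for illustration.

import numpy as np

def sedov_radius(t, E_blast, n_in, d=2, alpha=1.0):
    """Blast-wave position r(t) = alpha * (E_blast * t**2 / n_in)**(1/(d+2));
    alpha is the order-one constant mentioned above (set to 1 for illustration)."""
    return alpha * (E_blast * t ** 2 / n_in) ** (1.0 / (d + 2))

def peak_density(n_in, gamma):
    # density jump at the shock front for the ideal-gas value quoted above
    return n_in * (gamma + 1.0) / (gamma - 1.0)

def init_sedov_2d(n_particles, box=0.04375, r_blast=0.0875 / 250, v_in=3.0, v_blast=37180.0, seed=0):
    """Initialise test-particles with the quoted parameters: uniform random positions,
    speed v_in outside the blast region and v_blast inside (isotropic directions assumed)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-box, box, size=(n_particles, 2))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_particles)
    speed = np.where(np.linalg.norm(x, axis=1) <= r_blast, v_blast, v_in)
    v = speed[:, None] * np.stack([np.cos(phi), np.sin(phi)], axis=1)
    return x, v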
We find that the density of matter at the shock front is slightly reduced. However, as previously explained, this is an expected result of simulation due to the finite spatial resolution. The particle number in the center of the simulations has higher values than the analytic solution which can be attributed to the finite minimal number of test-particles in one cell. The small value of the latter on the other hand leads to large fluctuations in radial velocity at the center of the simulation as seen in Figs. 1(b) and 2(b) as well as in the pressure profile shown in Fig. 1(c). The pressure is obtained from the stress tensor (see [21] for more details) via a radial average of the pressure per bin, shown in Fig. 2(c). Overall, we see a very good agreement between the analytic solution and the kinetic simulations.
Summary
In this work we present a transport code which combines the computational advantages of the Direct Simulation Monte Carlo technique with the spatial resolution of the Point-of-Closest-Approach method to solve the collision integral of the Boltzmann equation. This enables us to simulate systems with a large number of test-particles in the hydrodynamic regime with a high spatial accuracy. To ensure that the hydrodynamic regime can be successfully reproduced we perform shock wave studies, where the present work is focused on the Sedov shock wave test. To facilitate the comparison of our simulation to analytic solutions of shock wave propagation, we model particle interactions via elastic scatterings with the number of degrees of freedom given by the number of dimensions in the simulation. The current work verifies that our code is able to handle matter in the hydrodynamic regime and reproduce the Sedov shock with high accuracy. To our knowledge, no particle methods have addressed the Sedov shock test before, and our work represents the first such study. In the future we aim to apply our code to study the dynamics of core-collapse supernovae and matter in inertial confinement fusion capsules. | 3,631.6 | 2013-05-18T00:00:00.000 | [
"Physics"
] |
Direct access to the moments of scattering distributions in x-ray imaging
The scattering signal obtained by phase-sensitive x-ray imaging methods provides complementary information about the sample on a scale smaller than the utilised pixels, which offers the potential for dose reduction by increasing pixel sizes. Deconvolution-based data analysis provides multiple scattering contrasts but suffers from time consuming data processing. Here, we propose a moment-based analysis that provides equivalent scattering contrasts while speeding up data analysis by almost three orders of magnitude. The availability of rapid data processing will be essential for applications that require instantaneous results such as medical diagnostics, production monitoring and security screening. Further, we experimentally demonstrate that the additional scattering information provided by the moments with an order higher than two can be retrieved without increasing exposure time or dose.
In the context of phase-sensitive x-ray imaging techniques, scattering refers to the contrast channel arising from sample inhomogeneities that are smaller than the utilised pixels. The utilisation of such sub-pixel signals allows for increasing the pixel size while maintaining the signal and simultaneously decreasing dose and/or scan times significantly. The sensitivity towards sub-pixel information has been established for different x-ray imaging methods, such as analyser-based imaging (ABI), 1,2 grating interferometry (GI), 3-5 speckle-based imaging, 6-8 and edge-illumination (EI). 9,10 The potential of x-ray scattering is investigated for mammography, 10-12 bone structure determination, 13 and the diagnosis of several pulmonary diseases in both small [14][15][16] and large animals. 17 Commonly used data analysis procedures provide a single contrast related to sub-pixel information, which is called the dark-field 3 or the scattering signal. 10 An alternative deconvolution-based approach that provides multiple and complementary scattering contrasts was originally developed for GI 18 and extended to tomography 19 and recently translated to EI. 20 In some applications, it was shown that deconvolution can provide a higher contrast to noise ratio and improved dose efficiency. 21,22 It was also demonstrated that the complementary contrasts can be exploited for quantitative imaging 23 without the need for additional scans required by other approaches. 5,24,25 While the deconvolution-based analysis is suitable for ABI, GI, and EI, the approach proposed below is not directly applicable to GI due to the sinusoidal nature of the provided signal. Thus, we will introduce the approach for EI and note that all results are directly applicable to ABI.
EI is a non-interferometric, phase-sensitive x-ray imaging technique that uses a pair of apertured masks (Fig. 1). The pre-sample mask confines the incident x-rays into smaller beamlets, which are broadened by the sample due to scattering. The broadening is transformed into a detectable intensity variation by the detector mask, which features apertures covering most of the detector pixels. The comparably large structure sizes of the optical elements (typically tens of microns) allow for simple mask fabrication 23 and render EI robust against vibrations and thermal variations. EI is readily compatible with laboratory-based x-ray tubes due to the achromaticity of the optical elements, and the entire x-ray spectrum contributes to the signal. 26,27 Accessing multiple scattering contrasts by deconvolution is based on the following approach. Scanning the pre-sample mask laterally by a fraction of its period provides a Gaussian-like intensity curve in each detector pixel. Repeating the scan with and without the sample yields the signals s(a) and f(a), respectively. Here, the scattering angle a is defined in a plane perpendicular to the line apertures of the utilised mask (Fig. 1). Scattering in the orthogonal direction does not change the detectable signal and, thus, can be omitted for the rest of the discussion. The angularly resolved scattering distribution g(a), which represents the sample's scattering signal within one pixel, is then implicitly defined by 9,10,20,28,29
s(a) = f(a) ⊗ g(a),    (1)
where ⊗ denotes the convolution operator. The scattering distribution g(a) can be accessed from experimental data by deconvolving s(a) with f(a), and iterative Lucy-Richardson deconvolution 30,31 has been established as a reliable method. 20,21 The k-th iteration step of the deconvolution is performed by computing
g_{k+1} = g_k · [ (s / (f ⊗ g_k)) ⊗ f_m ],    (2)
where f_m denotes f mirrored at the origin. Usually, the sample signal is chosen as the starting value g_0 = s. The iteration preserves a non-negativity constraint and is guaranteed to converge to the maximum likelihood solution if the experimental noise is given by Poisson statistics, which is commonly the case in x-ray imaging. 32,33 In order to retrieve multiple contrasts relating to the shape of g, a moment analysis can be applied to the scattering distributions. Depending on normalisation and centralisation, different definitions of the moments need to be distinguished. The un-normalised, un-centralised moments of an arbitrary function h(a) are given by
M_n(h) = ∫ a^n h(a) da,    (3)
where n is an integer denoting the order of the moment. Dividing by M_0 yields the normalised, un-centralised moments
M̄_n(h) = M_n(h) / M_0(h) for n ≥ 1,    (4)
and shifting by M̄_1 leads to the normalised, centralised moments
M̂_n(h) = ∫ (a − M̄_1)^n h(a) da / M_0 for n ≥ 2.    (5)
It has been experimentally demonstrated that M_0(g) corresponds to absorption, M̄_1(g) to the differential phase signal, and M̂_2(g) to the scattering strength. 20 The relation of these moments to sample properties is provided in Ref. 23.
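A compact numerical sketch of the iteration in Eq. (2) for sampled one-dimensional scans; normalising the flat-field kernel and the small eps guard against division by zero are added for numerical stability and are not part of the published scheme.

import numpy as np

def lucy_richardson(s, f, iterations=1000, eps=1e-12):
    """Lucy-Richardson deconvolution of the sample scan s by the flat-field scan f,
    following Eq. (2) with g_0 = s and f_m the mirrored flat-field."""
    s = np.asarray(s, dtype=np.float64)
    f = np.asarray(f, dtype=np.float64)
    f = f / f.sum()              # kernel normalisation (stability assumption)
    f_m = f[::-1]                # f mirrored at the origin
    g = s.copy()                 # g_0 = s
    for _ in range(iterations):
        blurred = np.convolve(g, f, mode='same')
        g = g * np.convolve(s / np.maximum(blurred, eps), f_m, mode='same')
    return g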
Given typical noise levels in experiments, about 1000 iteration steps are required to ensure convergence of the Lucy-Richardson deconvolution, which may lead to cumbersome data processing times. For example, data processing of the dragonfly in Ref. 20 took around 1 h for a 400 × 300 pixel field of view on a standard desktop PC. This renders iterative deconvolution unsuitable for time sensitive applications.
Therefore, we propose an alternative data analysis approach that uses the known moments of convolutions, 34 the derivation of which is briefly sketched in the following. The moments defined in Eq. (3) can be expressed through the derivatives of the Fourier transform of h evaluated at the origin,
M_n(h) ∝ (d^n ĥ/dq^n)(q = 0),    (6)
where the symbol ^ denotes the Fourier transform and q the variable in Fourier space. Since s is given by a convolution, its Fourier transform corresponds to a product,
ŝ(q) = f̂(q) ĝ(q).    (7)
Inserting Eq. (7) into Eq. (6) and dividing by M_0 leads to
M̄_n(s) = Σ_{k=0}^{n} (n choose k) M̄_k(f) M̄_{n−k}(g),    (8)
with the binomial coefficient (n choose k). Similar equations hold true for the normalised, centralised moments M̂_n, 34 which can be solved for the moments of g. The result for the first five moments is
M_0(g) = M_0(s) / M_0(f),    (9)
M̄_1(g) = M̄_1(s) − M̄_1(f),    (10)
M̂_2(g) = M̂_2(s) − M̂_2(f),    (11)
M̂_3(g) = M̂_3(s) − M̂_3(f),    (12)
M̂_4(g) = M̂_4(s) − M̂_4(f) − 6 M̂_2(f) M̂_2(g).    (13)
First moment terms do not appear in the equations with n > 1 because M̂_1 = 0. For the scattering width M̂_2, the above equation is in agreement with published results. 36 Since the moments of s and f can be directly calculated from experimental data, Eqs. (9)-(13) provide direct access to the moments of the scattering distribution g without the need for time consuming iterative deconvolution. In order to experimentally compare the results of deconvolution and direct moment analysis, we used an EI-based imaging system at University College London. A Rigaku MM007 rotating anode with a Mo target was used as an x-ray source and operated at a 25 mA current and a 40 kVp voltage. The pre-sample mask consisted of a series of Au lines on a graphite substrate with a pitch of 79 µm and an opening of 10 µm, while the detector mask had a pitch of 98 µm and an opening of 17 µm. Both masks were manufactured by Creatv Microtech (Potomac, MD). The x-ray detector was a Hamamatsu C9732DK flat panel sensor featuring a binned pixel size of 100 µm. The sample to detector distance was 0.32 m, and the total setup length was 2 m.
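For reference, the direct moment analysis of Eqs. (9)-(13) amounts to a few lines of NumPy once s(a) and f(a) are sampled on a common angular grid; the trapezoidal integration over that grid is an implementation assumption.

import numpy as np

def moments(a, h, n_max=4):
    """Return M0, the normalised mean (M̄1), and the normalised central moments M̂2..M̂n_max of h(a)."""
    m0 = np.trapz(h, a)
    mean = np.trapz(a * h, a) / m0
    central = [np.trapz((a - mean) ** n * h, a) / m0 for n in range(2, n_max + 1)]
    return m0, mean, central

def scattering_moments(a, s, f):
    """Direct moment analysis per Eqs. (9)-(13): moments of the scattering
    distribution g from the sample scan s and the flat-field scan f."""
    s0, s1, (s2, s3, s4) = moments(a, s)
    f0, f1, (f2, f3, f4) = moments(a, f)
    g0 = s0 / f0                   # absorption
    g1 = s1 - f1                   # differential phase
    g2 = s2 - f2                   # scattering width
    g3 = s3 - f3
    g4 = s4 - f4 - 6.0 * f2 * g2
    return g0, g1, g2, g3, g4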
The sample was a dragonfly, which was known to provide a sufficient signal for the first five moments. The sample mask was scanned over one pitch with 32 steps and an exposure time of 25 s per step. The same dataset was used for the deconvolution [Eq. (2)] and moment analysis [Eqs. (9)-(13)]. The resulting scattering contrasts (Fig. 2) show an excellent visual agreement between the two approaches, while data processing for direct moment analysis was about 600 times faster than for deconvolution. Furthermore, direct moment analysis eliminates the number of iteration steps as a necessary parameter of the deconvolution. Table I presents a performance comparison of deconvolution and moment analysis. The high degree of visual agreement between the approaches is confirmed by the correlation factors (≥0.9 for all contrasts). Columns 3 and 4 compare the standard deviation of the signals in a 50 × 50 pixel background area as a measure of the noise level in the two analysis approaches. With the exception of M̂_2 (details discussed below), both approaches deliver similar noise levels.
For the 2nd moment, the scatter plot of values retrieved by deconvolution and moment analysis (Fig. 3) reveals a discrepancy for scattering strengths that are small compared to the width of the flat-field scan (M̂_2(g) ≲ 0.05 × M̂_2(f)). In this case, the deconvolution approach [Eq. (2)] does not retrieve the correct δ-shaped signal for g due to the presence of noise, 22 but will retrieve a signal with M̂_2(g) > 0. The moment analysis, on the other hand, is not subject to such a restriction. The difference in bias between the two approaches is also reflected in the mean of the background areas, which are M̂_2 = 2.1 × 10^−11 rad² for deconvolution and M̂_2 = 2.8 × 10^−12 rad², and thus an order of magnitude smaller, for direct moment analysis. However, Table I shows that deconvolution provides M̂_2 values with a smaller standard deviation than moment analysis in the background area. Nevertheless, moment analysis would be the preferred option for quantitative data analysis. For large scattering values (M̂_2(g) ≥ 0.1 × M̂_2(f)), the two approaches deliver the same sensitivity (bracketed entries for M̂_2 in Table I).
Finally, we investigated the influence of the acquired number of sample points on the functions s(a) and f(a) (i.e., the number of images per scan). Since at least n + 1 scan points are required for the linear independence of the nth moment, increasing the number of scan points increases the amount of accessible and complementary scattering information.
To this end, we acquired an additional dataset, where we varied the number of scan points from 5 to 11, while keeping the total exposure time constant (200 s). We used the standard deviation of the different scattering contrasts in a background area retrieved by direct moment analysis to quantify the dependency. As can be seen in Fig. 4, the noise levels vary within a small 15% interval, which implies that the sensitivity of the different contrasts does not change significantly with the number of scan points. In essence, this means that moment analysis provides the additional scattering contrasts (i.e., moments with order higher than 2) without the need to increase total exposure time or dose.
In conclusion, we have established a direct moment analysis as an alternative approach for retrieving multiple scattering contrasts for EI. We also suggest that this approach can be readily extended to ABI. Direct moment analysis delivers results equivalent to previously utilised deconvolution, while speeding up data processing by almost three orders of magnitude and providing unbiased values for small or absent scattering signals. Furthermore, we have experimentally demonstrated that increasing the number of scan points while keeping total exposure time and dose constant provides additional scattering information without losing sensitivity. Fast data processing that provides reliable scattering contrasts will be crucial for applications demanding rapid feedback, such as medical diagnostics, production monitoring, and security screening. | 2,935.8 | 2018-12-20T00:00:00.000 | [
"Mathematics"
] |
AN INTENSIONAL VIEW OF JUDGMENT IN KANT’S KRV
This paper presents an elucidation of Kant's notion of judgment, which clearly is a central challenge to the understanding of the Critique of Pure Reason, as well as of Transcendental Idealism. In contrast to contemporary interpretation, but taking it as a starting point, the following theses will be endorsed here: i) the synthesis of judgment expresses a conceptual relation understood as subordination in the traditional Aristotelian logical scheme; ii) the logical form of judgment does not comprise intuitions (or singular representations); iii) the relation to intuition is not a concern of judgment; iv) the response to the question about the 'x' that grounds the conceptual relation in judgments must be sought in transcendental aspects: 1) in construction in the pure form of intuition, 2) in experience, and 3) in the requirements of experience, corresponding respectively to mathematical, empirical, and philosophical judgments. The overall purpose is to build up an understanding of judgment that supports a later assessment of Kant's theoretical philosophy.
1 IFSUL – Instituto Federal de Educação, Ciência e Tecnologia Sul-rio-grandense / CNPq – Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil<EMAIL_ADDRESS>
The aim of this paper is to elucidate Kant's conception of judgment. A correct interpretation of the central proposal of the critical program, the enquiry concerning the possibility of synthetic a priori judgments, obviously depends on a correct interpretation of judgment, which in turn depends on determining how predication, and the logical subordination of the subject to the predicate, may (or must) be conceived. As these things are not quite clear in the Critique of Pure Reason2, it seems productive to search for more elements in the logical and historical contexts. The contemporary interpretation tends to fluctuate between two approaches to judgment, one supported by a conception derived from analytic philosophy, and the other from Port-Royal Logic. The analytical interpretation supposes, implicitly or explicitly, that it is possible to read the 'function of unity among our representations' (KrV, A69/B94) as the subsumption of an object under a function, in Frege-Russellian style3. On the other hand, the reading from Port-Royal adopts a historically more acceptable point of view, explaining judgment as predication in the scheme of Aristotelian logic, plus some novelties of modernity. The view from Port-Royal takes judgment as, or at least as involving, the subordination of a singular under a universal representation4. Indeed, the dispute of the readings relies on the predicative relation established between a singular representation (an intuition) and a general representation (a concept) from an extensional perspective. This paper starts from a critique of these views and presents a reading of judgment as subordination only of general representations, based on exegetical and historical analysis, whence the prominence of the intensional aspect imposes itself.
2 Kritik der reinen Vernunft, henceforth KrV, quoted and referred to in its two editions, A and B, as usual. 3 See, for instance, Schulthess (1981) and Strawson (1999). 4 See Pariente (1985), Brandt (1995) and Longuenesse (2000). Hanna (2018) takes a position closer to this with respect to the predicative judgments, but, on the other hand, flirts with the analytical perspective when explaining disjunctive and hypothetical judgments as having essentially truth-functional form.
The analytical interpretation offers a view of subordination in conformity with the reading of subsumption in the model of predicate calculus: roughly speaking, as an object 'x' falling under a function 'F', described in the expression 'F(x)'. Beyond the explicit connection to Aristotelian logic, stated in the introduction of KrV (B viii), it seems problematic to assume this interpretation for at least two more reasons: the unrestricted transitivity of subordination must be denied in this approach, and it imposes constraints on immediate inferences of categorical propositions.
One basic Aristotelian thesis about predication (The Categories V, 3b, 5; in Aristotle (1962)) is that it is a transitive relation, in the sense that if P is predicated of S, then whatever S is predicated of, P must also be predicated of. This feature is the first and principal characteristic of the classical theory of predication (Angelelli, 2004). On the other hand, subordination in contemporary logic cannot be transitive, and its understanding of predication differs from this scheme. Indeed, it distinguishes between subsumption, an object falling under a concept, and a concept falling under a concept of a higher order, and neither of these relations can be transitive. The second reason is that this interpretation of subordination marks a difference among the ranges of valid inferences: in classical Aristotelian logic, the immediate inference from universal categorical propositions to particulars of the same quality is possible for all terms (e.g., All S is P implies Some S is P). However, in the interpretation from predicate calculus, this inference is blocked in the cases where S does not refer (e.g., when S is 'unicorn', the inference from ∀x (Sx → Px) to ∃x (Sx ∧ Px) is not valid). Church (1965) addresses this point and reveals an existential commitment in the modern account that is absent from the ancient scheme.
On the other hand, the interpretation from Port-Royal approaches the question by taking judgment to be the ordination of representations, under the scheme of traditional predication (S is P). In this reading, predication manifests at least three aspects: it is transitive, as explained above; it is a subordination, given that it classifies representations as superior/genus or inferior/species; and it has an intensional and an extensional aspect. Since the Aristotelian Organon (Prior Analytics, I, xxvii, 43a, 25f; in Aristotle, 1962, p. 337), predication has been presented as a relation such that, if P is a predicate of S, P is superior to S, and S is inferior to P. Taking the hierarchizing further, P is genus and S is species, and so it is appropriate to look for the highest genus and the minimal species (species infima). The distinction between intensional and extensional aspects of subordination was introduced only later in history, in the Port-Royal Logic (Arnauld and Nicole, 1992), where it is presented as "l'étendue" (extension) and "la compréhension" (comprehension) of universal ideas. The extension refers to the subjects to which an idea applies, which are its inferiors, in relation to which the idea is superior (e.g. the idea of a triangle that has in its extension all the different species of triangles). The comprehension refers to those attributes that an idea involves in itself, e.g. the idea of triangle includes the ideas of figure, three lines, three angles, etc. (Arnauld and Nicole, 1992, p. 51-53).
Next, the authors (Ibid.) propose that restriction of comprehension destroys the idea, while extension can be restricted in two different ways. An idea can have its extension restricted by the union (joignant) with another distinct idea, for instance, the union of the general idea of triangle with having a right angle, which leads to the idea of the right triangle. The second way to restrict the extension is by adding an indistinct, undetermined idea of some (partie), which recalls what Kant called the form of particular judgment. The 'destruction' of an idea by restriction of its comprehension is not quite clear, but it expresses (in a context where the logic is chiefly psychological, but not subjective) the understanding that concepts can have an empty extension but cannot have an empty comprehension. (It is reasonable to suppose that even "le supreme de tous les generes", the supreme genus, explained as a genus that cannot be a species, may have some mark in it, although not over it. At the other end of the hierarchy, according to La Logique (Arnauld and Nicole, 1992, p. 53), there are ideas that cannot be genus, the 'singular ideas' that are the minimal species ("espèce dernière" or "species infima").) Later, Leibniz gave a formulation of this distinction that sounds more like the formulation of the law of inverse reciprocity between intension and extension found in Kant's lectures on logic. In the New Essays (Leibniz, 1900, p. 523), Theophilus states that, in the judgment 'All man is animal', "animal includes more individuals than man, but man includes more ideas or more formalities; the one has more examples, the other more degrees of reality; the one more extension, the other more intension." (NE, IV, xvii, 8; in the original French: "L'animal comprend plus d'individus que l'homme, mais l'homme comprend plus d'idées ou plus de formalités ; l'un a plus d'exemples, l'autre plus de degrés de réalité ; l'un a plus d'extension, l'autre plus d'intension.") The singular form of the judgment, 'tout homme est animal', is maintained in categorical propositions here and henceforth to preserve a link with the Aristotelian tradition; as Ian Wilks notes, "[w]hile it is comfortable and intuitive to an English speaker to pluralize both subject and predicate terms in a proposition like 'All men are animals', the practice initiated by Aristotle and transmitted by Boethius is to formulate the proposition with both terms in the singular: 'All man is animal' (Omnis homo est animal). This way of speaking can be taken quite literally as representing the content of the proposition" (Wilks, 2008, p. 85). In turn, Jäsche's text formulates it as "the content (Inhalt) and extension (Umfang) stand in inverse relation to one another. The more a concept contains under itself (unter sich enthält), namely, the less it contains in itself (in sich enthält), and conversely." ("Inhalt und Umfang eines Begriffes stehen gegen einander in umgekehrten Verhältnisse. Je mehr nämlich ein Begriff unter sich enthält, desto weniger enthält er in sich, und umgekehrt." Log., §7, AA 09:95.31-33) These two aspects, content (Inhalt) and extension (Umfang), and their inverse relation appear internalized in Kantian sources and lectures.
Although the reading from Port-Royal fits chronologically with and finds connections within the Kantian texts, we still have a serious question, which is about the predication of a universal to a singular. In order to establish it, let us take as an example the discussion about the distinction between analytic and synthetic judgments in Allison (2004, p. 92) and Longuenesse (2000, p. 86f and p. 107). According to Allison, it is in judgment, and by means of judgment, that concepts are applied "to given data", and he uses the distinction between intension and extension to overcome the problem of the logical and phenomenological explanation of analytic and synthetic judgments. He proposes that, for Kant, an analytic judgment rests upon his conception of a concept as a set of marks (themselves concepts), which are thought together in an 'analytic unity' […]. These marks collectively constitute the intension of a concept. One concept is contained in another just in case it is either a mark of the concept or a mark of one of its marks.
[…] thus, unlike most contemporary conceptions of analyticity, Kant's is thoroughly intensional. (Allison, 2004, p. 92) On the other hand, synthetic judgments are associated with the extensional relation, where scholars take singular representations (intuitions) to also be included. Referring to Logic (Log, AA 09:111.1-14), Allison (2004, p. 92) finds the 'x' that grounds the synthesis of judgment in intuition. Based on the same text of Logic, Longuenesse (2000, p. 86) proposes a conception of judgment as "extensional subordination of concepts". She states, "what ultimately makes the combination of concepts possible is always their relation to an 'x' of judgment." (Longuenesse, 2000, p. 87). In addition she affirms that "Kant then specifies that two concepts a and b can be said to belong to the same x in two ways: either b is already contained in concept a, (…) or b belongs to the x thought under a, without being contained in a" (Longuenesse, 2000, p. 107). Her suggestion is that a judgment is analytic if it can be solved without resorting to the 'x' (the intuition under the extension of concepts) and synthetic if otherwise. What she wants to emphasise is "the place of sensible intuition right in the logical form of judgment itself, by means of the term 'x' -a sensible intuition that provides the cement holding concepts together" (idem, ibidem). Though she denies the association of Kant's conception of extension with the contemporary Russellian notion of the class of individuals, she proposes that "the extension of a concept consists of representations thought under it, whether these representations are universal or singular." (Longuenesse, 2000, p. 383, and note 97).
Although this may be a good interpretation of Port-Royal Logic's conception of extension, which admits a species that cannot be a genus, the "espèce dernière" or "species infima" (Arnauld and Nicole, op. cit., p. 53), it is not a good interpretation of Kant's conception. First, this is because the text states that logic abstracts from all content of knowledge (KrV A54/B78), and the content of knowledge is given in intuition (KrV A92/B125), which is singular and refers immediately to objects (KrV A320/B376-7). Thus, general logic must abstract from all intuition (even transcendental logic only deals with the pure thought of an object (KrV A55/B79-80)) and represents it from mere understanding (Log, AA 09:16.4-12); so, the logical form of judgment cannot comprise the predication of a concept to an intuition. Second, the complete distinction between faculties, actions and products (understanding, thinking, and general, discursive and spontaneous representations on the one hand; sensibility, perceiving, and singular, intuitive and passive representations on the other) indicates that the relation between them cannot be explained only by logical means. Indeed, it is a transcendental question, whose fundamental core is explained in the Transcendental Deduction and the Schematism. Third, according to all this, the assumption of a species infima is denied by the text: "Hence every genus requires different species, and these subspecies, and since none of the latter once again is ever without a sphere (a domain as a conceptus communis), reason demands in its entire extension that no species be regarded as in itself the lowest; for since each species is always a concept that contains within itself only what is common to different things, this concept cannot be thoroughly determined, hence it cannot be related to an individual, consequently, it must at every time contain other concepts, i.e., subspecies, under itself." (KrV, A655-6/B683-4, emphasis added) As there cannot be a singular (an intuition) in the extension of a concept, judgment cannot even encompass the predication of a concept to an intuition. Keeping these considerations in mind, it seems reasonable to consider judgment as subordination consisting exclusively of universal representations.
The first challenge that this view faces is the objection from singular judgments. Nevertheless, the reduction of the singular form of judgment to the universal is asserted as justified from the merely logical point of view, since, in both forms, the subject is entirely included in the extension of the predicate (KrV, A71/B96). The basis of the association of singular judgment with the predication of a concept to an intuition seems to be found in Logic: "in the singular judgment, finally, a concept that has no sphere at all is enclosed, merely as part then, under the sphere of another." (Log, AA 09:102.16-18) Regarding the concept without extension, however, the same text considers that, "For even if we have a concept that we apply immediately to individuals, there can still be specific differences in regard to it, which we either do not note, or which we disregard. Only comparatively for use are there lowest concepts, which have attained this significance, as it were, through convention, insofar as one has agreed not to go deeper here." (Log, AA 09:97.24-29) Therefore, the concept that does not have a sphere is still a concept. The basic reason for not taking the 'lowest concepts' as individuals is grounded on the distinction between concepts and intuition: "Since only individual things, or individuals, are thoroughly determinate, there can be thoroughly determinate cognitions only as intuitions, but not as concepts; in regard to the latter, logical determination can never be regarded as completed." (Log, AA 09) Thus, it is reasonable to accept this restriction as a consequence of the distinction in kind (Beiser, 1992, p. 26) between representations (intuitions and concepts) and to assume that even the subject of singular judgments must potentially be a genus. Now it is time to look into the logical scheme of extensional and intensional relations, in order to understand how subordination can be seen in this scenario. Taking judgment to be subordination only of universal representations (concepts) shows some differences between extensional and intensional subordination. Indeed, these differences help to explain the distinctions between synthetic a posteriori, synthetic a priori and analytic judgments.
From the point of view of extensional subordination, 'S is P' means that the extension of S is under the extension of P. However, this is not sufficient to determine P as superior or a genus of S, because the truth of a universal affirmative judgment, 'All S is P', may be consistent with 'All P is S' and 'Some P is S'; or else, with 'Some P is S' and 'Some P is not S'. In other words, from the mere standpoint of extension, subordination does not determine the symmetry (i.e., 'All S is P' does not imply 'All P is S') nor the asymmetry (i.e., it does not exclude 'All P is S'), so it does not suffice to order concepts in the genus/species hierarchy.
This feature of extensional relation may be better visualised in terms of examples. Let us begin with two true synthetic a priori judgments: 'All Triangle is Trilateral' and 'All Trilateral is Triangle', from which we cannot classify the concepts as genus and species. The same seems to happen with some empirical judgments (synthetic a posteriori), such as 'All Greek is Man' and 'Some Man is Greek'; the first puts Greek under Man and the second a part of Man under Greek. Which is a species or genus is not clear at all.
On the other hand, thanks to the intensional relation, the situation is different. One important thing in order to establish how the inclusion of marks (concepts) must be understood is to pay attention to the distinction between coordinate and subordinate marks in the introduction of Logic. The combination of co-ordinate marks (nach einander) forms the whole of a concept and restricts the extension but, as a mere aggregate, cannot determine which is superior or inferior. Only in subordination (unter einander) does "the series of subordinate marks terminate a parte ante, or on the side of the grounds, in concepts which cannot be broken up, which cannot be further analysed on account of their simplicity; a parte post, or in regard to the consequences, it is infinite, because we have a highest genus but no lowest species" (Log, AA 09:59.17-21; in the original: "... ein höchstes genus, aber keine unterste species haben"). The conceptual subordination that establishes the intensional series (unter einander), therefore, structures concepts as superior/inferior or genus/species. Thus, if in the judgment 'S is P', P is included in S simultaneously as a mark and a superior concept (engendering an intensional series), then PS must be a different series from SP. For instance, passing over the difficulty of linguistic expression, the intensional series 'Man Greek' characterizes a different concept from 'Greek Man'. If Man is superior, Greek is a species of Man, aside from others, like Barbarian, and under it there would also be other possible concepts, like Athenian and Spartan. On the other hand, if Greek is superior to Man (as in 'Some Man is Greek'), the concept is not Greek as a species of Man, but something like 'Greek thing', of which 'Greek Man' is a species, alongside 'Greek jewels', 'Greek ruins', etc.
The subordination of marks that constitutes an intensional series must be asymmetric. That is, if P is included in S, S is not included in P, unless S and P stay in the same place in the hierarchy of concepts, as Triangle and Trilateral do. All marks that fit in one fit in another, although 'All Triangle has three angles' is an analytic judgment, and 'All Triangle is Trilateral' is synthetic. When two terms occupy the same position in the intensional series, it leaves open what aspect of the concept will be highlighted in its expression, e.g., 'Triangle': three angles and 'Trilateral': three sides. Two concepts being in the same place in the intensional series is the same as both having in themselves the same marks and under themselves the same species.
As judgment is interpreted as subordination of concepts exclusively, the intensional aspect is important for the structuring into genus and species. When a judgment does not determine the hierarchy, it also does not determine the extensional relation at all. In some analytic, some synthetic a priori, and some synthetic a posteriori judgments (e.g. 'All Triangle has three angles', 'All Triangle is Trilateral' and 'All Male has a pair of XY chromosomes'), symmetric inversion is possible. Therefore, they enlarge the 'intensive distinction' but do not imply 'extensive restriction', which is indeed an obvious constraint on the law of inverse relation between intension and extension. Now, returning to the question about the 'x' that makes subordination in judgment possible, it may be useful to look forward in KrV, to the Transcendental Doctrine of Method (A708-94/B736-822), in order to summarise the answer and see that it has a triple aspect: mathematical, empirical and philosophical knowledge. In geometry, it is the 'ostensive construction' that grounds the universal and necessary inclusion of the mark 'Trilateral' in 'Triangle' (as well as the inverse), in a universal, affirmative, categorical and apodictic judgment: 'All Triangle is Trilateral'. This is a synthetic a priori judgment, and it is synthetic because the predicate, Trilateral, cannot be linked to the concept, Triangle, only by logical means. The knowledge of mathematics is explained by Kant as being grounded in the 'construction' of its concepts in the a priori form of appearances (time and space), from which the apriority of its judgments is derived. Therefore, although there is a universal, necessary relation between the angles and sides of triangles, it is not grounded in merely logical analysis, but in the spatial structure of human perception.
Synthetic a posteriori judgments like 'All Man is Biped' include 'Biped' in 'Man' grounded in something different, which is experience. This is a synthetic judgment because the connection between 'Biped' and 'Man' is not necessary, and, as the 'x' is a posteriori, it is a universal, affirmative judgment, albeit assertoric. The synthetic a priori judgments of philosophical knowledge must have an a priori grounding too, but not in mathematical construction. A central Kantian thesis is that these judgments guide the constitution of experience, and so they are principles of pure understanding. The Transcendental Analytic pursues proof of these principles, but, unfortunately, this discussion exceeds the scope of this paper.
In conclusion, recalling the main theses, the analysis of the central nucleus of Kant's conception of judgment under exegetical, logical and historical burdens reveals the import of the hierarchical organization of some system of conceptual representations for knowledge. The epistemic import of a judgment relies on the grounding that the understanding uses to depict its conceptual schemes, which are properly organized intensionally. The relation of a singular representation (an intuition) with a concept, or of an object with a concept, is not a logical issue; it is a transcendental matter and depends on other elements revealed in the Transcendental Deduction and the Schematism, just as an unbiased reader would expect. Finally, beyond logical, epistemic import, this conception of judgment helps to elucidate the Metaphysical Deduction and even the Transcendental Deduction, a task that must be reserved for future papers. | 7,175 | 2021-03-01T00:00:00.000 | [
"Philosophy"
] |
THE OPEN-SOURCE TURING CODEC: TOWARDS FAST, FLEXIBLE AND PARALLEL HEVC ENCODING
The Turing codec is an open-source software codec compliant with the HEVC standard and specifically designed for speed, flexibility, parallelisation and high coding efficiency. The Turing codec was designed starting from a completely novel backbone to comply with the Main and Main10 profiles of HEVC, and has many desirable features for practical codecs such as very low memory consumption, advanced parallelisation schemes and fast encoding algorithms. This paper presents a technical description of the Turing codec as well as a comparison of its performance with other similar encoders. The codec can cut encoding complexity by an average of 87% with respect to the HEVC reference implementation, at an average coding penalty of 11% higher bitrate for the same peak signal-to-noise ratio.
INTRODUCTION
The ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) combined their expertise to form the Joint Collaborative Team on Video Coding (JCT-VC), and finalised the first version of the H.265/High Efficiency Video Coding (HEVC) standard (1) in January 2013. HEVC provides the same perceived video quality as its predecessor H.264/Advanced Video Coding (AVC) at considerably lower bitrates (the MPEG final verification tests report average bitrate reductions of up to 60% (2) when coding Ultra High Definition, UHD, content). HEVC specifies larger block sizes for prediction and transform, an additional in-loop filter, and new coding modes for intra- and inter-prediction. This variety of options makes HEVC encoders potentially extremely complex. Therefore, in order to achieve low complexity requirements and improved coding efficiency, practical HEVC encoders should be carefully designed with tools to speed up the encoding, as well as architectures that allow parallel processing, reduce memory consumption, and scale according to the available computational resources. This paper introduces the Turing codec, an open-source software HEVC encoder. The design of the codec has been mainly driven by the distribution of UHD content in 8 and 10 bits per component, 4:2:0 format and up to 60 frames per second (fps). As such, particular attention was devoted to designing the codec architecture so that the memory footprint is reduced, and many tools were developed specifically for the codec to reduce the complexity of highly demanding encoding stages. This paper continues with an overview of the related background associated with open-source HEVC encoders. The focus then moves to describing the main features, tools and architecture of the codec. After this description, an evaluation of the compression performance and complexity is performed and compared with other open-source software HEVC codecs.
RELATED BACKGROUND
Many efforts have been devoted to the development of efficient HEVC software encoders. During the definition of HEVC, JCT-VC developed an open-source reference software encoder and decoder under the name of HEVC Test Model (HM) (3). The HM encoder includes almost all the tools considered in HEVC, with the objective of providing a reference to be used during the development, implementation and testing of new coding algorithms and methods. An HEVC encoder needs to select the optimal partitioning of a Coding Tree Unit (CTU) into Coding Units (CUs), the best prediction mode for each CU, and the optimal inter- or intra-prediction for each Prediction Unit (PU) (1). The HM reference software performs most of these decisions by means of brute-force searches where many options are tested, and the optimal configuration is selected in a Rate-Distortion (RD) sense. As such, HM is optimised for efficiency, and its computational complexity is too high for it to be considered in practical scenarios. Nonetheless, HM is mentioned in this paper due to its completeness, as an upper-bound benchmark in terms of efficiency.
In 2013, the x265 encoder project was launched with the goal of developing a fast and efficient HEVC software encoder, reproducing the successful development model of the x264 AVC encoder. The codec was developed using the HM software as a starting point, where the code functionalities and structure were heavily improved to increase performance and allow for various tools and enhancements. x265 includes many options desirable for a practical encoder implementation, such as parallel encoding, speed profiles, spatiotemporal prediction optimisations, etc. A complete description of the encoder functionalities is outside the scope of this paper, but many details can be found on the project's official website (5).
A few other similar projects for HEVC open-source software codecs are available, at various stages of maturity. Kvazaar (6) is an academic project with the main goal of providing a flexible modular structure to simplify data flow modelling and processing, while at the same time supporting some parallelisation and optimisation tools. At the time of writing, Kvazaar supports the Main and Main10 profiles and provides support for the majority of HEVC tools. Finally, the libde265 project (7) is currently being developed mainly as an HEVC decoder, distributed under the LGPL3 licence.
OVERVIEW OF THE TURING CODEC
The backbone of the Turing codec was developed following a twofold approach: on the one hand, the code foundations were developed making full use of state-of-the-art C++11 constructs and assembly optimisations; on the other hand, advanced algorithms were designed to improve demanding encoding processes. The final goal was to provide a fast HEVC software encoder suitable for a variety of applications. The codec is at an advanced maturity stage, while it is also under active development to implement additional functionalities and further improve its performance. Recently it has been made available as open source. The codec was named after Alan Turing, one of the most influential scientists in the development of the foundations of theoretical computer science.
Main Features
The Turing codec is compliant with the Main and Main10 profiles of HEVC (mainly defined for content distribution). A partial list of the functionalities currently supported is as follows:
- Encoding of slice types I, P and B;
- Fixed Group of Pictures (GOP) structures, with GOP length of 8 or 1;
- Configurable intra-refresh period;
- All CU sizes and PU types specified in HEVC;
- All 35 directions for intra-coding, with or without strong intra smoothing filtering;
- Inter-coding with uni- or bi-prediction from List 0 (L0) or List 1 (L1);
- Rate Distortion Optimised Quantisation (RDOQ);
- Deblocking filter;
- Rate control;
- Support for field (interlaced) coding;
- Shot change detection.
Alongside these functionalities, the codec offers specific features designed to provide high coding efficiency and low computational complexity.
Encoding Process Optimisations and Speed Presets
The flexibility provided by HEVC with its large number of coding modes is responsible for the high computational complexity associated with the standard. Encoders must select the optimal configuration for each image area, and performing all decisions with an exhaustive RD search is clearly not ideal. For this reason, during the development of the Turing codec, considerable effort was devoted to evaluating the impact of HEVC tools on compression efficiency and complexity, with the goal of defining a set of requirements (in terms of parameters and tools) to guide the development of the encoder. Many experiments were performed for this purpose, as detailed in (8). In particular, the HM reference software (Version 12.0) was used to encode 16 UHD sequences (spatial resolution of 3840 × 2160 luma samples, frame rate of 50 or 60 Hz, 4:2:0 format, 8 bits per component), according to the Common Test Conditions (CTCs) (9) used by the JCT-VC under the Random Access Main (RA-Main) configuration. Four Quantisation Parameters (QP) per sequence were selected to uniformly span quality across the test set. Compression efficiency and encoder complexity were then measured. For compression efficiency, the Bjøntegaard Delta-rate (BD-rate) was used (10), where negative BD-rate values correspond to compression gains. For complexity, the encoding running time was used, where the average time reduction across all QPs for a sequence is taken as the Encoder Speedup (ES).
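To make these two figures of merit concrete, the sketch below shows how a BD-rate and an encoder speedup can be computed from four per-QP measurements in the usual Bjøntegaard fashion; it is an illustrative implementation under these assumptions, not code from the Turing project, and the sample numbers are hypothetical.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard Delta-rate (%): average bitrate difference of the test encoder
    versus the anchor at equal PSNR; negative values correspond to compression gains."""
    log_ra, log_rt = np.log(rate_anchor), np.log(rate_test)
    # Third-order polynomial fit of log-rate as a function of PSNR (four points per curve).
    pa = np.polyfit(psnr_anchor, log_ra, 3)
    pt = np.polyfit(psnr_test, log_rt, 3)
    # Integrate both curves over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    int_t = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0

def encoder_speedup(time_anchor, time_test):
    """Encoder Speedup (ES, %): average running-time reduction over all QPs."""
    t_a, t_t = np.asarray(time_anchor, float), np.asarray(time_test, float)
    return float(np.mean((t_a - t_t) / t_a) * 100.0)

# Hypothetical per-QP measurements for one sequence (rate in kbps, PSNR in dB, time in s).
print(bd_rate([20000, 11000, 6000, 3200], [40.1, 38.0, 35.9, 33.7],
              [21500, 12000, 6500, 3500], [40.0, 37.9, 35.8, 33.6]))
print(encoder_speedup([3600, 3400, 3200, 3000], [500, 470, 450, 430]))
```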
Table 1 presents a summary of the results of some of these experiments. Avoiding the use of large CTU sizes leads to high BD-rate penalties, while limiting the Residual Quad-Tree (RQT) depth to 1 provides good ES for limited compression efficiency losses. Similarly, limiting the encoder to considering one inter-prediction reference frame from each list provides acceptable losses for consistent speedups. Limiting the Motion Vector (MV) precision to integer precision resulted in high efficiency losses, whereas disabling AMPs resulted in negligible effects. These experiments resulted in a set of requirements used throughout the development of the Turing codec: for example, a maximum RQT depth of 1 is considered, only one reference frame is used, and AMPs are never tested. The experiments were also used as a basis in the development of the various fast algorithms implemented in the codec. The Adaptive Partition Selection (APS) algorithm (8) analyses the motion activity in a portion of the sequence to determine whether to test the symmetric 2N × N and N × 2N modes (corresponding to splitting the CU into two rectangular PUs in the horizontal and vertical direction, respectively). Motion activity is computed based on the homogeneity of the residuals resulting from testing the 2N × 2N mode: if the residuals are highly homogeneous, testing of the symmetric modes is avoided, hence reducing complexity. The Multiple Early Termination (MET) algorithm stops integer-precision ME by analysing the residuals in the surroundings of the motion search starting points: if an initial ME pattern search returns one of the multiple starting points as the optimal MV, no other MVs are tested; otherwise the starting point is updated and conventional ME is performed. The Reverse CU (RCU) algorithm reduces the set of depths to test on a CTU by avoiding testing CUs at minimum or maximum depths; the depth selected in neighbouring blocks is used for this purpose to predict the maximum depth likely to be used in the current CTU and, consequently, to limit the range of depths to test on the CTU. Finally, the Fast Decision for All Modes (FDAM) algorithm analyses the residuals found after quantisation when testing each inter-prediction mode. If all residuals are zero, then transform and quantisation are avoided while testing all subsequent inter-prediction modes, which means the prediction block is used as the final reconstruction.
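As a rough illustration of this kind of early termination, the sketch below shows an FDAM-style inter-mode loop in which, once one mode produces an all-zero quantised residual, transform and quantisation are skipped for the remaining modes and the prediction is reused as the reconstruction. The function names and callbacks are hypothetical simplifications, not the actual Turing encoder interfaces.

```python
import numpy as np

def decide_inter_mode(block, candidate_modes, predict, transform_quantise, rd_cost):
    """FDAM-style mode loop: skip transform/quantisation for all modes tested after
    one mode yields an all-zero quantised residual (the prediction then serves as
    the final reconstruction for those modes)."""
    best_mode, best_cost = None, float("inf")
    skip_tq = False
    for mode in candidate_modes:
        pred = predict(block, mode)
        if skip_tq:
            coeffs = np.zeros_like(block)            # no residual coded for this mode
        else:
            coeffs = transform_quantise(block - pred)
            if not coeffs.any():                     # all residuals quantised to zero
                skip_tq = True
        cost = rd_cost(block, pred, coeffs, mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```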
Currently, the Turing codec supports three speed presets: slow, medium and fast. The presets control the pre-defined values of options and tools and enable or disable the above algorithms, providing a trade-off between computational complexity and coding efficiency.
A detailed list of the effects of the three speed presets is presented in Table 2.
Memory Usage, Parallel Processing Optimisations and Programming Optimisations
The Turing codec benefits from several performance optimisations that leverage parallel CPU features, reduce memory usage, bandwidth and complexity, and minimise branch instructions in critical parts. The codec uses a thread pool with a controlled maximum number of active threads. By ensuring there are never more active threads than logical CPU cores, the likelihood of pre-emption and the associated cost of context switches are greatly reduced. The thread pool is task-based, with every encoding activity (RD searches, reconstruction, in-loop filtering, bitstream manipulation, etc.) handled as a task. Tasks are nodes in an implicit directed dependency graph, and an efficient paradigm was designed to manage this in a generic fashion, extensible beyond a single CPU and proven to operate well when distributed between asynchronous nodes collaborating to encode a single sequence.
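The following sketch illustrates, under stated assumptions, the general idea of a task pool driven by a dependency graph: tasks are only handed to a bounded pool of workers once all of their predecessors have completed. It is a minimal illustration of the concept, not the Turing codec's scheduler, and the task names in the usage example are hypothetical.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class TaskGraph:
    """Tasks are nodes with dependency counts; a task is submitted to the pool only
    when its count drops to zero, so active threads never exceed the pool size."""
    def __init__(self, workers):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.lock = threading.Lock()
        self.pending, self.dependents, self.fn = {}, {}, {}

    def add(self, name, fn, deps=()):
        self.fn[name] = fn
        self.pending[name] = len(deps)
        for d in deps:
            self.dependents.setdefault(d, []).append(name)

    def run(self):
        self.remaining = len(self.pending)
        self.done = threading.Event()
        if self.remaining == 0:
            self.done.set()
        for name, count in list(self.pending.items()):
            if count == 0:
                self.pool.submit(self._run, name)
        self.done.wait()
        self.pool.shutdown()

    def _run(self, name):
        self.fn[name]()
        ready = []
        with self.lock:
            self.remaining -= 1
            if self.remaining == 0:
                self.done.set()
            for t in self.dependents.get(name, []):
                self.pending[t] -= 1
                if self.pending[t] == 0:
                    ready.append(t)
        for t in ready:
            self.pool.submit(self._run, t)

# Hypothetical usage: in-loop filtering of a CTU row depends on its reconstruction.
g = TaskGraph(workers=4)
g.add("reconstruct_row0", lambda: print("reconstruct CTU row 0"))
g.add("deblock_row0", lambda: print("deblock CTU row 0"), deps=["reconstruct_row0"])
g.run()
```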
Current practical software codecs make use of SIMD instructions via assembly code or intrinsics in C/C++ functions. The Turing codec is no exception, but it uses a Just-In-Time (JIT) assembler to assemble optimised functions at runtime. This approach has two main benefits. First, it allows developers to use C++ templates, loops, conditions and function calls instead of ad-hoc assembler macros. Second, JIT assembly allows run-time parameters such as bit depth and raster stride to be embedded into the machine code, reducing the need for additional parameters and stack manipulation.
Moreover, the Turing codec introduces a novel approach to the management of temporary data in the form of the snake data structure. Video codecs have to make a large number of RD decisions, and in doing so intermediate variables are generated, discarded, recreated or copied for each alternative. The snake memory is a one-dimensional memory capable of representing two-dimensional data. It is employed in several modules where neighbouring sample data, modes or coefficients are required. While conventional codecs require metadata memory of size of order width × height, the snake has a characteristic size of only width + height. This results in faster fill operations, better cache utilisation and lower memory bandwidth. Finally, every decision selected by the codec (modes, flags, MVs and coefficients) is stored in a serial format which can be easily cut, copied and pasted in memory, and is very convenient for fast output of the bitstream.
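The sketch below illustrates the width + height idea behind such a neighbour buffer: one array holds the metadata of the row above the current position and one holds the column to its left, and both are updated as blocks are committed in raster order. This is a simplified illustration of the principle, not the Turing codec's snake implementation, and the metadata propagated in the usage example is invented.

```python
import numpy as np

class Snake:
    """Neighbour metadata kept in width + height entries instead of a full
    width x height plane: 'above' holds the last committed value in each column,
    'left' holds the last committed value in each row."""
    def __init__(self, width, height, init=-1):
        self.above = np.full(width, init)
        self.left = np.full(height, init)

    def neighbours(self, x, y):
        """Return the (left, above) metadata for the block at position (x, y)."""
        left = self.left[y] if x > 0 else None
        above = self.above[x] if y > 0 else None
        return left, above

    def commit(self, x, y, value):
        """Record the decision at (x, y); it becomes 'above' for the block below
        and 'left' for the block to the right."""
        self.above[x] = value
        self.left[y] = value

# Hypothetical usage: propagate a per-block decision over a 4 x 2 grid in raster order.
snake = Snake(width=4, height=2)
for y in range(2):
    for x in range(4):
        l, a = snake.neighbours(x, y)
        value = 0 if l is None and a is None else max(v for v in (l, a) if v is not None) + 1
        snake.commit(x, y, value)
        print((x, y), "neighbours:", (l, a), "-> value", value)
```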
EVALUATION OF THE TURING CODEC
Different experiments were performed to assess the performance of the Turing codec under a variety of conditions. The results reported in this section refer to the codec run in single-thread mode, as this allows the algorithmic complexity associated with the codec workflow to be fully appreciated. First, the codec was evaluated with respect to its coding efficiency and computational complexity. Tests were performed using the Main and Main10 profiles. In the former case, the same test set and conditions as in Table 1 were used: sixteen UHD sequences were tested with four QP values. In these experiments, BD-rates and ES are reported as performance indicators, computed with respect to an anchor consisting of an HM encoder used to encode the same content under the JCT-VC CTCs (9).
Results of the experiments associated with the Main profile are shown in Table 3. The codec was tested with the three available speed presets and compared with an x265 encoder (Version 1.9). Three presets were used with x265, namely placebo, medium and fast. As can be seen, the Turing and x265 codecs are differently optimised. The performance of the Turing codec is generally weighted towards achieving higher compression efficiency, whereas x265 tends to be more optimised towards lower computational complexity. When using the slow preset, the Turing codec is very close to the performance of HM (4.5% BD-rate loss), with an ES of 35.96%. Conversely, x265 with the placebo preset (i.e. when the codec can achieve its highest compression efficiency) results in a higher BD-rate loss of 9.8%, for 69.05% ES. The medium preset of both codecs can be taken as a fair term of comparison, as both codecs seem to target a similar trade-off between complexity and efficiency when using this preset. In this case, the Turing codec outperforms x265 both in terms of speed and coding efficiency, resulting in 11.4% BD-rate loss for 87.22% ES, whereas x265 results in a higher loss (14.9% BD-rate) for lower ES (85.97%). Finally, when using the fast preset, the performance of x265 degrades to 49.1% BD-rate loss, for very high speedups of 98.83%. The Turing codec instead favours quality, returning a lower efficiency loss (27.5% BD-rate), for a lower ES of 95.57%.
When testing the codecs under the Main10 profile, similar results are obtained, as shown in Table 4. For this experiment, six high dynamic range sequences from the test material described in (12) were used. The sequences are in HD resolution (1920 × 1080 luma samples) with frame rates from 24 to 50 Hz, 4:2:0 format, 10 bits per component, and compressed with the Hybrid Log-Gamma (HLG) opto-electronic transfer function (13). The current version of the Turing codec only supports the medium and fast presets for the Main10 profile, which were both used for these tests. Again, results are weighted towards maintaining high compression efficiency. When using the medium preset, a 17.5% BD-rate loss is obtained for 73.67% ES compared to the HM reference encoder. Limited losses of 33.4% are obtained when using the fast preset, for 88.01% ES. Conversely, x265 already results in higher losses with the placebo preset (20.8% on average) for 76.73% ES, and up to 50.5% BD-rate loss with the fast preset for 98.61% ES.
The three encoders (HM, x265 and the Turing codec) were also compared in terms of the memory consumed during encoding in single-thread mode. Accordingly, the first 25 frames of the TapeBlackRed UHD sequence were encoded with QP equal to 26. A Linux machine with an Intel Xeon CPU E5-2680 at 2.50 GHz was used for this experiment. The memory usage was sampled during the encoding at fixed intervals of one second, where the slow preset was used for the Turing codec, the placebo preset was used for x265, and the JCT-VC CTCs were used for HM. Results are shown in Figure 1.
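A measurement of this kind can be reproduced with a few lines of scripting; the sketch below samples the resident memory of an encoder process once per second and writes it to a CSV file. It assumes the psutil package is available, and the encoder command line in the commented-out call is a placeholder rather than the actual Turing invocation.

```python
import csv
import subprocess
import time

import psutil  # assumed available for reading the resident set size of the child process

def sample_memory(cmd, interval_s=1.0, out_csv="memory_trace.csv"):
    """Launch an encoder command and log its resident memory (MiB) every interval_s seconds."""
    child = subprocess.Popen(cmd)
    proc = psutil.Process(child.pid)
    t0 = time.time()
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["seconds", "rss_mib"])
        while child.poll() is None:
            try:
                rss_mib = proc.memory_info().rss / (1024 * 1024)
            except psutil.NoSuchProcess:
                break
            writer.writerow([round(time.time() - t0, 1), round(rss_mib, 1)])
            time.sleep(interval_s)

# Placeholder invocation; executable name and options are illustrative only.
# sample_memory(["./encoder", "input.yuv", "--qp", "26", "--frames", "25"])
```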
As can be seen, the Turing codec requires a fraction of the memory needed by the other two encoders. The maximum amounts of memory recorded were 247 MBytes for the Turing codec, 1260 MBytes for x265 and 2557 MBytes for HM. The jump shown for the HM encoder after the first GOP of 8 frames is due to the codec buffering, at that stage, all 8 frames of the second GOP while also maintaining frames of the first GOP as reference frames. This does not happen in subsequent GOPs because the buffer is freed of unnecessary frames that lie sufficiently far in the past. x265 and the Turing codec are more efficient in reducing memory requirements by only keeping the necessary information in the buffers.
Finally, the codec was also used to encode five sequences in HD format representative of broadcasting applications. The compressed content was visually inspected by expert viewers and compared with the output of a practical AVC encoder at bitrates in line with statistical multiplex averages. Side-by-side comparisons were performed at a viewing distance of three times the display height in standard viewing conditions. At the same perceived quality, the rate savings of the Turing codec were in the range of 15-40%.
CONCLUSIONS
This paper presented an overview of the Turing codec, focusing on a technical description of its main features. Algorithmic optimisations as well as programming performance optimisations are included in the codec, as detailed in the paper. A comprehensive performance assessment shows that the Turing codec achieves consistent results across a variety of content, providing high coding efficiency for relatively low complexity requirements. The codec also drastically lowers memory requirements with respect to similar projects, and it achieves comparable subjective quality at lower bitrates than those typically required by practical codecs used in the broadcasting chain.
Figure 1 - Comparison of memory consumption of the Turing codec, x265 and HM
Table 1 - Impact of HEVC tools: coding efficiency and complexity (8)
Table 3 - Results of the Turing codec and x265 against HM under the Main profile
Table 4 - Results of the Turing codec and x265 against HM under the Main10 profile | 4,184.6 | 2016-01-01T00:00:00.000 | [
"Computer Science"
] |
Surface Modification by Nano-Structures Reduces Viable Bacterial Biofilm in Aerobic and Anaerobic Environments
Bacterial biofilm formation on wet surfaces represents a significant problem in medicine and environmental sciences. One of the strategies to prevent or eliminate surface adhesion of organisms is surface modification and coating. However, current coating technologies possess several drawbacks, including limited durability, low biocompatibility and high cost. Here, we present a simple antibacterial modification of titanium, mica and glass surfaces using self-assembling nano-structures. We have designed two different nano-structure coatings composed of fluorinated phenylalanine via the drop-cast coating technique. We investigated and characterized the modified surfaces by scanning electron microscopy, X-ray diffraction and wettability analyses. Exploiting the antimicrobial property of the nano-structures, we successfully hindered the viability of Streptococcus mutans and Enterococcus faecalis on the coated surfaces in both aerobic and anaerobic conditions. Notably, we found lower bacterial adherence to the coated surfaces and a reduction of 86–99% in the total metabolic activity of the bacteria. Our results emphasize the interplay between self-assembly and antimicrobial activity of small self-assembling molecules, thus highlighting a new approach to biofilm control for implementation in biomedicine and other fields.
Introduction
Microbial adhesion and the subsequent formation of a biofilm on surfaces in a liquid environment is a natural phenomenon. However, the resulting infections are difficult to detect [1,2] and represent a significant concern in fields ranging from medical devices, surgical equipment, biosensors and water distribution systems [3,4] to food storage [5,6] and industrial and marine instruments [7,8]. The commonly applied strategies to combat biofilm formation involve the prevention of initial bacterial adhesion to surfaces and biofilm degradation. The former mainly employs surface modifications, bactericidal coatings or anti-adhesive compounds as physical barriers [9,10], and the latter involves antimicrobial agents that kill or inhibit the growth of microorganisms [11]. Despite the advancements in the development of antimicrobial and anti-biofilm materials, coating techniques bear significant deficiencies, including the lack of long-term activity and stability and the lack of adaptability to diverse materials, easy production and simple application [12].
Although peptide nano-structures have been the focus of many studies aimed towards technological applications, amino acids are gaining considerable interest as the simplest biological building blocks due to their commercial availability, low cost, straightforward preparation and modification, and biocompatibility [57]. Indeed, in recent years, much progress has been made in the generation of nano-structures from short and ultrashort peptides [58–66]. In 2012, phenylalanine (Phe), a single amino acid, was shown to form ordered fibrillary assemblies with amyloid-like properties due to both its aromatic structure and its hydrophobicity [67]. Later studies reported the self-assembly of Phe as well as of other unmodified [68–70] and modified [71–73] amino acids to form fibrillary nano-structures.
The modification of amino acids can provide the assemblies with various properties, including antibacterial traits. Recently, Song et al. developed biometallohydrogels based on Ag+-coordinated Fmoc-amino acid self-assembly (Fmoc-l-serine, Fmoc-l-aspartic acid, Fmoc-l-leucine, Fmoc-l-proline, Fmoc-glycine) [24]. These metallohydrogels have been shown to have a significant antibacterial effect against both Gram-negative (Escherichia coli) and Gram-positive (Staphylococcus aureus) bacteria in cells and mice through their interaction with the cell walls and membrane, resulting in the detachment of the plasma membrane and leakage of the cytoplasm [24]. In another study, Nilsson and coworkers designed Fluorenylmethoxycarbonyl (Fmoc)-pentafluoro-phenylalanine (Fmoc-F5-Phe) [73], which was later studied for its antibacterial properties [74]. Fmoc-F5-Phe nano-assemblies were incorporated into a dental resin-based composite restorative material and demonstrated their biocompatibility along with an inhibitory effect on Streptococcus mutans (S. mutans) in solution, targeting the bacterial cell membrane. This modified amino acid comprises the Fmoc-Phe moiety, which induces nano-structure formation, and the fluoride moieties, which are utilized for their antibacterial activity [74]. The crystallographic structure of Fmoc-F5-Phe was recently deciphered, leading to an understanding of the molecular interactions of the thermodynamically stable structures, including π-π interaction and hydrogen bonding [75]. Furthermore, Fmoc-Phe has recently been shown to confer an antimicrobial effect against Gram-positive bacteria such as Staphylococcus aureus both as a hydrogel and in solution. The underlying mechanism of this activity involves invasion into the bacterial cell followed by a reduction of glutathione levels [76]. The antibacterial properties of nano-assemblies formed by diphenylalanine have also been studied recently [19,20]. These assemblies cause membrane disruption in an E. coli model [20]. In addition, Fmoc-F5-Phe has been studied not only for its antibacterial potential but also for its ability to co-assemble with the Fmoc-Phe-Phe dipeptide to form a fibrous hydrogel with extraordinary mechanical properties [77].
Here, inspired by these nano-structures' properties, we present an antibacterial modification of titanium, mica, glass and siliconized glass surfaces via the drop-cast coating technique using two self-assembling modified amino acid structures. We chose the minimal self-assembling building block, Phe, decorated with fluoride moieties known for their antibacterial properties and their hydrophobic nature. We further modified the amino acid with the Fmoc or tert-butyloxycarbonyl (Boc) protective groups. The topography, wettability and stability of the modified surfaces were characterized and the antimicrobial effect towards two facultative anaerobic bacterial strains, S. mutans and Enterococcus faecalis (E. faecalis), grown under anaerobic and aerobic conditions, respectively, was studied.
Results and Discussion
The surfaces' coating was designed based on phenylalanine modified on its N-terminus and decorated with fluoride moieties, known for their antibacterial properties and their hydrophobic nature. The N-terminus of the amino acid was modified with either Fmoc or tert-butyloxycarbonyl (Boc) protective groups. The Fmoc protective group endows strong driving forces to the self-assembly process, such as hydrogen bonding from the carbonyl group, aromatic and hydrophobic interactions from the fluorenyl ring and steric optimization from the linker, the methoxycarbonyl group [30]. When located at the N-terminus of diphenylalanine, the Boc modification can form both nanospheres and nanotubes under different conditions [78]. Fabrication of the Fmoc-F5-Phe coatings was performed using the drop-cast technique. The Fmoc-F5-Phe building blocks (Figure 1A) were first dissolved in hexafluoroisopropanol (HFIP), a solvent with a high evaporation propensity, which does not affect the tested surfaces' hydrophobicity or topography, as verified by contact angle and SEM analyses, respectively. Dissolving the modified amino acids in HFIP reverts them to their monomeric form. For the Fmoc-F5-Phe coatings, 10 mm diameter titanium, mica, glass and siliconized glass discs were coated by drop-casting the solution at a coverage of 30 µL/cm². Solvent evaporation at room temperature resulted in the self-assembly of Fmoc-F5-Phe into ordered nano-structures, forming a continuous, homogeneous coverage of the surface. As shown by scanning electron microscopy (SEM) analysis, the layer formed by a single drop-cast coating was unstable in aqueous solution and did not remain adhered to the surfaces. Therefore, we modified the drop-cast technique and added a complementary stage of surface heating to 60 °C while dropping the modified amino acid solutions throughout the evaporation process. This process was repeated thrice using Fmoc-F5-Phe to create homogeneous, stable coatings of the surfaces (Figure 1B). This heat-driven procedure prompted the evaporation of the solvent, thereby facilitating the self-assembly process.
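As a quick check of the quantities involved in the drop-cast step, the snippet below converts the stated coverage of 30 µL/cm² into the volume deposited on one 10 mm disc; the two input values are taken from the text, and the result is only an estimate of the deposited volume.

```python
import math

disc_diameter_cm = 1.0        # 10 mm discs, as stated in the text
coverage_ul_per_cm2 = 30.0    # drop-cast coverage reported above

area_cm2 = math.pi * (disc_diameter_cm / 2.0) ** 2
volume_ul = coverage_ul_per_cm2 * area_cm2
print(f"disc area = {area_cm2:.2f} cm^2, deposited volume = {volume_ul:.1f} uL per disc")
# roughly 0.79 cm^2 and about 24 uL per disc
```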
SEM analysis was used to characterize the topography of the different disc surfaces following the modified drop-casting process and after their immersion in water heated to 37 °C to assess the coating stability. A continuous and uniform coating of Fmoc-F5-Phe nano-assemblies could be seen on all four surfaces, before and after the stability test ( Figure 1C-F and Figure S1A-D). Amorphous Fmoc-F5-Phe nano-structures were observed on the titanium disc ( Figure 1C) and similarly on the siliconized glass discs ( Figure 1D). The structures remained intact and displayed a more defined architecture of spiky fibers stemming from the center after the stability test for titanium surfaces ( Figure 1E). The same trend was observed on the siliconized glass surface ( Figure 1F). These fibrils formed 10-20 µm buds, stemming from central granulation points. The fibrils appeared as sharp spikes and seemed accentuated and elongated in the connecting area between the buds. Fmoc-F5-Phe deposited on glass formed short intertwined fibers 10-20 µm in length and 1 µm in width ( Figure S1A), and on mica, fibers 10 µm in length and 0.5 µm in width, stemming from nucleation points throughout the surface, could be observed ( Figure S1B). The structures on the glass surface maintained their morphology and dimensions after the stability test ( Figure S1C). Fmoc-F5-Phe after the stability test on mica showed elongated fibers with spikes at the edges ( Figure S1D). Fmoc-F5-Phe coating stability was validated by X-ray diffraction (XRD) analysis. Crystalline structures with a similar pattern of diffraction peaks were observed both following the coating process and after the stability test ( Figure 1G). Moreover, we studied the Boc-F5-Phe (Figure 2A,B) coating on titanium, mica, glass and siliconized glass surfaces. Boc-F5-Phe forms a network of fibrils, approximately 0.5 µm in diameter, with a high aspect ratio on all four surfaces ( Figure 2C,D and Figure S2A,B). Upon immersion in 37 °C water for the stability test, the fibrils underwent a morphological transformation into larger rod assemblies, approximately 2 µm in diameter ( Figure 2E,F and Figure S2C,D). XRD analysis showed that Boc-F5-Phe initially formed an amorphous structure with a broad, indistinct peak; after wetting, the rods demonstrated a crystalline structure with distinct sharp peaks in the XRD spectra ( Figure 2G). In addition, contact angle analysis of a water droplet was used to analyze the surface wettability of the modified surfaces ( Figure 3A-D). Hydrophobicity was significantly increased using both Fmoc-F5-Phe and Boc-F5-Phe on titanium, mica and glass surfaces compared to the unmodified surfaces ( Figure 3E). The contact angle of each surface in different areas was consistent, indicating the uniformity of the coating. The highest change in hydrophobicity was observed on the mica surface. The measured water droplet contact angles on the Fmoc-F5-Phe and Boc-F5-Phe modified mica surfaces were θ = 80° and θ = 70°, respectively. These values represent a significant increase compared to unmodified mica, where the water droplet instantly covered the entire surface and the angle could not be measured. On the titanium-modified surfaces, a 3.5-fold and over 2-fold increase in hydrophobicity was obtained using Fmoc-F5-Phe (θ = 105°) and Boc-F5-Phe (θ = 70°), respectively, compared to the unmodified surface (θ = 30°) ( Figure 3C,D).
A similar trend of increased hydrophobicity was observed on the Fmoc-F5-Phe and Boc-F5-Phe glass surfaces; however, no significant change was observed on the modified siliconized glass surface, which exhibited a high contact angle of 80°, probably due to its inherent relatively hydrophobic nature ( Figure 3E). The long-term stability of the coatings was tested in phosphate-buffered saline (PBS) and brain heart infusion (BHI) in order to assess their implementation in different areas of antibacterial treatment. The Fmoc-F5-Phe modification maintained a visible white coverage over the surface, as opposed to Boc-F5-Phe, which proved to be unstable under the conditions required for bacterial growth. Therefore, only the Fmoc-F5-Phe modification was further tested for its antibacterial properties.
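The fold-change figures quoted above follow directly from the measured contact angles; the short sketch below (Python) simply divides the modified-surface angle by the unmodified one, using the titanium values from the text as an illustrative check.

```python
# Fold change in water contact angle relative to the unmodified titanium surface,
# using the angles quoted in the text (illustrative consistency check only).
unmodified_titanium = 30.0   # degrees
coatings = {"Fmoc-F5-Phe": 105.0, "Boc-F5-Phe": 70.0}  # degrees

for name, angle in coatings.items():
    fold = angle / unmodified_titanium
    print(f"{name}: {angle:.0f} deg -> {fold:.1f}-fold increase over {unmodified_titanium:.0f} deg")
# Expected: 3.5-fold for Fmoc-F5-Phe and ~2.3-fold for Boc-F5-Phe, matching the
# "3.5-fold" and "over 2-fold" figures reported above.
```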
The antibacterial properties of the Fmoc-F5-Phe modified surfaces were analyzed using two bacterial strains. E. faecalis is a facultative, Gram-positive bacterium known as a primary etiological agent of nosocomial infections and as an inhabitant of the oral cavity, specifically in filled root canals of teeth associated with apical periodontitis [79,80]. S. mutans is a Gram-positive, facultative anaerobic bacterium and one of the primary causes of human dental caries [81], which is also associated with bacteremia and infective endocarditis [82]. E. faecalis was grown in BHI broth containing 5% sucrose under aerobic conditions and formed a biofilm after 24 h incubation at 37 °C. Standard bacterial stains such as crystal violet and MTT (3-[4,5-dimethylthiazole-2-yl]-2,5-diphenyltetrazolium bromide) were inadequate for evaluating the biofilm on the coated surfaces, since these dyes also stain the Fmoc-F5-Phe coatings, causing a very high background and no distinction between the bacteria and the coating ( Figure S3). To overcome this obstacle, the surfaces were prepared for HRSEM imaging; the samples were dried and imaged by HRSEM. These images showed that areas of the Fmoc-F5-Phe-coated sample contain a reduced amount of adhered E. faecalis bacteria in comparison to the non-coated siliconized glass ( Figure 4A,C). The areas covered with bacteria in Figure 4 are marked in green; the unmarked images are presented in Figure S4. Furthermore, the biofilm of S. mutans was grown under anaerobic conditions in BHI broth containing 5% sucrose and bacitracin, an antibiotic to which S. mutans is resistant, for 4 h at 37 °C. Modified and unmodified surfaces were then placed in the S. mutans suspension for an additional 24 h. HRSEM analysis demonstrated the presence of adhered S. mutans bacteria on the coated siliconized glass surfaces ( Figure 4B,D).
Next, a quantitative analysis of the biofilm viability of both E. faecalis and S. mutans on the modified surfaces compared to the non-modified surfaces was performed ( Figure 5). All four surfaces coated with Fmoc-F5-Phe showed a significant reduction in E. faecalis viability, as indicated by attenuated ATP production, when compared and normalized to their corresponding non-modified surfaces ( Figure 5A). A substantial reduction of approximately 94% was observed on the mica and glass surfaces. The prominent reduction observed on the mica surface may be correlated to the significant change in surface wettability from extremely hydrophilic to hydrophobic. A significant reduction of approximately 86% and 87% in bacterial viability was also demonstrated on titanium and siliconized glass, respectively. Notably, the biofilm of S. mutans also demonstrated a significant reduction of approximately 99% in ATP production on all four surfaces, highlighting the substantial antibacterial activity of the coating ( Figure 5B). The two complementary methods used to study the bacterial interaction with the Fmoc-F5-Phe-coated surfaces enabled a comprehensive understanding of how E. faecalis and S. mutans interact with the surfaces: qualitative electron microscopy based on direct visualization of the bacteria on the surfaces, together with ATP quantification for assessing antibacterial properties. The results demonstrate that the adherence of the bacteria is not prevented by the nano-structure coatings; however, the viability of the adhered bacteria is significantly reduced, as demonstrated in the luminescence ATP assay.
In conclusion, we studied two phenylalanine-modified building blocks, Boc-F5-Phe and Fmoc-F5-Phe, for various ordered-structure surface coatings. Although both building blocks' coatings increased the hydrophobicity of the titanium, mica, glass and siliconized glass surfaces, only the Fmoc-F5-Phe coating was stable and durable in aqueous solution. We further incubated the Fmoc-F5-Phe-coated surfaces with two bacterial strains under both aerobic and anaerobic conditions, demonstrating reduced E. faecalis adhesion and unchanged S. mutans adhesion. Notably, both bacterial strains showed a significant reduction in viability in the luminescence ATP assay. In summary, we demonstrated the utilization of a simple self-assembling building block to form ordered structures for various surface coatings, demonstrating durability and antibacterial properties that can be useful in future biomedical applications.
Coating Preparation Using the Drop-Cast Method
Prior to surface modification using the two modified amino acids, all discs were cleaned by sonication in 100% ethanol. The discs were then heated using a hot plate set to 60 °C. Amino acid solution (30 µL) was dropped onto the pre-heated substrate. The discs were heated for 30 min to allow solvent evaporation and self-assembly of the amino acids into nano-structures. Coatings were applied twice more using the same protocol.
Scanning Electron Microscopy (SEM)
Samples were dried under vacuum and then coated with a thin gold layer and viewed by SEM (JEOL, JSM-IT100 InTouchScope), or viewed by HRSEM (Zeiss, GeminiSEM 3000) with no additional coating.
Contact Angle Measurements
The static water contact angle measurements on the discs were performed using the Ramé-Hart goniometer with the Drop Image Advance analysis software. A 2 µL drop of DDW was deposited and the average contact angle was calculated from five measurements on each disc surface. The measurements were conducted at room temperature.
X-ray Diffraction (XRD)
Fmoc-F5-Phe and Boc-F5-Phe samples were prepared by the drop-cast method with heat-assisted adhesion on 20 × 20 mm glass slides and then analyzed by wide-angle XRD. The XRD patterns were collected using a Bruker D8 Discover diffractometer; the set-up used a θ:θ Bragg-Brentano geometry, a copper anode source and a LYNXEYE XE linear detector. The diffraction patterns were collected between 4° and 40° 2θ with a step of 0.02° 2θ and a counting time of 1 s per step.
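As a quick back-of-the-envelope check of these acquisition parameters, the number of measured points and the nominal scan time implied by the 4-40° 2θ range, 0.02° step and 1 s per step can be computed as below; the sketch ignores any detector overhead, which is an assumption on my part.

```python
# Number of XRD data points and nominal scan time for a 4-40 degree 2-theta
# range with a 0.02 degree step and 1 s counting time per step.
start, stop, step = 4.0, 40.0, 0.02   # degrees 2-theta
time_per_step = 1.0                    # seconds

n_points = int(round((stop - start) / step)) + 1   # include both endpoints
total_time_s = n_points * time_per_step

print(f"{n_points} points, ~{total_time_s / 60:.0f} min per pattern")
# ~1801 points and roughly 30 minutes per diffraction pattern.
```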
Bacterial Strains and Growth Conditions
S. mutans (ATCC 35668) stored at −80 °C was transferred into BHI broth (Difco Brain Heart Infusion, 241830) supplemented with 5% sucrose (SIGMA-ALDRICH, sucrose for microbiology, ACS reagent, ≥99.0%, 84100) and Bacitracin (0.5 unit/mL) using an inoculating loop. The suspension was incubated at 37 °C in an air-tight container under anaerobic conditions using an anaerobe container system sachet (BD GasPak EZ anaerobe container system sachet, 260678). After four hours, the suspension was transferred to cover the samples in a 24-well plate and incubated for an additional 24 h under anaerobic conditions. E. faecalis (ATCC 29212) stored at −80 °C was transferred into BHI broth (Difco Brain Heart Infusion, 241830) supplemented with 5% sucrose (SIGMA-ALDRICH, sucrose for microbiology, ACS reagent, ≥99.0%, 84100) using an inoculating loop. The suspension was incubated at 37 °C overnight under aerobic conditions and then transferred to cover the samples in a 24-well plate and incubated for an additional 24 h at 37 °C under aerobic conditions.
Evaluation of Antibacterial Properties Using Luminescence Assay
Quantification of microbial cell viability on the tested surfaces was based on bacterial ATP production which is used to convert beetle luciferin to oxyluciferin and light using the BacTiter Glo Microbial viability assay kit (Promega, G8231). The light emitted was quantified by Turner Biosystems Veritas Luminometer and Glomax 96 program and normalized to the corresponding non-coated surfaces treated with the same bacteria. All discs were treated with bacteria in growth medium, allowing for optimal conditions for bacterial growth and biofilm formation on the surfaces. After the allotted time, each surface was washed in PBS and transferred to a clean plate and immersed in 1.5 mL of fresh PBS. The quantitative assay was performed immediately.
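The normalization described here, relating the luminescence signal of a coated disc to that of its matched non-coated control, amounts to a simple ratio; the sketch below illustrates the calculation with hypothetical readings (the values and variable names are illustrative, not measured data).

```python
# Normalize luminescence (ATP) readings of coated discs to their matched
# non-coated controls and express the result as a percentage viability reduction.
def percent_reduction(coated_rlu: float, control_rlu: float) -> float:
    """Viability reduction (%) of the coated surface relative to its control."""
    return 100.0 * (1.0 - coated_rlu / control_rlu)

# Hypothetical relative light units (RLU), for illustration only.
readings = {"titanium": (1400.0, 10000.0), "mica": (600.0, 10000.0)}
for surface, (coated, control) in readings.items():
    print(f"{surface}: {percent_reduction(coated, control):.0f}% reduction")
```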
Conclusions
In conclusion, we have demonstrated uniform, stable surface coatings of nano-structures formed by the self-assembly of simple building blocks. The characterization of these coatings by SEM, surface wettability and XRD provides an insight into their possible applications. The Fmoc-F5-Phe nano-structure coating caused a clear reduction in the metabolic activity of E. faecalis and S. mutans, as reflected by reduced ATP production, under aerobic and anaerobic conditions, respectively. In addition, the coating slightly reduced the adherence of E. faecalis and S. mutans to the surface. Therefore, the antibacterial properties of the Fmoc-F5-Phe structures that were previously demonstrated when incorporated into dental resin [74] are retained when drop-cast to form a thin antibacterial surface coating. This coating could potentially be used to fabricate dental appliances such as those used in the fields of orthodontics and prosthodontics, to reduce the incidence of dental caries and thus improve oral health. The coatings showed a high durability in an aqueous environment, which is of utmost importance for such applications.
"Environmental Science",
"Materials Science",
"Medicine"
] |
Inhibition of renal cell carcinoma angiogenesis and growth by antisense oligonucleotides targeting vascular endothelial growth factor
Angiogenesis is critical for growth and metastatic spread of solid tumours. It is tightly controlled by specific regulatory factors. Vascular endothelial growth factor has been implicated as the key factor in tumour angiogenesis. In the present studies we evaluated the effects of blocking vascular endothelial growth factor production by antisense phosphorothioate oligodeoxynucleotides on the growth and angiogenic activity of a pre-clinical model of renal cell carcinoma (Caki-1). In vitro studies showed that treating Caki-1 cells with antisense phosphorothioate oligodeoxynucleotides directed against vascular endothelial growth factor mRNA led to a reduction in expressed vascular endothelial growth factor levels sufficient to impair the proliferation and migration of co-cultured endothelial cells. The observed effects were antisense sequence specific, dose dependent, and could be achieved at a low, non-toxic concentration of phosphorothioate oligodeoxynucleotides. When vascular endothelial growth factor antisense treated Caki-1 cells were injected into nude mice and evaluated for their angiogenic potential, the number of vessels initiated were approximately half that induced by untreated Caki-1 cells. To test the anti-tumour efficacy of vascular endothelial growth factor antisense, phosphorothioate oligodeoxynucleotides were administrated to nude mice bearing macroscopic Caki-1 xenografts. The results showed that the systemic administration of two doses of vascular endothelial growth factor antisense phosphorothioate oligodeoxynucleotides given 1 and 4 days after the tumours reached a size of ∼200 mm3 significantly increased the time for tumours to grow to 1000 mm3. British Journal of Cancer (2002) 87, 119–126. doi:10.1038/sj.bjc.6600416 www.bjcancer.com © 2002 Cancer Research UK
Vascular endothelial growth factor is an endothelial cell specific mitogen, secreted as a 45 kDa homodimeric protein. There are five human isoforms derived from alternative splicing (VEGF121, VEGF145, VEGF165, VEGF189 and VEGF206) (Houck et al, 1991). VEGF121 and VEGF165 are the only soluble isoforms and also the most abundant, with VEGF165 being the most powerful stimulator of endothelial cell proliferation (Soker et al, 1997). VEGF165 is commonly expressed in a wide variety of human and animal tumours (Hanahan and Folkman, 1996) and has been shown to induce angiogenesis both in vitro and in vivo (Leung et al, 1989; Plate et al, 1992). It is currently believed that this diffusible molecule is probably a key mediator of tumour angiogenesis (Ferrara, 1999). Indeed, the expression of VEGF has been related to fundamental features of tumours, such as growth rate (Kim et al, 1993), microvessel density (Toi et al, 1994) and vascular architecture (Drake, 1999), as well as the development of tumour metastasis (Weidner et al, 1991). A correlation between VEGF expression and survival has been noted in some cancer patients (Maeda et al, 1996; Gasparini et al, 1997).
In light of its role in tumour angiogenesis, VEGF may be an attractive target for anti-angiogenic therapeutic interventions applied to the treatment of cancer. Renal cell carcinoma (RCC) may be an excellent site to investigate VEGF-targeted anti-angiogenic therapies. RCC is the most common malignancy of the kidney in adults and accounts for about 2% of all adult malignancies (McLaughlin and Lipworth, 2000). Histopathologic evaluations of RCC reveal it to be a highly vascularised neoplasm demonstrating clear evidence of abundant angiogenesis and abnormal blood vessel development (Yoshino et al, 2000). Not surprisingly, several studies have pointed to an important role for pro-angiogenic growth factors in RCC. VEGF has been shown to be expressed in renal cell carcinoma tissues and renal cell carcinoma cell lines (Sato et al, 1994; Wang and Becker, 1997; Paradis et al, 2000). Serum levels of VEGF often are elevated in RCC patients, and VEGF mRNA levels in renal cell carcinoma have been reported to be higher than those found in surrounding normal tissues (Berger et al, 1995). In addition, elevated serum/urine VEGF levels have been shown to be associated with malignant progression and poor treatment outcome (Sato et al, 1994; Baccala et al, 1998; Tomisawa et al, 1999; Ishizuka et al, 2000; Jacobsen et al, 2000). Taken together, these findings suggest that VEGF is one of the important factors involved in the angiogenesis of RCC.
In the present study we evaluated the anti-angiogenic and antitumour effects of VEGF antisense phosphorothioate oligodeoxynucleotides (PS-ODNs) in a pre-clinical model of human RCC (Caki-1).
Caki-1 xenografts
Female nude mice (NCR, nu/nu), aged 6-8 weeks, were maintained under specific-pathogen-free conditions (University of Florida Health Science Centre) with food and water supplied ad libitum. Animals were inoculated subcutaneously in a single flank with 5×10⁶ tumour cells. When the tumours reached a size of ~200 mm³, animals were randomly assigned to the different treatment groups. All animal experiments have been carried out with ethical committee approval. The ethical guidelines that were followed meet the standards required by the Cancer Research UK guidelines (Workman et al, 1998).
DOTAP : DOPE liposome preparation
Cationic liposomes were prepared using the method described by Tang and Hughes (1999). Briefly, the cationic lipid 1,2-dioleoyloxy-3-(trimethylammonium) propane (DOTAP) was dissolved in chloroform and mixed with the helper lipid 1,2-dioleoyl-3-sn-phosphatidylethanolamine (DOPE) (Avanti Polar-Lipids, Alabaster, AL, USA) at a molar ratio of 1 : 1. The mixture was evaporated to dryness in a round-bottomed flask using a rotary evaporator at room temperature. The resulting lipid film was dried under nitrogen for an additional 10 min to evaporate any residual chloroform. The lipid film was re-suspended in sterile water to a final concentration of 1 mg ml⁻¹ based on the weight of cationic lipid. The resultant mixtures were shaken in a water bath at 35°C for 30 min. The suspensions were then sonicated using a Sonic Dismembrator (Fisher Scientific, Pittsburgh, PA, USA) for 1 min at room temperature to form homogenised liposomes. The particle-size distribution of the liposomes was measured using a NICOMP 380 ZLS instrument (Santa Barbara, CA, USA). The average particle diameter was 144.0 ± 77.0 nm. Liposomes were stored at 4°C and used within 3 months.
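Because the two lipids are mixed at a 1 : 1 molar ratio but handled as masses, converting between the two requires their molar masses. The sketch below uses approximate literature values (≈698.6 g/mol for the DOTAP chloride salt and ≈744.0 g/mol for DOPE); these molar masses are my assumptions, not values taken from the paper, so the output is only an estimate of how much DOPE accompanies a given mass of DOTAP.

```python
# Approximate mass of DOPE required for a 1:1 molar mixture with DOTAP.
# Molar masses are approximate literature values (assumed, not from the paper).
MW_DOTAP = 698.6   # g/mol, DOTAP chloride salt (approximate)
MW_DOPE = 744.0    # g/mol (approximate)

dotap_mass_mg = 1.0                      # e.g. enough for 1 ml of 1 mg/ml liposomes
dotap_mmol = dotap_mass_mg / MW_DOTAP    # millimoles of DOTAP
dope_mass_mg = dotap_mmol * MW_DOPE      # equal moles of DOPE

print(f"{dotap_mass_mg:.2f} mg DOTAP pairs with ~{dope_mass_mg:.2f} mg DOPE at a 1:1 molar ratio")
```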
VEGF enzyme immunoassay
Caki-1 cells (1×10⁵) were set in 60 mm dishes and allowed to attach overnight. The medium was then removed and replaced with PS-ODNs in serum-free medium with liposome (DOTAP : DOPE) and incubated for 5 h. Fresh medium containing 10% FBS was then added. After 24 h of incubation, the VEGF concentration in the medium was determined using a human VEGF immunoassay kit (R & D Systems, Minneapolis, MN, USA).
VEGF relative quantitative RT -PCR
Caki-1 cells were set at 3×10⁵ in 100 mm dishes and allowed to attach overnight. The cells were then treated with 1 μM VEGF antisense or control PS-ODNs as described. Twenty-four hours later the cells were collected, the total RNA was isolated using an RNeasy Mini Kit (Qiagen, Valencia, CA, USA), and RNA concentrations were determined by UV spectrophotometry. A 2 μg total RNA sample was used to reverse transcribe cDNA using Superscript II reverse transcriptase (Invitrogen, Grand Island, NY, USA). A 2.5 μl aliquot of the reverse transcriptase reaction product was then used for the PCR reaction. VEGF PCR reactions were carried out with a VEGF gene specific relative RT-PCR Kit (Ambion, Austin, TX, USA). The PCR reactions were run for 22 cycles (denature 94°C 30 s, anneal 60°C 60 s, extension 72°C 60 s) in a DNA Engine 200 (MJ Research, Waltham, MA, USA). PCR products were then run in a 2% agarose gel and stained with ethidium bromide. The gels were visualised and analysed (Gel Doc 2000 gel documentation system, Bio-Rad, Hercules, CA, USA).
Co-culture assay
Transwell (Corning, Corning, NY, USA) 6-well dishes with a membrane pore size of 0.4 μm were used. Caki-1 cells were seeded at 5×10⁴ in the transwell inserts and MHE or HMVEC-L cells were plated at 5×10⁴ per well in the 6-well dishes and allowed to attach overnight. The Caki-1 cell medium was then replaced with serum-free medium containing 1 μM V515 PS-ODNs or control PS-ODNs liposome complexes (DOTAP : DOPE). After 5 h of treatment, medium containing 10% heat-inactivated FBS was added to yield a final FBS concentration of 2.5%. The transwells containing treated Caki-1 cells were assembled with 6-well dishes containing MHE and HMVEC-L cells and incubated at 37°C for 48 h, at which time the numbers of MHE or HMVEC-L cells were determined by haemocytometer count.
Migration assay
Caki-1 cells were set at 1×10⁵ per well in 24-well dishes and allowed to attach overnight. The Caki-1 cells were then treated with 1 μM control or V515 PS-ODNs for 24 h. HTS FluoroBlok inserts (Becton Dickinson, Franklin Lakes, NJ, USA) with a pore size of 8.0 μm were assembled into the 24-well dishes with the Caki-1 cells. MHE or HMVEC-L cells were grown in T-150 flasks to about 80% confluence. The endothelial cells were stained in medium containing 10 μg ml⁻¹ Di-I (Molecular Probes, Eugene, OR, USA) for 24 h, washed four times with PBS, collected, added into the FluoroBlok inserts (5×10⁴ MHE or HMVEC-L) and incubated for 24 h. The number of migrated endothelial cells was then determined by direct measurement of the fluorescence in the bottom well using a CytoFluor 4000 plate reader (Perceptive BioSystems, St. Paul, MN, USA).
Intradermal angiogenesis assay
Caki-1 cells (5×10⁴) were inoculated intradermally in a volume of 10 μl at four sites on the ventral surface of nude mice. One drop of 0.4% trypan blue was added to the cell suspension, making it lightly coloured and simplifying subsequent location of the sites of injection. Three days later the mice were killed, the skin was carefully separated from the underlying muscle and the number of vessels was counted using a dissecting microscope (Sidky and Auerbach, 1976). Scoring of all of the reaction areas was carried out at the same magnification (×5) and only vessels readily detected at this magnification were counted. The sites of injection, recognised by local swelling and blue staining, were exposed by carefully removing fat or other tissue covering the area. All vessels that touched the edge of the tumour inoculates were counted. All the animals in the experiments were pre-coded and vessel counts in each animal were scored twice. The resultant data points for each treatment group were pooled for statistical analysis (Wilcoxon rank sum test).
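The vessel counts from the different treatment groups are compared with the Wilcoxon rank sum test; a minimal illustration of that comparison using SciPy is sketched below, with made-up vessel counts standing in for the real data.

```python
# Wilcoxon rank sum comparison of vessel counts between two treatment groups,
# as used for the intradermal angiogenesis assay.
from scipy.stats import ranksums

# Illustrative vessel counts per injection site (not the actual study data).
control_counts = [44, 46, 45, 43, 47, 44, 46, 45]
v515_counts = [25, 27, 24, 26, 23, 25, 28, 24]

stat, p_value = ranksums(control_counts, v515_counts)
print(f"rank-sum statistic = {stat:.2f}, P = {p_value:.4f}")
```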
PS-ODNs up-take in tumour
FITC-labelled control PS-ODNs were mixed with DOTAP : DOPE in 200 μl of 5% dextrose and injected into Caki-1 xenograft-bearing mice via the tail vein at a dose of 20 mg kg⁻¹. Three hours later, the mice were killed by CO₂ asphyxiation, the tumours were removed and frozen in liquid nitrogen, and 20 μm sections were cut. The sections were photographed using a Zeiss Axioplan 2 Fluorescence Microscope (Zeiss, Thornwood, NY, USA) within 3 days.
VEGF Western blot
VEGF antisense PS-ODNs V515 were injected via the tail vein at a dose of 10 mg kg⁻¹. At various times after injection (24, 48 and 72 h), the mice were killed, the tumours excised and frozen in liquid nitrogen. The tumours were then homogenised (Dounce tissue grinder, Wheaton, Millville, NJ, USA) and the homogenates were lysed on ice for 30 min with 1 ml of hypotonic buffer (20 mM Tris-HCl, pH 6.8, 1 mM MgCl₂, 2 mM EGTA, 0.5% Nonidet P-40, 2 mM phenylmethanesulphonyl fluoride (PMSF), 200 U ml⁻¹ aprotinin, 2 μg ml⁻¹ leupeptin) (Giannakakou et al, 1998) per 0.1 g of tissue. Following a brief but vigorous vortex, the samples were centrifuged at 14 000 r.p.m. for 10 min at 4°C. A 30 μl aliquot of each sample was mixed with 10 μl of 4× SDS-PAGE sample buffer (0.3 M Tris-HCl, pH 6.8, 45% glycerol, 20% β-mercaptoethanol, 9.2% SDS and 0.04 g per 100 ml bromophenol blue) and heated at 100°C for 10 min. Thirty μl of each sample was then analysed by SDS-PAGE on a 12% separating gel and 3% stacking gel. Following transfer, the membrane was immunoblotted using a VEGF primary antibody (Sigma, Saint Louis, MO, USA) diluted 1 : 1000 in antibody solution (3% dry milk, 25 mM Tris, pH 7.5, 0.5 M NaCl, 0.05% Tween 20) overnight at 4°C. After washing, a secondary antibody labelled with horseradish peroxidase was applied and incubated at room temperature for 1 h. Protein bands were visualised and densitometry was performed.
Tumour growth delay assay
Once the Caki-1 xenografts reached a size of ~200 mm³, animals were assigned randomly to the various treatment groups. V515 or control PS-ODNs were administered via the tail vein with DOTAP : DOPE liposomes at a dose of 5 mg kg⁻¹ or 10 mg kg⁻¹ on day 1 and day 4. Tumours were measured using callipers and volumes were approximated by the formula volume = (1/6)πab², where a and b represent two perpendicular tumour diameters. The times for the tumours in the various treatment groups to grow from 200 mm³ to 1000 mm³ were recorded and compared (Wilcoxon rank sum test).
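The volume approximation and the growth-delay endpoint can be written directly in code; the following sketch implements the stated formula and flags when a tumour has passed the 1000 mm³ endpoint. The calliper measurements in the example are hypothetical, chosen only to illustrate the calculation.

```python
import math

def tumour_volume(a_mm: float, b_mm: float) -> float:
    """Approximate tumour volume (mm^3) from two perpendicular diameters: V = (1/6)*pi*a*b^2."""
    return math.pi * a_mm * b_mm ** 2 / 6.0

# Hypothetical calliper measurements (mm) for one tumour over time.
measurements = [(7.3, 7.3), (10.0, 9.5), (13.5, 12.5)]
for a, b in measurements:
    v = tumour_volume(a, b)
    status = "endpoint reached" if v >= 1000.0 else "below 1000 mm^3"
    print(f"a = {a} mm, b = {b} mm -> V = {v:.0f} mm^3 ({status})")
# The first pair corresponds to roughly the ~200 mm^3 starting size.
```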
RESULTS
The ability to down-regulate VEGF expression by antisense PS-ODNs treatment in Caki-1 tumour cells was first evaluated in vitro. The results showed that after 24 h of treatment with 1 μM VEGF antisense PS-ODNs (V515) delivered by cationic liposome (DOTAP : DOPE), the medium VEGF level was significantly reduced to ~35% of that found in the control untreated group (from a normal of 850 pg ml⁻¹ per 10⁶ cells to 300 pg ml⁻¹ per 10⁶ cells) (Figure 1). This effect was sequence and target region specific. Treating Caki-1 cells with liposome vehicles (DOTAP : DOPE) or control scramble PS-ODNs did not affect VEGF levels. Similarly, treatment with sense or inverted sequence PS-ODNs failed to reduce VEGF expression. PS-ODNs treatment did not affect Caki-1 cell viability and proliferation. This repression of VEGF expression by V515 was dose dependent (Figure 2). For example, a 24 h treatment with 0.5 μM reduced the medium VEGF level to 56% of control (P < 0.01), whereas a 1 μM dose down-regulated the VEGF level to 22% of control (P < 0.05). VEGF mRNA levels in the different PS-ODNs treatment groups were also determined ( Figure 3). The results indicated a marked inhibition of VEGF mRNA after treatment with V515 which was absent in cells treated with scramble PS-ODNs.
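The dose-response figures quoted here are percentages of the untreated control, i.e., the treated level divided by the control level; a small sketch of that calculation with the reported medium VEGF concentrations follows.

```python
# Express the treated-group VEGF level as a percentage of the untreated control,
# using the medium concentrations quoted in the text (pg/ml per 10^6 cells).
control_vegf = 850.0
treated_vegf = 300.0   # after 24 h of V515 treatment

percent_of_control = 100.0 * treated_vegf / control_vegf
print(f"VEGF level is {percent_of_control:.0f}% of control "
      f"({100.0 - percent_of_control:.0f}% reduction)")
# ~35% of control, i.e. roughly a 65% reduction, matching the figure in the text.
```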
Since the goal of VEGF antisense therapy is to inhibit cancer cell induced angiogenic signalling, experiments were then designed to evaluate the impact of reducing tumour cell expression of VEGF by antisense PS-ODNs treatment on endothelial cell growth and migration. Because the ultimate goal was to examine the efficacy of VEGF antisense treatment in a human tumour model grown in a mouse host, we evaluated the effect on both human (HMVEC-L) and mouse (MHE) endothelial cells. Transwell co-culture systems were used to mimic the in vivo paracrine interaction between tumour cells and endothelial cells. Caki-1 tumour cells were grown in transwells with 0.4 μm membrane pores. These were chosen to allow the exchange of growth factors but without direct cell-cell interactions. The effects of pre-treating Caki-1 tumour cells with VEGF antisense PS-ODNs on endothelial cell proliferation were then determined (Figure 4). The results showed that, compared to untreated Caki-1 cells, Caki-1 cells pre-treated with V515 significantly inhibited both HMVEC-L and MHE cell proliferation. Once again, treating Caki-1 cells with a variety of control PS-ODNs had no effect on HMVEC-L or MHE cell growth.
To test whether a reduction in VEGF expression by tumour cells could affect endothelial cell migration, HMVEC-L or MHE cells were stained with 10 μg ml⁻¹ Di-I for 24 h and added into FluoroBlok inserts placed into 24-well dishes containing Caki-1 tumour cells treated with either control or V515. The number of pre-labelled endothelial cells which migrated through the 8 μm pore size membranes in a 24 h period was quantified by determining the fluorescence intensity in the bottom well. The results showed ( Figure 5) that 24 h after co-culturing the two cell populations, ~35% (P < 0.05) and 27% (P < 0.05) fewer MHE or HMVEC-L cells, respectively, migrated through the membrane in the presence of V515 treated Caki-1 cells compared to untreated or scramble control PS-ODNs treated Caki-1 cells. Although the in vitro studies indicated that treating Caki-1 tumour cells with VEGF mRNA targeted antisense PS-ODNs down-regulated VEGF protein production sufficiently to affect the proliferation and migration of endothelial cells, it was important to demonstrate that such treatments also could affect Caki-1 cell induction of angiogenesis in vivo. To examine this possibility, Caki-1 cells that had been treated with V515 or control PS-ODNs were injected intradermally and the number of vessels induced was counted 3 days later ( Figure 6). While untreated Caki-1 cells and control PS-ODNs treated Caki-1 cells had very similar angiogenic potency in vivo (both groups induced ~44-46 new vessels in the assay period), the angiogenic potential of Caki-1 cells that had been pre-treated with V515 antisense PS-ODNs was found to be significantly impaired; only ~25 new blood vessels were observed.
To evaluate the tumour uptake of PS-ODNs delivered by cationic liposome (DOTAP : DOPE), FITC-labelled PS-ODNs were mixed with cationic liposome DOTAP : DOPE in 5% dextrose and injected via the tail vein at a dose of 20 mg kg⁻¹ into Caki-1 xenograft-bearing nude mice. Frozen sections prepared 3 h later showed the FITC-labelled PS-ODNs to be efficiently delivered to the tumour (Figure 7). The heterogeneous PS-ODNs uptake likely reflects the inhomogeneous distribution of blood vessels in the tumour. To further confirm the antisense effect of V515 in vivo, tumour samples were collected at various times after V515 injection. Western blot analysis of these samples showed significant reductions in VEGF levels at 24, 48 and 72 h, with the maximum depression occurring 48 h after treatment ( Figure 8). These findings clearly indicate that delivery of the VEGF antisense to the tumour results in a reduction in VEGF expression levels.
Subsequent experiments were undertaken to determine the anti-tumour efficacy of V515 antisense PS-ODNs when delivered systemically, by examining the effect of such treatments on Caki-1 tumour growth. Caki-1 xenograft-bearing mice were treated with two doses of VEGF antisense PS-ODNs V515 (5 or 10 mg kg⁻¹) 1 and 4 days after the tumours reached a size of ~200 mm³. The time for the tumours to grow from 200 mm³ to 1000 mm³ was then recorded (Figure 9). The data showed that the median time for the tumours to grow to five times the starting size was significantly prolonged in the V515 antisense PS-ODNs treated groups. Administering two doses of 5 mg kg⁻¹ caused a growth delay of ~5.5 days (P < 0.05, Wilcoxon rank sum test), while treatment with two 10 mg kg⁻¹ doses led to a tumour growth delay of ~8 days. The latter treatment therefore approximately doubled the response of the tumours compared to the tumours of untreated or scramble PS-ODNs treated mice. No toxicity of such antisense PS-ODNs treatment was observed. This included no significant weight loss, no abnormal movements or behaviour and no loose stools.
DISCUSSION
Anti-angiogenesis treatment strategies represent a new approach to cancer management. Given that solid tumours cannot progress effectively without the generation of new blood vessels, various tacks have been taken to interfere with tumour angiogenesis. One possible target which has received considerable attention is the pro-angiogenic factor VEGF. VEGF can induce endothelial cell proliferation and migration in vitro (Hanahan and Folkman, 1996; Soker et al, 1997) and angiogenesis in vivo (Leung et al, 1989; Plate et al, 1992). Its expression level has been associated with a variety of tumours and correlated to treatment outcome (Maeda et al, 1996; Gasparini et al, 1997). To date, attempts to abrogate the angiogenic activity of VEGF have focused on inactivating VEGF through the use of antibodies (Kim et al, 1993; Mordenti et al, 1999) and soluble receptors (Lin et al, 1998), inhibiting VEGF receptor tyrosine kinases (Hennequin et al, 1999) or suppressing the VEGF message (Ellis et al, 1996; Smyth et al, 1997; Nguyen et al, 1998). The latter approach relies on antisense oligonucleotides or antisense RNA (Eguchi et al, 1991; Mercola and Cohen, 1995; Phillips and Zhang, 2000) to modulate gene expression by disrupting RNA expression. While the inhibition of VEGF expression by vector-mediated gene transfer of antisense RNA has been shown to lead to growth delays in several tumour models (Folkman and Shing, 1992; Belletti et al, 1999; Kang et al, 2000; Nakashima et al, 2000), the only reports of in vivo efficacy when using VEGF antisense oligonucleotides occurred in VEGF dependent tumour models (Masood et al, 1997, 2001). In the present studies, we used a VEGF independent tumour model of RCC (VEGF-R negative) to evaluate the in vitro and in vivo efficacy of a different VEGF antisense PS-ODNs sequence (V515). Antisense oligodeoxynucleotide technology provides an approach for inhibiting gene expression with target specificity as a particular advantage (Stein and Cheng, 1993; Engelhard, 1998). Antisense oligonucleotides are easy to produce in large quantities, which makes them potentially more practical than antisense RNA vector delivery approaches.
In the present study, we investigated the anti-angiogenic and anti-tumour effects of VEGF antisense PS-ODNs in a VEGF independent tumour model of RCC. In vitro experiments showed that the inhibition of VEGF production in Caki-1 tumour cells by antisense PS-ODNs treatment significantly reduced the ability of co-cultured endothelial cells to proliferate ( Figure 4) and migrate ( Figure 5). To minimise interference from other growth factors in the serum, these studies were carried out under reduced serum conditions. Importantly, suppressing tumour cell expression of the VEGF message impaired angiogenic responses in both human and mouse endothelial cells. Subsequent in vivo studies demonstrated that treating Caki-1 tumour cells with VEGF antisense PS-ODNs significantly impaired their ability to elicit an angiogenic response when injected intradermally ( Figure 6). These results not only support the role of VEGF as an important pro-angiogenic growth factor in Caki-1 cell induced angiogenesis, but also clearly suggest that inhibition of cancer cell VEGF expression may ultimately impact tumour growth. This belief was borne out in experiments evaluating the in vivo anti-tumour efficacy of systemic administration of VEGF antisense PS-ODNs, which showed that substantial tumour growth delays could be achieved with such treatment (Figure 9).
The results of this study indicate a key role for the VEGF signalling pathway in renal cell carcinoma angiogenesis. Treatment with VEGF antisense PS-ODNs to down-regulate VEGF production was found to be effective at impairing Caki-1 angiogenic signalling both in vitro and in vivo. Most importantly, the systemic administration of VEGF antisense PS-ODNs to mice bearing macroscopic tumours resulted in significant inhibition of the growth of this VEGF independent RCC tumour model. Taken together, these findings suggest that antisense PS-ODNs targeted to VEGF may have utility in the management of renal cell carcinoma, either alone or in conjunction with conventional anti-cancer therapies.
ACKNOWLEDGEMENTS
This work was supported by USPNS grant CA89655. Figure 9 (A) Growth of median RCC Caki-1 tumours in nude mice treated systemically with antisense PS-ODNs against VEGF. Mice were untreated, treated with 10 mg kg⁻¹ scramble PS-ODNs, 5 mg kg⁻¹ VEGF antisense V515, or 10 mg kg⁻¹ VEGF antisense V515 at the times indicated by arrows. Each group consisted of 10 animals. (B) Tumour response of Caki-1 xenografts treated systemically with antisense PS-ODNs targeted to VEGF mRNA. Scramble control (Scramble) or VEGF antisense (V515) PS-ODNs were administered with cationic liposome (DOTAP : DOPE) via the tail vein 1 and 4 days after the tumours reached a size of ~200 mm³. Liposome administration alone had no effect on Caki-1 tumour growth (data not shown). Each circle represents a single tumour; the bars show the response of the median tumour in each group of 10 mice. The stars indicate significant differences (P < 0.05, Wilcoxon rank sum test) from the untreated and scramble control groups.
"Biology",
"Medicine"
] |
Characterizing and Predicting Social Correction on Twitter
Online misinformation has been a serious threat to public health and society. Social media users are known to reply to misinformation posts with counter-misinformation messages, which have been shown to be effective in curbing the spread of misinformation. This is called social correction. However, the characteristics of tweets that attract social correction versus those that do not remain unknown. To close the gap, we focus on answering the following two research questions: (1) "Given a tweet, will it be countered by other users?", and (2) "If yes, what will be the magnitude of countering it?". This exploration will help develop mechanisms to guide users' misinformation correction efforts and to measure disparity across users who get corrected. In this work, we first create a novel dataset with 690,047 pairs of misinformation tweets and counter-misinformation replies. Then, stratified analysis of tweet linguistic and engagement features as well as tweet posters' user attributes is conducted to illustrate the factors that are significant in determining whether a tweet will get countered. Finally, predictive classifiers are created to predict the likelihood of a misinformation tweet to get countered and the degree to which that tweet will be countered. The code and data are accessible at https://github.com/claws-lab/social-correction-twitter.
INTRODUCTION
Online misinformation leads to societal harm, including diminishing trust in vaccines and health policies [6,50], damaging the wellbeing of users consuming misinformation [36,64], encouraging violence and harassment [5,61], and posing a danger to democratic processes and elections [58-60]. The problem has been exacerbated during the COVID-19 pandemic [41,57]; particularly, COVID-19 vaccine misinformation, including false claims that the vaccine causes infertility, contains microchips, and even changes one's DNA and genes, has fueled vaccine hesitancy and reduced vaccine uptake [57]. Therefore, it is crucial to restrain the spread of online misinformation [37,41]. In this work, we use a broad definition of misinformation which contains rumors, falsehoods, inaccuracies, decontextualized truths, or misleading leaps of logic [36,69].
To combat misinformation, various countermeasures have been developed [41,43,66]. Recent work has shown that ordinary users of online platforms play a crucial role in countering misinformation.
According to the research study by Micallef et al. [41], the vast majority (96%) of online counter-misinformation responses are made by ordinary users, with the remainder being made by professionals such as fact-checkers and journalists. While fact-checking from these professionals has been widely used due to its prominent and measurable impact [41,66], this process typically does not involve engaging with the actors spreading misinformation. Instead, the ordinary users' counter-misinformation efforts complement those from professional fact-checkers by directly engaging in countering conversations through making independent posts or direct replies to misinformation posts made by others [26].
Countering of misinformation messages via direct replies from ordinary users is called social correction [7,35]. One real example is shown in Figure 1. Notably, social correction has been shown to be effective in curbing the spread of misinformation [13,67], as well as doing so without causing increases in misperception [20,62,68]. While certainly not a panacea for convincing people to reconsider potentially misinformative beliefs, such replies are most effective at reducing the misperceptions of those who may consume the misinformation [7,8,13,55].
However, little is known about the characteristics of misinformation tweets that attract social correction. Developing this understanding has several advantages: (1) first, it can help identify inequities in misinformation correction. For example, comparison of correction across users or communities (e.g., political ideologies) can reveal whether certain user types/communities are less likely to be self-correcting, e.g., communities where users correct misinformation when they see it. Identifying these disparities is the first step towards addressing them by redirecting resources towards entities that require external attention to curb misinformation; (2) second, if certain misinformation content is less likely to be socially corrected, targeted efforts can be directed toward countering it. Such instances can be escalated and prioritized for interventions by professionals or social media platforms; (3) third, if certain misinformation content is likely to be socially corrected, then additional participants can be encouraged to provide reinforcements.
Despite these promising benefits, characterizing and predicting social correction is non-trivial due to several challenges. First, existing datasets do not contain conversation-style narratives with paired misinformation posts and counter-replies. Second, existing works (including Miyazaki et al. [43]) do not analyze counter-replies to misinformation in a stratified manner where tweets with different numbers of replies are considered separately. This fine-grained analysis is necessary since comparing across or aggregating statistics across tweets that have drastically different numbers of (counter-)replies can skew the findings [28,32].
In this work, we seek to characterize and predict counter-replies to misinformation. The contributions can be summarized as follows:
• We curate a novel large-scale dataset that contains 1,523,849 misinformation tweets and 690,047 counter-misinformation replies, along with a hand-annotated dataset of misinformation tweets and counter-replies.
• We perform a stratified, fine-grained analysis of the linguistic, engagement, and poster-level characteristics of misinformation tweets that get countered versus those that do not. Our analysis reveals several features of tweets that attract social correction, such as anger and impoliteness.
• We create two counter-reply prediction models to identify whether a misinformation tweet will be countered or not, and if so, to what degree (i.e., low or high), based on its linguistic, engagement, and poster features. We achieve promising predictive performance with both of these models, with best F-1 scores of 0.860 and 0.801, respectively.
RELATED WORKS
Social Correction on Social Media Platforms
Misinformation widely spreads on social media platforms, which has caused detrimental effects on society [5,13,61], including harassment and personal attacks [41]. To combat misinformation, users actively employ various strategies [44], including replying to and commenting on misinformation [35,43,66]. This debunking behavior can broadly reduce the misinformed beliefs of the author and the audience who see the misinformation [7,12]. Notably, current research works have shown the promising impact of debunking [7,12] in both curbing the perception of misinformation and reducing the belief in false information [12]. In this work, we deep-dive into this misinformation-countering behavior by looking at both the misinformation posts and the counter-misinformation replies to these posts. Since user response information can indicate the textual properties of misinformation posts that are highly likely to get countered, our work sheds light on a better understanding of misinformation-countering behavior, especially understanding the misinformation tweets that get countered.
Analysis of Counter-misinformation
Due to the significance of counter-misinformation messages in curbing misinformation, much research has been focused on analyzing and understanding counter-misinformation [41,66].
One type of work analyzes and compares misinformation and counter-misinformation messages [41,66]. For instance, Micallef et al. [41] first created a textual classifier to classify tweets into misinformation, counter-misinformation, and irrelevant groups, and then analyzed the tweets in each group. Interestingly, they find that a surge in misinformation tweets results in a corresponding increase in tweets that reject such misinformation. Vo and Lee [66] first identified fact-checking replies by checking whether a reply contains a fact-checking URL pointing to one of two trustworthy fact-checking websites (i.e., Snopes.com and Politifact.com). Then, they retrieved the corresponding misinformation tweet to which each fact-checking post replies, and used them to construct pairs of misinformation posts and fact-checking replies for fact-checking content analysis and reply generation.
Meanwhile, Miyazaki et al. [43] curated a large-scale dataset containing pairs of misinformation tweets and debunking replies, by first crawling COVID-19 related misinformation tweets from existing research [14,29,34,40,56] and then recruiting crowd-sourcing workers via Amazon Mechanical Turk to annotate responses to these tweets as being debunking or not. They then perform analysis to illustrate who counters misinformation and how they do so. However, contrary to this work, we conduct an in-depth stratified analysis of the replies to examine which features matter during the countering. Stratification helps to compare similar tweets by controlling for the number of replies each receives. Furthermore, we also conduct analysis of whether tweets get a high or low proportion of countering replies. Importantly, we also build two new tasks of predicting which misinformation posts will get countered and to what degree they get countered. Our work complements the existing counter-misinformation studies.
Birdwatch (a.k.a. Community Notes)
Twitter launched Birdwatch (recently renamed to Community Notes) to facilitate misinformation detection by ordinary users. On the platform, users can report suspicious and/or misleading tweets, as well as annotate tweets reported by others. Many have investigated this kind of countering [3,45] and derived different patterns in this collective countering. For instance, Allen et al. [3] examine the impact of partisanship during the crowds' annotation by analyzing existing data from the Birdwatch/Community Notes platform; they find its users are more likely to (1) give negative annotations of tweets from counter-partisans, and (2) rate annotations from counter-partisans as unhelpful. Though Birdwatch/Community Notes enables community-based detection of misinformation, it does not provide a way for users to counter misinformation. Notably, users provide inputs within the Birdwatch ecosystem only, which is restricted and does not reflect the larger dynamics of information flow on Twitter. Recent research has also shown that Birdwatch can be manipulated by motivated bad actors [45]. Therefore, we focus on the misinformation that spreads on Twitter and is countered by ordinary users for a more complete and comprehensive study.
DATASET
In this section, we describe the definition of the problem, as well as the corresponding dataset curation.
Definitions
Misinformation: We employ a broad definition of misinformation which includes falsehoods, inaccuracies, rumors, or misleading leaps of logic [69]. Building on existing work [24], we focus on misinformation related to the COVID-19 vaccine due to its broad impact around the world during the COVID-19 pandemic. Practically, the misinformative claims include "the vaccine alters DNA", "the vaccine causes infertility", "the vaccine contains dangerous toxins", and "the vaccine contains tracking devices"; these topics are popular and widely studied by existing research works [1,24].
Counter-reply: Motivated by existing research works on analyzing replies that show disbelief toward misinformation [33] or fact-check misinformation [49], a direct response to a misinformation post is considered a "countering" reply if it makes an attempt to explicitly or implicitly debunk or counter the misinformation tweet. Otherwise, the reply is considered non-countering. Practically, a given reply is a:
• Countering reply: Motivated by existing research works on identifying and analyzing text that is countering, debunking, disbelieving, or disagreeing with misinformation [29,33,41], a countering reply is a reply that explicitly or implicitly refutes the misinformation post ("this is misinformation"), points out the falsehood ("the COVID-19 vaccine does not change DNA"), insults the tweet poster ("you are born to lie"), or questions the misinformation ("Is there any reference I can check?").
• Non-countering reply: Instead of countering, a non-countering reply supports, is in favor of, comments on, or repeats the misinformation, such as "This is not the vaccine but the gene therapy", "Yes, I agree with you", or "It makes sense".
A post is considered to be countered if it receives at least one counter-reply. Meanwhile, given that different misinformation tweets have varying numbers of replies, to obtain a normalized measure of the magnitude to which a misinformation tweet gets countered, we define the proportion of counter-replies to total replies for each post.
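A minimal sketch of how these two quantities can be computed from an annotated list of replies is given below; the function and label names are illustrative assumptions, not those of the released code.

```python
from typing import List

def is_countered(reply_labels: List[int]) -> bool:
    """A post is countered if at least one of its replies is a counter-reply (label 1)."""
    return any(label == 1 for label in reply_labels)

def counter_proportion(reply_labels: List[int]) -> float:
    """Proportion of counter-replies among all replies to a post (0.0 if there are no replies)."""
    if not reply_labels:
        return 0.0
    return sum(1 for label in reply_labels if label == 1) / len(reply_labels)

# Example: a misinformation post with 5 replies, 2 of which are counter-replies.
labels = [0, 1, 0, 1, 0]
print(is_countered(labels), counter_proportion(labels))   # True 0.4
```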
Task Objective
We consider the set M of misinformation posts about the COVID-19 vaccine. Each misinformation post m ∈ M has a set of replies R_m = [r_1, ..., r_n] posted in direct response to m. Our final goal is to build a classifier F such that it can output a binary label F(m), which indicates whether the misinformation post m will be countered or not, i.e., whether it will receive at least one counter-reply.
Misinformation Tweet Collection and Classification.
We utilize the Anti-Vax dataset from Hayawi et al. [23], a large-scale dataset of tweets related to the topic of COVID-19 vaccines, in order to identify misinformation tweets for our study. These tweets span eight months, from December 1, 2020 to July 31, 2021, a period covering a substantial part of the time after the vaccines were approved by the FDA in December 2020 [57]. During this period, many uncertainties and much misinformation about COVID-19 vaccines were spreading on social media [23,46,57]. The original dataset consists of approximately 15.4 million tweets collected from the Twitter API [23], each containing at least one of the following COVID-19 vaccine relevant keywords: {'vaccine', 'pfizer', 'moderna', 'astrazeneca', 'sputnik', 'sinopharm'}. Only original tweets were considered, i.e., retweet, reply, or quote tweets were removed. We utilized the Twitter API to retrieve the tweet text, user ID of the tweet author, datetime, conversation ID, reply settings, and tweet engagement metrics (like, retweet, quote, and reply counts). In total, we were able to retrieve 14,123,209 tweets from the original dataset, while the remaining 1.3 million tweets were unavailable due to deletion by the users or the Twitter platform. Following the definition of misinformation in Section 3.1 and the current approach of identifying COVID-19 vaccine related misinformation tweets [23], we first get the annotated misinformation tweets from Hayawi et al. [23], train a text classifier to determine if a tweet is misinformation or not, and classify all non-annotated tweets. Specifically, we first crawl and obtain 4,836 annotated misinformation and 8,596 annotated non-misinformation tweets from Hayawi et al. [23]. Next, we build a text classifier using BERT [16]. This classifier achieves precision, recall, and F-1 scores of 0.972, 0.979, and 0.975, respectively. This performance is comparable to the one reported in the original paper by Hayawi et al. [23] (i.e., precision, recall, and F-1 scores of 0.97, 0.98, and 0.98). The classifier has high performance as per the metrics and thus can be used for downstream classification tasks.
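For illustration, a minimal fine-tuning sketch in the spirit of this step is shown below; the checkpoint name, hyperparameters, and toy data are assumptions, not the authors' exact configuration.

```python
# Hedged sketch: fine-tune a binary BERT classifier on annotated
# misinformation / non-misinformation tweets (toy data stands in for the
# 4,836 + 8,596 annotated tweets described above).
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

texts = ["the vaccine alters DNA", "I got my second dose today"]   # toy examples
labels = [1, 0]                                                    # 1 = misinformation

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

ds = Dataset.from_dict({"text": texts, "label": labels})
ds = ds.map(lambda b: tok(b["text"], truncation=True, padding="max_length", max_length=128),
            batched=True)

args = TrainingArguments(output_dir="misinfo-bert", num_train_epochs=3,
                         per_device_train_batch_size=8, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=ds).train()
```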
Finally, we use this misinformation classifier to identify misinformation tweets in the entire dataset, resulting in 1,523,849 misinformation tweets and 12,599,360 non-misinformation tweets. Since we only focus on replies to misinformation in this work, we only use misinformation tweets for downstream analyses.
Next, we perform filtering of the dataset. Since our work focuses on categorizing misinformation by the composition of their replies, we further discard misinformation tweets that have zero replies. In addition, we discard tweets where the poster has limited the set of users who can reply to their tweet, to ensure that all tweets in our dataset have an equal opportunity to be replied to. This information is obtained from the Twitter API.
Finally, our COVID-19 vaccine misinformation tweet dataset consists of 268,990 tweets where each tweet has at least one reply. This is the final set of misinformation tweets that we use.
Counter-misinformation Reply Collection and Classification.
For each tweet in our misinformation dataset, we use the Twitter API to crawl all direct replies to the original tweet. In total, we collected 1,991,611 replies to the 268,990 tweets. One misinformation tweet has an average of approximately 7.4 replies. The distribution of the reply count per tweet is shown in Figure 2 in blue.
Building a Counter-reply Classifier: Since manually annotating all replies would be prohibitively costly, in order to identify all the counter-replies (and non-counter-replies) in this set of 1.9 million replies, we train another text-based classifier to determine whether a reply counters the tweet or not. We call this the "counter-reply classifier".
Building on the existing works on the reply classification task [33], we first annotated replies and then built the classifier. Specifically, two students each first annotated 500 randomly-selected pairs of tweets and replies based on the textual contents into 'Countering' or 'Non-Countering' as per the definition provided in Section 3.1. This annotation yielded an inter-rater agreement score of 0.7033 measured by percent agreement, with 244 responses expressing countering while the remainder were non-countering. Then, after discussing the disagreements and creating a shared annotation standard, each annotator labeled another 545 randomly selected pairs of tweets and replies. In total, we get 802 countering replies and 788 non-countering replies in our final annotated counter-reply dataset.
After getting the annotated replies, we utilize the RoBERTa-base architecture [38] as the classifier, to which the input is the pair of tweet and reply. After the hyperparameter search across batch size and learning rate, the classifier achieves a decent performance with a precision of 0.834, a recall of 0.819, and an F1-score of 0.822, which is sufficient for counter-reply classification on unlabeled replies.
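A minimal sketch of how a (tweet, reply) pair can be encoded and scored with such a model is shown below; the checkpoint name and label convention are assumptions, and in practice the model would first be fine-tuned on the 1,590 annotated pairs.

```python
# Hedged sketch: score one (tweet, reply) pair with a RoBERTa sequence-pair
# classifier; the label convention (1 = countering) is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-base")
clf = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

tweet = "The vaccine contains tracking devices."
reply = "This is misinformation; there is no chip in any vaccine."

enc = tok(tweet, reply, truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = clf(**enc).logits
print(logits.argmax(dim=-1).item())   # predicted class for the pair
```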
Finally, we classify 690,047 (34.65%) replies as counter-replies, and the remaining 1,301,564 (65.35%) as non-counter-replies. The distribution of the counter-reply count per tweet is shown in Figure 2 in orange. The average number of counter-replies that a misinformation tweet has is 2.57, and the average proportion of all replies of a misinformation tweet that are counter-replies is 0.271.
Misinformation Poster Attribute Collection.
For each misinformation tweet, we also collect information about the user who posted the misinformation tweet, which includes date and time of account creation, number of tweets posted, account verification, follower count, and following count. In total, information for 137,929 unique users was retrieved.
Additionally, we collected all the tweets that the user posted in the 7 days leading up to them posting the misinformation tweet; we refer to these tweets as "pre-misinformation" tweets. Only original and quote tweets were retrieved; replies and retweets were excluded. We pull the same set of attributes as in the misinformation tweet crawling. In total, we retrieved 31,450,114 "pre-misinformation" tweets, with an average of 116.9 "pre-misinformation" tweets per misinformation tweet. Note that these numbers include duplicate tweets if the user had posted two misinformation tweets within 7 days of each other.
As a final step, we identify the subset of "pre-misinformation" tweets that are related to the topic of COVID-19 vaccines, as well as those that are also misinformative. We define a "pre-misinformation" tweet as belonging to that subset if it contains at least one of the aforementioned six keywords that were used to collect the original Anti-Vax dataset, namely {'vaccine', 'pfizer', 'moderna', 'astrazeneca', 'sputnik', 'sinopharm'}. In total, 1,781,161 (5.71%) of the "pre-misinformation" tweets are labeled as being about COVID-19 vaccines. We then utilize the aforementioned misinformation classifier to identify COVID-19 vaccine misinformation within this subset of "pre-misinformation" tweets, of which 335,458 (18.83%) were classified as misinformative.
CHARACTERIZATION OF COUNTER-REPLY
In this section, we analyze the properties of misinformation tweets with respect to the degree to which their misinformation gets countered. In order to do so, we identify the tweets that see a high proportion of their replies being counter-replies, and compare them to the group that sees a low proportion of their replies being counter-replies.
To avoid skewing the results due to extreme data points, for this analysis, we do not consider tweets at the two extremes of the "reply count" distribution: specifically, we remove tweets with fewer than three replies, as well as the top 2% of tweets that have the greatest number of replies, following similar tweet filtering procedures in existing research works [2,4,70]. This is done to remove dataset noise related to low-engagement tweets, along with outliers associated with the highest engagement tweets. After this process, we are left with 74,663 misinformation tweets, with reply counts ranging from 3 to 52 (both inclusive).
Stratified Dataset Creation
The linguistic, engagement, and user-level properties of tweets that get a low number of replies are different from those of tweets that receive many replies [9,10,39]. Thus, to avoid conflating the factors that lead to receiving a high number of replies with the factors that lead to receiving counter-replies, we define and create several strata based on the number of replies that a misinformation tweet receives. Specifically, the strata are defined as follows: [3,5], [6,10], [11,15], ..., [46,50]. Each stratum contains misinformation tweets that receive a similar number of replies, with some tweets that get countered and others that do not. We then compare these two groups within each stratum. Figure 3 shows the distribution of the counter-reply proportion within each stratum. We observe that, with the exception of tweets with a lower number of replies (which include more tweets with relatively fewer counter-replies), the distribution is similar across reply counts.
Within each stratum, we assign a tweet to a "Highly Countered" group if its counter-reply proportion is in the top quartile (also within that stratum), to a "Low Countered" group if its counter-reply proportion is in the bottom quartile (within that stratum), or discard it if it does not fall into either of the two groups. Figure 4 shows the distribution of the tweets in the two relevant categories.
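The stratified grouping can be sketched as follows; the column names and toy values are illustrative, not the paper's implementation.

```python
# Hedged sketch: bin tweets into reply-count strata ([3,5], [6,10], ..., [46,50])
# and assign "Highly"/"Low Countered" by within-stratum quartiles.
import pandas as pd

df = pd.DataFrame({"reply_count":  [3, 4, 7, 9, 12, 14, 30, 48],
                   "counter_prop": [0.0, 0.5, 0.14, 0.44, 0.25, 0.5, 0.1, 0.9]})

bins = [3, 6, 11, 16, 21, 26, 31, 36, 41, 46, 51]       # right-open bin edges
df["stratum"] = pd.cut(df["reply_count"], bins=bins, right=False)

def assign_group(g):
    lo, hi = g["counter_prop"].quantile([0.25, 0.75])
    g["group"] = pd.NA
    g.loc[g["counter_prop"] <= lo, "group"] = "Low Countered"
    g.loc[g["counter_prop"] >= hi, "group"] = "Highly Countered"
    return g

df = df.groupby("stratum", group_keys=False, observed=True).apply(assign_group)
print(df)
```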
Within each stratum, we compare misinformation tweets between the two groups. We identify three types of attributes along which to perform this comparison: (1) tweet linguistic attributes, to analyze the degree to which the tweet falls into meaningful personal, psychological, topical, emotional, and other content-related categories; (2) tweet engagement attributes, to analyze how and how much the tweet is interacted with among online users; and (3) tweet poster attributes, to analyze the behavior, popularity, and status of the user behind the tweet. Table 1 displays the full list of attributes we study within each of these categories. We present results in the following subsections.
Linguistic Attributes of Tweets that are Countered
First, we observe from Figure 5a that, on average, "Highly Countered" tweets contain 32.1% higher usage of affective language (words and phrases that appeal more to emotions) than "Low Countered" tweets (p < 0.05 for all strata; average Cohen's d = 0.277). This indicates that those who post counter-replies tend to gravitate more towards replying to misinformation that induces a stronger emotional reaction in them. This is consistent with the finding in existing research works that emotional content gets more attention on social media [53]. Further, we find that "Highly Countered" tweets express significantly higher negative sentiment than "Low Countered" tweets across all strata. Figure 5b shows this result for VADER negative sentiment (p < 0.05 for all strata; average Cohen's d = 0.304); we find similar results for the "negative emotion" dimension of the LIWC lexicon (p < 0.05 for all strata; average Cohen's d = 0.279). In particular, we find that, on average, "Highly Countered" tweets contain 104% more anger-related words than "Low Countered" tweets (see Figure 5c) (p < 0.01 for all strata; average Cohen's d = 0.347). This implies that the negative tone of misinformation tweets attracts more attention [22,30] and, therefore, more counter-replies.
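A per-stratum comparison of this kind can be sketched as below, using VADER negative sentiment, Welch's t-test, and Cohen's d; the toy texts are illustrative and the paper's exact statistical procedure is not fully specified.

```python
# Hedged sketch: compare VADER negative sentiment between the two groups of
# one stratum, reporting a p-value and Cohen's d.
import numpy as np
from scipy import stats
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

vader = SentimentIntensityAnalyzer()
neg = lambda text: vader.polarity_scores(text)["neg"]

def cohens_d(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

high = [neg(t) for t in ["This poison is killing people!", "They are lying to us, wake up!",
                         "Absolutely disgusting cover-up."]]        # toy "Highly Countered" texts
low  = [neg(t) for t in ["Interesting study on vaccine batches.", "Sharing this report.",
                         "New data released today."]]               # toy "Low Countered" texts

t, p = stats.ttest_ind(high, low, equal_var=False)                  # Welch's t-test
print(round(p, 4), round(cohens_d(high, low), 3))
```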
In addition, we measure differences in the degree to which the misinformation tweet expresses politeness and impoliteness. We do this by identifying the sets of linguistic strategies associated with each, as presented in [15], and computing the total number of linguistic instances associated with each set to derive the "politeness" and "impoliteness" score, respectively. As shown in Figure 5d, on average, "Highly Countered" tweets utilize 23.1% more strategies associated with impoliteness than "Low Countered" tweets (p < 0.05 for all but one stratum; average Cohen's d = 0.248); this finding is consistent with the previous findings involving negative sentiment. Meanwhile, we do not find a significant difference between the groups for strategies associated with politeness, implying that trying to be polite in presenting a misinformation tweet does not significantly impact the chance of being countered. Next, we find that there exist differences in topical presence between "Highly Countered" and "Low Countered" tweets. Figure 5e shows that, on average, "Highly Countered" tweets utilize 28.7% fewer health-related terms than "Low Countered" tweets (p < 0.05 for all but one stratum; average Cohen's d = 0.220). This suggests that, for the average counter-reply poster, the inclusion of more technical medical terminology might pose a barrier to their willingness or ability to post an effective debunking response. One possible reason is that the inclusion of technical health-related terms can signal authority over the topic and be more convincing to the reader [11,21].
Engagement Attributes of Tweets that are Countered
In this subsection, we study the impact of engagement attributes (e.g., likes, retweets, etc.) on whether misinformation gets countered. There are two possibilities: (1) misinformation tweets with higher engagement get countered more often because the misinformation gets more attention and, therefore, has a higher likelihood of becoming accessible to someone who would counter it; (2) misinformation tweets that get countered are less likely to be liked or retweeted by others. We investigate which of the two possibilities holds as per the data.
In addition to the reply count, we compare tweets using the number of likes, retweets (RT), and quotes (QT) they receive. As these methods of engagement on the platform serve a different purpose and have different functionality than the "reply" method, it is worth using these metrics in our cross-group comparison. In order to effectively capture these differences with respect to reply count, we first scale these attributes by dividing by the reply count, and then compare this quotient across the two groups.
Figure 5g shows that, on average, "Highly Countered" tweets receive 37.6% fewer QTs relative to replies (p < 0.05 for all strata). This difference is very small at the lowest stratum (8.9% fewer; Cohen's d = 0.05), but is much higher at the highest stratum (57.4% fewer; Cohen's d = 0.37). We observe similar results for retweets and likes; on average, "Highly Countered" tweets receive 27.4% fewer retweets relative to replies (p < 0.05 for all but one stratum; see Figure 5h) and 25.6% fewer likes relative to replies (p < 0.05 for all but 3 strata).
These findings show that the presence of counter-replies on a tweet organically decreases engagement by average users, suggesting that the practice of countering is potentially effective at reducing the spread of misinformation [13,18,67].
User Attributes of Tweet Posters that are Countered
First, we study the impact of the user being verified on Twitter on the tweet getting countered. We find that, on average, the proportion of "Highly Countered" misinformation posters that are verified is 16.8% higher than that for "Low Countered" misinformation posters (p < 0.05 in all but 3 strata; average Cohen's d = 0.143). Since the majority of the posters on Twitter are non-verified, we study that set of users next. We compare the attributes of non-verified users in the "Highly Countered" group versus the "Low Countered" group. For the remainder of the attributes, we found none of them to be statistically different across the two groups. Thus, together with the linguistic results presented in Section 4.2, we find that the content of the misinformation tweet is more important in attracting countering than the user who posts the misinformation.
INEQUALITY IN SOCIAL CORRECTION
We further investigate the potential inequality in social correction. This can help identify whether certain types of users are less likely to be countered, leading to an increase in disparity. Motivated by existing work [64], we use education level as a key demographic variable to illustrate the potential inequality between different users. Since lack of education and literacy play a crucial role in believing in misinformation [19,52,63], it is important to study whether it also impacts correction.
We derived the education level of users by quantifying the readability of their posts using the Automated Readability Index (ARI), which is known to produce an approximate representation of education level in prior works [17,51,54]. A higher ARI corresponds to a higher education level. We use the "pre-misinformation" posts of each user (i.e., posts made within the 7 days prior to posting the misinformation tweet) to calculate that user's ARI. For each post, we compute the ARI score [17,51,54]. We then compute the average of these scores and use it as the final ARI value to represent the education level of the user. Thus, it should be noted that the ARI score is not the education level portrayed in the misinformation tweet itself but, instead, the education level derived across the historical posts of the user who spread misinformation tweets. We randomly sampled 10,000 users who spread misinformation in our dataset to illustrate the inequality phenomenon.
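The ARI computation can be sketched as follows (ARI = 4.71 · characters/words + 0.5 · words/sentences − 21.43); the tokenization here is deliberately naive, since the paper's exact preprocessing is not specified.

```python
# Hedged sketch: score each "pre-misinformation" post with the Automated
# Readability Index and average the scores per user.
import re

def ari(text):
    words = re.findall(r"[A-Za-z0-9']+", text)
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sents:
        return None
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / len(sents) - 21.43

def user_ari(posts):
    scores = [s for s in (ari(p) for p in posts) if s is not None]
    return sum(scores) / len(scores) if scores else None

print(round(user_ari(["Vaccines are tested rigorously before approval.",
                      "I read the trial data yesterday. It looked solid."]), 2))
```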
As shown in Figure 6, we find that misinformation posts made by users with lower education levels have a higher likelihood of getting corrected. There is a systematically negative trend with an increase in the user's (perceived) education level. This highlights a need to pay attention to misinformation spread by users who portray a higher education level, since ordinary users are less likely to correct them.
SOCIAL CORRECTION PREDICTION
In this section, we aim to answer two research questions:
• RQ1: Given a misinformation tweet, can we predict whether it will be countered or not in the future?
• RQ2: Given a misinformation tweet that will be countered in the future, can we predict whether it will be countered with fewer or more counter-replies?
Both RQs are important to address for the combating of future misinformation. By being able to effectively predict future interactions surrounding misinformation tweets, we can better identify sets of online interactions where misinformation is being organically countered, along with those where additional countering needs to be performed. Answering RQ1 can identify sets of misinformation posts where other users may take the initiative in posting a counter-reply, while answering RQ2 can predict the intensity or magnitude of countering.
Dataset
For both the research questions, we use the aforementioned dataset as we described in Section 4.
For RQ1, we divide the dataset into two sets of misinformation tweets: (1) misinformation tweets that have replies but none of them are counter-replies; (2) misinformation tweets that have at least one counter-reply. The sizes of these sets are 17,787 and 55,136, respectively. For RQ2, we divide the misinformation tweets into two groups: one with a low proportion of counter-replies, and another group with a high proportion of counter-replies. Similar to the stratified setup in Section 4.1, we use the proportion of counter-replies as an indicator of membership for the two groups. The bottom 25% of posts with respect to their countering proportion are assigned to the low countered group. On the other hand, the top 25% of posts with the highest proportion of countering replies are assigned to the highly countered group. The sizes of these sets are 14,274 and 15,224, respectively.
Experimental setup
Using each of these datasets, we follow similar approaches in tweet prediction tasks [41,71] to address both RQ1 and RQ2. We aim to build a binary classifier for each of RQ1 and RQ2, using the label definitions described above. For both RQs, we use the same set of features. We begin with the set of attributes listed in Table 1 that are significant at p < 0.001 and have non-null values for all data points; there are 63 such attributes (53 linguistic, 5 engagement, 5 poster). As shown in the existing tweet prediction task [41], the semantic information from textual embeddings benefits the prediction task. Thus, we also generate an embedding vector for each tweet using RoBERTa [38], which results in a 768-dimensional feature vector. Finally, we concatenate the above feature vectors to form a tweet feature vector that comprehensively represents the tweet and use it for classification.
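The feature construction can be sketched as below, taking the embedding of RoBERTa's first token as the 768-dimensional text representation and concatenating it with the tabular attributes; the pooling choice and names are assumptions, not the authors' exact setup.

```python
# Hedged sketch: build a tweet feature vector = RoBERTa sentence embedding
# (768-d) concatenated with the 63 significant tabular attributes.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("roberta-base")
enc_model = AutoModel.from_pretrained("roberta-base")

def tweet_vector(text, attributes):
    enc = tok(text, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        emb = enc_model(**enc).last_hidden_state[:, 0, :].squeeze(0).numpy()  # <s> token embedding
    return np.concatenate([emb, np.asarray(attributes, dtype=float)])

vec = tweet_vector("The vaccine alters DNA!", np.zeros(63))   # toy attribute values
print(vec.shape)                                              # (831,)
```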
Classifier: Following similar tweet or general text classification tasks [25,27,41], we deploy widely-used conventional machine learning classifiers, including Logistic Regression, XGBoost, and a Feed-forward Neural Network with a single hidden layer, using the feature vector as input. During the experiment, 10-fold cross-validation is deployed, and we report precision, recall, and F-1 score as the performance metrics.
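A minimal version of this evaluation protocol is sketched below with scikit-learn and XGBoost; random toy data stands in for the real feature matrix, and the hyperparameters are assumptions.

```python
# Hedged sketch: 10-fold cross-validation of logistic regression, XGBoost, and
# a one-hidden-layer feed-forward network on the tweet feature vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 831))              # toy features (768-d embedding + 63 attributes)
y = rng.integers(0, 2, 200)             # toy labels: countered vs. not countered

models = {"LogReg":  LogisticRegression(max_iter=1000),
          "XGBoost": XGBClassifier(eval_metric="logloss"),
          "FFNN":    MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)}

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=10, scoring=["precision", "recall", "f1"])
    print(name, {k: round(v.mean(), 3) for k, v in scores.items() if k.startswith("test_")})
```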
Classifier Performance
In Table 2, we report the classification result for RQ1. As we can see, all three models are able to achieve good performance on the task. The logistic regression achieves the best performance in terms of precision, recall, and F-1 score; this result is also found in other similar tweet classification tasks [41]. This high performance grants the ability to effectively predict whether a tweet will be countered or not, enabling fact-checkers and social media platforms to prioritize countering tweets identified as less likely to be countered organically.
For RQ2, the classification result is shown in Table 3. As we can see, the model performance is still reasonably acceptable, but worse compared to RQ1. This decrease in performance may imply that the task of identifying the intensity of countering is not only more difficult, but also distinct from the task of identifying whether a tweet will be countered. In other words, the posting of the first counter-reply is easier to forecast than the posting of additional counter-replies once at least one has already been posted.
DISCUSSION AND CONCLUSION
In this paper, we studied the tweet and user-level properties of misinformation tweets that get countered versus those that do not. The in-depth analysis shows that misinformation tweets expressing negative emotion, strong emotion, third-person pronouns, and strategies associated with impoliteness are more likely to result in more countering replies from users. Our result also shows that tweets that get countered have a higher amount of reply engagement in proportion to like, retweet, and quote tweet engagement. Moreover, we develop well-performing classifiers to predict whether a misinformation tweet will be countered or not, and if so, to what degree they will be countered (i.e., the proportion of its replies that end up being counter-replies).
Given the statistical significance of our analysis and the high performance of our classifiers, we demonstrate that it is possible to identify tweets that are more or less likely to get countered. In particular, nearly all of these attributes (tweet linguistic attributes and user attributes) are readily available as soon as the tweet is posted, allowing for the quantity of future counter-misinformation (or the lack thereof) to be reasonably forecast. This can have major implications in times of breaking news or other such events in which large quantities of (mis-)information are posted to online platforms at a rapid rate; in conjunction with state-of-the-art misinformation detection approaches, the counter-reply prediction approach presented in this paper can be used to identify tweets that are less likely to be countered, possibly necessitating additional platform-level approaches to control the spread of misinformation for these tweets. One of these approaches may be adding or increasing interventions to draw attention towards accuracy, an approach that has been shown to be effective in discouraging users from spreading misinformation [48].
A limitation of this work is that it focuses on only one platform: Twitter. On other online platforms, different mechanisms of post and user engagement, as well as information exchange, may be present [42], possibly influencing the types of misinformation tweets and posters that users will choose to counter. Another limitation is that it studies only one topic (COVID-19 vaccines), which has become one of the most widely discussed topics in our society due to the universal effects of the COVID-19 pandemic. On misinformation-related topics that might be more obscure or less widely discussed (e.g., flat earth theories), it is possible that the more specific demographics of misinformation and/or counter-reply posters may affect the ways in which they interact. In addition, we only study text in the English language; the dynamics and discussion in other languages and other modalities (images, videos) may differ [65].
For future work, similar analysis can be performed on the user network surrounding the misinformation poster and counter-reply poster (e.g., their followers and those they follow, how much misinformation these accounts spread, etc.) in order to assess if there are any network-related attributes that may increase the likelihood of counter-replies. In addition, given that we can reasonably determine which tweets will and will not be countered, it would also be valuable to perform user studies or field studies to evaluate if certain characteristics about online encounters with misinformation can increase (or decrease) the likelihood of a user posting a counter-reply. Also, while we explore it in Section 5, further studies can be done to understand the inequities surrounding counter-reply targets along additional demographic, social, political, and/or geographic dimensions; this can allow further exploration of the greater societal implications surrounding counter-misinformation.
Figure 2: Distributions of the total number of replies (blue) and number of counter-replies (orange) per misinformation tweet, each presented on a log scale.
Figure 3: Distribution of the proportion of counter-replies for each stratum. Each boxplot represents a stratum, displaying the minimum, maximum, quartiles, and (any) outliers.
Figure 4: Number of tweets in each of the "Low Countered" (yellow) and "Highly Countered" (red) groups for each stratum, presented on a log scale.
Table 1: List of linguistic, engagement, and poster attributes considered for the analysis in Section 4. A set of three asterisks (***) next to an attribute indicates a statistical test result of p < 0.001. This subset of statistically significant attributes is considered for the predictive tasks in Section 6.
Figure 5: Means and 95% confidence intervals of the linguistic and engagement attributes of misinformation tweets that get highly countered versus those that do not. Panels: (a) LIWC Affect score, (b) VADER negative sentiment, (c) LIWC Anger score, (d) Impoliteness score, (e) LIWC Health score, (f) LIWC SheHe score, (g) fraction of quote tweets (QTs) among all the replies, (h) fraction of retweets (RTs) among all the replies.
Figure 6: Comparison of user communities with different education levels. As shown, users with lower (perceived) education levels are more likely to be countered when posting misinformation tweets.
Table 2: RQ1: Classifier performance of whether tweets will get countered or not.
Table 3: RQ2: Classification performance of whether tweets will be highly countered versus those that will be low countered.
"Computer Science"
] |
An Analysis of Agricultural Production Efficiency of Yangtze River Economic Belt Based on a Three-Stage DEA Malmquist Model
The Yangtze River Economic Belt (YREB) is a major national strategic development area in China, and the development of the YREB will greatly promote the development of China as a whole, so research on its agricultural production efficiency is of great significance. This paper is committed to studying the agricultural production efficiency of 11 provinces in the YREB and adopts a combination of the Data Envelopment Analysis (DEA) model and the Malmquist index to make a dynamic and static analysis of the YREB's agricultural production efficiency from 2010 to 2019. Then, a three-stage DEA Malmquist model that eliminates the factors of random interference and management inefficiency is compared to a model without such elimination. The results show that the adjusted technological efficiency change, technological progress, and total factor productivity changed by −0.1%, +0.24%, and +0.22%, respectively, relative to the pre-adjustment values, indicating that the effect of environmental variables cannot be ignored when studying the agricultural production efficiency of the YREB. At the same time, the differences in agricultural production efficiency in the YREB are reasonably explained, and feasible suggestions are put forward.
Introduction
China is a large agricultural country, and agriculture forms an important foundation for economic development, social progress, and industrial structure adjustment [1]. Therefore, the state has issued a series of policies for agricultural development to promote the sustainable and healthy development of agriculture. The Yangtze River Economic Belt (YREB) includes 11 provinces and cities: Shanghai, Jiangsu, Zhejiang, Anhui, Jiangxi, Hubei, Hunan, Chongqing, Sichuan, Guizhou, and Yunnan (Figure 1). Amongst these provinces/cities, there are great differences in economic development level, industrial structure, and agricultural production efficiency [2,3]. There are numerous studies on the YREB. Wang et al. studied the emission reduction efficiency of PM2.5 in the YREB, which is of great significance to the control of atmospheric environmental pollution [4]. Zhang et al. studied the relationship between economic development and the pressure on the water environment in the YREB [5]. Liu et al. analyzed the spatial distribution characteristics and driving forces of total phosphorus emissions in the YREB, which is of great significance to the treatment of phosphorus pollution in the YREB [6]. In addition, agricultural issues in the YREB have also become an important research field. In 2018, the state issued the Strategic Plan for Rural Revitalization (2018-2022), which was meant to build a modern agricultural industrial system, realize the integrated development of rural primary, secondary, and tertiary industries, promote rural strategic transformation, and enhance the innovation and competitiveness of China's agricultural industries. As an important national development strategy, the YREB is also of great significance to the nation's agricultural economic development and agricultural production efficiency (APE). Based on dynamic and static analyses of agricultural production efficiency in the 11 provinces included in the YREB, this paper aims to study the regional differences in agricultural development levels in the YREB territories and to put forward feasible suggestions for the current agricultural development situation in this region and in China. The innovation of the presented paper is that a three-stage DEA model is used to study the agricultural efficiency of the YREB, which eliminates environmental and random impacts, making the results more objective. In addition, the spatio-temporal analysis of agricultural efficiency can dynamically reflect the changing characteristics of agricultural efficiency in the YREB. This paper is structured as follows: Section 2 contains the literature review, Section 3 contains the research methodology and data, Section 4 contains the experimental procedure, and Sections 5 and 6 contain the discussion and conclusion, respectively.
Literature Review
APE evaluation is very important to a country's agricultural development level, and there are a variety of ways to measure it. For example, Shah et al. used energy accounting to analyze the sustainability of Pakistan's agricultural production system, helping decision makers understand the important role of ecosystems in agricultural production systems [7]. Guo et al. used labor, land, and capital as input variables and agricultural economic benefits as output variables; they used the Stochastic Frontier Approach (SFA) model, the spatial correlation analysis method, and the Tobit model to effectively evaluate the spatial layout of agricultural production and APE optimization [8]. Li studied China's 30 provincial agricultural departments from 1997-2014 and concluded that there are differences in APE between the eastern and western regions of China [9]. Agricultural efficiency is also inseparable from the sustainable development of agriculture. As a case in point, Laurett et al. used an exploratory factor analysis (EFA) model to explore the important factors affecting the sustainable development of agriculture [10]. Valizadeh and Hayati used factor analysis to measure the indicators of sustainable agricultural development [11]. Shen et al. decomposed overall inefficiency into three components (technical, mixing, and scale effects) and conducted empirical research on the economic and environmental performance and APE of 31 provinces in China from 1997-2014 [12]. Wang et al. used multilateral total factor growth rate estimates to measure the differences in the production efficiency of the agricultural sector in various regions of China [13]. Ma et al. used spatial autocorrelation and econometric models in an analysis of China's agricultural economy from 1990-2017 and found that China's APE is generally low [14].
In research on agricultural efficiency and the factors affecting agricultural production, the data envelopment analysis (DEA) model is one of the most widely used methods. Based on panel data from 2002 to 2015, Yu and Zhang used the super-efficiency DEA model and the Malmquist index to measure and analyze the APE of Shandong Province, China [15]. Li et al. used panel data from 30 administrative regions in China as the research objects, selected a series of indicators as input and output variables, and used the DEA model to analyze the agricultural total factor productivity of the various provinces from 1997-2014, concluding that China's agricultural total factor productivity showed a slowly growing but fluctuating trend [16]. In recent years, in order to evaluate agricultural efficiency more effectively, some scholars have extended DEA models. Mosbah et al. converted the DEA model into twin input-decomposition and output-decomposition (i.e., ID-DEA and OD-DEA) models and effectively evaluated agricultural efficiency [17]. Toma et al. estimated the APE of EU countries from 1993 to 2013 based on the non-parametric bootstrap DEA model and concluded that, from the perspective of resource conservation, the oldest EU countries have higher APE [18]. Based on the global DEA model and the weighted Russell distance function model, Liu and Feng used 30 provinces in China as a sample to analyze green total factor productivity in the period from 2005 to 2016 and to propose countermeasures for the development of green agriculture in China [19].
Another mainstream method for studying APE is the SFA. To demonstrate this, Deng and Gibson used the Land Production Estimation System (ESLP) to estimate the land productivity of Shandong Province in the period from 1990-2010 and analyzed the ecological efficiency of sustainable agricultural production based on SFA [20]. Yigezu et al. used the SFA model to measure the efficiency of agricultural irrigation water in Syria [21]. Ilaria et al. evaluated agricultural efficiency using the heteroscedasticity stochastic frontier model with Italian farm characteristics as input variables [22]. Villano and Fleming applied the SFA model to a technical efficiency analysis of rice production and evaluated its production risks [23].
It can be seen that the DEA method can be used to evaluate the production or business performance of decision-making units with multiple inputs and multiple outputs. The DEA method does not need to specify the form of the production function relating inputs and outputs, so it can evaluate the efficiency of decision-making units (DMUs) with more complex production relations [24,25]. In summary, this paper uses a combination of a three-stage DEA model and the Malmquist index to study agricultural efficiency. According to the experience of some scholars [26][27][28][29], this paper takes the sown area of crops, the amount of fertilizer, the number of agricultural practitioners, the total power of agricultural machinery, and the effective irrigation area as input variables, and the total agricultural output value and the output of main crops as output variables. At the same time, considering factors such as random disturbance and management inefficiency, this paper incorporates the affected crop areas, government subsidies, and gross regional product as environmental variables into the model system, and scientifically analyzes and evaluates the APE of the YREB (Figure 2) [30,31].
DEA Model
DEA is a very popular model for dealing with multiple inputs and, especially, multiple outputs [32]. This method can evaluate the relative effectiveness of decision-making units, pays attention to optimizing each decision-making unit, and provides directional guidance for the adjustment of relevant indicators. The more mature CCR and BCC models are the most frequently used DEA models. BCC assumes variable returns to scale, and its form (reproduced below) involves the following quantities: θ is the efficiency value of the evaluated DMU, λ_j represents the combination ratio of the j-th DMU in an effective DMU combination, x_ij is the amount of the i-th input used by the j-th DMU, γ_rj is the amount of the r-th output produced by the j-th DMU, and S⁻ and S⁺ represent the input and output slack variables. If θ = 1 and S⁺ = S⁻ = 0, then the DMU is DEA-effective and lies on the production frontier. If θ = 1 but S⁺ ≠ 0 or S⁻ ≠ 0, the DMU is weakly DEA-effective. If θ < 1, then the DMU is not DEA-effective. The calculated efficiency value is comprehensive technical efficiency (TE), which can be further decomposed into scale efficiency (SE) and pure technical efficiency (PTE).
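A standard input-oriented BCC formulation consistent with the variables defined above is (this is a reconstruction of the usual textbook form, not necessarily the authors' exact notation):

\[
\begin{aligned}
\min\ & \theta \\
\text{s.t.}\ & \sum_{j=1}^{n} \lambda_j x_{ij} + S^{-}_{i} = \theta\, x_{i0}, \quad i = 1, \dots, m, \\
& \sum_{j=1}^{n} \lambda_j \gamma_{rj} - S^{+}_{r} = \gamma_{r0}, \quad r = 1, \dots, s, \\
& \sum_{j=1}^{n} \lambda_j = 1, \qquad \lambda_j \ge 0,\ S^{-}_{i} \ge 0,\ S^{+}_{r} \ge 0,
\end{aligned}
\]

where the subscript 0 denotes the DMU under evaluation and the convexity constraint \(\sum_j \lambda_j = 1\) encodes the variable returns to scale assumed by BCC (dropping it yields the CCR model).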
Malmquist Index
Static DEA analysis can only analyze the efficiency value horizontally and cannot show a dynamic trend. In order to reflect the agricultural development of the YREB more comprehensively and intuitively, this paper introduces the Malmquist index method on the basis of the static analysis; it analyzes agricultural development in depth through the changes in the cross-period efficiency value and then puts forward targeted countermeasures to promote the balanced development of agriculture in all of the YREB provinces. The Malmquist index is computed as shown below, where x^t and x^{t+1} are the input variables, y^t and y^{t+1} are the output variables, and D_c is a distance function under constant returns to scale. EFFCH represents the change in technical efficiency, TECHCH represents technological change, and EFFCH can be decomposed into the product of pure technical efficiency change (PECH) and scale efficiency change (SECH).
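A standard form of the index and its decomposition, consistent with the symbols defined above, is (again a reconstruction of the usual formulation rather than the authors' exact typesetting):

\[
M\left(x^{t+1}, y^{t+1}, x^{t}, y^{t}\right)
= \left[ \frac{D_c^{t}\left(x^{t+1}, y^{t+1}\right)}{D_c^{t}\left(x^{t}, y^{t}\right)}
  \times \frac{D_c^{t+1}\left(x^{t+1}, y^{t+1}\right)}{D_c^{t+1}\left(x^{t}, y^{t}\right)} \right]^{1/2}
= \underbrace{\frac{D_c^{t+1}\left(x^{t+1}, y^{t+1}\right)}{D_c^{t}\left(x^{t}, y^{t}\right)}}_{\text{EFFCH}}
  \times
  \underbrace{\left[ \frac{D_c^{t}\left(x^{t+1}, y^{t+1}\right)}{D_c^{t+1}\left(x^{t+1}, y^{t+1}\right)}
  \times \frac{D_c^{t}\left(x^{t}, y^{t}\right)}{D_c^{t+1}\left(x^{t}, y^{t}\right)} \right]^{1/2}}_{\text{TECHCH}},
\qquad \text{EFFCH} = \text{PECH} \times \text{SECH}.
\]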
Stochastic Frontier Approach
The Stochastic Frontier Approach (SFA) can eliminate the impact of environmental variables and statistical noise on the effectiveness of a decision-making unit. Therefore, in the second stage, the stochastic frontier model is mainly used to exclude environmental variables and statistical noise, leaving only relaxation variables created by management inefficiency.
Step 1 regresses the slacks obtained in the first stage on the environmental variables, and Step 2 adjusts the inputs; both steps are given below, where s_ni represents the slack value of input n for decision-making unit i, Z_i represents the environmental variables, β_n is the environmental variable coefficient, ν_ni represents the random interference term, and μ_ni represents management inefficiency. X^A_ni represents the input variable after adjustment, X_ni represents the input variable before adjustment, the term max[f(Z_i, β_n)] − f(Z_i, β_n) adjusts all of the decision-making units to the same external environment, and the term max(ν_ni) − ν_ni adjusts the random errors of all of the decision-making units to the same situation.
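A reconstruction of the two steps, following the standard three-stage DEA adjustment and the symbols defined above (the linear form f(Z_i; β_n) = Z_i β_n is an assumption here), is:

Step 1 (SFA regression of the first-stage slacks on the environment):
\[
s_{ni} = f\left(Z_i; \beta_n\right) + \nu_{ni} + \mu_{ni}, \qquad n = 1, \dots, N;\ i = 1, \dots, I.
\]

Step 2 (input adjustment):
\[
X^{A}_{ni} = X_{ni} + \left[\max_i f\left(Z_i; \beta_n\right) - f\left(Z_i; \beta_n\right)\right] + \left[\max_i \left(\nu_{ni}\right) - \nu_{ni}\right].
\]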
Data
In this work, data are derived from the following sources: the China Statistical Yearbook, City Statistical Yearbooks, and Provincial Statistical Yearbooks from 2011-2020. The data used include the total sown area of crops, agricultural employees, agricultural fertilizer usage, total machinery power, effective irrigation area, disaster areas, agricultural subsidies, and regional GDP. Missing values were filled by interpolation between the adjacent years.
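For illustration, the adjacent-year interpolation can be done as in the following sketch (the series values are toy numbers, not data from the yearbooks):

```python
# Hedged sketch: fill a missing yearly value by linear interpolation between
# the adjacent years of a provincial indicator series.
import pandas as pd

s = pd.Series([101.2, None, 109.8, 114.3], index=[2011, 2012, 2013, 2014])
print(s.interpolate(method="linear"))   # 2012 filled as (101.2 + 109.8) / 2
```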
Static Analysis and Temporal and Spatial Evolution of Agriculture in the YREB
According to the provincial agricultural data from the YREB, this paper selects the sowing area of crops, the amount of chemical fertilizer, agricultural employees, the total power of agricultural machinery, and the effective irrigation area as input variables, and the total agricultural output value and the output of main crops as the output variables. The APEs in 2010 and 2019 were analyzed using DEAP 2.1 software, and the results are reported in Tables 1-3.
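The paper computes these efficiencies with DEAP 2.1; purely for illustration, the input-oriented BCC efficiency of one DMU can also be obtained by solving the linear program described above directly, as in the following sketch with toy data.

```python
# Hedged illustration only (the paper uses DEAP 2.1): input-oriented BCC
# efficiency of one DMU, solved as a linear program with SciPy.
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, o):
    """X: (n_dmu, n_inputs) inputs, Y: (n_dmu, n_outputs) outputs, o: evaluated DMU index."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                        # minimise theta; z = [theta, lambda_1..n]
    A_in  = np.c_[-X[o].reshape(m, 1), X.T]            # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]              # -sum_j lam_j y_rj <= -y_ro
    A_ub, b_ub = np.vstack([A_in, A_out]), np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)       # VRS constraint: sum_j lambda_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun                                     # technical efficiency (TE)

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])     # toy inputs for three DMUs
Y = np.array([[1.0], [1.0], [1.0]])                    # toy single output
print([round(bcc_efficiency(X, Y, o), 3) for o in range(3)])
```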
It can be seen that the technical efficiency, pure technical efficiency, and scale efficiency of the provinces in the YREB are all on the rise, with the average values increasing by 3.7%, 2%, and 1.7%, respectively. In 2010, there were three provinces (Jiangxi, Yunnan, and Guizhou) with increasing returns to scale in the YREB, three provinces (Hubei, Zhejiang, and Hunan) with diminishing returns to scale, and five provinces (Jiangsu, Anhui, Sichuan, Chongqing, and Shanghai) with constant returns to scale. By 2019, only Hunan had increasing returns to scale, with an additional three provinces (Hubei, Zhejiang, and Yunnan) with diminishing returns to scale, and seven provinces (Jiangsu, Anhui, Sichuan, Chongqing, Shanghai, Jiangxi, and Guizhou) with constant returns to scale. In order to express the temporal and spatial evolution of agricultural efficiency more intuitively in the YREB, this paper used ArcGIS 10.2 to plot the changes in the technical efficiency, pure technical efficiency, and scale efficiency in each province in 2010 and 2019, as shown in Figures 3-5.
The First Stage DEA-Malmquist
According to the principles of Equations (1) and (2), and using the DEAP 2.1 software, the initial APE evaluation results for each province in the YREB (Table 2) and the total efficiency changes in the YREB from 2010-2019 were obtained (Table 3).
(I) From the perspective of technological progress, with the exception of the period from 2011 to 2012, when the technological progress of the YREB was less than 1, the technological progress of all other years was greater than 1, indicating that the technological progress of the YREB was generally at a high level, which promoted total factor productivity growth. Looking at each province individually, only Jiangsu, Hubei, and Jiangxi had technological progress of less than 1 from 2010-2019. The other provinces had technological progress greater than 1, and the overall average value is 1.012, which represents a relatively high level. (II) On the whole, from 2010-2019, the APE index of the YREB alternately increased and decreased, but the efficiency index became more and more stable, and there is an overall increasing trend. With the exception of the efficiency values of Jiangsu and Hubei provinces, which are less than 1, the efficiency values of the other provinces are all greater than 1. It can be seen that the main driving factor of APE in the YREB is technological progress. In order to see the changes in the total factor productivity in the YREB more intuitively, this article uses a histogram to show them more clearly (Figure 6).
Table 4 shows the impact of the environmental variables on agricultural production-related inputs. As shown in the table, the environmental variables have an impact on the five input factors, and most of the coefficients are positive. The γ values of the five input slack variables are 0.64, 0.61, 0.05, 0.71, and 0.67, respectively. Four of them are greater than 0.6 and are at a relatively high level, indicating that the environmental variables have a significant impact on their input indicators, and the LR likelihood tests all passed the 1% significance level. Therefore, it is necessary to study the impact of the environmental variables on the inputs and to eliminate it to make the research results more objective and accurate.
When the impact coefficient of an environmental variable on an input variable is positive, such as the impact coefficient of disaster areas on crop sowing area (0.027), increasing that environmental variable will increase the slack variable; that is, the impact of disaster area on APE will be negative. When the influence coefficient of an environmental variable on an input variable is negative, such as the influence coefficient of regional GDP on total mechanical power (−0.011), increasing that environmental variable will decrease the slack variable; that is, an increase in regional GDP will have a positive impact on APE.
The Third Stage of DEA-Malmquist
After the SFA regression in the second stage, the input variables are adjusted by Equation (4), and then DEA analysis is carried out using the same method as in the first stage to obtain the adjusted APE of the provinces in the YREB in the period from 2010-2019. The results are presented in Tables 5 and 6. Note: A1 represents the total sown area of crops; A2 represents agricultural employees; A3 represents agricultural fertilizer usage; A4 represents total machinery power; A5 represents effective irrigation area; B1 represents disaster area; B2 represents agricultural subsidies; and B3 represents regional GDP. Similarly, the adjusted APE of the YREB is evaluated in three aspects. (I) From the perspective of changes in technical efficiency, the overall technical efficiency changes in the YREB from 2010 to 2019 show a downward trend, and the changes in technical efficiency in the various provinces fluctuate around 1, because the pure technical efficiency and scale efficiency are basically unchanged. (II) From the perspective of technological progress, the technological progress of the YREB is almost always greater than 1, which is also the main driving force promoting changes in total factor productivity. However, from 2010 to 2019, the technological progress of the YREB alternated between increases and decreases, although it remained at a generally high level. (III) As for the total factor productivity of the YREB, its change also shows alternating rises and falls from 2010 to 2019, mainly driven by technological progress. It can be seen that the technological progress of the YREB directly affects changes in the total factor productivity and plays a leading role [33][34][35].
Results and Discussion
Compared to the first stage, the changes in technical efficiency, technological progress, and total factor productivity after adjustment changed by −0.1%, +0.24%, and +0.22%, respectively, indicating that the impact of environmental variables cannot be ignored and that they have a considerable effect on the APE. As discussed by Fei [36] and Li [37], agricultural production is affected by various environmental factors. Among them, the change in technical efficiency has a negative effect on the efficiency of agricultural production. Specifically, the technical efficiency of Hunan, Hubei, and Shanghai is less than 1, representing a low level and resulting in low total factor productivity. Therefore, these regions should focus on improving technical efficiency. The total factor productivity of Zhejiang, Yunnan, and Guizhou is higher than the average value of the YREB, indicating that there is no large input redundancy in these three provinces, and their APE is high.
DEA analysis was conducted on the adjusted input-output variables for 2019. Taking technical efficiency as the abscissa and scale efficiency as the ordinate, the decomposition diagram of the comprehensive efficiency of the 11 provinces and cities in the YREB was determined and divided into four categories, as shown in Figure 7 and Table 7. The first category is the double-low type of technical efficiency and scale efficiency, seen in Shanghai, Chongqing, and Hunan. These three regions need to continue to improve their technical capabilities and technical levels to improve their local APE. The second and third categories include Hubei Province, which has a low technical level, and Jiangxi Province, which has low scale efficiency. These two provinces have some deficiencies in technical efficiency and scale efficiency, respectively, so they can learn from each other. The fourth category is the double-high type of technical efficiency and scale efficiency, represented by Jiangsu, Anhui, Sichuan, Zhejiang, Yunnan, and Guizhou. If these regions want to further improve their agricultural efficiency, then they must develop new technologies and innovative agricultural means, such as the introduction of new agricultural technologies, the promotion of mechanized agricultural production, the reduction of agricultural and labor costs as much as possible, and the optimization of the layout of the agricultural industry.
Throughout the YREB, the APE of the provinces and cities is generally at a high level because the YREB is a major strategic area for national development. Promoting the development of agricultural production in the YREB; forming a pattern of complementary advantages, cooperation, and interaction between the upstream, middle, and downstream areas; and narrowing the development gap between the eastern, central, and western regions is conducive to a new path towards innovative and green agricultural development. Therefore, according to the current agricultural development situation in the YREB, this paper puts forward the following suggestions. First, the government should implement a strategy of differentiated development to promote the balanced development of regional agriculture, especially in areas with low agricultural efficiency, such as Jiangsu, Hubei, and Shanghai. These areas should take the improvement of production conditions as their guide, strengthening deep and fine cultivation capabilities, developing fine agriculture, tapping characteristic agricultural products with great potential and high efficiency, and improving product quality and influence [38,39].
At the same time, these areas need to introduce excellent varieties, build high-standard farmland, cultivate high-quality main production areas, and realize the multi-layer superposition of benefits. In addition, the government should strive to improve the technical level of these areas, promote innovative agricultural development, and narrow the gap in agricultural development among regions. Second, the government needs to improve the level of regional financial support for agriculture. Local financial expenditure on agriculture, forestry, and water has a positive effect on agricultural efficiency. Agricultural subsidies in Chongqing, Shanghai, and Jiangxi are low, which also constrains the improvement of agricultural efficiency in these regions. Therefore, on the one hand, the agricultural sector should strengthen its financial support for agriculture in order to raise the government's attention to agricultural development. On the other hand, the state council should also standardize the management of, and the process for acquiring, agricultural support funds and should strengthen the construction of corresponding laws and regulations to ensure that the support funds are allocated scientifically, reasonably, and smoothly. Third, the provinces located in the YREB may be affected by natural disasters, resulting in the APE not reaching the expected level, especially in provinces with large agricultural sowing areas, such as Yunnan, Hunan, and Hubei. The government should pay attention to the affected area and scope of crops, improve agricultural policies, introduce financial and insurance services to the agricultural sector, minimize losses in the process of agricultural development, and improve the APE. Fourth, it is necessary to deepen the reform of the government system, strengthen government supervision, and rationally allocate agricultural resources. Even though the industry sector plays a leading role in agricultural product planting, processing, tourism, and other industries, it should also focus on the joint development of rural primary, secondary, and tertiary industries, the innovation of new models of agricultural development, the creation of regional advantages, and the building of core competitiveness in local agricultural development [40,41].
Conclusions
As an important national development strategy, research on agricultural efficiency is of great economic significance. This paper makes dynamic and static analyses of the agricultural efficiency of the YREB from 2010-2019 through a three-stage DEA Malmquist model. First, this paper makes a static analysis of the agricultural efficiency of the provinces in the YREB and clearly shows the differences in the APE through a spatiotemporal evolution diagram. Then, it provides a dynamic analysis of the APE of the YREB from 2010-2019 and studies the differences in agricultural efficiency in the YREB by combining the DEA and Malmquist index without excluding the influence of environmental factors and random disturbance; the APE of the YREB shows an increasing trend with fluctuation, although the APE of Jiangsu and Hubei is low due to environmental factors. Then, in the second stage, SFA was used to eliminate the impact of environmental variables and statistical noise on the effectiveness of the decision-making units. The environmental variables include the affected area, regional GDP, and agricultural subsidies. The results of the stochastic frontier show that the γ value and the LR one-sided test are significant. Therefore, it is necessary to eliminate environmental factors and statistical noise to further analyze the APE of the YREB. Finally, the third-stage DEA Malmquist model is used to study the APE of the YREB, and it was determined that the agricultural development level of the entire YREB is relatively good. Compared to the results of the first stage, environmental variables affected the development of agricultural production to a certain extent. After excluding the influence of the environmental variables, the provinces and cities in the YREB were divided into four categories according to the differences in their agricultural efficiency values, and problems and guiding suggestions for agricultural development were put forward for each category. In short, in-depth research on regional agricultural efficiency helps the government to make effective policy, which can promote rapid agricultural development.
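The dynamic part of the analysis rests on the Malmquist index, which can be decomposed into efficiency change (catch-up) and technical change (frontier shift). The sketch below applies the standard decomposition to four output-distance-function values; solving the underlying DEA programs is omitted, and the numbers are hypothetical rather than taken from the study.

```python
from math import sqrt

def malmquist(d_t_xt, d_t_xt1, d_t1_xt, d_t1_xt1):
    """Standard Malmquist TFP decomposition.

    d_a_xb is the output distance function of the period-b observation
    measured against the period-a frontier (obtained elsewhere by DEA).
    Returns (TFP change, efficiency change, technical change).
    """
    ec = d_t1_xt1 / d_t_xt                                 # catch-up effect
    tc = sqrt((d_t_xt1 / d_t1_xt1) * (d_t_xt / d_t1_xt))   # frontier shift
    return ec * tc, ec, tc

# Hypothetical distance values for one province between two adjacent years
tfp, ec, tc = malmquist(d_t_xt=0.82, d_t_xt1=0.95, d_t1_xt=0.78, d_t1_xt1=0.90)
print(f"TFP change {tfp:.3f} = efficiency change {ec:.3f} x technical change {tc:.3f}")
```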
Based on these results and discussions, changes and drivers of agricultural efficiency can be further analyzed: (a) The overall agricultural production efficiency of Shanghai, Chongqing, and Hunan is relatively low. The reason for this is that Shanghai and Chongqing are municipalities that are directly under the central government of the YREB, and their geographical area is relatively small, resulting in a decrease in the sown area of crops and the number of farmers. Hunan is located in the middle reaches of the YREB, and natural disasters are relatively serious, resulting in low agricultural efficiency, which is also demonstrated in the observation data.
(b) Although Jiangxi's agricultural scale efficiency is relatively good, its overall efficiency is relatively low, and the total agricultural output value is not high. Because Jiangxi has more agricultural planting areas but less local agricultural subsidies, farmers lack high levels of enthusiasm and motivation. The total agricultural factor productivity is low. Therefore, government subsidies are also crucial for the improvement of agricultural production efficiency.
(c) The reason for the low agricultural efficiency in Hubei is technical efficiency. The agricultural irrigation technology and machine power in this province are relatively low compared to other provinces and cities. Therefore, in order to improve agricultural production efficiency in Hubei, it is necessary to improve the local technical level.
(d) The agricultural production efficiency of Jiangsu, Anhui, Zhejiang, Sichuan, Yunnan, and Guizhou was able to reach optimal levels. The internal factors permitting this are the superior geographical locations, high agricultural scale efficiency, and advanced agricultural technology that these provinces possess. The external factors are that these provinces have a low frequency of natural disasters and high government subsidies. Therefore, the input-output ratio of agriculture was able to reach a reasonable state. If these areas want to further improve their agricultural production efficiency, they need to implement innovative agricultural development strategies.
APE represents the degree of agricultural development. According to the above conclusions, some suggestions to realize the development of agricultural production are discussed below: Increase infrastructure construction and consolidate the foundation of agricultural development. First, the agricultural production sector should assess the sowing area, production conditions, and resource distribution of local agriculture and should adjust the agricultural scale on the premise of ensuring reasonable output. Second, provinces and cities in the YREB should strengthen the construction of farmland and water conservancy facilities, deploy production technologies such as intelligent irrigation and energy-saving drainage, and improve production conditions to further consolidate the foundation of agricultural production. At the same time, regions should develop mixed agriculture according to local conditions, promoting the comprehensive utilization and development of farmland and achieving a win-win situation for scale and efficiency, quality, and quantity. Third, the agricultural market should strengthen the factor input guarantee mechanism, optimize the input-output structure, improve resource transformation efficiency, promote the efficient transformation of factors, and ensure the balance of agricultural output.
Accelerate the process of market-oriented allocation of factors such as labor, land, energy, and water resources. All regions should reduce the resource waste caused by the factor mismatch and imbalance of the demand and supply structure, strengthen the construction of modern agricultural facilities in the YREB, constantly improve the production capacity of agricultural development, and make the market play a decisive role in the allocation of resource factors.
Improve the agricultural structure and mechanism and promote the transformation and upgrading of agriculture. First, the government should strengthen the construction of agricultural parks, guide the convergence of resource elements, and radiate and drive the development of surrounding facilities to solve problems related to insufficient resources, scattered facilities, and weak industries in rural areas and to improve the level of agricultural development. Second, leading enterprises should be guided to cooperate with small farmers, which will result in the implementation of an integrated development strategy for production and marketing, will constantly expand market capacity and sales channels, and will solve the problems of slow and difficult sales of agricultural products. Third, the market needs to open factor circulation channels, guide the flow of advantageous resources to weak areas, promote the free flow of factors, and realize the effective integration and sharing of regional resources.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Publicly available datasets were analyzed in this study. This data can be found here: http://www.stats.gov.cn/ (accessed on 7 December 2021).
Remote Sensing of Atmospheric CO and O3 Anomalies before and after Two Yutian MS7.3 Earthquakes
Satellite remote sensing data were used to extract concentrations and volume mixing ratios (VMR) of CO and O3 and Global Data Assimilation System (GDAS) data associated with the Yutian MS7.3 earthquakes on March 21, 2008, and February 12, 2014. Difference value and anomaly index methods and the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model were used to simulate gas backward trajectories and analyze the relations between spatial and temporal variations in total columns of CO and O3 (TotCO and TotO3) and earthquakes. Then, the causes of abnormal changes were examined. Maximum anomalies in TotCO and TotO3 occurred one month before the 2008 earthquake and one month after the 2014 earthquake. Anomalies in TotCO and TotO3 were distributed along or were consistent with the fault zone. Furthermore, during the abnormal period, the coefficient of correlation between CO and O3 was 0.672 in 2008 and 0.638 in 2014, with both values significant at p < 0.05. The correlation between TotCO and TotO3 was also significant. The abnormal phenomena of TotCO and TotO3 associated with the two earthquakes were attributed to underground gas escape, atmospheric chemical reactions, and atmospheric transportation caused by in situ stress in the generation of earthquakes.
Introduction
Understanding earthquake precursory anomalies is a worldwide concern [1]. At present, ground observations based on geophysics and crustal deformation are the typical focus of research [2]. However, it is difficult to obtain large-area dynamic and continuous information on seismic precursory anomalies because of limitations in ground observations, which restrict the ability to predict earthquakes [1]. Satellite hyperspectral technology has the advantages of wide coverage and a short observation period and is not affected by the underlying surface [2]. The technology can identify different gases, invert their concentration distributions, and predict earthquakes [3]. With the development of satellite remote sensing technology, using abnormal changes in gas concentrations near epicenters to predict earthquakes has become a focus of research [4,5]. However, the mechanisms for the abnormal changes remain unclear because of a lack of research. The C-H-O-S system of the earth is rich and includes CO2, CH4, H2, CO, O3, water vapor, and other gases [6-8]. These gases escape to the atmosphere from seismic fault and rupture zones before and after earthquakes and thus can change atmospheric composition and concentrations [9-12]. After the Gujarat MS 7.7 (2001) and MS 5.2 (2006) earthquakes, high-altitude O3-rich air was transported to the epicenter area by the atmosphere, which increased its O3 concentration [13]. Similarly, before and after two MS > 8.0 earthquakes in Sumatra in 2004 and 2005, abnormal changes were detected in CO and O3 concentrations, primarily caused by the escape of underground gases during the earthquakes and chemical reactions between underground and atmospheric gases [14]. Before and after the Wenchuan MS 8.0 earthquake in 2008 and the Lushan MS 7.0 earthquake in 2013, there were abnormal spatiotemporal changes in CH4 and CO, and Cui et al. [15] proposed that they were caused by the two major earthquakes and the accompanying fault tectonic activities. Similarly, Singh et al. [16] suggest that the abnormal change in CO concentration before the Gujarat MS 7.7 earthquake in India was precursor information on the earthquake. Thus, satellite hyperspectral remote sensing data can be used in seismic monitoring.
The Yutian 2008 MS 7.3 earthquake (35.6°N, 81.6°E) occurred in Yutian, Xinjiang, China, at 0633 on March 21, 2008, with a focal depth of 19 km. The earthquake rupture was primarily extensional and strike-slip, and the epicenter was at the intersection of the Kangxiwa fault zone and the southwest end of the Arerjin fault zone in the West Kunlun Mountains. The main seismogenic fault structure is the Arerjin fault [17]. Before this earthquake, there were two MS ≥ 6 earthquakes (excluding aftershocks) within 500 km of the 2008 Yutian earthquake epicenter. One was an MS 6.1 earthquake in Rutog County, Tibet, on May 5, 2007 (140 km from the 2008 Yutian earthquake epicenter), and the other was an MS 6.9 earthquake in Gêrzê County, Tibet, on January 9, 2008 (480 km from the 2008 Yutian earthquake epicenter). Since then, the extension of the fault zone in the region has increased. On August 12, 2012, another MS 6.2 earthquake occurred 90 km from the 2008 Yutian earthquake, resulting in long-term crustal instability and accumulation of strain energy in the region. As a consequence, an MS 7.3 earthquake (36.1°N, 82.5°E) occurred at 1719 on February 12, 2014, in Yutian, Xinjiang. The focal depth of the 2014 Yutian earthquake was 12 km, and the mechanism was strike-slip. The tail of the large strike-slip Arerjin fault zone, which is an extension zone in the southwest section of the fault zone, was the epicenter. Atmospheric infrared sounder (AIRS) data were obtained, and the difference and anomaly index methods were used to analyze temporal and spatial variations in O3 and CO gases before and after the two earthquakes. The relations and the differences between the two earthquakes were also examined. Figure 1 shows the faults and seismic history of the research area.
Data and Methods
2.1. Data. Column concentration data of CO, O3, and CH4 (TotCO, TotO3, and TotCH4) and volume mixing ratio (VMR) data of CO and O3 were all derived from the 8-day data, monthly average standard product deorbit data, and daily data of AIRS level 3 of the National Aeronautics and Space Administration (NASA). The data were downloaded from NASA's Goddard Earth Sciences Data and Information Services Center (http://disc.sci.gsfc.nasa.gov/). For the 2008 Yutian earthquake, the average value of a gas was that in a 1° × 1° area centered on the epicenter (35.6°N, 81.6°E); for the 2014 Yutian earthquake, it was that in a 1° × 1° area centered on the epicenter (36.1°N, 82.5°E). For spatial distributions, gas concentrations were determined from 33°N to 39°N and 79°E to 85°E. Because AIRX3STD data (AIRS+AMSU) were used in this paper, sensor data were only available to 2016; therefore, data from 2003 to 2015 were used. The AIRS is a hyperspectral sensor mounted on the Aqua satellite launched by NASA on May 4, 2002. The satellite can cover 85% of the earth twice a day. The AIRS has 2,378 continuous infrared spectral channels, which can provide hyperspectral resolution data in the wavelength range from 3.7 to 15.4 μm with a spatial resolution of 1° × 1°. The nominal spectral resolution is λ/Δλ > 1,200. It can monitor the physical parameters pressure, temperature, and humidity and the chemical species CH4, CO, and O3 [18,19]. The meteorological data used for the backward trajectories were from atmospheric assimilation products and model reanalysis data of the National Centers for Environmental Prediction (NCEP) in the US and were obtained through the GDAS, which assimilates a variety of conventional data and satellite observation data. The data set included air temperature, humidity, near-surface wind speed, and near-surface pressure, with a time resolution of 3 h and a spatial resolution of 1° × 1°. The data were downloaded from the official website of the NCEP of the National Meteorological Administration (https://www.noaa.gov/ [20]).
2.2. Methods. Data were extracted by MATLAB, and the abnormal index method and the difference method were used to subtract the average value of the gas background field in nonseismic years and eliminate the influence of seasonal changes. The HYSPLIT model was used to examine the transport and diffusion trajectories of O3.
(1) Extraction of CO, CH 4 , and O 3 data. The CO, CH 4 , and O 3 concentration data and the VMR data were in NASA standard disk storage format HDF-type (hierarchical data format). MATLAB (2021a) software was used to extract the data, and ArcGIS (10.5) software was used for interpolation processing.
(2) Abnormal index method. This method is similar to the definition of a thermal anomaly [21,22], and the A index (equation (2)) is the ratio of the difference (equation (3)) to the standard deviation σ(x, y, t) (equation (1)) [15]. The anomaly index is used to assess the reliability of anomalies. When the A index > 2, the anomaly reliability reaches 95.44%.
(3) Difference method. This method directly reflects the absolute variation in a gas by using the abnormal difference value and highlights the degree by which a gas concentration value deviates from the background value. The anomaly difference is the difference between the current gas concentration G(x, y, t) and the background gas concentration G_bac(x, y, t) at one point, ΔG(x, y, t) = G(x, y, t) − G_bac(x, y, t) (equation (3)). The monthly background value G_bac is the average value of each month corresponding to the nonearthquake years from 2003 to 2015, G_bac(x, y, t) = (1/N) Σ_i G_i(x, y, t) (equation (4)) [15], and the A index is A(x, y, t) = ΔG(x, y, t)/σ(x, y, t) (equation (2)), where σ(x, y, t) (equation (1)) is the standard deviation of the gas concentration at that point and month over the same N years. In the formulas, x, y, and t are the longitude, latitude, and month, respectively; G is the gas column concentration value at the current point (longitude x, latitude y) in month t; and G_bac is the arithmetic mean value of the gas column concentration at the same point (longitude x, latitude y) in month t over N years (N = 13, 2003 to 2015). A short numerical sketch of equations (1)-(4) is given after the method descriptions below.
(4) HYSPLIT model. The HYSPLIT model is a professional model used to calculate and analyze the transport and diffusion trajectories of atmospheric pollutants [23]. The model has two primary forms: backward transport and forward diffusion. Of the two, backward simulation traces the flow into the target area and has been primarily used to explain the gas source in a target area [23,24]. In this paper, HYSPLIT backward trajectories were used to simulate the transmission path of O3 on the day of each earthquake and on the maximum concentration day, as well as the O3 contribution rate to the air mass.
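As a minimal illustration of equations (1)-(4) referenced in the method list above, the numpy sketch below computes the monthly background, standard deviation, difference, and A index from a synthetic 13-year series; the array shapes and values are placeholders, not the AIRS data used in the study.

```python
import numpy as np

# Synthetic gas-column series with shape (years, months, lat, lon); the real
# input would be the 2003-2015 AIRS monthly grids over the study area
# (with the background restricted to non-earthquake years).
rng = np.random.default_rng(0)
series = rng.normal(1.0e17, 5.0e15, size=(13, 12, 7, 7))

g_bac = series.mean(axis=0)        # equation (4): monthly mean background
sigma = series.std(axis=0)         # equation (1): monthly standard deviation

current = series[5]                # one year's monthly grids to screen
delta_g = current - g_bac          # equation (3): difference from background
a_index = delta_g / sigma          # equation (2): anomaly index

# Cells with A index > 2 are treated as reliable anomalies (~95% reliability)
anomalous = a_index > 2.0
print("anomalous cells per month:", anomalous.sum(axis=(1, 2)))
```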
Results
Characteristics of TotCO and TotO 3 anomalies associated with the two M S 7.3 earthquakes in Yutian in 2008 and 2014 obtained by difference value and anomaly index methods are shown in Table 1.
3.1. Spatial Anomaly Features. The difference method indicated that the anomaly of TotCO before the 2008 Yutian earthquake occurred primarily to the northwest of the epicenter, appearing along the NWW (west-northwest) trending Tekrick fault (Figure 2(a)). The results obtained by the anomaly index method were similar to those obtained by the difference method, and the maximum anomaly index was 2.0σ (Figure 2(b)). These results indicated that the anomaly might be associated with the 2008 Yutian earthquake. Figure 3(a) shows that TotO3 anomalies before the 2008 Yutian earthquake were primarily distributed in a double ring, with the line between the extreme value centers of the two abnormal rings stretching in an EW direction. The distribution of abnormal bands shown in Figure 3(b) was consistent with that in Figure 3(a), and the maximum anomaly index was approximately 2.2σ. Figure 4(a) shows that the TotCO anomaly was linearly distributed in the NE (northeast) direction along the Arerjin fault-Xiaoerkule-Ashe Cooley fault after the 2014 Yutian earthquake. The CO anomaly distribution in Figure 4(b) was generally consistent with that in Figure 4(a), with a maximum anomaly index of approximately 1.97σ. However, the TotCO anomaly shown in Figure 4(b) was slightly weaker than that in Figure 4(a). Figure 5(a) shows that the anomalous concentration of TotO3 after the 2014 Yutian earthquake was linear along the Ashkule-Guozhacuo fault zone. Similar results are shown in Figure 5(b), with an anomaly index of approximately 2.0σ. Theoretically, the anomaly centers of TotCO and TotO3 should be near the epicentral fault zone and overlap each other. However, this expectation was not supported by the results, possibly because of the topographic conditions and the gas distribution height in Yutian County. Yutian County is ox-leg-shaped, with the terrain higher in the south and lower in the north, with a height difference of approximately 3,500 m. The Kashtash and Kunlun mountains are in the south of Yutian County, and the Taklimakan Desert and the Tarim Basin are in the north. The CO was concentrated primarily near the surface. A downdraft prevails in the basin, and a pressure difference develops between the basin and the alpine region, forming a local atmospheric circulation that moves the CO release point northward to form an abnormal center. O3 was concentrated primarily at high altitudes, where wind speeds are high and gas flow is fast. Affected by northerly winds, O3 moves southward from the original release point to form an abnormal center [26]. Therefore, the TotCO and TotO3 anomaly centers shifted and did not coincide with one another.
3.2. Temporal Anomaly Characteristics. The maximum anomalies of TotCO and TotO3 occurred one month before the 2008 Yutian earthquake (Figures 2 and 3). A small-amplitude CO concentration anomaly occurred in December 2007 (Figure 2), which might be associated with the 6.9-magnitude earthquake in Gêrzê County, Tibet, on January 9, 2008 (480 km from the epicenter of the 2008 Yutian earthquake) [27]. Then, the maximum TotCO anomaly appeared in February and gradually recovered to a normal level of variation from March. The timing of the anomalous change in TotO3 (Figure 3) was generally consistent with that of TotCO. As shown in Figure 6(a), TotCO increased sharply from February 8, reached its maximum on February 10, and then decreased gradually. The TotCO also increased from February 28 to March 13, but the change in amplitude was small and consistent with periodic changes. After that increase, the TotCO returned to the level of periodic change. The TotO3 began to increase from February 8, breaking the cycle of gradual change, but then decreased sharply on February 11 (Figure 6). These changes were not seasonal; thus, their cause was most likely associated with the earthquake. The CO VMR values at 400 to 700 hPa increased significantly from January 5 to February 22 (Figure 8(a)). The CO VMR values increased rapidly above 600 hPa and reached a maximum value on February 22. Then, the values began to decline and returned to normal levels of annual change in April. The CO VMR values at 100 to 300 hPa also increased, but the changes were not obvious because of the height. These results demonstrated that changes in the 400 to 850 hPa CO VMR values were due to contributions from near the ground.
3.3. Relations between TotCO and TotO3 Anomalies and Earthquakes. The TotCO was abnormal three months before the 2008 Yutian earthquake and reached its maximum abnormality in February before that earthquake. The maximum abnormal value of TotCO occurred on February 10, and the degree of abnormality exceeded the background value of 1.011 × 10^17 molecules/cm2 (Figure 2(a)).
Then, TotCO decreased abnormally and gradually returned to the level of periodic changes. During the abnormal period, the abnormal fluctuation range of TotCO was large at first and then became small from the beginning of February 2008. In the month of the 2008 Yutian earthquake, TotCO also fluctuated, but the range of fluctuation was smaller than that in February (Figure 6(a)), which might be related to changes in underground gas emissions caused by changes in ground stress during earthquake buildup. In addition, TotCO showed a slight abnormality three months before the 2014 Yutian earthquake and then returned to normal, which might be associated with an MS 5.6 earthquake (36.8°N, 86.7°E) that occurred on November 24, 2013. In March 2014, TotCO reached its maximum abnormality, and the maximum abnormal value exceeded the background value of 1.166 × 10^17 molecules/cm2 for the same period, after which the abnormal degree of TotCO decreased. This result might be related to the release of ground stress during the earthquake in this area, which caused the fault zone near the epicenter to close before the earthquake and then open afterward. As shown in Figure 6, the low values might be because the gas in the epicenter area was evacuated to the northern area far from the epicenter under the influence of atmospheric circulation. This phenomenon was consistent with the results in Figure 4, which further illustrates that the abnormal changes in TotCO might be associated with the 2014 Yutian earthquake.
The TotO3 was abnormal one month before the earthquake on March 21, 2008. The maximum abnormal value of TotO3 occurred on February 27, exceeding the background value of 22.67 DU for the same period. Then, the variation in values returned to normal (Figure 3(a)). In addition, TotO3 began to appear abnormal two months before the earthquake on February 12, 2014, and reached its maximum abnormality in March 2014. The maximum abnormal value of TotO3 occurred on March 5, exceeding the background value of 33 DU for the same period (Figure 5(a)), and then the degree of abnormality decreased. The abnormal values of TotO3 appeared later than those of TotCO, and their duration was longer than that of CO. The spatial correspondence of the two was relatively good, but their intensities were not consistent. The differences might be related to the multiple causes of O3 abnormalities: in addition to underground gas escape and atmospheric chemical reactions, atmospheric transportation might also affect TotO3 abnormalities. Moreover, the spatial distributions of the anomaly centers of TotO3 and TotCO in 2014 did not correspond well (Figures 4 and 5), which might be related to the topography and gas distribution height in Yutian County.
In the past two decades, strong earthquakes on the Qinghai-Tibet Plateau have generally been distributed on the periphery of the Bayan Har Block [29]. The 2008 Yutian earthquake occurred on the western boundary of the Bayan Har Block. Wan et al. [29] studied the regional structure of the fault zone around the 2008 Yutian earthquake. They found that under the NNE thrust of the Indian Plate against the Eurasian Plate, the Qaidam Basin in the northern margin of the Qinghai-Tibet Plateau moved eastward along the Arerjin fault, whereas the Xingdukush block moved NW along the Karakorum fault as a whole. The epicenter area was between the two blocks, resulting in slow, large-scale strain accumulation in the crust of the epicenter area. The EW-trending tension of the fault zone before the earthquake occurred under this bilateral dynamic interaction, and when the structure was unlocked, the in situ stress was released. The authors speculated that underground gas was released in large quantities, resulting in anomalies. After the earthquake, the structure was locked, and the anomaly gradually decreased. Afterward, the coseismic stress disturbance of the 2008 Yutian earthquake [27] triggered subsequent aftershocks and the MS 6.2 earthquake (90 km from the 2008 Yutian earthquake epicenter) that occurred on August 12, 2012. As a result, stress increased on the Kunlun fault in the northern segment of the Gonggacuo fault zone [29], such that the southwestern end of the Arerjin fault expanded along the NW tensile structural belt [30], which accelerated the 2014 Yutian earthquake. Because of the tectonic stress caused by the 2008 Yutian earthquake and its triggered aftershocks, the Xiaoerkule fault and the Ashe Cooley fault were locked before the earthquake. Therefore, the upward gas flow pores were closed, resulting in a decrease in normal overflow gas in the short term before and after the earthquake and a continuous decline in gas concentration. After the earthquake, the stress in this area was released. According to previous studies [2,14,31], there are currently two primary explanations for the abnormal changes in TotCO and TotO3 in the epicentral area. One explanation is the escape of underground gas along the fault zone, and the other is that the gas dissipating into the atmosphere reacts with original atmospheric gases. However, the contributions of these processes might not be large enough to produce an abrupt increase in total ozone in such a short period [13]. In addition, according to Ganguly [28], atmospheric transportation is also a cause of abnormal changes in TotO3. The causes of the abnormal TotCO and TotO3 associated with the two Yutian earthquakes are discussed from these three aspects below. First, underground gases escape along the fault zone. These gases are stored in the earth's crust at higher than atmospheric pressure, and they tend to migrate upward, penetrate into shallow rock cracks and pores, and escape to the atmosphere [15]. During an earthquake, the increase in ground stress causes many cracks, pores, and fissures to form in the fracture zone near the epicenter, and large amounts of carbon-containing gases (e.g., CH4 and CO) escape into the atmosphere through these channels, causing abnormal changes in gas concentrations in the epicenter area [2]. In this study, the spatial distributions of TotCO and TotO3 corresponded well with the fault zone. However, an earthquake is a complex process, and other factors such as topography, weather, and human activities in the epicenter area can also cause abnormal changes in gas concentrations.
In this study, calculations were used during the processing of data to eliminate the influences of other factors, and therefore, underground gas emissions might be the largest cause of abnormal changes in TotCO and TotO 3 in the epicenter area.
Second, underground gases escape into the atmosphere and chemically react with atmospheric gases. The gas increments affect atmospheric concentrations in an epicenter area, resulting in abnormal changes in TotCO and TotO3. Following escape from underground, CH4 is oxidized to form the transition product CO [32,33]. Because of the high background content of atmospheric CH4, the increase in CH4 from underground gas emissions is limited. Nevertheless, the oxidation of CH4 contributes to CO and O3 abnormalities [34-36]. O3 is distributed primarily in the troposphere, and atmospheric photochemical reactions in the troposphere can also cause abnormal O3 concentrations, according to the reaction CO + 2O2 → CO2 + O3. The relations shown in Table 2 [14] are consistent with the results of this study. As shown in Figure 6, the change in TotO3 lagged behind that in TotCO, indicating that the earthquakes caused CO to be oxidized to O3. The coefficients of correlation between CH4 and CO in 2008 and 2014 were −0.731 and −0.370, respectively, with the correlations significant at p < 0.05 and p < 0.01, respectively. The coefficients of correlation between CH4 and O3 in 2008 and 2014 were −0.660 and −0.558, respectively, with both correlations significant at p < 0.05, indicating that part of the TotCO and TotO3 was also derived from CH4 oxidation (a minimal sketch of such a correlation test is given at the end of this section). In addition, during an earthquake, low-frequency electromagnetic radiation and ionospheric disturbances promote the decay of 14N to form CO [37,38], accelerating the production of CO according to the reactions 14N(n, p) → 14C + H and 2C + O2 → 2CO. These reactions are another reason for abnormal changes in TotCO.
Third, atmospheric transmission might also be a cause of abnormal gas concentrations in epicentral areas. The 5-day backward trajectories on the day of the 2008 Yutian earthquake (Figure 10(a)) showed that in the air masses from the NWW direction in the 100 hPa pressure layer, the O 3 contribution rate was the largest at 28.57%. In the remaining air masses, the contribution was relatively small. The 5-day backward trajectories of the maximum O 3 concentration day (Figure 10(b)) showed that the O 3 contribution rate in the air mass from the NWW direction increased to 34.43%. Although the O 3 contribution rate in the other air masses also increased, the magnitude of increase was small.
The 5-day backward trajectories on the day of the 2014 Yutian earthquake (Figure 11(a)) showed that, among the air masses from the SWW direction, the two air masses closest to the south had the largest contributions of O3, reaching 20.68% and 20.78%. The 5-day backward trajectories of the day with the maximum O3 concentration (Figure 11(b)) showed that, in the air mass from the SWW direction, the maximum contribution rate of O3 increased to 40.26%. For the source of the air mass, these results were consistent with those obtained by the difference and abnormal index methods, indicating that atmospheric transportation was also a cause of abnormal changes in TotO3. Ganguly [28] studied the increase in TotO3 after the Gujarat MS 7.7 and MS 5.2 earthquakes in 2001 and 2006, respectively, and found that the increase in TotO3 after the earthquakes was due to atmospheric transmission between the upper troposphere and lower stratosphere; the upper O3-rich air was transported to the epicenter. The results of this study were similar.
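As referenced above (and in the abstract: CO-O3 correlation coefficients of 0.672 in 2008 and 0.638 in 2014, significant at p < 0.05), the correlation between the gas time series can be assessed with a standard Pearson test. The sketch below uses synthetic stand-in series; the actual monthly TotCO and TotO3 values are not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-ins for the monthly TotCO and TotO3 series over the abnormal
# period; the real analysis would use the AIRS-derived column values.
rng = np.random.default_rng(1)
tot_co = rng.normal(1.0e17, 5.0e15, size=12)
tot_o3 = 0.7 * (tot_co - tot_co.mean()) / tot_co.std() + rng.normal(0.0, 0.7, size=12)

r, p_value = pearsonr(tot_co, tot_o3)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")  # correlated if p < 0.05
```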
Conclusions
(1) AIRS hyperspectral remote sensing data were analyzed, and TotCO and TotO 3 changed abnormally before and after two M S 7.3 earthquakes in the study area. Abnormal gas concentrations associated with the 2008 Yutian earthquake occurred in February before the earthquake, and those associated with the 2014 Yutian earthquake occurred in March after the earthquake. Before and after the two earthquakes, the anomalies of TotCO and TotO 3 were distributed along the fault zone or the anomaly trend was consistent with the fault zone trend.
(2) The release of ground stress during earthquake buildup and occurrence causes the release of underground gases into the atmosphere along the rupture zone, which might be a cause of the abnormal changes in TotCO and TotO 3 . The CO and O 3 released into the atmosphere chemically react with CH 4 and other original atmospheric gases, which may be another cause of the abnormal changes in TotCO and TotO 3 . In addition, atmospheric transportation might also contribute to abnormal changes in TotO 3 .
(3) The abnormal changes in TotCO and TotO 3 before and after earthquakes may be anomalies that can predict impending earthquakes. The CO and O 3 anomaly indexes of the two Yutian M S 7.3 earthquakes obtained by the anomaly index method were 2.0 and 2.2 and 2.0 and 2.2, respectively, and the anomaly reliability exceeded 94%. Thus, satellite hyperspectral remote sensing data can be used to extract reliable seismic-related information from geochemical anomalies.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Diffusion Nuclear Magnetic Resonance Measurements on Cationic Gold (I) Complexes in Catalytic Conditions: Counterion and Solvent Effects
The amounts of free ions, ion pairs, and higher aggregates of the possible species present in a solution during the gold(I)-catalyzed alkoxylation of an unsaturated hydrocarbon, i.e., the ISIP (inner sphere ion pair) [(NHC)AuX] and OSIP (outer sphere ion pair) [(NHC)Au(TME)X] [NHC = 1,3-bis(2,6-di-isopropylphenyl)-imidazol-2-ylidene; TME = tetramethylethylene (2,3-bis methyl-butene); X− = Cl−, BF4−, OTf−, OTs−, and BArF4− (ArF = 3,5-(CF3)2C6H3)], have been determined. The 1H and 19F DOSY NMR measurements conducted in catalytic conditions indicate that the dissociation degree (α) of the ion pair/free ion equilibrium {[(NHC)Au(TME)X] ⇌ [(NHC)Au(TME)]+ + X−} depends on the nature of the counterion (X−) when chloroform is the catalytic solvent: while the compounds containing OTs− and OTf− as the counterion gave low α values (which means a high number of ion pairs) of 0.13 and 0.24, respectively, the compounds containing BF4− and BArF4− showed higher α values of 0.36 and 0.32, respectively. These results experimentally confirm previous deductions based on catalytic and theoretical data: the lower the α value, the greater the catalytic activity, because it is the anion within the ion pair that can activate methanol during the nucleophilic attack, although the lower propensity of BF4− and BArF4− to activate methanol, as suggested by the DFT calculations, cannot be completely overlooked. As for the effect of the solvent, α increases as the dielectric constant increases, as expected; in particular, green solvents with high dielectric constants show very high α values (0.90, 0.84, 0.80, and 0.70 for propylene carbonate, γ-valerolactone, acetone, and methanol, respectively), thus confirming that the moderately high activity of NHC-Au-OTf in these solvents is due to the specific effect of polar functionalities (O-H, C=O, O-R) in activating methanol. Finally, the DOSY measurements conducted in p-cymene show the formation of ion quadruples: under these conditions, the anion can better exercise its 'template' and 'activating' roles, giving the highest TOF.
We started with the preliminary determination of the ion-pairing structure of [(L)Au(UHS)]+X− (L = carbene and phosphane, UHS = unsaturated hydrocarbon, and X− = weakly coordinating counterion) systems, which are the most important intermediates formed during gold-catalyzed nucleophilic additions to an unsaturated substrate. The anion, in order to influence the kinetics of the reaction, must be in the correct position, at least at the RDS of the reaction [42]. From 2009 onwards [43], several interionic characterizations [44,45] of the [(L)Au(UHS)]+X− species have been carried out by some of us, taking advantage of nuclear Overhauser effect (NOE) NMR experiments and DFT calculations of potential energy surfaces (PESs) as well as the Coulomb potential of ions. Recently, a cationic gold(III) pre-catalyst ion-pairing structure was also determined for the first time using the same approach [46]. These powerful experimental and theoretical methods were used by us to understand the relative anion-cation orientation determined by the nature of the ancillary ligand (L), substrate (S), and counterion (X−). This fine-tuning of the interionic structure has paved the way for larger control over the properties and activity of these catalysts [47].
The hydration and alkoxylation of alkynes are key processes for the industrial production of carbonyl derivatives, and the pivotal role of ion pairing in the mechanism of the hydration and alkoxylation of alkynes promoted by the gold(I) catalyst L-Au-X is deeply analyzed and discussed in the literature [48].
Kinetic experiments, together with multinuclear and multidimensional NMR measurements and DFT calculations, allowed us to study, understand, and rationalize the importance of both counterions [49] (in terms of the gold-counterion coordination ability and basicity/proton affinity) and ligands [50,51] (in terms of the donation and π-back donation properties versus gold) in the catalytic cycle (pre-equilibrium step, nucleophilic attack, protodeauration), as shown in Figure 1. Moreover, we have pointed out the crucial role of solvent and noncovalent interactions from both an experimental and theoretical point of view [52]. These results allowed us to develop, for the first time, a green strategy for the hydration of alkynes promoted by gold(I) species in both neat conditions [53] and in green solvents [54,55]. In summary, the main previous results [56] concerning the pivotal role of the counterion in the alkoxylation of alkynes promoted by the L-Au(I) catalyst are as follows:
a. The rate-determining step is the nucleophilic attack of methanol on the coordinated alkyne (Figure 1), and the intermediate coordination ability, basicity, and hydrogen bond-accepting properties of OTs− and OTf− provide the best compromise for achieving an efficient catalyst (high TOF) [57].
b. In the optimized geometry of the transition state, the anions OTs− and OTf− are located near the alkyne, interacting both with the metal center and with the methanol: they act as a template, helping the methanol to assume its reactive position, and activate the methanol through a hydrogen bond (enhancing the nucleophilicity of the alcohol). If the basicity of the anion is too low (BF4− and BARF−), the template effect is lost, and the hydrogen bonding with the methanol does not take place [58].
c. The polarity of the solvent is crucial in determining the catalytic activity of L-Au(I) complexes because it is related to the amount of ion pairs in the solution. Moreover, peculiar functional groups present in the solvent could promote the nucleophilic attack [54].
These important considerations and deductions in gold catalysis are still missing one fundamental finding: the experimental determination of the number of free ions, ion pairs, and higher aggregates in catalytic conditions as a function of both the counterion and solvent.
In this context, pulsed-field-gradient spin-echo (PGSE) NMR and its DOSY (diffusion-ordered NMR spectroscopy) implementation are the most effective approaches for the analysis of organometallic compounds in solution [59-61]. PGSE and DOSY provide an accurate estimate of the translational diffusion coefficients (Dt) of the corresponding organometallic compounds. These coefficients can be interpreted using several approaches, which provide information on the molecular sizes and are used to estimate the hydrodynamic radii of organometallic compounds. These approaches have been highly successful in the characterization of neutral gold molecules [62,63] and salts [64,65].
Even though the amounts of free ions, ion pairs, and higher aggregates present in catalytic conditions are very important parameters for gold(I) catalysis, and their determination can be essential to better understand the mechanism of a catalyst, to the best of our knowledge there are no examples in the literature [66] of PGSE (or DOSY) measurements conducted in catalytic conditions.
Gold(I) catalysis is a vitally important area of research with many reported examples. Over the last few years, some of us have been engaged in the rationalization, from an experimental and theoretical point of view, of the important features of gold(I) [33-39] and gold(III) [40,41] catalysis. In this paper, the amounts of free ions, ion pairs, and higher aggregates in a solution of (NHC)Au complexes under catalytic conditions have been determined, and the dissociation degree of the ion pair/free ion equilibrium {[(NHC)Au(TME)X] ⇌ [(NHC)Au(TME)]+ + X−} is shown to depend on both the nature of the counterion (X−) and the solvent. These values relate to the performance of the (NHC)Au catalyst as a function of X− and the solvent and provide further insight into the effects of ion pairs in gold catalysis.
Results and Discussion
1OTf and 1OTs (Figure 2) have been synthesized following literature-available synthetic protocols [49] by chloride abstraction by adding AgOTf and AgOTs, respectively, to a solution of 1Cl in methylene chloride, affording the desired compounds with excellent yields and purity (see Section 3 for details).
On the other hand, 1(TME)BF 4 and 1(TME)BARF have been generated in situ in an NMR tube by chloride abstraction.This was achieved by adding AgBF 4 and AgBARF, respectively, to (i) a solution of 1Cl in CDCl 3 in the presence of five equivalents of TME [TME = tetramethylethylene (2,3-bis methyl-butene)] and (ii) in pseudo-catalytic conditions (solvent, Methanol-d 4 , and TME), affording the desired compounds and AgCl (see Section 3 and Supporting Information for details).
The NMR spectra of complexes 1OTf and 1OTs are consistent with those reported in the literature [57]. The NMR spectra of complexes 1(TME)BF4 and 1(TME)BARF clearly show the characteristic signals of the coordinated alkene, as reported for the compound 1(TME)SbF6 [67].
1H and 19F DOSY NMR experiments were performed for the gold complexes and TME in chloroform-d (Table S1) and in pseudo-catalytic conditions (Tables 1 and 2). In order to avoid the reaction of 3-hexyne with methanol during the 1H and 19F DOSY experiments, 3-hexyne was replaced with TME, which is unreactive towards alkoxylation at room temperature. This approximation remains valid because the coordination properties towards gold and the dielectric constant of TME are comparable to those of 3-hexyne.
The experimental observable of DOSY NMR spectroscopy is the self-diffusion coefficient (Dt, see Section 3) of each species (cationic, D+, or anionic, D−), from which the corresponding hydrodynamic radius can be derived by the Stokes-Einstein equation (1), Dt = kT/(cπηrH), where k is the Boltzmann constant, T is the absolute temperature, η is the viscosity, and c is a numerical factor, which usually approximates to 6 for large-size molecules.
A more accurate estimation of the hydrodynamic radius rH can be obtained by Equation (2), in which the c factor is expressed as a function of the solvent-to-solute ratio of radii on the basis of the model proposed by Wirtz and coworkers [68,69], and where rsolv is the radius of the solvent. The Dt data were treated according to Equation (2), as described in the literature [70], to derive the hydrodynamic dimensions, taking the solvent or TME as the internal standard. Assuming the shape of the aggregate is spherical, rH± can easily be converted to a hydrodynamic volume (VH±). The average degree of aggregation can be evaluated by dividing VH± by the hydrodynamic volume of the ion pair, VH0,IP, in order to derive the aggregation number (N±).
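A small Python sketch of the chain of conversions just described: a measured Dt is turned into a hydrodynamic radius via the Stokes-Einstein relation, the radius into a spherical volume, and the volume into an aggregation number by dividing by the ion-pair volume. For simplicity the shape factor c is fixed at 6; the paper instead uses the Wirtz-type correction of Equation (2) (refs. [68-70]). All numerical inputs below are illustrative, not the measured values of this work.

```python
from math import pi

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(d_t, temperature=298.15, viscosity=5.4e-4, c=6.0):
    """Stokes-Einstein estimate of r_H (m) from a diffusion coefficient D_t (m^2/s).

    The default viscosity is roughly that of chloroform at room temperature
    (Pa*s); c = 6 is the large-molecule limit that Equation (2) would refine.
    """
    return K_B * temperature / (c * pi * viscosity * d_t)

def volume_and_aggregation(d_t, v_ip, **kwargs):
    """Spherical hydrodynamic volume (A^3) and aggregation number N = V_H / V_H(ion pair)."""
    r_h = hydrodynamic_radius(d_t, **kwargs) * 1e10  # m -> Angstrom
    v_h = 4.0 / 3.0 * pi * r_h ** 3
    return v_h, v_h / v_ip

# Illustrative inputs: a D_t of the order of 10^-10 m^2/s and an assumed V_H(IP)
v_h, n = volume_and_aggregation(d_t=6.0e-10, v_ip=1152.0)
print(f"V_H = {v_h:.0f} A^3, N = {n:.2f}")
```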
Estimating VH0,IP is not straightforward, and relying on the van der Waals and crystallographic volume of the species is unreliable, particularly when aromatic groups are present [71]. By choosing the experimental conditions properly to avoid any ion-pairing process (for example, performing PGSE measurements in a polar solvent at a low concentration [72]), it is possible to measure VH0,IP as the sum of VH0,− (the hydrodynamic volume of the free anion) and VH0,+ (the hydrodynamic volume of the free cation) [72]. If N± are both equal to or smaller than 1, the amount of ion triples or quadruples or higher aggregates can be considered small, and ion pairing might be assumed to be the only process active in the solution. In such a case, it is useful to define the dissociation degree (α) by Equation (3) in order to quantify the relative concentration of free ions and ion pairs. Before carrying out the measurements under pseudo-catalytic conditions, diffusion measurements on compounds 1OTf, 1OTs, 1(TME)BF4, 1(TME)BARF, and TME (which we used as an internal standard in the pseudo-catalytic measurements) were performed. The obtained values of Dt, rH, and VH are given in the Supporting Information. The hydrodynamic volumes ranged from 732 Å3 to 840 Å3 for compounds 1OTf and 1OTs, in line with the literature data for compound 1Cl (752 Å3). With regard to compound 1(TME)BF4, the values are 800 Å3 and 850 Å3 for the anion and the cation, respectively, in line with those obtained for 1(4-Me-styrene)BF4 in CD2Cl2. In addition, a Dt(CDCl3)/Dt(TME) ratio of 0.9 was obtained, together with a VH value of 100 Å3 for TME (considering CDCl3 as the internal standard). At this point, we started the measurements under pseudo-catalytic conditions in CDCl3 (400 µL of CDCl3, 142 µL of CD3OD, and 105 µL of TME). We began by measuring the catalytic mixture with precursor 1Cl, which is inactive in catalysis (Table 1). The Dt(CDCl3)/Dt(TME) ratio is now equal to 0.88, very close to 0.90. This means that we can apply the methodology known in the literature, even if a mixture of solvents is used. We therefore chose TME as the internal standard and the rsolv of CDCl3 (Equation (2)) for the treatment of the diffusion measurements and thus calculated the hydrodynamic radii and volumes of the gold catalyst.
The value obtained for 1Cl was 937 Å 3 .This V H was taken as the reference volume of the cation 1 + (NHC-Au + ) [73].We have calculated the V H 0,IP of 1(TME)X while considering the additive volumes (1 + , TME and anion) and the literature V H 0,− values [74], where the results are 1152 Å 3 (X − = OTf − ), 1245 Å 3 (X − = OTs − ), 1089 Å 3 (X − = BF 4 − ), and 1927 Å 3 (X − = BARF − ).These data were used to obtain the values of the aggregation numbers and α, as described above.Since V H − is always much larger than V H 0,− , V H − is a more sensitive probe than V H + to quantitatively assess the ion pairing in a solution, whereas values of N ± larger than 1 indicate the presence of higher aggregates [74].The experimental error on V H ± , N ± , and α is estimated as 10% [70].
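Equation (3), which defines the dissociation degree α, is not reproduced in the text above. One common way to estimate α when only free ions and ion pairs coexist is to treat the measured anion volume as a population-weighted average of the free-anion and ion-pair volumes; the sketch below makes that assumption explicit and should be read as an illustration rather than the authors' exact expression. The input volumes are placeholders.

```python
def dissociation_degree(v_h_anion, v_h0_anion, v_h0_ion_pair):
    """Estimate alpha assuming V_H(-) = alpha * V_H0(-) + (1 - alpha) * V_H0(IP).

    This weighted-average model is an assumption made for illustration; the
    paper's Equation (3) should be consulted for the exact definition used.
    """
    alpha = (v_h0_ion_pair - v_h_anion) / (v_h0_ion_pair - v_h0_anion)
    return min(max(alpha, 0.0), 1.0)  # clamp to the physically meaningful range

# Placeholder volumes in Angstrom^3 (not the measured values of this work)
print(f"alpha = {dissociation_degree(v_h_anion=1000.0, v_h0_anion=120.0, v_h0_ion_pair=1152.0):.2f}")
```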
Table 1 shows the results of the diffusion NMR measurements conducted under the same pseudo-catalytic conditions for the gold complexes as a function of the counterion.The values of the hydrodynamic radii of the cations r H + vary from 6.2 Å (entry 3, Table 1) to 6.9 Å (entry 4, Table 1), corresponding to volumes between 998 Å 3 and 1346 Å 3 .These hydrodynamic volumes (V H + ) are about equal to those of the ion pair (V H 0,IP ) for compounds 1(TME)OTs (entry 2, Table 1) and 1(TME)OTf (entry 3, Table 1) while the value for complex 1(TME)BF 4 (entry 4, Table 1) is slightly higher and the V H + of complex 1(TME)BARF is lower.As far as the V H − is concerned, the values range from 720 Å 3 to 1596 Å 3 (r H − from 5.6 Å to 7.3 Å, respectively) and are less than the respective V H 0,IP .Analysis of the anion aggregation number (N − ) shows that the values are close to 1 for complexes 1(TME)OTs and 1(TME)OTf, while the values are less than 1 for complexes 1(TME)BF 4 and 1(TME)BARF.It can be stated with confidence that complexes 1(TME)OTs and 1(TME)OTf have a greater tendency to form ion pairs than complexes 1(TME)BF 4 and 1(TME)BARF.This is in line with what has already been observed in the literature for organometallic complexes and organic salts.
Since the aggregation numbers are less than one, we calculated the dissociation degree (α) (Equation (3)) to better quantify the amount of ion pairs. The α values span from 0.13 for 1(TME)OTs to 0.24 for 1(TME)OTf and have almost the same values of 0.32 and 0.36 for 1(TME)BF4 and 1(TME)BARF, respectively.
In order to analyze the role of the anion during catalysis in more detail, the trend of the dissociation degree α against the TOF [TOF (h−1) = moles of product/moles of catalyst/time (h)] is shown in Figure 3 (data are also presented in Table 1) [49]. From the analysis of Figure 3, it can be seen that the anion with the highest TOF value of 300 h−1, 1(TME)OTs, is the one with the lowest α, i.e., the highest amount of ion pairs. This is expected because the anion helps during the nucleophilic attack of methanol with a templating effect (helping the methanol to assume its reactive position and activating the methanol through a hydrogen bond). This is only possible with ion pairs and is absent when the catalyst is present in the solution in the form of free ions. The combination of the present NMR diffusion results with the previous catalytic and theoretical data allows us to conclude that the different reactivity between 1(TME)OTs and 1(TME)OTf is given by both factors: the OTs− anion has a higher templating power with respect to OTf−, and, on the other hand, 1(TME)OTs is also present in the solution in the form of an ion pair in higher amounts with respect to 1(TME)OTf.
The higher α values, together with the low TOF values for the anions BF4⁻ and BARF⁻ (176 h⁻¹ and 153 h⁻¹, respectively) [49], unequivocally demonstrate that the low activity is due to the low percentage of ion pairs under catalytic conditions compared with the anions OTs⁻ and OTf⁻. However, it is important to note that the low templating and activating effect of BF4⁻ and BARF⁻, with respect to OTs⁻ (and OTf⁻), cannot be completely neglected: under these conditions, it has been suggested that a second methanol molecule could be involved in the mechanism. This is the first time that experimental and theoretical catalytic data and NMR diffusion results have been combined for gold catalysis. This allows us to analyze in depth the role of the anion during the rate-determining step of the alkoxylation of alkynes, i.e., the nucleophilic attack (Figure 1), in light of the different amounts of ion pairs and the different templating and activating effects of the counterion.
The same methodology (theoretical and experimental kinetic data compared with NMR diffusion experiments) is the best way to understand the role of the functional groups of the solvent and additives in gold catalysis. The gold affinity and hydrogen-bond basicity of the functional groups of additives and solvents are very important topics, which have recently been the subject of a review by Xu and collaborators [75]; however, further mechanistic studies and novel control experiments are needed for a deeper understanding of the role of additives and solvent in gold catalysis.
Table 2 shows the values of D_t (10⁻¹⁰ m² s⁻¹), r_H^± (Å), V_H^± (Å³), the aggregation numbers (N±), and α for complex 1(TME)OTf in different solvents (p-cymene, chloroform, methanol, acetone, γ-valerolactone, and propylene carbonate). The values of the hydrodynamic radius of the cation, r_H^+, vary from 6.1 Å (entry 4, Table 2) to 8.4 Å (entry 1, Table 2), corresponding to volumes between 946 Å³ and 2509 Å³. These hydrodynamic volumes (V_H^+) are about equal to those of the ion pair (V_H^0,IP) for 1(TME)OTf in all solvents except p-cymene (entry 1, Table 2). As for V_H^−, the values range from 217 Å³ to 1499 Å³ (r_H^− from 3.7 Å to 7.1 Å, respectively) and are less than the respective V_H^0,IP, except for the value in p-cymene (entry 1, Table 2). The aggregation numbers (N±) are all less than 1, with the exception of p-cymene (entry 1, Table 2), whose values of 2.18 and 1.3 for N_+ and N_−, respectively, indicate the formation of aggregates larger than ion pairs. The values for chloroform, already commented on above (entry 3, Table 1), indicate a high presence of ion pairs with respect to free ions, whereas the very low values of N_− for the other solvents indicate a very low aggregation of ions, as expected due to their high dielectric constants.
Since the aggregation numbers are less than one (entries 2-6, Table 2), we calculated the dissociation degree (α), Equation (3), to better quantify the amount of ion pairs. For p-cymene, we can confidently assume that the α value is zero, as the aggregation numbers are greater than 1 and no free ions arise from the ion-pair/free-ion equilibrium.
The α values span from 0 for p-cymene to 0.90 for propylene carbonate, with similarly high values of 0.70, 0.74, and 0.80 obtained for methanol, acetone, and γ-valerolactone, respectively.
In order to be able to establish in greater detail the role of solvent during catalysis, the trend of α against TOF [54,55] has been analyzed, which is shown in Figure 4 (data are also present in Table 2).
From the analysis of Figure 4, it can be seen that the p-cymene, in which 1(TME)OTf has the highest TOF value of 500 h −1 , is also the one that has the lowest α of 0, i.e., the highest amount of ion pairs and higher aggregates.The templating and activating effects of OTf − during the nucleophilic attack through the formation of a hydrogen bond thus appear to be increased when aggregates greater than the ion pair (quadruple ions) are present in the catalytic solution.A similar positive effect in the catalysis of aggregations above the ion pair has also been recently observed in metallocenium catalysts for polyolefin synthesis [76].
Also, in Figure 4, it can be seen that chloroform, methanol, acetone, and γ-valerolactone have comparable TOFs (values between 280 h⁻¹ and 340 h⁻¹) [54,55] despite the significant differences in α values (from 0.24 for chloroform to 0.84 for γ-valerolactone). As noted earlier, the high TOF value in chloroform is related to the low α value, while the high TOF combined with the high α values for methanol, acetone, and γ-valerolactone can be explained through a direct role of the solvent during the nucleophilic attack of methanol.
The higher TOF value calculated for the reaction run in methanol, with respect to that obtained for the reaction run in chloroform, is a direct result of the specific templating and activating roles of the surrounding MeOH molecules in the reaction mechanism, as predicted by the DFT calculations [57]. Furthermore, the coordination ability of MeOH towards the gold center should be lower than that shown by 3-hexyne [48].
The higher TOF values calculated for the reactions run in acetone and γ-valerolactone, with respect to that obtained for the reaction run in chloroform, can be attributed to a specific role of the carbonyl functionality in the reaction mechanism, because the O-H bond may be polarized via specific intramolecular interactions, thus suppressing the anion effect. Both solvents possess functionalities that resemble those of the DMF and DMPU utilized as suitable neutral additives to increase the rate of the catalytic hydration and alkoxylation of alkynes [77,78].
These solvents play a fundamental role during the reaction. They do not interact with the NHC-Au⁺ fragment, allowing the coordination of 3-hexyne, and they can interact with the MeOH molecule during the nucleophilic attack. These experimental results confirm our DFT calculations. For the more polar γ-valerolactone, the degree of ion-pair separation is higher, and thermodynamically more stable species, both as reactant complex and as transition state, are formed. The transition-state stability is additionally enhanced in the more polar solvent by the reduced anion affinity for the cationic fragment (and its increased affinity for the solvent), which in turn increases the electrophilic character of the substrate. Conversely, when free ions are taken into account, cationic transition states are stabilized by the more polar γ-valerolactone, which also induces more polarization of the hydroxyl group of methanol [54,55].
Synthesis and Intramolecular Characterization
TME, silver triflate (AgOTf), silver p-toluenesulfonate (AgOTs), silver tetrafluoroborate (AgBF4), and AgBARF were purchased from Sigma Aldrich. All the solvents were used as received without any further purification unless otherwise stated. 1Cl, 1OTf, and 1OTs were synthesized according to the literature [49]. 1(TME)BF4 and 1(TME)BARF were generated in situ by adding the appropriate silver salt to a solution of 1Cl and TME. All compounds were characterized in solution by 1H, 13C, and 19F NMR spectroscopies. The NMR spectra were recorded on an Avance 400 III HD spectrometer. Chemical shifts (ppm) are relative to TMS for both 1H and 13C nuclei, whereas the 31P, 19F, and 15N chemical shifts were referenced to 85% H3PO4, CCl3F, and CH3NO2, respectively. The complexes were fully characterized with mono-dimensional (1H and 13C) and bi-dimensional (1H-1H COSY, 1H-1H NOESY, 1H-13C HSQC, 1H-13C HMBC) NMR experiments. All the experimental details and NMR data are reported in the Supporting Information.
DOSY Measurements
The DOSY NMR spectra were acquired using a Bruker Avance III HD 400 MHz spectrometer equipped with a broadband 5 mm probe (1H/BBF iProbe) with a z-axis gradient (50 G/cm) at 298 K without sample spinning. The DOSY experiments were carried out using the double-stimulated echo version with a longitudinal eddy current delay (dstegp3s sequence). The gradient pulse (P30, δ) was set to 1750 µs, the diffusion time (D20, Δ) was set to 0.1 s, and the eddy current delay (D21) was set to 5 ms. The experiments were acquired using the "dosy" AU program, collecting a total of 32 points (TD1 entry) following a linear ramp with a gradient intensity (g) ranging from 95% to 5% (47.187 dB to 0.963 dB). The number of scans (ns) was set to 64.
The dependence of the resonance intensity (I) on a constant waiting time and on a varied gradient strength G is described by Equation (4), I = I₀ exp[−γ²G²δ²(Δ − δ/3)D_t], where I is the intensity of the observed spin echo, I₀ is the intensity of the spin echo in the absence of a gradient, D_t is the self-diffusion coefficient, Δ is the delay between the midpoints of the gradients, δ is the length of the gradient pulse, and γ is the magnetogyric ratio. The semilogarithmic plots of ln(I/I₀) versus G² were fitted using a standard linear regression algorithm, and a correlation factor better than 0.99 was always obtained (Figure 5). The self-diffusion coefficient D_t, which is directly proportional to the slope m of the regression line obtained by plotting ln(I/I₀) versus G², was estimated by evaluating the proportionality constant for a sample of HDO (5%) in D₂O (known diffusion coefficients in the range 274-318 K) [38] under the exact same conditions as the sample of interest. The solvent (or 2,3-dimethyl-2-butene, TME) was taken as an internal standard. The D_t data were treated as described in the literature to derive the hydrodynamic dimensions.
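The slope-calibration procedure described above can be summarized in a short script. The gradient list, intensities, and the reference diffusion coefficient below are illustrative placeholders rather than the values actually used; the fitting simply mirrors the linear regression of ln(I/I₀) versus G² described in the text.

```python
# Minimal sketch (assumed workflow, not the authors' processing script): fit
# ln(I/I0) vs G^2 for the sample and for the HDO/D2O reference measured under
# identical conditions, then scale the reference diffusion coefficient by the
# ratio of slopes.
import numpy as np

def stejskal_tanner_slope(G, I, I0):
    """Linear fit of ln(I/I0) against G^2; returns (slope, correlation coefficient)."""
    x = np.asarray(G, dtype=float) ** 2
    y = np.log(np.asarray(I, dtype=float) / I0)
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, r

# Hypothetical attenuation data (gradient strengths in G/cm, intensities in a.u.)
G = np.linspace(2.5, 47.5, 10)
I_sample = 1.0e6 * np.exp(-4.0e-4 * G ** 2)   # synthetic decay for illustration
I_ref = 1.0e6 * np.exp(-9.0e-4 * G ** 2)

m_sample, r1 = stejskal_tanner_slope(G, I_sample, 1.0e6)
m_ref, r2 = stejskal_tanner_slope(G, I_ref, 1.0e6)

D_ref = 19.0e-10  # m^2/s, assumed literature value for HDO in D2O near 298 K
D_sample = D_ref * m_sample / m_ref
print(f"D_t(sample) ~ {D_sample:.2e} m^2/s (fit r = {r1:.4f})")
```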
Conclusions
Our research demonstrates that the combination of DOSY NMR diffusion experiments and kinetic measurements of gold catalyst activity is a reliable methodology for understanding the behavior of gold catalysts for the alkoxylation of unsaturated hydrocarbons and for disclosing the role of ion pairs (the amount of free ions, ion pairs, and higher aggregates).
Figure 1. The generally accepted reaction mechanism for gold(I)-complex-catalyzed alkoxylation of alkynes.
Figure 2. Gold complexes used in this work.
The Role of Slr0151, a Tetratricopeptide Repeat Protein from Synechocystis sp. PCC 6803, during Photosystem II Assembly and Repair
The assembly and repair of photosystem II (PSII) is facilitated by a variety of assembly factors. Among those, the tetratricopeptide repeat (TPR) protein Slr0151 from Synechocystis sp. PCC 6803 (hereafter Synechocystis) has previously been assigned a repair function under high light conditions (Yang et al., 2014). Here, we show that inactivation of slr0151 affects thylakoid membrane ultrastructure even under normal light conditions. Moreover, the level and localization of Slr0151 are affected in a variety of PSII-related mutants. In particular, the data suggest a close functional relationship between Slr0151 and Sll0933, which interacts with Ycf48 during PSII assembly and is homologous to PAM68 in Arabidopsis thaliana. Immunofluorescence analysis revealed a punctate distribution of Slr0151 within several different membrane types in Synechocystis cells.
INTRODUCTION
The ability of plastid-bearing organisms to perform oxygenic photosynthesis was inherited from an ancient cyanobacterium about 2.4 billion years ago. In present-day cyanobacteria, the photosynthetic electron transport chain (PET) is embedded in an internal membrane system made up of thylakoids (Hohmann-Marriott and Blankenship, 2011). The PET is fueled by electrons originating from the water-splitting complex within PSII, which is therefore considered to be the heart of photosynthesis. Recently, the structural analysis of PSII has revealed detailed insights into the architecture and working mode of its Mn 4 CaO 5 cluster, where water is oxidized and molecular oxygen is released (Umena et al., 2011;Kupitz et al., 2014;Suga et al., 2015).
Overall, PSII comprises at least 20 protein subunits as well as numerous organic and inorganic co-factors. All these components have to be assembled in a strictly coordinated manner in both time and space. The emerging picture indicates that the assembly process is initiated at specialized, biogenic thylakoid membrane (TM) regions and proceeds step-wise until the active PSII supercomplex is formed as part of photosynthetically active thylakoids Nickelsen and Rengstl, 2013;Nickelsen and Zerges, 2013;Rast et al., 2015).
In the cyanobacterium Synechocystis sp. PCC 6803 (hereafter Synechocystis), the initial steps in de novo PSII assembly have been proposed to take place at biogenesis centers (BC) where the thylakoids converge on the plasma membrane (PM; van de Meene et al., 2006; Schottkowski et al., 2009a; Stengel et al., 2012; Nickelsen and Rengstl, 2013). The precise architecture of BCs is not yet fully understood; however, they are characterized by the accumulation of the PSII assembly factor PratA, which delivers Mn to the precursor of the D1 reaction-center protein (pD1; Stengel et al., 2012). The C-terminal extension of pD1 is then processed by the protease CtpA (Anbudurai et al., 1994; Zak et al., 2001; Komenda et al., 2007). Concomitantly, the first detectable PSII assembly intermediate, i.e., the reaction-center complex (RC), is formed by the attachment of the D2-Cyt b 559 module, which is aided by the assembly factor Ycf48, a homolog of Hcf136 from Arabidopsis thaliana (Komenda et al., 2004, 2008). Via the interaction of Ycf48 with the PAM68-homolog Sll0933, the inner core antenna proteins CP47 and CP43 bind successively to the RC complex, forming a PSII monomer that still lacks the lumenal subunits of the oxygen-evolving complex (OEC; Komenda et al., 2004; Rengstl et al., 2013). Finally, the OEC is built with the help of the assembly factors CyanoP and Psb27, yielding a fully functional PSII monomer (Nowaczyk et al., 2006; Becker et al., 2011; Komenda et al., 2012; Cormann et al., 2014).
As outlined above, recent years have seen the discovery of many accessory factors that are involved in catalyzing distinct PSII assembly/repair steps. Many of these have been found to belong to the so-called family of TPR (tetratricopeptide repeat) proteins (Heinz et al., 2016;Rast et al., 2015). TPR proteins represent solenoid-like, "scaffold" proteins which are distributed throughout all kingdoms of life (for a recent review see Bohne et al., 2016). Typically, a TPR domain consists of multiple copies (3-16) of a degenerate motif which comprises 34 amino acids forming two amphipathic α-helices. The crystal structure of TPR domains revealed that these form right-handed superhelices that serve as a platform for protein-protein interactions (Blatch and Lässle, 1999;D'Andrea and Regan, 2003). TPR proteins have been implicated in a variety of functions during the biogenesis of TMs, including chloroplast protein import, gene expression and chlorophyll (Chl) synthesis, as well as PSII and PSI assembly (Bohne et al., 2016). In total, the Synechocystis genome encodes 29 TPR proteins (Bohne et al., 2016). These include Ycf3 and Ycf37, which have been shown to facilitate PSI assembly. The TPR protein Pitt (light-dependent protochlorophyllide oxidoreductase interacting TPR protein) interacts with POR (light-dependent protochlorophyllide oxidoreductase) and regulates Chl synthesis (Schottkowski et al., 2009b;Rengstl et al., 2011). For cyanobacterial PSII assembly, the above-mentioned TPR protein PratA plays an important role and recently the protein Slr0151 has been shown to be involved in the PSII repair cycle (Yang et al., 2014).
The slr0151 gene is part of the slr0144-slr0151 operon, which codes for eight proteins (Kopf et al., 2014;Yang et al., 2014). This operon was first discovered in the course of a microarray analysis in which expression of the cluster was down-regulated under iron-depleted conditions and during oxidative stress (Singh et al., 2004). The authors hypothesized that the gene cluster is involved in PSI assembly (Singh et al., 2004), and indeed a second study found Slr0151 to be associated with PSI complexes (Kubota et al., 2010). Others, however, have pointed to connections between Slr0151 and PSII (Wegener et al., 2008;Yang et al., 2014). Thus, Wegener et al. (2008) referred to the proteins encoded by the slr0144-slr0151 operon as PSII assembly proteins (Pap) because they stabilize PSII intermediates. Furthermore, these authors showed that the entire pap operon is up-regulated upon loss of any of the lumenal proteins CyanoP, PsbV, and CyanoQ (Wegener et al., 2008). In a slr0151 − mutant, however, the expression of the other pap genes was not affected (Yang et al., 2014). In addition, experimental evidence has been provided that links Slr0151 to the PSII repair cycle, and yeast two-hybrid and pulldown analyses have revealed that Slr0151 interacts directly with both CP43 and D1 (Yang et al., 2014). In this study, we further characterize the function and subcellular localization of Slr0151.
Construction and Growth of Strains
Synechocystis wild-type and mutant strains were grown on solid or in liquid BG-11 medium at 30 °C at a continuous photon irradiance of 30 µmol photons m⁻² s⁻¹. The insertion mutant slr0151⁻ was generated by PCR amplification of the wild-type slr0151 gene with the oligonucleotides 0151/5 ATGATGGAAAATCAAGTTAATGA and 0151/3 TTAACCAAATAGGTTAGCTGC as primers, and subsequent cloning of the resulting fragment into the pDrive vector (Qiagen). The fragment was cut from pDrive and inserted into the Bluescript pKS vector via the restriction enzymes SalI and PstI of both multiple cloning sites. A kanamycin-resistance cassette was then inserted into its unique HindIII restriction site, and wild-type cells were transformed with the construct as described. For complementation of the slr0151⁻ mutant, the slr0151 gene (including its own promoter) was PCR-amplified with oligonucleotides 0151/5b CTCGAGTGATGAGTTTTTTTAGCTCTA and 0151/3b CTCGAGAACTGGAGTTTTAACCAAA, and cloned into the single XhoI site in the vector pVZ321, which replicates autonomously in Synechocystis 6803 (Zinchenko et al., 1999). Transfer of this construct into slr0151⁻ via conjugation was performed as described (Zinchenko et al., 1999). Construction of the mutant lines psbA⁻ (TD41; Nixon et al., 1992), ctpA⁻ (Rengstl et al., 2011), pratA⁻ (Klinkert et al., 2004), psbB⁻ (Eaton-Rye and Vermaas, 1991), ycf48⁻ (Komenda et al., 2008), sll0933⁻ (Armbruster et al., 2010), psb27⁻, and pitt⁻ (Schottkowski et al., 2009b) was described previously.
Antibody Production and Western Analysis
For production of the αSlr0151 antibody, the slr0151 reading frame without the N-terminal transmembrane region (amino acid positions 62 to 320) was PCR-amplified using oligonucleotides TH0151a GGATCCGAATTCCA TTTGTTTAACCGTAAGCAGTT and TH0151b GTCGACTT AACCAAATAGGTTAGCTGCGGT. The resulting DNA fragment was inserted into the pDrive vector (Qiagen), sequenced and further subcloned into the BamHI and SalI restriction sites of the expression vector pGex-4T-1. Expression of the GST fusion protein in Escherichia coli BL21 and its affinity purification on Glutathione-Sepharose 4B (GE Healthcare) were performed according to the manufacturers' instructions. Polyclonal antiserum was raised in rabbits (Biogenes). Protein preparation from Synechocystis 6803 and western analyses were carried out as previously reported (Wilde et al., 2001).
Transmission Electron Microscopy
Synechocystis cells (slr0151⁻ mutant and wild-type) were harvested in mid-log phase by centrifugation at 5000 g and adjusted to an OD 750 nm = 3 in BG-11. Aliquots (2 µl) of the cell suspension were high-pressure frozen at 2100 bar (Leica HPM 100) in HPF gold platelets (Leica Microsystems, Vienna, Austria) and stored in liquid nitrogen (Rachel et al., 2010; Klingl et al., 2011). The cryofixed cells were then freeze-substituted (Leica EM AFS2) at −90 °C with 2% osmium tetroxide and 0.2% uranyl acetate in pure acetone. Freeze substitution was carried out at −90 °C for 20 h, −60 °C for 8 h, and −30 °C for 8 h, with a heating time of 1 h between each step, and then held at 0 °C for 3 h. Samples were washed three times with pure, ice-cold acetone followed by infiltration with Epon resin (Fluka, Buchs, Switzerland). After polymerization for 72 h at 63 °C, ultrathin sections were cut and post-stained with lead citrate (Reynolds, 1963). Transmission electron microscopy was carried out at 80 kV either on a Zeiss EM 912 or on an FEI Morgagni 268 electron microscope (FEI). Data analysis was carried out with the Fiji ImageJ software.
FIGURE 1 | Steady-state levels of the indicated PSII-related proteins in the slr0151⁻ mutant (A) and of Slr0151 in the indicated PSII mutants (B). Each value is expressed relative to that in the wild type. Total cell proteins were isolated, fractionated by SDS-PAGE, blotted onto nitrocellulose membranes and detected immunologically. Signals were quantified with AIDA software (version 3.52.046) after densitometrical scanning. Values plotted are means ± SD of at least three independent experiments. Significance according to Student's t-test with an error probability of 5 and 1% is indicated by one and two asterisks, respectively.
Immunofluorescence and Fluorescence Microscopy
Synechocystis cells were harvested in mid-log phase by centrifugation at 5000 g and adjusted to an OD 750 nm = 3 in PBS (140 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 1.8 mM KH2PO4; pH 7.4). The cells were fixed with 2% formaldehyde (35%, for Histology, Roth) in PBS for 20 min at 30 °C on a shaker, then washed twice with PBS-T (PBS supplemented with 0.05% Tween-20). For permeabilization, the cells were incubated with PBS-T for 3 × 2 min on an overhead rotor. All subsequent steps were performed in the dark. The cells were applied to poly-L-lysine-coated glass slides (Sigma) and incubated for 30 min to allow them to settle, then incubated with blocking buffer (5% non-fat milk powder in PBS-T) for 20 min. The slides were incubated for 3 h with the first antibody (αSlr0151 and αRbcL, diluted 1:500 in blocking buffer), then washed for 3 × 3 min by incubating them with PBS-T and rinsing the solution off the slide. The secondary HRP-conjugated goat anti-rabbit antibody (Sigma) was labeled with Alexa 488 (Alexa Fluor Dyes, Life Technologies, ThermoFisher Scientific) according to the manufacturer's instructions. The slides were incubated with the labeled secondary antibody (diluted 1:2000 in blocking buffer) for 1 h. The slides were washed twice with PBS-T and twice with PBS-G (PBS supplemented with 10 mM glycine) to minimize background fluorescence due to non-labeled fluorophores. The slides were then dried and each was covered with a drop of FluorSave™ Reagent (Calbiochem, Merck Millipore) and a coverslip. The next day, the coverslip was sealed with nail polish. Fluorescence was imaged using a DeltaVision Elite (GE Healthcare, Applied Precision) equipped with InsightSSI™ illumination and a CoolSNAP_HQ2 CCD camera. Cells were imaged with a 100× oil PSF U-Plan S-Apo 1.4 objective. The four-color standard set InsightSSI module (code number: 52-852113-003, GE Healthcare, Applied Precision) was used for imaging. Alexa488 was excited with the FITC/GFP excitation filter (461-489 nm) and emission was detected with the FITC/GFP emission filter (501-549 nm). Chlorophyll autofluorescence was excited with the TRITC/Cy3 filter (528-555 nm) and emission was detected with the TRITC/Cy3 filter (574-619 nm). Images were analyzed using the Fiji ImageJ software.
Molecular slr0151 − Phenotype
The slr0151 open reading frame in Synechocystis encodes a protein of 320 amino acids, which contains two consecutive TPR domains comprising positions 185-218 and 219-252 (analyzed with TPRpred; Yang et al., 2014;Bohne et al., 2016). It has previously been shown that Slr0151 is an intrinsic membrane protein which forms part of a high-molecular-weight complex (Yang et al., 2014).
To analyze the function of Slr0151, we disrupted its cloned reading frame by inserting a kanamycin-resistance cassette into the unique HindIII site 425 bp downstream of the start codon (Supplementary Figure S1A). After transformation of wild-type (WT) cells with this construct, the transformants were tested for complete segregation by PCR analysis (Supplementary Figure S1B). The complete absence of the Slr0151 protein in the slr0151⁻ mutant was verified by western analysis using an αSlr0151 antibody (Supplementary Figure S1C). Like the previously described slr0151⁻ mutant, the mutant strain described in this study exhibited a high-light (800 µmol photons m⁻² s⁻¹) sensitive phenotype (Yang et al., 2014). Moreover, and also in agreement with the previous report, less pronounced effects were observed under normal lighting conditions (30 µmol photons m⁻² s⁻¹). These effects included moderately reduced photoautotrophic growth and oxygen production rates (Yang et al., 2014). Together, these findings suggest that, like other PSII assembly factors such as Ycf48 or Psb27, Slr0151 might be involved in both PSII assembly and repair (Rengstl et al., 2013; Mabbitt et al., 2014; Jackson and Eaton-Rye, 2015).
FIGURE 3 | Membrane localization of Slr0151. Sucrose step-gradient centrifugation was used to separate membrane types from wild-type Synechocystis cells. Fraction II corresponds to the plasma membrane (PM) and fraction V contains pratA-defined biogenic membranes (PDMs) + thylakoid membranes (TMs). For fractions I-IV, 10% of the total volume of each fraction was loaded, while only 0.2% of fraction V was analyzed (Schottkowski et al., 2009a).
To explore this possibility further, the levels of various photosynthetic subunits and PSII biogenesis factors that accumulated in the slr0151⁻ strain under normal growth conditions were analyzed. Almost all analyzed proteins showed no significant differences relative to WT, except that CP43 and Pitt showed increased levels in the mutant (Figure 1A; Supplementary Figure S2A). Conversely, however, levels of Slr0151 were significantly altered in different PSII assembly mutants (Figure 1B; Supplementary Figure S2B). In the D1 mutant, in which all three copies of the psbA gene are inactivated, Slr0151 was reduced to only 20% of its wild-type level (Nixon et al., 1992). The ctpA⁻ mutant lacking the C-terminal processing protease for pD1 accumulated only 50% as much Slr0151 as did wild-type cells (Anbudurai et al., 1994). In the PSII assembly factor mutants pratA⁻, ycf48⁻, and pitt⁻ (Klinkert et al., 2004; Komenda et al., 2008; Schottkowski et al., 2009b), amounts of Slr0151 reached 52, 53, and 62% of the wild-type level, respectively. In sharp contrast, however, more than twice the WT level of Slr0151 was detected in sll0933⁻, which lacks the cyanobacterial homolog of the Arabidopsis PSII assembly factor PAM68. This suggests a functional relationship between Slr0151 and Sll0933, despite the fact that levels of the latter were unchanged in the slr0151⁻ mutant (Figure 1; Supplementary Figure S2). To test if this relationship relies on a physical interaction of the two factors, we performed co-immunoprecipitation experiments. However, no interaction between Slr0151 and Sll0933 was detected under the applied conditions, suggesting that they do not form parts of a stable complex in vivo.
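For illustration only, the type of quantification described in the legend of Figure 1 (densitometric signals normalized to an internal standard, expressed relative to the wild type, and compared by Student's t-test) can be sketched as follows; the numerical values are hypothetical, and the actual analysis was performed with the AIDA software.

```python
# Hypothetical densitometric values (arbitrary units) from three replicates; each
# mutant sample is normalized to its loading control, expressed as percent of the
# wild-type level, and tested against 100% with a one-sample t-test. This only
# illustrates the type of calculation described in the Figure 1 legend.
from scipy import stats

mut_signal = [0.52, 0.48, 0.55]                     # e.g. Slr0151 in a PSII mutant, control-normalized
relative = [100.0 * m for m in mut_signal]          # percent of wild-type level (WT defined as 100%)

t_stat, p_value = stats.ttest_1samp(relative, popmean=100.0)
mean = sum(relative) / len(relative)
flag = " **" if p_value < 0.01 else (" *" if p_value < 0.05 else "")
print(f"mean = {mean:.1f}% of WT, p = {p_value:.3f}{flag}")
```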
Ultrastructure of slr0151 − Cells
In order to gain more insight into the subcellular consequences of slr0151 inactivation, the ultrastructure of the mutant was visualized by transmission electron microscopy (Figure 2). In slr0151⁻ cells grown at normal light intensities, the thylakoids appeared to be less densely packed and thylakoid lumina were swollen when compared to the wild-type (Figure 2). Lumen diameters ranged from 4 to 91 nm in the mutant, whereas the values for wild-type thylakoids fell within the 5- to 9-nm range, as previously observed (Figure 2; van de Meene et al., 2006). Thus, in addition to PSII assembly/repair, Slr0151 deficiency appears to affect TM organization.
FIGURE 4 | Distribution of Slr0151 between PDMs and TMs. (A) Wild-type fraction V (see Figure 3) was centrifuged on linear sucrose gradients in order to separate wild-type PDMs from TMs. (B) Distribution of Slr0151 between PDMs and TMs in the indicated PSII mutants. (C) Distribution of various PSII-related proteins in slr0151⁻ cells. The part of the gradient from 20 to 60% sucrose was apportioned into 14 fractions, which were analyzed by immunoblotting using the antibodies indicated on the right. Fractions 1-6 represent PDMs, and fractions 7-14 represent TMs. To facilitate comparison between gradients, sample volumes were normalized to the volume of fraction 7, which contained 40 µg of protein.
Localization of Slr0151 and PSII-Related Factors in Membrane Subfractions
Slr0151 has previously been reported to localize to both the PM and TMs, based on a combined sucrose density/twophase partitioning approach (Yang et al., 2014). Alternatively, cyanobacterial membranes can be fractionated into PM and TMs via a sucrose step gradient, and the latter can be further fractionated into PratA-defined biogenic membranes (PDMs) and photosynthetically active thylakoids on a second, linear sucrose gradient (Schottkowski et al., 2009a;Heinz et al., 2016). When the distribution of Slr0151 in membrane subfractions was followed by applying the latter technique, the protein was accordingly detected in both PM and TM fractions (Figure 3). Further fractionation of thylakoids then revealed that Slr0151 is found in both PDMs and TMs, indicating that the protein is broadly distributed throughout the cell (Figure 4A). Since Slr0151 accumulation was affected in several PSII-related mutants, we next analyzed its TM distribution in the various mutant backgrounds (Figures 2B and 4B). In most cases, Slr0151 distribution followed the wild-type pattern. The only exceptions were the pratA − and sll0933 − mutants (Figures 4A,B). In these strains, a shift of Slr0151-containing material toward the less dense PDM fractions was observed (Figures 4A,B). This again suggested a functional relationship between Slr0151 and the PAM68 homolog Sll0933 and, furthermore, a connection to the biogenic PratA-defined region at the periphery of the cell.
When the distribution of several PSII-related proteins was monitored in a slr0151⁻ background, no significant effects were seen (Figure 4C; Rengstl et al., 2011). The only alteration in membrane distribution concerned CP47. This is usually seen exclusively in TMs, but it accumulates to some extent in PDM fractions in the absence of Slr0151 (Figures 4A,C). However, Slr0151 localization was not affected in a psbB⁻ mutant (Figure 4B). Taken together, these data revealed a broad membrane distribution of Slr0151 and further confirmed its relationship to PSII assembly/repair. The distribution of Slr0151 in the WT is almost identical to that of Ycf48, which has been shown to be involved in assembly and repair of PSII.
Localization of Slr0151 via Immunofluorescence
To obtain a more comprehensive view of the subcellular localization of Slr0151, we next performed immunofluorescence (IF) analyses with affinity-purified αSlr0151, in combination with an Alexa488-labeled secondary antibody. Fluorescence was recorded by wide-field microscopy followed by deconvolution.
FIGURE 5 | Subcellular localization of Slr0151 by immunofluorescence analysis of wild-type Synechocystis cells. Cells grown under normal light conditions (A) and cells that had been exposed to high light for 1 h (B) were treated with αSlr0151 antibody and detected with an Alexa-488-coupled secondary antibody. The constituent subpanels show Z-montages (200 nm spacing) of each separate channel as well as the merged channel. The line plots of relative fluorescence signals are derived from scans of the area defined by the white circle around the indicated cells, and correspond to the Z-slice number of the analyzed slice (given at the top). (C) For controls, cells were grown under normal conditions. In the upper row, Synechocystis wild-type cells were immunostained with αRbcL and visualized with Alexa-488 coupled secondary antibody. In the middle row, slr0151 − mutant cells have been treated with the αSlr0151 antibody followed by Alexa-488-coupled secondary antibody. In the bottom row, wild-type cells were probed with Alexa-488-coupled secondary antibody alone. AF, autofluorescence of chlorophyll; scale bar 1 µm.
In Figure 5, Z-montages from different cells are shown that display sequential slices from the Z-stack to provide a better 3D representation of whole cell volumes. Overall, Slr0151 IF signals in wild-type cells grown under normal light conditions were unevenly distributed, with frequent spot-like concentrations. These partly coincided with the Chl autofluorescence of the thylakoids, but were also detected in regions with low Chl fluorescence, i.e., in the PM at the cell periphery and in thylakoid convergence zones close to the PM. Interestingly, fluorescence signals were also visible in Chl-less central regions of the cells, where fewer thylakoid lamellae tend to traverse the cytoplasm ( Figure 5A). Thus, the fluorescence signal is in accordance with the observations from membrane fraction analysis, i.e., that Slr0151 is located in the PM, and in PDMs and TMs. In addition, these data provide evidence that Slr0151 is found in punctate concentrations within the membrane, reminiscent of the previously described distribution of FtsH2-GFP signals, which are thought to label PSII repair zones (Sacharz et al., 2015). Similar to FtsH-GFP signals, Slr0151 IF patterns were unchanged after a 1-h exposure to high light, suggesting that an enhanced requirement for PSII repair does not provoke any substantial reorganization of Slr0151 localization (Figure 5B; Sacharz et al., 2015).
Control experiments included the omission of the specific αSlr0151 antibody, and analysis of the slr0151⁻ mutant, which displayed at most diffuse background signals (Figure 5C). Moreover, we used an αRbcL antibody as a control for a non-membrane protein that gives rise to fluorescence labeling of carboxysomes in Synechocystis (Cameron et al., 2013).
DISCUSSION
Yang et al. (2014) demonstrated that the TPR protein Slr0151 is involved in the repair of PSII in Synechocystis cultures grown at high light. Prompted by the finding that a milder phenotype can be observed under normal lighting conditions, we have carried out a further investigation of the effects of loss of Slr0151 in that context. In particular, our data suggest a functional relationship between Slr0151 and the PSII assembly factor Sll0933, a homolog of the PAM68 protein from A. thaliana, which has been shown to play a role in the conversion of RC complexes into larger PSII precomplexes by facilitating attachment of the inner antenna proteins (Figure 1B; Armbruster et al., 2010; Rengstl et al., 2013).
In agreement with the idea that Slr0151 is involved in the transition to larger PSII pre-complexes, reduced amounts of the RC47 complex have previously been detected in slr0151 − cells grown under high light (Yang et al., 2014). Moreover, the direct interaction of Slr0151 with CP43 and D1, as well as the unusual membrane distribution of CP47 in PDMs in the slr0151 − background, suggest a role for Slr0151 in the transition from RC complexes to PSII monomers ( Figure 4C; Yang et al., 2014). Interestingly, PDM-localized CP47 fractions have also been observed in a ctpA − mutant, which further supports the idea that Slr0151 acts during the transition from the RC47 complex to the PSII monomer lacking the OEC (Rengstl et al., 2011). Ycf48 is involved in the formation of the RC complex during assembly and repair of PSII (Komenda et al., 2008;Rengstl et al., 2011). Interestingly, it is distributed like Slr0151 in membrane fractionation experiments (Figure 4). This might suggest that a broad membrane distribution of PSII related proteins is characteristic for factors being involved in both PSII assembly and repair. Taken together, these findings confirm a PSII-related function for Slr0151, and demonstrate that, even under normal lighting conditions, distinct molecular phenotypes are detectable upon inactivation of Slr0151. This is further underlined by the altered ultrastructure of thylakoids, i.e., looser membrane packing and increased lumen volume, seen in the slr0151 − mutant grown in normal light. Swollen lumina of thylakoids have been observed before in WT cells grown at 0.5 µmol photons m −2 s −1 and in WT and mutants with different depletions of carotenoids grown in the dark with 10 min of light per day ( Van de Meene et al., 2012;Toth et al., 2015). Recently, a study using inelastic neutron scattering on living Synechocystis WT cells investigated the membrane dynamics of thylakoids during light and dark periods (Stingaciu et al., 2016). The authors showed that the TM in Synechocystis is less flexible in the light as compared to dark conditions due to formation of the photosynthetic proton gradient across the TMs. Therefore, it appears possible that distorted photosynthesis or an absence of the structural function of Slr0151 itself causes swollen thylakoids in the slr0151 − mutant. Such a structural role would also be in line with the observed broader membrane distribution of Slr0151. Taken together, we propose that Slr0151 -like other PSII assembly factors such as CtpA, Ycf48 and Psb27 -is involved in both PSII assembly and repair (Nowaczyk et al., 2006;Komenda et al., 2007;Nickelsen and Rengstl, 2013;Jackson et al., 2014;Mabbitt et al., 2014). The suggested involvement of Slr0151 in both processes is also consistent with the fact that the RC47 complex represents the point of convergence between them.
Slr0151 is an intrinsic membrane protein that does not accumulate in the cytoplasm (Yang et al., 2014;data not shown). Indeed, it can be found in a variety of specialized membrane domains. Previously, Slr0151 was detected in the PM as well as in the thylakoids of Synechocystis (Huang et al., 2002;Yang et al., 2014). This distribution was confirmed by our membrane fraction experiments (Figures 3 and 4). In addition, substantial amounts of Slr0151 were observed in PDMs, which are localized at sites where thylakoids converge upon the PM. According to rough estimates based on densitometrical signal analysis, approximately 2% of total cellular Slr0151 is found in PMs and 25 and 70% in PDMs and TMs, respectively. IF analyses confirmed this overall distribution and revealed frequent punctate concentrations of Slr0151 in all membrane types ( Figure 5). Intriguingly, a similar localization pattern has been observed for GFP-tagged FtsH2, the protease which degrades damaged D1 protein during PSII repair. The GFP signal co-localized with the Chl autofluorescence and showed patches of increased intensity within as well as between thylakoids at their peripheral convergence sites (Sacharz et al., 2015). The same patterns were maintained under high light conditions for both Slr0151 and FtsH2 (Sacharz et al., 2015). Thus, these data, together with the observation that the synthesis of D1 following photodamage is affected by Slr0151 inactivation (Yang et al., 2014), are consistent with a role of Slr0151 during repair. Furthermore, IF analysis has shown that some Slr0151 is concentrated in thylakoids that traverse the cell center. Whether this reflects any distinct functional role of these regions remains to be discovered.
This work reveals new aspects of the function of the TPR protein Slr0151, i.e., its involvement in PSII assembly in addition to its previously described role in PSII repair. The fact that PSII assembly takes place in BCs does not exclude the possibility that repair and assembly of PSII are co-localized in those regions, since several steps of assembly and repair involve the same assembly/repair factors as well as shared assembly and repair intermediates (Schottkowski et al., 2009a; Rengstl et al., 2011; Stengel et al., 2012). Therefore, the current findings suggest a close relationship between PSII assembly and repair, with regard to the factors involved and their subcellular distribution (Mabbitt et al., 2014).
AUTHOR CONTRIBUTIONS
AR, BR, SH, AK, and JN designed the research. AR, BR, and SH performed the research. AR and JN prepared the article.
FUNDING
This work was supported by funding from the Deutsche Forschungsgemeinschaft for Research Unit FOR2092 (Ni390/9-1).
ACKNOWLEDGMENTS
We thank Jürgen Soll for providing αYidC antibody, Marc Bramkamp for help with the Delta Vision and Silvia Dobler for technical assistance. Furthermore, we thank Paul Hardy for critical reading of the manuscript.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fpls.2016.00605
FIGURE S2 | Protein levels in the slr0151⁻ mutant and of Slr0151 in various PSII mutants. Representative western blots from the protein level analysis shown in Figure 1. Total proteins were isolated from the respective line and analyzed via SDS-PAGE and western blot. 30 µg were loaded for 100% wild type and each mutant. The quantification of at least three independent experiments is summarized in Figure 1. The RbcL signal served as internal standard for relative quantification. (A) Protein levels of the indicated PSII subunits and PSII-related proteins in the slr0151⁻ mutant. Wild-type and mutant samples were analyzed on the same gel; however, signals from unrelated samples, which were loaded in between, were excised. (B) Representative western analysis of Slr0151 in various PSII mutants.
Dependence of pH Variation on the Structural, Morphological, and Magnetic Properties of Sol-Gel Synthesized Strontium Ferrite Nanoparticles
In this research work, an attempt at regulating the pH as a sol-gel modification parameter during the preparation of SrFe12O19 nanoparticles sintered at a low sintering temperature of 900 °C is presented. The influence of varying pH (pH 1-14) on the structure, microstructure, and magnetic behavior of SrFe12O19 nanoparticles was characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), thermogravimetric analysis (TGA), Fourier-transform infrared spectroscopy (FTIR), and vibrating-sample magnetometry (VSM). Single-phase SrFe12O19 with optimum magnetic properties can be obtained at pH 1 with a sintering temperature of 900 °C. As the pH value increases, the impurity phase Fe2O3 appears. The TGA data for the varying pH show that the total weight loss of most samples was about 30.44%, which corresponds to the decomposition process. The IR spectra showed three main absorption bands in the range of 400-600 cm⁻¹ corresponding to strontium hexaferrite. SEM micrographs exhibit circular strontium ferrite crystals with an average crystal size in the range of 53-133 nm. The highest saturation magnetization Ms, remanent magnetization Mr, and coercivity Hc, giving a large loop of 55.094 emu/g, 33.995 emu/g, and 5357.6 Oe, respectively, were recorded at pH 11, which makes the synthesized material useful for high-density recording media and permanent magnets.
Introduction
Ferrite is a ceramic-like magnetic material. Ferrites are usually brittle, hard, iron-containing, and generally gray or black in color. They consist of iron oxide combined with other metal oxides and typically exhibit high electrical resistivity. Ferrites have impressive properties such as high magnetic permeability and high electrical resistance [1]. Ferrite magnets have a low hysteresis loss and high intrinsic coercivity [2], which gives them strong resistance to demagnetization by external magnetic fields. In addition, low-cost ferrite magnets have good heat resistance and good corrosion resistance, which are useful in many applications such as permanent magnets [3,4], solid-state devices, magnetic recording media [5,6], microwave devices [5], etc. The generic formula of the magnetoplumbite structure of ferrite is MFe12O19, where M is a divalent cation such as Ba2+ [3,4], Sr2+ [1,2,5,7], or Pb2+ [8]. Pullar [9] has noted that the best-known hexaferrites are those containing divalent cations, because they have preferably high electrical resistivity compared to other types of ferrite. SrFe12O19 has been chosen in order to produce good-quality magnetic recording media due to its high electrical resistivity of 10^8 Ω cm [9]. The high coercivity leads to a high energy product BHmax. Liu [10] has noted that good-quality magnetic recording media should have the highest possible signal and low noise. In order to meet these criteria, the magnetic material should have high magnetization; high coercivity, correlated with the recording field; single-domain particles or grains; a small, thermally stable particle or grain size, allowing a reduced thickness of the active magnetic film of the medium; and good alignment of the particle or grain easy axes [10]. In recent years, higher levels of recording density have been achieved in the field of magnetic recording. Magnetic tapes employing hexagonal barium ferrite magnetic powder achieve a surface recording density of 29.5 bpsi (bits per square inch). However, when the size of hexagonal ferrite magnetic particles is reduced, the energy for maintaining the direction of magnetization of the magnetic particles (the magnetic energy) tends to become inadequate to counter thermal energy, and thermal fluctuation ends up compromising the retention of recording.
Various techniques have been presented for the synthesis of strontium hexaferrite powders, such as the solid-state synthesis method [11,12], chemical coprecipitation [13][14][15], the ceramic method [16], sol-gel [17][18][19], and hydrothermal methods [20]. In this research work, the effect of pH variation during the sol-gel synthesis of SrFe12O19 is a key factor for controlling the hexaferrite nanostructure and magnetic properties. Moreover, this proposed approach has not yet been reported elsewhere for producing SrFe12O19 nanoparticles. The sol-gel route has received considerable attention in the last few years because it requires a lower calcination temperature and also enables smaller crystallites to grow [2]. The sol-gel method produces a better outcome than the microemulsion and coprecipitation methods. The sol-gel hydrothermal method combines the advantages of the sol-gel method and the high pressure of the hydrothermal condition [7]. In the hydrothermal process, the particle size and particle morphology can be controlled. SrFe12O19 nanoparticles have high purity, ultrafine size, and high coercivity. Some efforts have been made to modify sol-gel process parameters such as pH, basic agent, carboxylic acid, and starting metal salts to further decrease the calcination temperature and achieve a finer crystallite size [1]. Optimizing the molar ratio of Fe to Sr is very important for producing a single-phase sample, ultrafine particles, and lower calcination temperatures [21]. This ratio varies with the change in starting materials and with the change in production method [21]. Products that are single phase have a hexagonal shape, the right proportions, and high coercivity. Prolonging the annealing time has a significant effect on the saturation magnetization (Ms), and a high annealing rate forms a high percentage of pure strontium hexaferrite. Masoudpanah and Ebrahimi [2] state that the preferred molar ratio of Fe/Sr is 10, which gives the lowest calcination temperature (800 °C) for the formation of single-phase SrM thin films. In addition, XRD showed crystallite sizes in the range of 20-50 nm. The magnetic properties at this preferred molar ratio exhibit a good saturation magnetization (267 emu/cm³), high coercivity (4290 Oe), and a relatively high remanent magnetization (134 emu/cm³). Minh et al. [7] state that the preferred molar ratio is 11; the obtained SrFe12O19 has high purity, ultrafine size, and a high coercivity of Hc = 6315 Oe. This chapter discusses an attempt to employ water as the gel precursor to synthesize a nano-sized M-type strontium ferrite (SrFe12O19) bulk sample at a low sintering temperature of 900 °C using common laboratory chemicals. A solution of metal nitrates, citric acid, and ammonia was used to prepare strontium hexaferrite at varying pH.
Brief
One mole of Sr(NO3)2 and 12 moles of Fe(NO3)3 were needed in order to synthesize one mole of SrFe12O19 nanoparticles. In the reaction process, citric acid (CA) was used as a chelating agent and combustion fuel. The amount of CA was then calculated according to a citrate-to-nitrate molar ratio of 0.75, by first obtaining the number of moles of nitrate and then the corresponding mass of citrate. In this study, NH4OH was used to vary the pH value of the SrFe12O19 precursor solution in order to study the effect of the pH value on its morphology and magnetic properties.
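As an illustrative reconstruction of this stoichiometry (the working equations themselves are not reproduced in the text), the citric acid requirement per mole of SrFe12O19 can be estimated as in the sketch below; the molar mass is a standard value and the 0.75 citrate-to-nitrate ratio is taken from the description above.

```python
# Illustrative estimate of the citric acid (CA) needed for 1 mol of SrFe12O19,
# assuming the citrate-to-nitrate molar ratio of 0.75 stated in the text.
M_CA = 192.12          # g/mol, anhydrous citric acid C6H8O7

mol_Sr_nitrate = 1.0   # Sr(NO3)2 -> 2 NO3- per formula unit
mol_Fe_nitrate = 12.0  # Fe(NO3)3 -> 3 NO3- per formula unit

mol_nitrate = 2.0 * mol_Sr_nitrate + 3.0 * mol_Fe_nitrate   # = 38 mol NO3-
mol_CA = 0.75 * mol_nitrate                                  # = 28.5 mol citric acid
mass_CA = mol_CA * M_CA                                       # g per mole of SrFe12O19
print(f"{mol_nitrate:.0f} mol NO3-  ->  {mol_CA:.1f} mol CA  ->  {mass_CA:.0f} g CA")
```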
Sample preparation and characterizations
Appropriate amounts of Sr(NO3)2, Fe(NO3)3, and C6H8O7 were dissolved in 100 ml of deionized water for 30 min at 50 °C with a constant stirrer rotation of 250 rpm. The mixtures were continuously stirred, and NH4OH was added in order to vary the pH from pH 1 to 14, as measured by an HI 2211 pH/ORP meter (HANNA Instruments). The solutions were then stirred on the hot plate for 24 h at 60 °C. The solution was left in an oven at a temperature of 80 °C for 2 days to turn it into a sticky gel. The sticky gel was stirred again on the hot plate, and the temperature was increased up to 150 °C to dehydrate it and form a powder. The powder formed was crushed using a mortar before sintering at 900 °C for 6 h with a heating rate of 3.5 °C/min. The crystalline structure was characterized by XRD using a Philips X'Pert X-ray diffractometer model 7602 EA Almelo with Cu Kα radiation at 1.5418 Å. The diffraction angle range used was from 20° to 80° (2θ) at room temperature. The accelerating current and working voltage were 35 mA and 4.0 kV, respectively. The data were then analyzed using the X'Pert HighScore Plus software. The lattice constant a was obtained from Eq. (5), 1/d² = (4/3)(h² + hk + k²)/a² + l²/c², where d is the interplanar spacing and (h k l) are the Miller indices. The cell volume V_cell was calculated using Eq. (6), V_cell = (√3/2)a²c, where a and c are the lattice constants. The theoretical density ρ_theory of the sample was calculated using Eq. (7), ρ_theory = ZM/(N_A V_cell), where M is the molecular weight of SrFe12O19, Z is the number of formula units per unit cell, and N_A is Avogadro's number. The porosity was obtained from Eq. (8), P (%) = (1 − ρ_exp/ρ_theory) × 100, where ρ_exp is the experimental density and ρ_theory is the XRD density.
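A brief computational sketch of the quantities defined in Eqs. (5)-(8) is given below. The assumption of Z = 2 formula units per hexagonal unit cell is typical for M-type hexaferrites and reproduces the theoretical density of about 5.11 g cm⁻³ quoted later; the input lattice constants and densities are placeholders.

```python
# Sketch of the XRD-derived quantities discussed above (hexagonal SrFe12O19).
# Z = 2 formula units per unit cell is an assumption; all inputs are placeholders.
import math

N_A = 6.02214076e23
M_SrFe12O19 = 1061.77          # g/mol, approximate molecular weight

def lattice_a(d, h, k, l, c):
    """Solve 1/d^2 = (4/3)(h^2 + h*k + k^2)/a^2 + l^2/c^2 for a (hexagonal cell)."""
    term = 1.0 / d**2 - l**2 / c**2
    return math.sqrt((4.0 / 3.0) * (h**2 + h * k + k**2) / term)

def cell_volume(a, c):
    """V = (sqrt(3)/2) * a^2 * c for a hexagonal cell, in Å^3."""
    return math.sqrt(3.0) / 2.0 * a**2 * c

def xrd_density(a, c, Z=2):
    """rho = Z*M / (N_A * V), converted from g/Å^3 to g/cm^3."""
    return Z * M_SrFe12O19 / (N_A * cell_volume(a, c) * 1e-24)

def porosity(rho_exp, rho_xrd):
    return (1.0 - rho_exp / rho_xrd) * 100.0

a, c = 5.882, 23.023                      # Å, literature lattice constants
print(f"V = {cell_volume(a, c):.1f} Å^3, rho_XRD = {xrd_density(a, c):.3f} g/cm^3")
print(f"porosity = {porosity(4.784, 5.103):.2f} %")
```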
Meanwhile, the crystallite size can be estimated using the Scherrer equation (Eq. (9)), D = kλ/(β cos θ), where D is the crystallite size, k is the Scherrer constant with a value of 0.94, λ is the Cu Kα radiation wavelength of 1.542 Å, β is the full width at half maximum (FWHM) of the diffraction peak, and θ is the Bragg angle corresponding to the plane.
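The Scherrer estimate of Eq. (9) reduces to a one-line calculation, sketched below with a placeholder peak position and FWHM; note that the FWHM must be converted to radians before use.

```python
# Scherrer crystallite size, D = k*lambda / (beta * cos(theta)); beta is the
# FWHM in radians and theta is half of the 2-theta peak position.
import math

def scherrer_size(two_theta_deg, fwhm_deg, k=0.94, wavelength=1.5418):
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength / (beta * math.cos(theta))   # same unit as wavelength (Å)

# Placeholder peak: a reflection near 2-theta = 32.3 deg with 0.15 deg FWHM.
size_angstrom = scherrer_size(32.3, 0.15)
print(f"D ~ {size_angstrom / 10.0:.0f} nm")
```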
The thermal stability of the samples was determined using a Mettler Toledo TGA/SDTA 851 thermogravimetric analyzer. A sample weighing about 10 mg was used, at operating temperatures from 0 to 1000 °C with a heating rate of 5 °C/min. Fourier-transform infrared spectroscopy (Perkin Elmer model 1650) was used to determine the infrared absorption and emission bands of the samples. It was performed over the infrared range of 280-4000 cm⁻¹ with a resolution of 4 cm⁻¹. The microstructure was observed using an FEI Nova NanoSEM 230 machine to study the morphology of the solid material. The sample was prepared as a bulk pellet with a diameter of 1 cm and coated with gold in order to avoid charge buildup as the electron beam is scanned over the sample surface. The grain size images were taken at a fixed magnification of 100,000× at 5.0 kV. The distribution of the average grain size of the microstructure was calculated from these images. The grain size distributions were obtained by taking at least 200 different grain images for each sample and estimating the mean diameters of individual grains using the ImageJ software. The magnetic properties of the samples were measured by a LakeShore Model 7404 VSM. The measurements were carried out at room temperature with a sample weight of about 0.2 g. The external field applied was 12 kOe, parallel to the sample. From this analysis, the saturation magnetization Ms, remanent magnetization Mr, and coercivity Hc were recorded, and the hysteresis loops were plotted.
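As an illustration of how Ms, Mr, and Hc are read off a measured loop (this is not the LakeShore acquisition software), a simple interpolation on hypothetical exported VSM data might look as follows.

```python
# Extract saturation magnetization (Ms), remanence (Mr), and coercivity (Hc)
# from one hysteresis branch by interpolation. The synthetic tanh curve below is
# only a stand-in for exported VSM data (columns: field H in Oe, moment M in emu/g).
import numpy as np

H = np.linspace(-12000, 12000, 2001)                       # applied field sweep, Oe
M = 55.0 * np.tanh((H + 5300.0) / 3000.0)                  # hypothetical branch model

Ms = np.max(np.abs(M))                                     # saturation magnetization, emu/g
Mr = np.interp(0.0, H, M)                                  # magnetization at H = 0
Hc = np.interp(0.0, M, H)                                  # field at M = 0 (branch is monotonic here)
print(f"Ms = {Ms:.1f} emu/g, Mr = {Mr:.1f} emu/g, Hc = {abs(Hc):.0f} Oe")
```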
Research findings and outcomes
3.1. Structural analysis
Figure 1 shows the XRD spectra of the samples sintered at 900 °C with different pH values (pH 1-14) [24]. Hence, more Sr2+ ions are needed for the formation of the strontium hexaferrite [2]. The diffusion rates increased in the nonstoichiometric mixtures because of the induced lattice defects, which could be observed from the lower lattice parameter [2]. The average crystallite size (Table 2), determined from the full width at half maximum (FWHM) of the XRD patterns, was calculated using the Scherrer formula with the peak widths provided by the X'Pert software, in line with Masoudpanah et al. [2,26] and Dang et al. [27]. There is a slight increase in lattice constant c as pH increases, while lattice constant a fluctuates. At pH 10, lattice constant a reached its highest value of 5.884 Å, accompanied by a lower lattice constant c of 23.047 Å. The standard strontium hexaferrite (SrFe12O19) with JCPDS reference code 98-002-9041 [22] has a theoretical density of 5.11 g cm⁻³ [25]. Theoretically, the density of the sample is affected by the lattice constants a and c. The observed lattice parameters a and c were not far from the theoretical SrFe12O19 values, where a = 5.8820 Å and c = 23.0230 Å (Figure 3) [25]. The observed a and c parameters are similar to those reported by Masoudpanah et al. [2] and Dang et al. [27].
The lattice constants fluctuated around the theoretical values. In the experiment, however, the density was affected more by the preparation of the sample, which introduces porosity. The larger the difference between the XRD density (ρ_XRD) and the experimental density (ρ_EXP), the higher the porosity, which reduces the mass of the pellet sample through pores. The highest ρ_XRD value is at pH 12 (5.1148 g cm⁻³), and the highest ρ_EXP value is at pH 10 (4.784 g cm⁻³). The porosity arises from pores present in the bulk samples after sintering. The pores occur due to errors in preparing the sample and loose powder while pressing the sample with the hydraulic press. As ρ_EXP approaches ρ_XRD, the percentage of pores becomes lower. The highest porosity of 13.24% was found at pH 4, with ρ_XRD of 5.1001 g cm⁻³ and ρ_EXP of 4.425 g cm⁻³. Meanwhile, pH 10 exhibits a lower porosity of 6.254%, with ρ_XRD of 5.1032 g cm⁻³ and ρ_EXP of 4.784 g cm⁻³ (Table 2).
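As a quick sanity check of the numbers just quoted, the porosity relation can be evaluated directly (a standalone snippet; the densities are the ones reported above):

```python
# Porosity check using the reported densities at pH 4 and pH 10.
def porosity(rho_xrd, rho_exp):
    return (1.0 - rho_exp / rho_xrd) * 100.0

print(porosity(5.1001, 4.425))   # pH 4  -> ~13.2 %
print(porosity(5.1032, 4.784))   # pH 10 -> ~6.3 %
```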
The powder was synthesized using a controlled molar ratio of 1:12 with respect to the strontium and iron nitrates. However, the samples at the various pH values were prepared with an addition of ammonia into the solution. The ratio of Sr(NO3)2 and Fe(NO3)3 is Fe/Sr = 12:1, and the samples were sintered at 900 °C. In previously reported work, a single phase of strontium ferrite (SrFe12O19) was obtained for samples sintered at 850 °C with an Fe/Sr molar ratio of 11.5 via the sol-gel route [28]. Masoudpanah and Ebrahimi [2] found a single phase of SrFe12O19 at a sintering temperature of 900 °C prepared using the sol-gel technique. In general, the lowest sintering temperature for SrFe12O19 is around 800-1000 °C. Hence, the raw (non-sintered) powder was tested by TGA to identify the best sintering temperature up to 1000 °C. The TGA curves plotted in Figure 3 show a decreasing weight as the powder is heated up to 1000 °C in 20 min, with a starting weight of 8.3609 mg. Meanwhile, the DTA diagrams reveal three peaks, at 86.80-100, 399.55, and 740.40 °C, due to decomposition processes. At a constant heating rate, the endothermic peak at 86.80-100 °C had −7.91% weight loss due to the dehydration of the absorbed water as the powder slowly turns into a burnt gel [2]. The first exothermic peak at 399.55 °C, with a weight loss of −11.53%, is due to the elimination of the organic compound, which leads to the decomposition of NH4NO3 that liberates NO, O2, and H2O [2]. Meanwhile, at 740.40 °C, the exothermic peak with a weight loss of −10.49% shows the decomposition of citric acid and the breakdown of Fe2O3 to Fe, as reported in [17]. The weight becomes stable at 880 °C, which indicates the completeness of the reaction. Hence, a sintering temperature of 900 °C was used in this work. Figure 4 shows the FTIR spectra of SrFe12O19 nanoparticles for the pH variation (pH 1-14), over the IR range of 400-4000 cm⁻¹. It is noticeable that characteristic IR bands appear in the range of 430, 583, 904, and 1446 cm⁻¹. The stretching band of CH2 appeared at 436 cm⁻¹, attributed to the presence of a saturated CH compound, in agreement with [29]. The vibration of the CH bond could be due to the chemical reaction in the process of forming the hexagonal structure, where the citric acid loses its CH bond. The metal-oxygen vibration of Sr-O and Fe-O was found at 583 cm⁻¹ [26]. Masoudpanah and Ebrahimi [2] explained that the reaction between citric acid and ferric ions is attributed to the stretching mode of Fe-O, which confirms the formation of a chelate in the sol-gel route. Many researchers have reported that absorption bands in the range 443-600 cm⁻¹ result from the formation of strontium ferrite as the stretching vibration of the metal-oxygen bonds Sr-O and Fe-O occurs [30][31][32][33]. All pH values reveal these two bond bands. However, there was some reduction and vanishing of the next bands, at 904 and 1460 cm⁻¹. This is related to the purity of the SrFe12O19 nanoparticles, as there was some interruption of Fe2O3 in the sample, as shown in Figure 1 and Table 2. In this study, pH 8, pH 10, pH 13, and pH 14 come out with a percentage of hematite, Fe2O3. First, the pure SrFe12O19 samples (pH 1-7, pH 9, pH 11-12) had a relatively strong and broad band at 904 cm⁻¹, which revealed an amine functional group with N-H vibration due to the decomposition of NH3. Pereira et al.
[32] stated that this broad vibration of Sr-O stretching indicates the formation of strontium nanoferrites. Sivakumar et al. [34] likewise reported that the strontium ferrite forms and the iron oxide band vanishes at 900 cm⁻¹. Meanwhile, pH 6, pH 8, pH 13, and pH 14 show a relatively small vibration band at 904 cm⁻¹ due to the presence of Fe2O3. As the pH increases up to pH 14, the amount of ammonia increases gradually. An excess amount of ammonia fails to completely decompose the NH3 bonds and break down the N-H vibration. Lastly, the absorption bands at 1446 cm⁻¹ found in the pure-pH samples are attributed to the vibrating bands of Fe-O-Fe due to the decomposition of the metal-oxide band [29]. Notably, the data for pH 9, pH 11, and pH 12 show that a single-phase sample is formed as the pH increases.
Microstructural analysis
The microstructure and grain-size distribution of bulk SrFe12O19 nanoparticles are shown in Figure 5. The grains appear agglomerated, and the nanoparticles become charged as the pH value increases. The grain size was found to be in the range of 53.22-133.25 nm. The pH 4 sample has a porosity of 13.24%, while the most closely packed grains are found for the sample at pH 10, with a porosity of 6.25% (Table 2). The microstructure shows that some of the samples have a large porosity due to the presence of polyvinyl alcohol during the preparation of the pellets of bulk SrFe12O19 nanoferrites. The histogram of the grain distribution shifts from small grain sizes toward larger grains from pH 1 to 8. Nevertheless, the grain size is observed to decrease as the pH reaches 9-14 (Table 3).
Magnetic behaviors
The development of M-H hysteresis loop at various pH is illustrated in Figure 6. The magnetic saturation, M s ; remanent magnetization, M r ; coercivity, H c ; grain size; and porosity of SrFe 12 O 19 nanopowder are shown in Table 4. An obvious erect, larger, and well-defined hysteresis loop can be observed. It is probably due to the strong ferromagnetic behavior, indicating the formation of SrFe 12 O 19 nanoparticles with high volume fraction of the complete crystalline SrFe 12 O 19 phase. Thus a strong interaction of magnetic moments within domains occurred due to exchange forces. This observed phenomenon can be considered as ordered magnetism in the sample. In fact, in order to obtain an ordered magnetism and well-formed M-H hysteresis loop, there must exist a significant domain formation, a sufficiently strong anisotropy field (H a ), and optional addition contributions, which come from defects such as grain boundaries and pores [35]. The saturation magnetization (M s ), remnant magnetization (M r ), and coercivity (H c ) are found to decrease with increasing pH by addition of ammonia in the sol-gel precursor. From the previous study, the H c is 4290 Oe, obtained at pH 7 [2,26]. The microstructures of nanoparticles were affected by the increase of pH value. This is in agreement with findings reported by Yang et al. [36], where the formation of particles became larger [37] with the increase of pH from 5 to 11. This is due to the aggregation of small particle that occurs when there is a strong magnetic interaction between magnetic atoms (Fe or Co) containing in Co-Fe-Al grains as the composition of Co increases and the composition of Al decreases [38]. In this work, it is noticeable that pH 11 has the largest hysteresis loops as well as high magnetic properties. Moreover, the remaining pH exhibit almost the same hysteresis loop with a slight change in M s and M r . Meanwhile, the presence of Fe 2 O 3 impurity in the samples of pH 6,8,13, and 14 shows a decrease in H c , which affects the crystalline and grain boundary.
The H c is observed to reduce as pH increased. The presence of intragranular trapped pores in the grains was due to rapid grain growth of sample. The presence of intragranular pores would pin down the magnetic moment in grains, thus reducing the M s and also the H c . The decrease in H c as pH increases can be attributed to the decrement of magnetocrystalline anisotropy with anisotropic Fe 2+ ions located in a 2A site, and the enlargement of the grain size is evident in FESEM micrographs ( Figure 5). The M s and M r are also observed to decrease as pH increases. The decrement of magnetic parameters as pH increases could be due to the existence of large amount of diamagnetic phases as the amount of ammonia NH 3 increases. It seems that the main roles of the diamagnetic NH 3 are to isolate Sr-ferrite nanoparticles from each other, thus reducing exchange interaction between them, and are known to have a detrimental effect on M s and M r .
Conclusions
Single-phase SrFe12O19 ferrite nanoparticles were successfully synthesized by the sol-gel citrate-nitrate method. The influence of pH variation on the structural, microstructural, and magnetic properties of the SrFe12O19 ferrite nanoparticles was discussed. An increasing amount of ammonia changed the purity, average grain size, density, and porosity, which in turn affected the magnetic properties of the samples. These characteristics provide an understanding of how important the effects of pH (both the linear effect of pH and the acidic-alkaline effect) are for SrFe12O19 nanoparticles, a point most researchers neglect.
"Materials Science"
] |
Classification of chiral fermionic CFTs of central charge ≤ 16
We classify two-dimensional purely chiral conformal field theories which are defined on two-dimensional surfaces equipped with spin structure and have central charge less than or equal to 16 , and discuss their duality webs. This result can be used to confirm that the list of non-supersymmetric ten-dimensional heterotic string theories found in the late 1980s is complete and does not miss any exotic example.
Introduction and summary
Two-dimensional unitary conformal field theories (CFTs) in general contain both left-moving (or chiral) and right-moving (or anti-chiral) degrees of freedom, with possibly distinct central charges c L and c R . 1 They are known to describe universality classes of many critical systems, and it is a classic result that such theories with c L = c R < 1 can be completely classified [1,2].More precisely,2 what these references classified were bosonic CFTs, i.e. theories that are defined and modular invariant on surfaces without spin structure.We can also consider fermionic CFTs, which are defined and modular invariant on surfaces equipped with spin structure, and those with c L = c R < 1 have been similarly classified at the physical level of rigor3 [3][4][5][6].It is often stated that CFTs with c L = c R = 1 have been classified [7][8][9], and CFTs with c L = c R > 1 are extremely rich and have defied any attempt at classification.
We can also consider theories with c_L ≠ c_R. We require our CFTs to be modular invariant, in the sense that we have a partition function, rather than a partition vector, on a closed two-dimensional surface, and that the large diffeomorphisms act by phases and not by matrices. Among such CFTs, purely chiral CFTs, i.e. those with c_L > 0 and c_R = 0, have been used in theoretical physics in a different context. For example, purely chiral bosonic CFTs with c_L = 16 can be used as the worldsheet degrees of freedom for spacetime-supersymmetric heterotic string theories [10]. Two such examples at c_L = 16 have long been known, one with E_8 × E_8 symmetry and another with Spin(32)/ℤ_2 symmetry.^4 In general, chiral operators in any CFT form a tight mathematical structure known as a chiral algebra in theoretical physics and formalized as a vertex operator algebra in mathematics. In a purely chiral CFT, the generating function of this algebra needs to be modular invariant by itself, which puts very strong constraints on such systems. This has allowed mathematicians to classify chiral bosonic CFTs with c_L = 16 in this century [11], without assuming that they are obtained from either a lattice or a free fermionic construction, thereby confirming that the chiral bosonic CFTs with E_8 × E_8 and Spin(32)/ℤ_2 symmetries are the only possible ones.^5 The aim of this paper is to perform a similar classification of chiral fermionic CFTs with c_L ≤ 16 and c_R = 0, without assuming that they are obtained by a lattice or a free fermionic construction.^6 We find that any such CFT is a product of the basic ones listed in Table 1.^7 As we will recall in more detail below, any chiral bosonic CFT with c_L = 16 leads to a 10-dimensional spacetime-supersymmetric heterotic string, whereas any chiral fermionic CFT with c_L = 16 leads to a 10-dimensional spacetime-non-supersymmetric heterotic string. Our classification then implies that the list of 10-dimensional spacetime-non-supersymmetric heterotic strings constructed in the mid-1980s [18][19][20][21] is actually complete.
The rest of the paper is organized as follows. In Sec. 2, we explain the strategy employed for the task at hand. We will see that the classification of chiral fermionic CFTs reduces to the classification of non-anomalous ℤ_2 actions on chiral bosonic CFTs.^8 In Sec. 3, we will quickly review the classification of chiral bosonic CFTs with c_L ≤ 16, carried out mathematically in [11], in a language more palatable to physicists. In Sec. 4 we will then classify all possible non-anomalous ℤ_2 actions on them. In Sec. 5, we use the list of ℤ_2 actions on chiral bosonic CFTs to construct the corresponding chiral fermionic CFTs, thus completing our classification program. We will conclude our paper in Sec. 6 by discussing the implications for heterotic string theories in ten dimensions or less.
We also have a few technical appendices. In App. A and B, we provide in detail the decomposition of the even self-dual lattices of type E_8 and D_16 with respect to the ℤ_2 actions we use.
In some sense, all the results in this paper can already be found scattered throughout the literature. Therefore, the authors do not claim to have made a significant new discovery. Rather, the authors would like to gather this scattered information into a single place for future reference. For this purpose, the paper is written aiming for pedagogy and self-containedness.
^7 Let χ^even_NS and χ^odd_NS be the character of the NS sector, restricted to the part with (−1)^F even and odd, respectively. Then the S transformation of χ^even_NS − χ^odd_NS corresponds to the partition function of the theory on a torus, where the spatial circle is in the R-sector and the spin structure around the temporal direction is anti-periodic. Its q-expansion coefficients are supposed to count the number of states in the R-sector with each L_0 eigenvalue, but they actually are √2 times non-negative integers. This is essentially due to the fact that a Majorana-Weyl fermion on the R-sector circle gives rise to a single Majorana fermion zero mode ψ_0. If we have two such zero modes ψ_0 and ψ'_0, they can be realized on a two-dimensional Hilbert space with ψ_0 := σ_1, ψ'_0 := σ_2 and (−1)^F := σ_3. But a single ψ_0 would then require a Hilbert space of dimension √2, if we require a tensor-product decomposition into two factors with the same dimension. For a detailed pedagogical discussion of this subtlety, see e.g. [16, Sec. 2.1] and [17].
This subtlety in the R-sector of a theory with half-integer c_L − c_R is now understood as a type of gravitational anomaly of fermionic theories defined on two-dimensional spacetimes equipped with spin structure. In these cases, we take the convention that the S transformation of χ^even_NS − χ^odd_NS is √2 χ_R, where χ_R has a q-expansion with non-negative integer coefficients; it is not possible to assign the fermionic parity (−1)^F to the states counted by χ_R.
^8 A very early appearance of this construction can be found in [22], where a fermionized version of the chiral Monster theory with c = 24 was found to have supersymmetry.
Notes on Table 1:
• In the list, we first specify the central charge c and the name we assign to each theory. The name ψ is for the theory of a free Majorana-Weyl fermion. The other names are based on the largest affine symmetry G_k contained. When the theory is not simply the vacuum module of the affine symmetry G_k but an extension, we use the notation G_k to emphasize this fact. The last theory (D_16)_1 is also denoted by (Spin(32)/ℤ_2)_1.
• χ even,odd NS,R are the characters of the Hilbert space on S 1 with NS/R spin structure, restricted to the eigenspace of (−1) F being even/odd.
• When c is not an integer, (−1) F on the R-sector is not well defined, and therefore χ even,odd R are not listed separately.For more on this point, see the footnote 7.
• We do not separately list a theory and the theory obtained by multiplying by the Arf theory, for which χ even R and χ odd R are reversed.
• Other characters are affine characters, where the subscript specifies the irreducible representation in which the highest weight vector transforms under the finite-dimensional part of the symmetry.Primed representations are for the second factor in the affine symmetry.
• To specify an irreducible representation, we typically employ its dimension, except for the spinor S and conjugate spinor C representations of D n = SO(2n), and for ∧ n 16 for the n-th antisymmetric power of the fundamental representation of A 15 = SU (16).
Conventions:
In this paper, we mostly use traditional physicists' convention to ignore the difference between Lie groups sharing the same Lie algebra, unless necessary for context and clarity.This is due to the difficulty in actually pinning down the correct global structure of the group.For example, the theory often known as the E 8 × E 8 theory has the symmetry group (E 8 × E 8 ) ⋊ 2 .Also, with the modern terminology, a fermionic theory with NS sector states transforming under the group G and with R sector states transforming under a nontrivial extension G ′ of G by (−1) F , is said to have the symmetry group G and the fact that the action on the R-sector states is extended is ascribed to the mixed anomaly between G and the spacetime symmetry.
The only place we actually use the particular global forms is Sec.4.2 where the 2 actions on the bosonic c = 16 theories are classified.Even there it is only used indirectly, since we first consider order-2 elements in the automorphism group of Lie algebras, and then check whether they lift to order-2 operations acting on the representations contained in the CFT.
Note:
The authors learned that mathematicians Gerald Höhn and Sven Möller have carried out the classification up to c ≤ 24 rigorously using basically the same method [23], and that Brandon Rayhaun did the same classification up to c ≤ 23 using a different method [24].G. Höhn and S. Möller also informed the authors that Höhn already had classified these theories mathematically up to c ≤ 31/2 as Satz 3.2.4 of [25] modulo some physics assumptions, and that the classification up to c ≤ 12 was mathematically done in [26].The authors thank G. Höhn, S. Möller, and B. Rayhaun for discussions and also for coordinating the submission to the arXiv on the same date.They also thank Y. Moriwaki for notifying them about [23] in the first place.
Strategy
Our aim is to classify chiral fermionic CFTs without assuming that they admit a lattice and/or a free-fermionic construction.This is made possible by recent developments in our understanding of anomalies and of fermionization.
Mapping fermionic theories to bosonic theories
We start by noting that a general CFT with left- and right-moving central charges given by c_L and c_R has a gravitational anomaly specified by its anomaly polynomial (c_L − c_R) p_1/24. By the general theory of anomalies of unitary theories [27,28], the gravitational anomaly of a unitary^9 bosonic (or fermionic) two-dimensional theory is such that the integral of the anomaly polynomial on an arbitrary oriented (or spin) four-dimensional manifold M is integral.^10 Now, the integral of the Pontrjagin class p_1 on M is three times its signature, and the signature of an oriented 4-manifold can be any integer, while the signature of a spin manifold is a multiple of 16. Therefore, the anomaly polynomial of a bosonic theory is an integer multiple of p_1/3, while that of a fermionic theory is an integer multiple of p_1/48. This means that c_L − c_R is an integer multiple of 8 in any bosonic CFT, while it is an integer multiple of 1/2 in any fermionic CFT.
^9 A non-unitary theory can have anomalies not covered in the framework of [28]. See [29] and [30, App. E]. This is one point in our argument where the unitarity assumption is crucial.
^10 This condition might not be apparent from the discussion in [28], but is implicitly contained in it in the following manner. The discussion of [28] identifies the anomaly of an n-dimensional theory of spacetime structure S to be given by an invertible (n + 1)-dimensional theory, which is classified by the Anderson dual of the S-bordism groups. The anomaly polynomial is given by the non-torsion part of the Anderson dual, and as explained for physicists e.g. in [31, Sec. 2.8], it has the requirement that it integrates to an integer on any (n + 2)-manifold with S structure.
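The divisibility statement just derived can be made explicit with a small arithmetic sketch (an illustration, not part of the paper):

```python
# The anomaly polynomial is (cL - cR) p1 / 24, and the integral of p1 equals
# 3*sigma, with the signature sigma any integer on oriented 4-manifolds and a
# multiple of 16 on spin 4-manifolds. We scan for the smallest positive cL - cR
# (in steps of 1/2) for which the integral is always an integer.
from fractions import Fraction

def is_consistent(c_diff, signatures):
    return all((Fraction(c_diff) * 3 * s / 24).denominator == 1 for s in signatures)

oriented = range(-5, 6)                      # sigma can be any integer
spin = [16 * s for s in range(-5, 6)]        # sigma is a multiple of 16

candidates = [Fraction(n, 2) for n in range(1, 33)]
print(min(c for c in candidates if is_consistent(c, oriented)))  # 8   (bosonic)
print(min(c for c in candidates if is_consistent(c, spin)))      # 1/2 (fermionic)
```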
The theory of a single Majorana-Weyl fermion is a fermionic CFT with (c L , c R ) = (1/2, 0).Let us denote it simply by ψ.More generally, we can consider a (c L , c R ) = (k/2, 0) theory of k free Majorana-Weyl fermions, which we denote by kψ. 11Given any fermionic CFT T with arbitrary c L − c R = n/2, we can then multiply T by a number of copies of ψ so that c L − c R for the product theory T × kψ is a multiple of 8. We denote this combination T × kψ by T F .
We can then invoke the modern understanding of bosonization and fermionization of twodimensional quantum field theories [32,33], which says that the summation over the spin structure of a fermionic theory T F whose c L − c R is a multiple of eight gives a bosonic theory T B with a non-anomalous 2 symmetry g: 12 T B = T F /(−1) F . (1) Conversely, the fermionic theory T F can be reconstructed from the bosonic theory T B together with the 2 action g by orbifolding in the following way: Here, Q is a spin invertible theory with 2 symmetry whose partition function on a surface with spin structure q and the 2 symmetry background a is (−1) q(a) := (−1) Arf(q+a)−Arf(q) .Here q(a) is the quadratic refinement of the intersection pairing associated to the spin structure q, and Arf is the Arf invariant.We then perform the orbifolding with respect to the diagonal combination of the 2 action g on T B and the 2 action on the theory Q.
The discussion so far means that any fermionic CFT T can be put in the following form: for some bosonic theory T B with a non-anomalous 2 symmetry g, where k is the smallest (or indeed any) non-negative integer such that k/2 + c L (T ) − c R (T ) is a multiple of 8.Note that we have not imposed the condition that our original theory T is chiral.When T is chiral, we easily see that T B is also chiral.Therefore, to classify chiral fermionic CFTs with c L ≤ 8n, we only have to classify chiral bosonic CFTs with c L ≤ 8n and non-anomalous 2 actions on them.
Dimension-0 operators in fermionic theories
Before proceeding, we need to deal with the subtlety that bosonization/fermionization does not always preserve the property that the theory contains a unique vacuum state.For example, when T F is the trivial theory, T B has two vacua on S 1 , one which is even under g and another which is odd under g. 13 11 In our paper, we consider the combination of two theories A and B as taking products, since the Hilbert space of the combined system is the tensor product of the Hilbert spaces of individual theories.From this viewpoint the notation ψ k or ψ ⊗k might be more logical.We use the notation kψ as this seems more customary in the literature. 12To see that this is possible, note that the partition function of a theory with c L − c R = n/2 is a vector in the one-dimensional Hilbert space of the bulk spin invertible theory SO(n) 1 .When n is a multiple of 16, the bulk theory is actually bosonic and does not require the spin structure.Therefore, the partition function of T F on a 2d surface S with a spin structure q takes values in a single one-dimensional Hilbert space independent of q, when n is a multiple of 16.This allows us to sum over the spin structure without problems, resulting in a bosonic 2d theory.The summation over spin structure is possible when n is 8 modulo 16, but the result is still a fermionic theory.The details will be discussed in a forthcoming paper [34]. 13A trivial fermionic theory T F has a single (−1) F -even state in the NS sector, together with a (−1) F -even state in the R sector; the latter is necessary to satisfy modular invariance.Under the quotient (1), the Hilbert space of the bosonic theory T B inherits both states, resulting in a theory with two states of zero energy.In general, gauging a trivially-acting global symmetry group can result in a nontrivial theory.For example, 4d pure SU(N ) gauge theory is obtained by gauging a global SU(N ) symmetry acting on a completely trivial theory, and this is clearly highly nontrivial.
To study these issues, let us first analyze the dimension-0 operators in general fermionic theories.Suppose we are given a fermionic CFT with a unique dimension-0 operator in the NS sector, the identity, together with n R dimension-0 operators in the R sector.The possible values of n R are 0 and 1.To see this, let us consider the torus partition functions Z NS/R NS/R with different spin structures, where the superscript and subscript represent the fermion boundary conditions (NS for antiperiodic and R for periodic) in time and space, respectively.Under a modular transformation, which in the Cardy regime If n R > 1, then an inverse Laplace transform reveals a negative asymptotic density of NS sector states that are odd under fermion parity (−1) F , contradicting unitarity.If there are n NS dimension-0 operators in the NS sector, then the argument above gives n R ≤ n NS .However, we can say more.The n NS dimension-0 operators form a commutative 14Frobenius algebra under the operator product, and it is well-known that there always exists an idempotent basis Since both the NS and R Hilbert spaces are modules of this Frobenius algebra, the full theory can be decomposed into n NS 'universes' [35] (exact superselection sectors separated by domain walls of infinite tension), which we label by i = 1, . . ., n NS .Any correlator on a connected spacetime must vanish if it involves local operators from two or more universes.The full theory is said to be a direct sum of the component universes, where each universe has n i NS = 1 (counting the idempotent 1 i which acts as the identity operator within its universe) and n i R = 0, 1.Let us now come back to the relation between T B and T F , assuming n NS = 1.It is straightforward to see that T B has a unique vacuum when T F does not have a dimension-0 operator in the R-sector.
Conversely, suppose T F has a dimension-0 operator in the R-sector.We can now use this operator to map any operator in the R-sector to an operator in the NS-sector and vice versa, establishing a 1-to-1 map between the R-sector states and the NS sector states, commuting with the Virasoro action.As the R-sector states have all integer spins, the NS sector states also have integer spins.This means that the restriction of the theory to the NS sector operators actually defines a bosonic theory, which we denote by T ′ B .Depending on the (−1) F eigenvalue of the R-sector dimension-0 operator, T F is then the product of T ′ B with a completely trivial fermionic theory (whose partition function is always 1) or with the Arf theory (whose partition function is the Arf invariant).Barring these two degenerate cases, we are guaranteed that T B has a unique vacuum, and the 2 symmetry g acts nontrivially on T B .
Applications to theories with low c
This program can be pursued below the central charge c L for which the list of all chiral bosonic CFTs is known.Presently, this restricts us to c L = 8, 16 for which the classification can be found in [11], and c L = 24 for which the list had originally been conjectured by Schellekens [36] and later made into a mathematical theorem in [37][38][39][40].As the list for c L = 24 is considerably larger, and as the case c L ≤ 16 has a more immediate application to heterotic string theories, we restrict our attention to c L ≤ 16 in this paper.
Chiral bosonic CFTs with c ≤ 16
In [11], it was shown that the only chiral bosonic CFT at c = 8 is the E 8 level 1 theory (also known as the E 8 lattice theory) and that the only two such CFTs at c = 16 are the properly extended D 16 level 1 theory (also known as the D 16 lattice theory) and the product of two copies of the E 8 level 1 theory.This result is proven without assuming that the CFTs are constructed via lattice or free fermionic constructions.Let us briefly recall their argument in our language.
General constraints on partition functions
We start with a general argument.We have seen that a chiral bosonic CFT has c = 8n for an integer n.Let us then consider its partition function In the case of the well-known bosonic CFT of c = 8 corresponding to the E 8 level 1 current algebra, its partition function Z E 8 (τ) behaves under modular transformations as The phases in these relations come from a gravitational anomaly and are determined solely by the central charge.Therefore, we know that in general the partition function (7) behaves as where c = 8n.We now consider the function where is the Dedekind eta function.Its modular transformations are given by which means that it is a weakly-holomorphic modular form of weight −8n.
Applications to the case c = 8 or 16
Any weakly-holomorphic modular form of weight w is known^15 to be a ℂ-linear combination of monomials E_4(τ)^i E_6(τ)^j Δ(τ)^k where i ≥ 0, j ∈ {0, 1}, k ∈ ℤ and 4i + 6j + 12k = w. Here E_4 and E_6 are the normalized Eisenstein series of weight 4 and 6, and Δ = η^24 is the modular discriminant.
The information above imposes a very strong constraint on W (τ) when c = 8 or c = 16.Indeed, the leading pole given in (10) and the explicit basis (13) uniquely fix W (τ) to be meaning that The analysis so far means that any chiral bosonic CFT at c = 8 has 248 spin-1 currents, which necessarily form a reductive Lie algebra 16 G with dimension 248.We also know that the rank of G is at most 8, since any Lie algebra of rank r contains a U(1) r subalgebra, which leads to c ≥ r. 17 From this one easily concludes that the only possibility is to take G = E 8 ; see Sec. 3.3 below.The level of E 8 is fixed to be 1 from the central charge.As the character of the affine algebra E 8 at level 1 is E 4 (τ)/η(τ) 8 , we conclude that a chiral bosonic CFT at c = 8 is necessarily the E 8 level 1 theory.
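As a quick numerical cross-check of this statement (not part of the argument above), the first few q-expansion coefficients of E_4/η^8 and of E_4^2/η^16, which is relevant for the c = 16 case below, can be computed from scratch:

```python
# Verify that E4/eta^8 = q^(-1/3)(1 + 248 q + ...) and that
# E4^2/eta^16 = q^(-2/3)(1 + 496 q + ...), i.e. 248 currents at c = 8
# and 496 currents at c = 16.

N = 5  # number of q-expansion coefficients to keep

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def inv(f):
    # power-series inverse of f, assuming f[0] == 1
    g = [0] * N
    g[0] = 1
    for n in range(1, N):
        g[n] = -sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

# E4 = 1 + 240 sum_{n>=1} sigma_3(n) q^n (normalized Eisenstein series)
E4 = [1] + [240 * sigma3(n) for n in range(1, N)]

# eta = q^{1/24} prod_{n>=1} (1 - q^n); track only the product part and keep
# the overall power of q separately.
eta_prod = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1
    eta_prod = mul(eta_prod, factor)

eta_inv = inv(eta_prod)
eta8_inv = [1] + [0] * (N - 1)
for _ in range(8):
    eta8_inv = mul(eta8_inv, eta_inv)

Z8 = mul(E4, eta8_inv)                              # E4/eta^8, up to q^{-1/3}
Z16 = mul(mul(E4, E4), mul(eta8_inv, eta8_inv))     # E4^2/eta^16, up to q^{-2/3}

print(Z8[:3])    # expect [1, 248, 4124]
print(Z16[:3])   # expect [1, 496, 69752]
```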
For chiral bosonic theories at c = 16, one carries out the same analysis for 496 currents with rank 16; for details, see Sec. 3.3.The only possibilities are G = E 8 × E 8 both at level 1 and G = SO(32) again at level 1.For details, we refer the reader again to Sec. 3.3.The former is simply the product of two copies of the E 8 level 1 theory we just found.
To analyze the latter, we note that the vacuum character χ 1 is not equivalent to the desired partition function Z(τ) = E 4 (τ) 2 /η(τ) 16 , but we do have the equality18 where χ is the level-1 character of SO (32) whose highest weight state is one of the spinor representations of SO (32).This uniquely fixes the decomposition of the Hilbert space of this theory under the SO(32) level-1 affine symmetry.To fix the full theory, we also need to fix the three-point functions.There are four types, 〈111〉, 〈11S〉, 〈1SS〉, and 〈SSS〉.The only ones allowed by the SO(32) symmetry are 〈111〉 and 〈1SS〉, each of which has a unique SO(32)invariant combination, and their normalizations are fixed by the two-point functions.Therefore there can be at most one such theory.The existence is guaranteed by constructing the theory via the even self-dual lattice of type D 16 .This concludes the classification of chiral bosonic CFTs with c ≤ 16.In the following, we denote the two theories of c = 16 by (E 8 ) 1 × (E 8 ) 1 and (Spin(32)/ 2 ) 1 , respectively.
A group theory lemma
Here we give a proof of a statement used above, that the only reductive Lie algebra of rank at most 8 with dimension 248 is E 8 , and similarly that the only reductive Lie algebras of rank at most 16 with dimension 496 are E 8 × E 8 and D 16 .
For this, we consider the quantity dim(G)/rank(G) for a Lie algebra G. When G is simple, it is known that [44] dim(G) = rank(G) (h(G) + 1), where h(G) is the (non-dual) Coxeter number. We also know that clearly dim(U(1))/rank(U(1)) = 1. For a direct sum of these simple or U(1) factors, dim(G)/rank(G) is a weighted average of the values for the individual factors, so reaching 248/8 = 496/16 = 31 requires simple factors with h(G) ≥ 30 saturating the whole rank, which singles out E_8 at rank 8 and E_8 × E_8 or D_16 at rank 16.
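The lemma can also be confirmed by a brute-force enumeration. The following is an illustrative sketch (not the paper's argument) using the standard dimension formulas of the simple Lie algebras; the search allows arbitrary direct sums of simple factors and u(1)'s subject to the dimension and rank constraints.

```python
# Enumerate reductive Lie algebras (direct sums of simple factors and u(1)'s)
# with a prescribed total dimension and bounded total rank.

def simple_algebras(max_rank):
    algs = []
    for n in range(1, max_rank + 1):
        algs.append((f"A{n}", n, n * (n + 2)))           # su(n+1)
        if n >= 2:
            algs.append((f"B{n}", n, n * (2 * n + 1)))   # so(2n+1)
        if n >= 3:
            algs.append((f"C{n}", n, n * (2 * n + 1)))   # sp(n)
        if n >= 4:
            algs.append((f"D{n}", n, n * (2 * n - 1)))   # so(2n)
    exceptional = [("G2", 2, 14), ("F4", 4, 52), ("E6", 6, 78),
                   ("E7", 7, 133), ("E8", 8, 248)]
    algs += [a for a in exceptional if a[1] <= max_rank]
    return sorted(algs, key=lambda a: -a[2])

def reductive_algebras(target_dim, max_rank):
    """Multisets of simple factors plus u(1)'s with total dimension target_dim
    and total rank at most max_rank."""
    algs = simple_algebras(max_rank)
    results = []

    def search(start, dim_left, rank_left, chosen):
        if dim_left <= rank_left:            # fill the remainder with u(1)'s
            results.append(chosen + ["u(1)"] * dim_left)
        for i in range(start, len(algs)):
            name, rank, dim = algs[i]
            if dim <= dim_left and rank <= rank_left:
                search(i, dim_left - dim, rank_left - rank, chosen + [name])

    search(0, target_dim, max_rank, [])
    return results

print(reductive_algebras(248, 8))    # expect only [['E8']]
print(reductive_algebras(496, 16))   # expect only [['D16'], ['E8', 'E8']]
```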
ℤ_2 actions on chiral bosonic CFTs with c ≤ 16
In this section we classify all ℤ_2 symmetries of the (E_8)_1, (E_8)_1 × (E_8)_1, and (Spin(32)/ℤ_2)_1 theories. Our approach follows that of [45], where the classification for (E_8)_1 was already carried out.
To start, we note that any symmetry of a CFT should act as a symmetry of the three-point functions of spin-1 currents, which form a Lie algebra G. Therefore, it acts as an automorphism of G.As the symmetry acting on the CFT is of order 2, the automorphism of G is either of order 1 or of order 2. Therefore, what we are going to do is to classify such automorphisms of G, and then to study which of them lift to order-2 symmetries of the CFT.
Finite-order automorphisms of G are fully classified by Kac's theorem [46, §8.6]. 19We first review the essential ingredients of Kac's theorem and then apply them to classify the 2 symmetries of the (E 8 ) 1 , (E 8 ) 1 × (E 8 ) 1 and (Spin(32)/ 2 ) 1 theories.For each symmetry, we additionally determine whether or not it has an anomaly.
Kac's theorem
Kac's theorem [46, §8.6] is a procedure for determining all automorphisms of a simple Lie algebra G of a given, finite order m ≥ 1.Although the most general version of Kac's theorem classifies both inner and outer automorphisms, we will only need to classify inner automorphisms, the reason for which will become clear later.We thus give a brief review of Kac's theorem for this special case.
Ingredients:
A simple Lie algebra G, and the desired order m ≥ 1. 19 The analysis of order-two automorphisms of E 8 and Spin(32)/ 2 was done more directly in [47,Sec. 5.1].The analysis for general Spin(4n)/ 2 was also essentially performed in [48,49].We use Kac's theorem for its uniformity.
Recipe: Let ℓ be the rank of G, and let a i (i = 0, 1, ..., ℓ) be the marks on the affine Dynkin diagram of Ĝ, i.e. the unique eigenvector with all positive entries of the affine Cartan matrix, normalized to have a 0 = 1.One begins by listing all solutions to the equation where the s i are non-negative and relatively prime integers.For each solution, there is a standard order-m inner automorphism of the Lie algebra given by where α is a root, and ω i are the fundamental weights.Kac's theorem states that any order-m inner automorphism is conjugate in Aut(G) to a standard one.Furthermore, two standard automorphisms are conjugate in Aut(G) if and only if the corresponding s i are related by a symmetry of the affine Dynkin diagram.
The invariant (fixed-point) Lie subalgebra also has a simple description in terms of the s i .If there are p nonvanishing s i , then it is isomorphic to a direct sum of a (p − 1)-dimensional center and a semisimple Lie algebra whose Dynkin diagram corresponds to the nodes with vanishing s i .
When the automorphism is lifted to a m symmetry of the CFT, it acts in the same way on the vertex operators Γ α with α in the root lattice.Note however that there may be further obstructions and choices involved in defining how the symmetry acts on the other vertex operators.
The anomaly of the ℤ_m symmetry is determined by the spin of the symmetry line [50], and in our case is given by the formula (24), where A_G is the Cartan matrix of the Lie algebra G, and the last equality holds for simply-laced G, i.e. of type A, D, E. The ℤ_m symmetry is anomaly-free if h = 0 mod 1/m.
[Figure: the affine Dynkin diagrams of E_8 and D_16 = so(32); the inner number is the index i of the node, while the outer number is the mark a_i.]
Classification of ℤ_2 symmetries of the (E_8)_1 chiral CFT
There are two inequivalent solutions for the s_i, which we call a and b. Each automorphism lifts to a unique symmetry of the CFT because (E_8)_1 only contains the identity module. From (24), the spins of the ℤ_2 lines imply that aℤ_2 is anomaly-free and bℤ_2 is anomalous. The invariant subalgebras are D_8 and E_7 × A_1, respectively. One could then fermionize the (E_8)_1 chiral CFT using aℤ_2. After gauging, the theory would have (D_8)_1 affine symmetry.
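As an illustration of how Kac's recipe produces exactly these two automorphisms, here is a small enumeration sketch (not the paper's code). The ordering of the marks along the affine E_8 diagram is an arbitrary bookkeeping choice here, and the identification of which solution yields D_8 and which yields E_7 × A_1 is quoted from the text rather than derived by this snippet.

```python
# Enumerate Kac labels for order-2 inner automorphisms of E8: non-negative,
# relatively prime s_i with sum_i a_i s_i = 2, where a_i are the marks of the
# affine E8 Dynkin diagram (node 0 is the affine node).
from itertools import product
from math import gcd

marks = [1, 2, 3, 4, 5, 6, 4, 2, 3]   # one common ordering of the affine E8 marks

solutions = []
for s in product(range(3), repeat=len(marks)):
    if sum(a * si for a, si in zip(marks, s)) == 2 and gcd(*s) == 1:
        solutions.append(s)

for s in solutions:
    invariant_nodes = [i for i, si in enumerate(s) if si == 0]
    print(s, "-> invariant subalgebra spanned by nodes", invariant_nodes)
# Exactly two solutions are expected; their invariant subalgebras are D8 and
# E7 + A1, matching the automorphisms called a and b in the text.
```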
ℤ_2 symmetries of the (E_8)_1 × (E_8)_1 chiral CFT
We can easily reuse our results from the previous section to find the order-two automorphisms of E 8 × E 8 .Let g i denote the element of Aut(E 8 × E 8 ) given by acting with the automorphism g ∈ Aut(E 8 ) on the ith copy of E 8 , i = 1, 2, and let σ denote the exchange automorphism.
Then the order-two automorphisms in Aut(E 8 × E 8 ) are either of the form g 1 h 2 , satisfying g 2 = h 2 = 1, or of the form σg 1 h 2 , satisfying gh = 1.Noting that the automorphism of the second form is conjugate to σ alone, i.e. σg 1 h 2 = g 2 σg −1 2 , and also that a 2 is conjugate to a 1 by σ, i.e. a 2 = σa 1 σ −1 , there are in total four non-anomalous conjugacy classes in Aut(E 8 ×E 8 ), The first three belong to Aut(E 8 ) × Aut(E 8 ) ⊂ Aut(E 8 × E 8 ), and hence their anomalies follow from those of Aut(E 8 ) acting on a single copy of (E 8 ) 1 .To check whether the 2 generated by σ is anomalous, we compute the torus partition function with a σ line extended along the time cycle, this computation is reviewed later in Sec.5.2.4.This means that the spin of the state in the σ-twisted sector is half-integral, i.e.
Thus σ 2 is anomaly-free.One could then fermionize the , and σ 2 , whose invariant affine symmetries are and (E 8 ) 2 , respectively.We have used the prime to denote the second factor.
ℤ_2 symmetries of the (Spin(32)/ℤ_2)_1 chiral CFT
Here, a new subtlety arises, which is that Aut(D 16 ) consists of both inner and outer automorphisms, but only Inn(D 16 ) corresponds to actual symmetries of the CFT.This is because the (Spin(32)/ 2 ) 1 theory contains the trivial and spinor module, and the spinor representation is not invariant under an outer automorphism.
That is, the only difference is that 〈0, 16〉 has split in two.Not all automorphisms act as a 2 in the spinor module.Note that the lattice of the trivial module is spanned by the simple roots α i while that of the spinor module is spanned by the simple roots shifted by ω 16 .It is sufficient to consider a representative in the spinor module, Γ ω 16 .If the symmetry in the spinor module is of order two, then by fusion Γ 2ω 16 must be 2 -even.Let's compute the 2 charge of Γ 2ω 16 : This means that for 〈n〉 with odd n and 〈n 1 , n 2 〉 with odd n 1 + n 2 , the symmetries acting on the spinor module have order four.We thus only focus on the symmetries 〈n〉 with even n and 〈n 1 , n 2 〉 with even n 1 + n 2 , which are The above discussion only fixes the order-two automorphisms on the vertex operators in the spinor module up to an overall sign, and we are free to choose the sign.The η dependence can be absorbed by shifting x as x → x − ηω 1 since e 2πi(ηω 1 ,α+ω 16 ) = (−1) η .Thus each order-two automorphism 〈n〉 and 〈n 1 , n 2 〉 should be further labeled by η, distinguished by an overall minus sign on the vertex operators in the spinor module.We also need to include 〈0〉 1 , which is a trivial (order-one) automorphism on G but acts as −1 on the spinor module.
Let us further determine the anomalies of the 2 symmetries.Since we modified x, the formula (24) should be modified as Substituting (29) into the above, we have 〈n〉 : This implies that only 〈0〉
Chiral fermionic CFTs with c ≤ 16
Since we know all the chiral bosonic CFTs with c ≤ 16 and have classified all the 2 symmetries in these CFTs, we proceed to fermionize them to get the classification of chiral fermionic CFTs with c ≤ 16.The fermionization operation is given in (2).In this section, we mainly work on torus spacetime where (2) can be explicitly written as The superscript and subscript of Z[B] represent the insertion of the g defect line along the spatial and time directions, respectively.The ± in the last equality represents the freedom of stacking with the 1 + 1d SPT (−1) Ar f (q) with spin structure q, and our convention (2) corresponds to the + sign.For convenience, we introduce the notation F [B, g] to denote the fermionization of a chiral bosonic CFT B with respect to a non-anomalous 2 symmetry generated by g.The number n f of free Majorana-Weyl fermions in the chiral fermionic CFT F is counted by the coefficient of q 1 2 − c 24 of the partition function Having extracted n f , since a free fermion is a consistent CFT on its own, we can decouple n f free fermions from the theory to obtain a seed chiral fermionic CFT with central charge c − n f 2 [51].Here we define a seed CFT to have no free Majorana-Weyl fermions, i.e. n f = 0. Below, we will explicitly check that the free fermions indeed decouple from the seed CFT in all of our examples.
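As a minimal illustration of the counting rule (38) (an assumed sketch, not the paper's computation), one can verify it directly on the free-fermion theories kψ themselves, whose NS partition function is q^{-c/24} ∏_{n≥1}(1 + q^{n-1/2})^k with c = k/2, so the coefficient of q^{1/2 − c/24} is exactly k:

```python
# Check that the coefficient of q^{1/2 - c/24} in the NS partition function of
# k free Majorana-Weyl fermions equals k, illustrating how n_f is extracted.

HALF_STEPS = 8   # keep powers q^{m/2} for m = 0 .. HALF_STEPS-1

def mul(a, b):
    c = [0] * HALF_STEPS
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < HALF_STEPS:
                c[i + j] += ai * bj
    return c

def z_ns_product(k):
    """Product part of the NS partition function of k Majorana-Weyl fermions,
    stored as coefficients of q^{m/2}."""
    series = [1] + [0] * (HALF_STEPS - 1)
    for n in range(1, HALF_STEPS):            # factor (1 + q^{n-1/2})^k
        factor = [0] * HALF_STEPS
        factor[0] = 1
        if 2 * n - 1 < HALF_STEPS:
            factor[2 * n - 1] = 1
        for _ in range(k):
            series = mul(series, factor)
    return series

for k in (1, 16, 24, 32):
    print(k, "->", z_ns_product(k)[1])        # coefficient of q^{1/2}, expect k
```

In particular, n_f = 16 reproduces the number of free fermions found below in the fermionization of (E_8)_1.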
In what follows, we will discuss the fermionization map without stacking an additional Arf invariant, which corresponds to a + sign on the left hand side of the last equality in (37).The effect of the Arf invariant will be discussed along the way.
Fermionization of (E 8 ) 1
From Sec. 4.2, the (E 8 ) 1 theory has only one non-anomalous 2 symmetry, generated by a.The invariant affine Lie subalgebra is (D 8 ) 1 .Hence the twisted partition functions of (E 8 ) 1 are expressible in terms of the characters of (D 8 ) 1 , i.e. χ 1 , χ 16 , χ S and χ C .In App.A, we show that the states even and odd under a 2 sum to the χ S of (D 8 ) 1 , respectively, hence the untwisted partition function of (E 8 ) 1 is simply χ S , and that with a a 2 line along the spatial direction S .It is then straightforward to obtain the twisted partition functions: Z[(E 8 ) 1 ] a by performing a modular S transformation, and Z[(E 8 ) 1 ] a a by an additional modular T transformation.The twisted partition functions are The fully-refined characters of an affine Lie algebra depend not only on the torus modulus τ but also on the flavor fugacities.If we suppress the flavor fugacities, then [45,52] Z C are identical as q-series, but are different when the flavor fugacities are turned on.In particular, χ S and χ C is explained by the fact that they come from different lattice sums.The former comes from summing over the lattice 〈β 1 , ..., β 8 〉 + ν 8 , while the latter comes from summing over the lattice 〈β 1 , ..., β 8 〉 + ν 7 , and these respond differently to fugacities.See App.A for more on explicit formulas.
Using (37), the fermionized partition functions with various spin structures are straightforwardly determined, The right hand sides are indeed the partition functions of 16 free Majorana-Weyl fermions.Indeed, the number n f = 16 of free Majorana-Weyl fermions can be computed using (38) and the explicit q-series (40), as In summary, we have Moreover, since stacking with an Arf invariant exchanges χ S and χ C , 16ψ is invariant under stacking an Arf invariant, provided that we shift the background gauge fields for Spin (16) accordingly by an outer automorphism.
By a_1 ℤ_2
We proceed to fermionize the (E 8 ) 1 × (E 8 ) 1 chiral bosonic CFT by the non-anomalous 2 symmetry associated with the first (E 8 ) 1 component.This directly follows from the results in Sec.5.1.Since the second (E 8 ) 1 component is completely decoupled, the twisted partition functions are simply given by multiplying (39) by χ E 8 1 ′ , and after fermionization, the partition functions are given by multiplying (41) by χ E 8 1 ′ as well.In summary, we have The seed theory (E 8 ) 1 is distinct from (E 8 ) 1 × Arf, as this holds true for any bosonic theory.
By a_1 a_2 ℤ_2
The twisted partition functions of (E 8 ) 1 × (E 8 ) 1 by are simply obtained by squaring the twisted partition functions of a single copy of (E 8 ) 1 by a 2 .Concretely, Under fermionization, This fermionic theory F [(E 8 ) 1 ×(E 8 ) 1 ; a 1 a 2 ] does not contain any free Majorana-Weyl fermions, as can be seen by checking that the q expansion has a vanishing coefficient of q 1 2 − c 24 = q − 1 6 .Thus this is a seed chiral fermionic CFT, which we denote as (D 8 ) 1 × (D 8 ) 1 . 20In summary, we have Unlike in the 16ψ case, this time flipping the sign of the RR partition function cannot be compensated by a shift in the background gauge fields.Thus (D 8 ) 1 × (D 8 ) 1 and (D 8 ) 1 × (D 8 ) 1 ×Arf are distinct theories.
By b_1 b_2 ℤ_2
The twisted partition functions of are simply obtained by squaring the twisted partition functions of a single copy of ( Squaring each twisted partition function yields the partition function twisted by b 1 b 2 , Under fermionization, The twisted partition functions factorize, where the first components are those of four free Majorana-Weyl fermions.Indeed, the q-series of the first components are which coincide with those of four free fermions.In summary, we have It can be explicitly seen that (E 7 ) 1 × (E 7 ) 1 can absorb an Arf, since this operation can be compensated for by exchanging the two (E 7 ) 1 factors.
By σ ℤ_2
The twisted partition functions of n copies of a purely chiral CFT T under the n cyclic permutation symmetry are 21 Since the theory of interest is (E 8 ) 1 × (E 8 ) 1 , this corresponds to n = 2.As the untwisted partition function for a single copy of (E 8 ) 1 is E 4 (τ)/η(τ) 8 , we obtain the twisted partition functions for 16 , 21 See [53] for a discussion of the permutation anomaly using these formulas.
From the coefficient of q , we obtain n f = 1.This means that the fermionization of (E 8 ) 1 ×(E 8 ) 1 by σ 2 contains one free Majorana-Weyl fermion.Moreover, it is obvious that the affine symmetry left unbroken by σ 2 contains (E 8 ) 2 .Finally, note that the standard coset relation [54,Chapter 18] 22 implies that the σ 2 twisted partition functions of (E 8 ) 1 × (E 8 ) 1 can be decomposed into the characters of (E 8 ) 2 , i.e.
and the c = 1/2 Virasoro characters associated to the Ising CFT, i.e. χ Vir 0 , χ Vir 1/2 , χ Vir 1/16 .In terms of these characters, the twisted partition functions (53) are Under fermionization, Note that the partition functions on the right hand side factorize into the Virasoro and E 8 parts.The first factors are precisely the partition functions of a single free Majorana-Weyl fermion, as expected.The characters χ 248 for the (E 8 ) 2 seed chiral fermionic CFT transform in the conjugate modular representation of the Virasoro characters χ Vir 0 , χ Vir 1/2 , χ Vir 1/16 for the free Majorana-Weyl fermion theory.In summary, we have The factors of 2 in the third line of (57) can be thought of as the 'quantum dimension' of the Hilbert space acted on by a single real fermion zero mode, or by any system with halfintegral c in the R-sector.By combining the ψ part and the (E 8 ) 2 part, we have an integer dimension.
Note that, unlike the previous cases as e.g. in Sec.5.1 where the RR partition function can be made non-zero by turning on fugacities, here F [(E 8 ) 1 ×(E 8 ) 1 ; σ] R R identically vanishes.The fact that it vanishes comes from the single fermion zero mode in one Majorana-Weyl fermion.This leaves the RR partition function of (E 8 ) 2 undetermined.However, it can be shown that it also identically vanishes.This is because it would have to transform into itself up to a phase under S and T transformations, but no nonzero combination of the characters achieves this.Hence the (E 8 ) 2 theory can absorb an Arf invariant.acquires a minus sign.By further using the modular S and T transformations, we can get the remaining twisted partition functions.Denoting by g the generator of 〈0〉 1 , we have
Fermionizations of (Spin(32)/ℤ_2)_1
Under fermionization, From (38) we find n f = 32.Indeed, the right hand side is the set of partition functions of 32 free Majorana-Weyl fermions.In summary,
〈4〉 2
The 〈4〉 2 symmetry leaves the D 4 × D 12 Lie subalgebra invariant, so the twisted partition functions should be expressible in terms of the characters of D 4 and D 12 .Denote by g the generator of 〈4〉; the decomposition of Z[(Spin(32)/ 2 ) 1 ] and Z[(Spin(32)/ 2 ) 1 ] g is discussed in App.B. By further using the modular S and T transformations, we can get the remaining twisted partition functions, Then, under fermionization, we have The first factors on the right hand side are (in terms of q-series) ( We denote this theory by (D 12 ) 1 , to distinguish it from the chiral algebra (D 12 ) 1 .This theory is isomorphic to Supermoonshine [55,56].See also a ternary code construction in [15].In summary, we find The theories (D 12 ) 1 and (D 12 ) 1 × Arf are distinct.
〈8〉 2
The discussion here goes in parallel with the immediately preceding case.
Under fermionization, we have In particular, the number of free fermions is zero.By comparing F [(Spin(32 , we find that only their RR components differ by a sign.This means that F [(Spin(32
By
〈0,16〉 0 2 The discussion again goes in parallel to the previous subsections.The invariant Lie subalgebra is U(1) × A 15 .The twisted partition functions are where g is the generator of . The first two are derived in App.B, and the remaining ones are obtained by further performing modular transformations.Under fermionization, Using the explicit q-series expression χ where N = 16 in our case, the U(1) components on the right hand side, in terms of q-series, are χ which are precisely the twisted partition functions of 2 free Majorana-Weyl fermions.
In summary, we have The theory (A 15 ) 1 is invariant under stacking with Arf, as this operation can be compensated for by an outer automorphism of A 15 = SU(16).
Webs of chiral fermionic CFTs with c = 8 and c = 16
In Sec.5.1, we started with a c = 8 chiral bosonic CFT (E 8 ) 1 and fermionized it with a nonanomalous 2 symmetry to get a chiral fermionic CFT 16ψ.Further considering how the c = 8 16ψ theories transform under stacking a fermionic SPT Arf leads to a two node web involving (E 8 ) 1 and 16ψ as shown in the upper panel of Fig. 3.We do not consider stacking an Arf on bosonic theories throughout.
In Sec.5.2 and Sec.5.3, we started with c = 16 chiral bosonic CFTs (E 8 ) 1 × (E 8 ) 1 and (Spin(32)/ 2 ) 1 and fermionized them with various anomaly free 2 symmetries, to get various chiral fermionic CFTs.We can also consider stacking an Arf invariant on all the c = 16 chiral fermionic CFTs, and they form a web as shown in the lower panel of Fig. 3.
All the chiral fermionic CFTs with c ≤ 16 are included in Fig. 3, from which we can extract the building blocks, as summarized in Table 1.In short, in ascending order of the central charge, the building blocks are 23Arf , ψ , We also remind the reader that stacking an Arf on the bosonic blocks (E 8 ) 1 , (D 16 ) 1 as well as on (D 12 ) 1 and (D 8 ) 1 × (D 8 ) 1 gives distinct theories, while for the remaining blocks stacking an Arf can be undone by outer automorphisms.It is then possible to enumerate all the chiral fermionic CFTs.For instance, when 0 < c < 8, because the only building blocks are Arf and ψ, and moreover ψ absorbs Arf, the only possible chiral fermionic CFT is 2cψ.
So far, we classified the c ≤ 16 chiral CFTs and only discussed the webs for c = 8 and c = 16.How about the webs for the other c?Note that when c ̸ = 0 mod 8, there is an anomaly in summing over the spin structure, hence it seems that the only topological manipulation is stacking an Arf.In an upcoming work [34], we will find that a modified bosonization operation can be defined when c = 4 mod 8 and we also explore the webs for c ̸ = 0 mod 8.
Connections to heterotic strings
We end this paper by discussing the implications of our findings on the classification of heterotic string theories in ten dimensions.We start by recalling the history surrounding their discovery.
History: Heterotic string theories were discovered in [10] first in the spacetime supersymmetric cases, in the formulation where the c L = 16 part was realized in terms of chiral bosons on even self-dual lattices.An alternative fermionic formulation for the same set of theories was already mentioned in [10] whose detail was then provided in [58,59].These constructions were soon generalized to spacetime non-supersymmetric cases in a short span of time, in e.g.[18][19][20][21].Table 1 of [21] in particular nicely summarized the known models at that time, to which no other models have been added since then. 24 The constructions in those early years, however, rest on explicit constructions, either using alternative GSO projections in the fermionic descriptions [18,20,21], using chiral bosons on odd self-dual lattices [19,62], or using explicit higher-level current algebra conformal blocks [63].The textbook account in [64] was based on the first approach.
It was therefore not at all clear that all possibilities were exhausted.For example, we can think of starting from 32 Majorana-Weyl fermions, which has Spin(32) symmetry.We can then pick a complicated non-Abelian finite subgroup and perform an orbifold, possibly with nonzero discrete torsion.We can also start from a lattice construction and try to orbifold by a subgroup of the lattice automorphism group.More recently, we learned that there can also be non-invertible symmetries, some of which can be used for orbifolding.Therefore, there was no guarantee that no new models could arise in this manner. 25 Generalities on heterotic strings: Our analysis in the previous sections can be used to conclude that there are no new exotic models remaining to be found.This can be seen in the following manner.
The perturbative heterotic string theory can be formulated in its most general context [68] by starting from a unitary fermionic CFT of central charge (c L , c R ) = (26,15), coupling it to the appropriate ghost and superghost systems, and integrating over the supermoduli space of N =(0, 1) super Riemann surfaces.Ten-dimensional heterotic string theories form a special case where the worldsheet CFT with (c L , c R ) = (26,15) is given by a product of the N =(0, 1) 24 The T-duality between supersymmetric and non-supersymmetric heterotic strings was also uncovered soon afterwards, see e.g.[60,61]. 25In this paper, we concentrate on the worldsheet formulations of heterotic string theory.We can also pose the same question from the spacetime point of view, at least when we assume N =1 supersymmetry.In this case, the Green-Schwarz anomaly cancellation, originally found in the foundational paper [65], restricts the gauge group to satisfy various conditions, such as the requirement that its dimension should be 496, and the requirement that a particular quartic Casimir should be the square of a quadratic Casimir.It was known already at the time the textbook [66] was written that the only possible gauge group G is E 8 × E 8 , SO (32), E 8 × U(1) 248 or U(1) 496 .The details of this rather tedious process can be found e.g. in [67].It was later found in [12] that the structure of the Chern-Simons modification of the B-field dictated by the Green-Schwarz mechanism is compatible with N =1 supersymmetry for E 8 × E 8 or SO (32) but not for E 8 × U(1) 248 or U(1) 496 .Therefore, the spacetime analysis tells us that the only anomaly-free gauge group in a ten-dimensional N =1 supergravity system is either E 8 × E 8 or SO (32).It is however not straightforward to generalize this to non-supersymmetric cases.Integration over the supermoduli space of N =(0, 1) super Riemann surfaces automatically implements a GSO projection, i.e. the summation over the spin structure associated to the supercharge of the N =(0, 1) superconformal symmetry.In the fermionic formulation of heterotic string theories, it often happens that many other GSO projections, i.e. summations over the spin structure associated to fermions not affecting that of the supercharge, are performed.From the perspective of the most general construction of heterotic string theories, such additional GSO projections can be thought to be already performed in defining the fermionic CFT T with (c L , c R ) = (16, 0) on the worldsheet.
Therefore, the classification of ten-dimensional heterotic string theories reduces to the classification of chiral fermionic CFTs with c L = 16.From our results, it is easy to conclude that the list of such CFTs are as given in Table 2.This agrees with the heterotic portion of Table 1 of [21], showing that the list of models found in the olden days is actually complete.The reader may also find it useful to compare Fig. 3 above to [69, Figure 1].
Properties of the worldsheet theory vs. properties of the spacetime theory:
The properties of the ten-dimensional spacetime theories can be easily mapped to the properties of the worldsheet fermionic CFT T of c_L = 16. The result is summarized in Table 2. The salient points are also summarized in Fig. 4.
We start from the NS sector. The eigenvalues of L_0 are integral or half-integral.
• The identity state at L_0 = 0 leads to the graviton, the dilaton and the B-field.
• The states at L_0 = 1/2 give the spacetime tachyons. The OPE of L_0 = 1/2 operators in a unitary theory is very constrained, and they are forced to be free Majorana-Weyl fermions. Therefore the number of tachyons can be easily read off from the number of Majorana-Weyl fermion factors in the theory T.
• Each 10d heterotic string theory is given by a chiral fermionic CFT with c_L = 16, which we give as a combination of a seed part and a number n_f of free Majorana-Weyl fermions.
• The spacetime gauge algebra is the direct sum of the affine part and SO(n_f).
• The number of spacetime tachyons is equal to the number of free Majorana-Weyl fermions, which are in the vector representation of the SO(n_f) just mentioned.
• The theory is spacetime supersymmetric when the worldsheet theory is bosonic, corresponding to shaded rows.
• Ψ+ and Ψ− are the gauge representations of the spacetime massless spin-1/2 fermions with positive and negative chirality. We typically use the dimension to specify an irreducible representation, except for ∧²16 for the second-rank antisymmetric tensor of A_15 = SU(16), and for S and C for D_n = SO(2n). The primes are for the second factor in the affine part. When the SO(n_f) symmetry carried by the tachyons is non-Abelian, the representations are given in the form (affine part) ⊗ (SO(n_f) part), where the SO(n_f) part is always a spinor of SO(n_f).
• The states at L_0 = 1 give the massless vector bosons. Indeed, on the worldsheet, the OPE of the operators with L_0 = 1 is constrained to satisfy the Jacobi identity, giving the spacetime gauge group.
We next discuss the R-sector. The eigenvalues of L_0 are integral.
• A state at L_0 = 0 gives a spacetime gravitino and an accompanying dilatino, making the theory spacetime supersymmetric. Such a dimension-0 state in the R-sector can be used to establish a one-to-one map between the NS-sector states and the R-sector states by multiplication, meaning that the worldsheet theory T is actually a bosonic theory (possibly multiplied by the Arf theory). Conversely, any such theory has a single state at L_0 = 0 in the R-sector. Therefore, spacetime supersymmetry follows if and only if the worldsheet theory T is actually bosonic. See Sec. 2.2 for some details about dimension-0 operators in fermionic CFTs.
• States at L_0 = 1 give massless spin-1/2 fermions. The GSO projection associated with the sum over the spin structure of the worldsheet supercharge correlates the spacetime chirality with the (−1)^F eigenvalues of the internal theory. The gauge representation under which they transform is given in Table 2.
Since we know that the ℤ₂-even sublattice of the E_8 lattice should be generated by an E_7 lattice and an A_1 lattice that are mutually orthogonal, it is then natural to take the E_7 lattice to be generated by 〈α_2, ..., α_8〉, and the A_1 lattice to be generated by 〈ω_1〉. Indeed, the Cartan matrix for 〈α_2, ..., α_8〉 is that of E_7. Summing over 〈α_2, ..., α_8〉 gives χ^{E_7}_1, while summing over 〈ω_1〉 gives χ ... Hence we should also decompose α_1 into the sum of mutually orthogonal weights of A_1 and E_7. Since A_1 is of rank 1, its fundamental weight must be proportional to its root, hence the only possibility is ... Since the weight of E_7 must be orthogonal to that of A_1, it must be an expansion of the roots 〈α_2, ..., α_8〉. Since the weights are defined up to roots, we find ... 56. In summary, we find ... (A.14)
Figure 1: Affine Dynkin diagram of E_8. The inner number is the index i of the node, while the outer number is the mark a_i.
... = 0, (65), which are indeed the partition functions of 8 free Majorana-Weyl fermions. On the other hand, the second factors on the right-hand side of (64) are χ ... = 24.
Figure 3: Interrelations among chiral CFTs with c_L = 8 and c_L = 16. The theories are distinguished by the maximal amount of Majorana-Weyl fermions contained and the maximal affine symmetry contained in the remainder. The blue arrows indicate fermionizations specified by certain ℤ₂ symmetries, while the red arrows specify the stacking of the Arf theory. Bosonic theories are boxed, and their stacking with Arf is omitted despite being a different theory.
Figure 4: Properties of the spacetime theories vs. properties of the worldsheet theory T. Bosonic worldsheet theories lead to spacetime supersymmetry; worldsheet theories without Majorana-Weyl fermions lead to tachyon-free spacetime theories; and worldsheet theories with Majorana-Weyl fermions lead to spacetime theories with tachyons.
When the gauge algebra has several (Abelian and simple) components, dim/rank is a weighted average of dim/rank of the individual components. So, in order to have dim/rank ≥ 248/8 = 496/16 = 31, we need to use at least one component whose dim/rank ≥ 31. As U(1) does not satisfy this condition, we need to use at least one simple factor, and the only simple factors for which dim/rank ≥ 31 and rank ≤ 16 are E_8, B_15, C_15, D_16 (for which dim/rank = 31) and B_16, C_16 (for which dim/rank = 33). Therefore, the only possibility at rank 8 is E_8, and the only possibilities at rank 16 are E_8 × E_8 and D_16, as B_15 × U(1) and C_15 × U(1) do not have dim = 496.
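The counting in the preceding paragraph is easy to verify mechanically. The sketch below is our own (Python, using the standard dimension formulas for the simple Lie algebras, not anything from the paper); it enumerates the simple algebras of rank at most 16 with dim/rank ≥ 31 and should reproduce exactly the list quoted above.

```python
# Sketch (ours): enumerate simple Lie algebras of rank <= 16 with dim/rank >= 31,
# reproducing the counting that singles out E8, E8 x E8 and D16.
def dim_classical(series, n):
    # dimensions of the classical series A_n, B_n, C_n, D_n
    return {"A": n * (n + 2), "B": n * (2 * n + 1),
            "C": n * (2 * n + 1), "D": n * (2 * n - 1)}[series]

exceptional = {"G2": (2, 14), "F4": (4, 52), "E6": (6, 78), "E7": (7, 133), "E8": (8, 248)}

candidates = []
for series in "ABCD":
    for n in range(1, 17):
        d = dim_classical(series, n)
        if d / n >= 31:
            candidates.append((f"{series}{n}", n, d))
for name, (rank, d) in exceptional.items():
    if d / rank >= 31:
        candidates.append((name, rank, d))

print(candidates)
# Only E8, B15, C15, D16 (dim/rank = 31) and B16, C16 (dim/rank = 33) survive;
# among these, only E8 (rank 8), E8 x E8 and D16 (rank 16) give dim = 248 or 496.
```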
5.3.1 (Spin(32)/ℤ₂)₁ by 〈0〉 ... This symmetry leaves the entire D_16 Lie algebra invariant, so the twisted partition functions should be expressed in terms of characters of D_16, i.e. χ ... The invariant Lie subalgebra is D_8 × D_8. With g denoting the generator of ...
Table 2: List of ten-dimensional heterotic string theories. | 13,692 | 2023-03-29T00:00:00.000 | [ "Physics" ] |
Transverse modulational instabilities of counterpropagating solitons in photorefractive crystals
We study numerically the counterpropagating vector solitons in SBN:60 photorefractive crystals. A simple theory is provided for explaining the symmetry-breaking transverse instability of these solitons. A phase diagram is produced that depicts the transition from stable counterpropagating solitons to bidirectional waveguides to unstable optical structures. Numerical simulations are performed that predict novel dynamical beam structures, such as the standing-wave and rotating multipole vector solitonic clusters. For larger coupling strengths and/or thicker crystals the beams form unstable self-trapped optical structures that have no counterparts in the copropagating geometry. © 2004 Optical Society of America

OCIS codes: (190.5330) Photorefractive nonlinear optics; (190.5530) Pulse propagation and solitons

References and links
1. For an overview see the Special Issue on solitons, Ed. M. Segev, Opt. Phot. News 13, No. 2 (2002).
2. Y. S. Kivshar and D. E. Pelinovsky, "Self-focusing and transverse instabilities of solitary waves," Phys. Rep. 331, 117-195 (2000).
3. O. Cohen et al., "Spatial vector solitons consisting of counterpropagating fields," Opt. Lett. 27, 2013 (2002).
4. M. Belić et al., "Self-trapped bidirectional waveguides in a saturable photorefractive medium," Phys. Rev. A 68, 025601 (2003).
5. S. Trillo and W. Torruellas, eds., Spatial Solitons (Springer, New York, 2001).
6. M. Haelterman, A. P. Sheppard, and A. W. Snyder, "Bimodal counterpropagating spatial solitary-waves," Opt. Commun. 103, 145 (1993).
7. O. Cohen et al., "Collisions between optical spatial solitons propagating in opposite directions," Phys. Rev. Lett. 89, 133901 (2002); "Holographic solitons," Opt. Lett. 27, 2031 (2002).
8. L. Solymar, D. J. Webb, and A. Grunett-Jepsen, The Physics and Applications of Photorefractive Materials (Clarendon Press, Oxford, 1996).
9. M. Belić et al., "Anisotropic nonlocal modelling of counterpropagating photorefractive solitons," Opt. Lett. 29, xxxx (2004).
10. M. Belić, A. Stepken, and F. Kaiser, "Spatial screening solitons as particles," Phys. Rev. Lett. 84, 83 (2000).
11. P. R. Holland, The Quantum Theory of Motion (University Press, Cambridge, 1995).
12. O. Sandfuchs, F. Kaiser, and M. R. Belić, "Self-organization and Fourier selection of optical patterns in a photorefractive feedback system," Phys. Rev. A 64, 063809 (2001).
13. M. Belić, J. Leonardy, D. Timotijević, and F. Kaiser, "Spatio-temporal effects in double phase conjugation," J. Opt. Soc. Am. B 12, 1602 (1995).
14. J. J. Garcia-Ripoll et al., "Dipole-mode vector solitons," Phys. Rev. Lett. 85, 83 (2000); W. Krolikowski et al., "Observation of dipole-mode vector solitons," Phys. Rev. Lett. 85, 1424 (2000).
15. K. Motzek et al., "Dynamic counterpropagating vector solitons in self-focusing media," Phys. Rev. E 68, 06xxxx (2003).
16. M. R. Belić et al., "Isotropic vs. anisotropic modeling of photorefractive solitons," Phys. Rev. A 65, 066609 (2002).
17. G. F. Calvo et al., "Two-dimensional soliton-induced space charge field in photorefractive crystals," Opt. Commun. 227, 193 (2003).
18. T. Carmon et al., "Rotating propeller solitons," Phys. Rev. Lett. 87, 143901 (2001).
Introduction
Whenever a new class of solitary waves [1] is discovered, the question of their stability is raised. It pertains to the structural stability of a homogeneous wave with regard to symmetry-breaking transverse and modulational instabilities [2] during its evolution. Loosely speaking, transverse refers to instabilities induced in the plane transverse to the propagation direction, while modulational denotes spatial changes to the wave as it evolves. The recently introduced counterpropagating (CP) vector solitons in photorefractive (PR) media [3,4] present no exception.
The formation and interactions of spatial screening solitons [5] have been studied mostly in the copropagation geometry, with few exceptions [3,4,6,7]. Steady-state CP solitons were considered theoretically in one transverse dimension (1D), in Kerr and local PR media. In addition, Ref. [4] introduces a time-dependent model for the generation of CP solitons, and Ref. [7] contains an experimental observation of stable CP solitons in an SBN:60 crystal, in the form of narrow stripes.
Here we present a theoretical and numerical study of 2D CP vector solitons in a similar crystal, display the symmetry-breaking instabilities of such solitons when the propagation distance and/or the coupling strength are varied, and demonstrate the rich dynamics of the different self-trapped beam structures that can form in the crystal. We formulate a simple theory that explains the split-up transverse instability of CP solitons as a first-order phase transition and display the corresponding threshold curve in the parameter plane. We present evidence of a second phase transition when, due to modulational instabilities, the steady-state asymmetric waveguides lose stability to time-dependent periodic and aperiodic optical structures. We perform numerical simulations of CP beams in PR media with time-dependent nonlinearity, and predict novel stable dynamical beam structures that possess no counterparts in the copropagating geometry.
The study of CP beams is performed in a geometry that resembles an actual experimental setup. A Nd:YAG laser beam at 532 nm is split and focused onto the opposite faces of an SBN:60 crystal. The beam components are assumed to be collinear and partially coherent in the medium, which in experiments can be achieved by reflecting one component off a vibrating mirror or by propagating it through a diffuser. The c axis of the crystal is placed perpendicular to the propagation direction. To exploit the dominant electro-optic component r_33 ≈ 200 pm/V of a typical SBN sample, the incident laser beams are linearly polarized parallel to the c axis. A DC electric field of the order of 1 kV/cm is applied across the crystal, along the c axis, and the crystal is illuminated by uniform white light to create artificial dark conductivity. Such a geometry is appropriate for the formation of screening spatial solitons.
The slowly varying beam components F and B counter-propagate in the crystal in the z direction, perpendicular to the c axis, which is also the x axis of the coordinate system. The space charge field generated in the crystal couples to the electro-optic tensor, giving rise to a change in the index of refraction of the form Δn = −n₀³ r_eff E/2, where n₀ is the unperturbed index, r_eff is the effective component of the electro-optic tensor, and E is the x component of the space charge field. The optical field is given as the sum of the amplitudes [F exp(ikz) + B exp(−ikz) + c.c.]/2, k being the wave vector in the medium, so that the total light intensity (uniform plus beams) is modulated in the z direction; here I₀ = |F|² + |B|² is the sum of the individual beam intensities, ε is the degree of coherence of the beams, m = 2FB*/(1 + I₀) is the modulation depth, and φ_Δ is the phase difference between the forward and backward beams. The intensity is measured in units of the background light intensity. It modulates the space charge field as well, which acquires the form of a homogeneous part plus a grating term, where E₀ is the homogeneous part of the space charge field, not to be confused with the external electric field, and E₁ is the first Fourier component, proportional to ε. It is E₀ that screens the external field, and E₁ is responsible for the formation of gratings (with the wavenumber 2k) in the index of refraction along the z direction.
The propagation of the beams in the crystal is governed by the paraxial wave equations, where Δ is the transverse Laplacian, Γ = (k n₀ x₀)² r_eff E_e is the coupling strength, E_e being the external electric field, and E₀ and E₁ are the dimensionless x components of the space charge field. The equations are put in dimensionless form using the rescaling (x, y) → (x/x₀, y/x₀), z → z/L_D, (F, B) → (F, B) exp(−iΓz). Here x₀ is the typical beam waist and L_D = 2kx₀² is the diffraction length. The propagation equations can be put in a universal dimensionless form that contains no parameters or coupling constants. All the parameters are then hidden in the scaling quantities and the initial and boundary conditions. We prefer the form given here, with one explicit intensive control parameter Γ. The corresponding extensive control parameter is the crystal length L.
In a local, isotropic approximation to the space charge field one assumes relaxation-type dynamics of its components [4,8], where τ = τ₀/(1 + I₀) is the relaxation time of the crystal, which also depends on the total intensity. The nonlocal, anisotropic theory of CP solitons [9] suggests that for the geometries of interest here the coherent anisotropic description offers results similar to the incoherent isotropic description. The gratings induced in the z direction affect the propagation of the beams little, in contrast to the standard two-wave mixing in PR media, where the grating wavevector is aligned with the c axis. Therefore, we will consider only incoherent beams, ε = 0, so that E₁ is absent. The propagation equations are then coupled only through the dependence of E₀ on I₀.
Theory of beam displacement
To explain the behavior of the beams observed in numerical simulations, we adopt the particle point of view on solitons, whereby they are considered as bundles of focused rays boring an optical path through the crystal [10]. The trajectory of a soliton is represented by the expectation value of its transverse coordinates:

⟨ρ⟩ = (1/I_t) ∫∫ ρ |A|² dx dy ,   (5)

where A is the soliton envelope and I_t is the total transverse intensity, I_t = ∫∫ |A|² dx dy, which plays the role of an effective mass. The motion of the soliton particle is governed by the Hamilton equations for the center of mass ⟨x⟩ and p_x, and a similar pair of equations for ⟨y⟩ and p_y, where the conjugate momenta p_x and p_y are represented by the optical direction cosines, and the potential V = ΓE₀ is the (negative) change in the refractive index. This amounts to viewing an optical soliton as a particle of mass proportional to the intensity, moving in a potential created by the change in the refractive index, which is caused by the soliton itself. Such a point of view is akin to ordinary mechanics, except that the role of time is played by z, and the ''dynamics'' is in 2D.
The particle picture is obtained from ray optics. Quantization of the ray optics leads to the paraxial wave optics. In fact, the construction of rays from the wave optics is equivalent to the construction of classical mechanics from quantum mechanics. The paraxial wave equation then corresponds to the time-dependent Schrödinger equation, where p = −i∇ is the transverse gradient and ρ = (x, y) is the transverse position vector. The expectation values are determined using A for the wave function. Hence, one can view an optical soliton as a ''quantum mechanical'' object in the transverse plane, whose wave function is given by the slowly varying envelope of the beam, the potential by the induced change in the refractive index, and the momentum by the transverse gradient. The transition from the wave picture to the particle picture, i.e. the geometrical optics approach to spatial solitons, is well defined, as the size of the beams is much larger than the wavelength, diffraction is absent, and incoherent beams are considered. The transverse displacement of the trajectory along the z axis obeys the Ehrenfest theorem [11]. The idea is then to express a transverse displacement of the center of mass d = (d_x, d_y) in terms of the expectation values of the gradient of the space charge field. The expectation values are evaluated for the shifted and unshifted states A_d = A(ρ + d, z) and A_0 = A(ρ, z), and then subtracted. In evaluating the expectation values one utilizes the relation between the shifted and unshifted states, where ê is the unit vector in the transverse plane. The equation for d(z) is a harmonic oscillator equation, with the solution d(z) = a sin(K^{1/2} z) + b cos(K^{1/2} z), where the constants a and b are fixed by the boundary conditions. In the case of a head-on collision of CP beams, the lowest order steady-state mode for the F beam has d(0) = 0 at the entrance face of the crystal and d(L) ≠ 0 at the exit face. This is similar to a vibrating elastic string with one end fixed and the other free. The lowest order mode for the B beam is the mirror image. For such a state there exists an obvious threshold condition (L K^{1/2})_c = π/2. To see the form of this threshold condition in the (L, Γ) plane one must include Γ. K seems to be linear in Γ. However, the integral in Eq. (11) carries another Γ. It comes from the scaling used to write the dimensionless propagation equations. Had we used a scaling in which no Γ appears, the threshold condition would appear the same, (L' K'^{1/2})_c = π/2, but the quantities in the primed coordinates would be connected to the unprimed ones through L' = ΓL and K = Γ² K'. Since K' does not contain Γ, it can be transferred to the other side of the threshold equation, and the threshold line in the (L, Γ) plane acquires the form (Γ L)_c = const. Thus, the theory predicts a simple "pV = const." equation of state, with unshifted solitons existing below the threshold curve, and shifted solitons, or bidirectional waveguides, existing above the curve.
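To make the "particle" quantities of Eq. (5) concrete, the following sketch (our own, with an arbitrary test envelope and grid) evaluates the centre of mass ⟨x⟩ and the direction cosine p_x for a sampled envelope A(x, y); the test beam and its displacement are illustrative choices only.

```python
import numpy as np

# Sketch (ours): centre of mass <x> and direction cosine p_x of a beam envelope A(x, y).
x = np.linspace(-20.0, 20.0, 256)
y = np.linspace(-20.0, 20.0, 256)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")

# test beam: Gaussian displaced to x = 1.5 with a transverse tilt exp(i*0.2*x)
A = np.exp(-((X - 1.5) ** 2 + Y ** 2) / 4.0) * np.exp(1j * 0.2 * X)

It = np.sum(np.abs(A) ** 2) * dx * dx                    # total intensity = effective mass
x_cm = np.sum(X * np.abs(A) ** 2) * dx * dx / It         # <x>, as in Eq. (5)
p_x = np.sum(np.imag(np.conj(A) * np.gradient(A, dx, axis=0))) * dx * dx / It

print(x_cm, p_x)   # approximately 1.5 and 0.2 for this test envelope
```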
A note of caution is required here. The theory developed thus far applies to steady-state situations. No mention is made of the ''true'' dynamics. Time enters into the picture through the explicit dependence of K on t. A variety of dynamical behaviors becomes allowed. Analytical analysis becoming prohibitive [12], we resort to numerical analysis, in both 1D and 2D.
Numerical simulations
The spatial propagation equations and the temporal equations for the space charge field are solved concurrently. The numerical procedure consists in solving Eqs. (4) for the components of the space charge field in time, with the light fields obtained at every step as guided modes of the induced common waveguide. This is achieved by an internal spatial relaxation loop, nested within the temporal loop, that utilizes a beam-propagation method for the right- and left-propagating components. The spatial loop is iterated until convergence, and the temporal loop is then advanced by one time step. Convergence in the temporal loop signifies that steady states are found; however, it need not be reached. In that case time-dependent, dynamical states are observed. The procedure is described in Refs. [12,13].
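A minimal sketch of the kind of spatial sweep referred to above is given below. It is not the authors' code: it propagates only the forward beam F with a 1D split-step Fourier method, holds B fixed, and uses the relaxed local screening response E₀ = −I₀/(1 + I₀) as a stand-in for the temporal Eq. (4); in the full scheme this sweep alternates with a backward sweep for B inside the temporal loop until convergence.

```python
import numpy as np

# Sketch (ours, not the authors' code): one 1D split-step Fourier sweep for the
# forward beam F, with B frozen and the relaxed screening response used for E0.
def forward_sweep(F, B, Gamma, L, nz=200, x_span=40.0):
    nx = F.size
    dz = L / nz
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=x_span / nx)
    diffraction = np.exp(-1j * kx**2 * dz)        # paraxial diffraction factor (dimensionless units)
    for _ in range(nz):
        I0 = np.abs(F) ** 2 + np.abs(B) ** 2
        E0 = -I0 / (1.0 + I0)                     # screening of the external field
        F = np.fft.ifft(diffraction * np.fft.fft(F))
        F = F * np.exp(-1j * Gamma * E0 * dz)     # nonlinear phase from the index change
    return F

nx, x_span = 256, 40.0
x = (np.arange(nx) - nx // 2) * (x_span / nx)
F0 = np.exp(-x**2 / 2.0).astype(complex)          # input Gaussian, waist in units of x0
B0 = np.exp(-x**2 / 2.0).astype(complex)          # counterpropagating beam, frozen here
F_out = forward_sweep(F0, B0, Gamma=5.0, L=2.0)
```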
To capture the transition from a CP soliton to a waveguide clearly, we consider the head-on collision of two identical Gaussian beams. In the absence of the other, each beam focuses into a soliton. We are interested in what happens when they are both present, and when the coupling constant Γ and the crystal length L are both varied. We note no qualitative difference between the 1D and 2D behavior, and present the 1D case. The situation is depicted in Fig. 1.
It is seen that in the plane (L, Γ) of control parameters there exists a critical curve below which stable CP solitons exist. At the critical curve a new type of solution appears, in which the two components do not overlap anymore, but split and cross each other. A few examples are depicted in the insets of Fig. 1. As the beams split, a portion of each beam remains guided by the other; therefore we term these solutions bidirectional waveguides. Both the solitons and the waveguides are steady-state solutions. According to the theory, one can easily define higher order steady-state modes. However, they are not easily detectable, since the system may become dynamically unstable before reaching them. As one moves away from the critical curve, into the region of higher couplings and longer crystals, a new critical curve is approached, where the steady-state waveguides lose stability. The second critical curve is also drawn in Fig. 1, and is similar in shape to the first one. The shape of these curves suggests an inverse power law dependence, in accordance with the theory presented here. In fact, the analytical expression for the fitting curves drawn in Fig. 1 contains a constant term close to Γ_th = 2, and terms linear and quadratic in 1/L. This suggests van der Waals-type corrections to the ''equation of state'' ΓL = const., and implies that the simple linear theory presented here is insufficient to account for all the varied behavior of the full nonlinear system. At and beyond the second critical curve dynamical solutions emerge. The time dependence varies from periodic to aperiodic. A richer dynamical behavior is observed in 2D than in 1D, since there one has a larger phase space at one's disposal, and can launch beams carrying angular momentum and/or topological defects in their structure. Some 2D examples are presented below.
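The inverse-power fit just described can be reproduced schematically as follows. This is our own sketch: the threshold points below are made-up placeholders, not the numerically determined points of Fig. 1, and the fit form is the one stated above, Γ_c(L) = g0 + g1/L + g2/L².

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch (ours, placeholder data): inverse-power polynomial fit of threshold points.
def threshold(L, g0, g1, g2):
    return g0 + g1 / L + g2 / L**2

L_pts = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
G_pts = np.array([9.0, 6.2, 4.9, 3.7, 3.1])       # placeholder threshold couplings
params, _ = curve_fit(threshold, L_pts, G_pts)
print(params)   # g0 plays the role of the asymptotic threshold Gamma_th
```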
Common to all simulations are the following data: diffraction length 5.55 mm, transverse scaling length 10 µm, laser wavelength 532 nm, electro-optic coefficient 180 pm/V, and bulk refractive index n₀ = 2.35. The value of the external electric field (of the order of 1 kV/cm) is used to fine-tune the coupling strength. The propagation lengths are given in units of L_D, and the time in units of τ₀. All initial fields are head-on. All the simulations are movie files (MOV), depicting the temporal evolution of the optical field in either the transverse (x, y) or the longitudinal (y, z) plane. In 2D, below the first critical curve, stable CP solitons are observed. In Fig. 2 we present a stable displaced bidirectional waveguide just above the threshold curve, for Γ = 9.2, L = 1.8, obtained by colliding two Gaussian beams of 15 µm FWHM. As seen, the Gaussians settle quickly into a quasi-stable CP soliton, remaining steady until t ≈ 160τ, at which time they split within a few cycles to new transverse positions. A part of each beam remains at the old position, being guided by the other beam. This is in accordance with the theory, which predicts that the center of mass of each beam will become transversally displaced at the exit face of the crystal as soon as the threshold is reached. For this rotationally symmetric geometry the direction in which the beams split is random. For rotationally non-symmetric beam configurations there exists a preferential direction, namely the direction in which K grows the fastest. According to the threshold condition, the value of the critical crystal length is then the shortest, and, as the threshold is reached, the instability will preferentially grow in that direction.
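For reference, the scaling quantities quoted at the start of the preceding paragraph can be checked with a line or two of arithmetic. This is our own check; in particular, the use of the in-medium wave vector k = 2πn₀/λ in the expression for Γ is our reading of the definition given earlier.

```python
import numpy as np

# Quick arithmetic check (ours) of the quoted scaling quantities.
lam = 532e-9       # laser wavelength [m]
n0 = 2.35          # bulk refractive index
x0 = 10e-6         # transverse scaling length [m]
r_eff = 180e-12    # electro-optic coefficient [m/V]

k = 2.0 * np.pi * n0 / lam
L_D = 2.0 * k * x0**2
print(L_D * 1e3)   # -> about 5.55 (mm), matching the quoted diffraction length

E_e = 1e5          # external field of 1 kV/cm, in V/m
Gamma = (k * n0 * x0) ** 2 * r_eff * E_e
print(Gamma)       # of the order of the coupling strengths used in the figures
```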
The enhanced stability of dipole beams, as compared to other beam structures in PR crystals, has been noted [14]. This applies to CP dipoles as well. In an earlier publication [15] we reported the symmetry-breaking displacement of a CP dipole-mode vector soliton (a dipole beam plus the fundamental), similar to the displacement of simple CP solitons. Here we display the robust nature of two CP dipole beams, placed either parallel-parallel or parallel-perpendicular to the external field (Fig. 3). The initial fields are two incoherent dipole pairs, made of antiphased Gaussian beams. Within a couple of cycles in each geometry the dipole beams reach steady state, in the form of two bean-like deformed dipoles. They remain very stable, in comparison with the other beam compositions considered here. Interest in optical beams carrying angular momentum and topological defects in their structure has risen recently, owing to their useful features for pattern formation. In contrast to dipole-mode vector solitons, beam compositions containing optical vortices as components are noted for instability [16,17]. In the copropagating geometry a vector beam made of a fundamental Gaussian and a vortex beam disintegrates into a deformed two-peak central beam and a dipole beam. The fragmentation of the vortex component can proceed into more than two filaments, depending on the charge and the size of the vortex. An interesting theoretical difference is noted in the copropagation geometry: in the anisotropic modeling of PR nonlinearity, which is closer to the experimental situation, the breakup proceeds much faster than in the isotropic modeling. While the anisotropic vortex breaks within one L_D, the isotropic vortex can propagate for tens of L_D's before disintegrating.
As mentioned, in the CP case there is not much difference between the incoherent isotropic modeling and the coherent anisotropic modeling. Hence we observe that vortices generally disintegrate within one L_D. In addition, owing to the time-dependent nature of the isotropic model employed, we observe interesting dynamical effects. In the next few figures we report different cases of two colliding vortices: a standing-wave structure when the vortices carry the same charge, a stable rotating structure when the charges are opposite, and various unstable outcomes of vortex-vortex collisions. Figure 4 depicts standing-wave beams that result from the breakup of two identical CP vortices of topological charge +1. From (a) to (c) only one parameter is changed: the crystal thickness is increased from 0.8 to 1.1 to 1.5. After some time the vortices break in each case, and it is seen that the stable output changes from a tripole to a quadrupole. In Fig. 4(b) the beam first breaks into a meta-stable quadrupole, which eventually reverts to the stable tripole configuration. Such multipole clusters - dipoles, tripoles, quadrupoles, etc. - constitute the basic stable breakup modes of colliding vortices.
Figure 5 presents a stable rotating beam structure resulting from the collision of two CP vortices with opposite charges. For the chosen set of parameters the beams continue to rotate indefinitely. They can be considered true rotating propeller solitons [18], as they rotate in time and can have more than two blades. The input vortices are the same as in Fig. 4(a), only the sign of the charge of vortex B is reversed. The beams co-rotate, and such a state in time represents a limit cycle. It cannot be accessed by the usual steady-state theory of vector solitons [14,18]. The initial beam structure of Fig. 5 is used to present spatio-temporally unstable states in Fig. 6(a), only the crystal thickness is increased to 1.5. By increasing L, or Γ, unstable situations arise. After a long quasi-periodic regime the system jumps to a disordered state. In our limited time of observation no repetition of the mode structure is observed. Instabilities can also be reached by changing the initial composition of the beams. The initial conditions in Fig. 6(b) are the same as in Fig. 6(a), only the initial vortices are made wider. The system reverts to a periodic motion, however not a simple rotation of a stable structure. Finally, another unstable state is displayed in Fig. 6(c); it is still with wide initial vortices, but with a smaller coupling strength and crystal thickness. When these beams are made narrower, as in Fig. 5(a), the system similarly rotates (not shown). In the wider case the beams perform a motion similar to Fig. 6(a). However, it should be noted that the beams in that figure have acquired a higher-order transverse mode structure before disintegrating, namely a ring plus a central peak. In Fig. 6(c) they attain only a simple ring structure. Nonetheless, the dynamics on the ring looks similar, and the final state is a disordered spatiotemporal mode structure. In summary, we have considered various counterpropagating self-trapped beam structures in an SBN:60 photorefractive crystal. A time-dependent model for the beam propagation and interaction is treated numerically, and various self-focused solutions are presented, including pure CP vector solitons. A symmetry-breaking transverse instability of these solitons is noted, and a simple theory is provided for explaining such a transition. A corresponding phase diagram is produced that depicts the transition from stable CP solitons to bidirectional waveguides and unstable beam structures. Simulations are performed displaying novel dynamical beam structures, such as the standing-wave and rotating vector solitonic clusters. For larger coupling strengths and thicker crystals the beams form unstable optical structures that have no analogues in the copropagating geometry.
Fig. 1. Phase diagram in the parameter plane for the symmetry-breaking instabilities of bidirectional solitons in 1D. Below the lower curve CP solitons exist; above the curve stable bidirectional waveguides appear. The insets depict typical beam intensity distributions in the (x, z) plane at the points indicated. At and above the upper critical curve the waveguides lose stability. The points are numerically determined; the curves represent inverse power polynomial fits. L is measured in units of L_D, while Γ is dimensionless.
Fig. 5. Movies of the stable rotating forward beam at the output face, resulting from the collision of two oppositely charged vortices. The backward beam executes a mirror-image rotation. (a) The (x,y) plane (1.322 MB). (b) The (y,z) plane (1.841 MB). Parameters as in Fig. 4(a). | 5,573.4 | 2004-02-23T00:00:00.000 | [ "Physics" ] |
Study on Electrohydrodynamic Rayleigh-Taylor Instability with Heat and Mass Transfer
The linear analysis of the Rayleigh-Taylor instability of the interface between two viscous and dielectric fluids in the presence of a tangential electric field has been carried out when there is heat and mass transfer across the interface. In our earlier work, the viscous potential flow analysis of Rayleigh-Taylor instability in the presence of a tangential electric field was studied. Here, we use another irrotational theory in which the discontinuities in the irrotational tangential velocity and shear stress are eliminated in the global energy balance. The stability criterion is given by the critical value of the applied electric field as well as the critical wave number. Various graphs have been drawn to show the effect of physical parameters such as the electric field, the heat transfer coefficient, and the vapour fraction on the stability of the system. It has been observed that heat transfer and the electric field both have a stabilizing effect on the stability of the system.
Introduction
The potential flow of an incompressible fluid is a solution of the Navier-Stokes equation in which the velocity u can be expressed as the gradient of a potential function which satisfies Laplace's equation. The viscous potential flow (VPF) theory is also based on the assumption that the velocity is given by the gradient of the potential function, but the viscosity is nonvanishing. In this theory, the irrotational shearing stresses are assumed to be zero and viscosity enters through the normal stress balance. The instability of the plane interface separating two fluids having different densities, when the lighter fluid is accelerated toward the heavier fluid, is called the Rayleigh-Taylor instability. In 1999, Joseph et al. [1] studied the viscous potential flow analysis of Rayleigh-Taylor instability and observed that the wavelength of the most unstable wave increases strongly with viscosity. In 2002, Joseph et al. [2] extended their study of Rayleigh-Taylor instability to viscoelastic fluids at high Weber number (the ratio of the inertial force to the surface tension force) and concluded that the most unstable wave is a sensitive function of the retardation time, which fits the experimental data when the ratio of the retardation time to the relaxation time is of order 10⁻³.
In recent years, a great deal of interest has been focused on the study of heat and mass transfer on the stability of fluid flows, because the heat and mass transfer phenomenon is encountered in a wide variety of engineering applications such as boiling heat transfer and geophysical problems. A linear stability analysis of the physical system consisting of a vapor layer underlying a liquid layer of an inviscid fluid was carried out by Hsieh [3,4]. He used potential flow theory to solve the governing equations and observed that the heat and mass transfer phenomenon enhances the stability of the system if the vapor layer is hotter than the liquid layer. Ho [5] studied the problem of Rayleigh-Taylor instability taking heat and mass transfer into the analysis, but his study was restricted to fluids of the same kinematic viscosity. Adham-Khodaparast et al. [6] restudied the linear stability analysis of a liquid-vapor interface, but they considered the liquid as viscous and motionless and the vapor as inviscid, moving with a horizontal velocity. Awasthi and Agrawal [7] extended the work of Hsieh [3] considering both fluids as viscous. The Kelvin-Helmholtz instability occurs when there is a relative motion between fluid layers of different physical parameters. The study of heat and mass transfer on the Kelvin-Helmholtz instability of miscible fluids using viscous potential flow theory was made by Asthana and Agrawal [8]. Awasthi and Agrawal [9] studied the capillary instability when the fluids are miscible and viscous.
The presence of an electric field may change the fluid behaviour and its flow. The study of effects resulting from electric fields on fluid flows is called electrohydrodynamics (EHD). The impact of an electric field on the stability of two-fluid systems is one of the important problems in electrohydrodynamics. The discontinuity of the electric properties of the fluids across the interface affects the force balance at the fluid-fluid interface, which may either stabilize or destabilize the interface in question. The electrohydrodynamic Rayleigh-Taylor instability of two inviscid fluids in the presence of a tangential electric field was considered by Eldabe [10]. He found that the tangential electric field has a stabilizing effect. Mohamed et al. [11] studied the nonlinear electrohydrodynamic Rayleigh-Taylor instability of inviscid fluids with heat and mass transfer in the presence of a tangential electric field and observed that heat and mass transfer has a stabilizing effect in the nonlinear analysis. The effect of a tangential electric field on the Rayleigh-Taylor instability when there is heat and mass transfer across the interface was studied by Awasthi and Agrawal [12].
In the VPF theory, we assume that the tangential part of the viscous stresses is zero in the case of free-surface problems, but this is not possible in practical situations. To incorporate this discontinuity, Wang et al. [13] included an extra pressure term, known as the viscous pressure, in the normal stress balance. Using the global energy balance, they found that this viscous pressure term includes the effect of the tangential stresses. This theory is called the viscous corrections for the viscous potential flow (VCVPF) theory. VCVPF analysis provides a new direction for dealing with stability problems and it has been getting the attention of many researchers in recent times. Awasthi [14] applied VCVPF theory to the Rayleigh-Taylor instability of two viscous fluids when there is heat and mass transfer across the interface and observed that the irrotational shearing stresses stabilize the interface.
In view of the above investigations, and keeping in mind the importance of electrohydrodynamics in a number of applications such as heat exchanger manufacturing [15], power generation, and other industrial processes, a study of the linear electrohydrodynamic Rayleigh-Taylor instability of the plane interface when there is heat and mass transfer across the interface is attempted. We use potential flow theory, and the fluids are considered to be incompressible, viscous, and dielectric, with different kinematic viscosities and permittivities, respectively, which have not been considered earlier. The effect of free surface charges at the interface is neglected. A dispersion relation that accounts for the growth of disturbance waves is derived, and stability is discussed theoretically as well as numerically. A critical value of the electric field as well as the critical wave number is obtained. The effect of the permittivity ratio of the two fluids on the stability of the system is also studied and shown graphically. Various neutral curves are drawn to show the effect of physical parameters such as the electric field and the heat transfer coefficient on the stability of the system.
Problem Formulation
A system consisting of two incompressible, viscous, and dielectric fluid layers of finite thickness, separated by a plane interface, is considered, as demonstrated in Figure 1.
The lower fluid (1) occupies the lower region of thickness h₁, with density ρ⁽¹⁾, viscosity μ⁽¹⁾, and dielectric constant ε⁽¹⁾, and is bounded by the rigid plane surface at −h₁, while the upper fluid (2) occupies the upper region of thickness h₂, with density ρ⁽²⁾, viscosity μ⁽²⁾, and dielectric constant ε⁽²⁾, and is bounded by the rigid plane surface at h₂. The temperatures at the lower wall, the interface, and the upper wall are taken as T₁, T₀, and T₂, respectively. We assume that in the basic state the interface temperature T₀ is equal to the saturation temperature, because the fluids are in thermodynamic equilibrium. The external force at the interface is taken as the gravitational force, acting downward. In the present analysis, the flows are taken as irrotational and the fluids as incompressible.
To study the stability of the system, small disturbances are imposed on the equilibrium state. The equation of the disturbed interface can then be written in terms of the varicose interface displacement, and the outward unit normal vector can be defined accordingly, with unit vectors along the horizontal and vertical directions. Our analysis is based on potential flow theory; therefore, the velocity can be expressed as the gradient of the potential function. For incompressible fluids the density is constant, and combining the continuity equation with this representation shows that the velocity potentials satisfy Laplace's equation. In the present analysis, it is assumed that the two fluids are subjected to an external electric field acting along the horizontal axis. We assume that the quasistatic approximation is valid; hence, the electric field can be written in terms of an electric scalar potential function. Using Gauss's law, the electric potentials also satisfy Laplace's equation. The normal component of the velocity at the rigid surfaces −h₁ and h₂ should be zero, and the normal component of the electric potential also vanishes at the rigid surfaces. The tangential component of the electric field must be continuous across the interface, where [·] represents the jump in a quantity across the interface, defined as the value in fluid (2) minus the value in fluid (1). There is a discontinuity in the normal current across the interface; charge accumulation within a material element is balanced by conduction from the bulk fluid on either side of the surface, which gives the boundary condition for the normal component of the electric field at the interface. The interfacial condition expressing the conservation of mass across the interface is imposed as well. In the present analysis, we have assumed that the amount of latent heat released depends mainly on the instantaneous position of the interface. Therefore, the interfacial condition for energy transfer involves the latent heat released during the phase transformation and the net heat flux from the interface. In terms of the heat conductivities of the two fluids, the heat fluxes in the positive vertical direction in the fluid phases 1 and 2 are proportional to −(T₁ − T₀)/h₁ and (T₀ − T₂)/h₂, respectively, and from these an expression for the net heat flux can be written. Expanding the net heat flux in the neighbourhood of the undisturbed interface, and using the fact that it vanishes in the equilibrium condition, gives its linearized form. Since the fluids are miscible and there is heat and mass transfer across the interface, the interfacial condition for the conservation of momentum takes the usual form involving the pressure, the surface tension coefficient, and the normal vector n at the interface. Surface tension has been assumed to be constant, neglecting its dependence on temperature.
Viscous Corrections for Viscous Potential Flow (VCVPF) Analysis
The viscous correction for the viscous potential flow analysis is another irrotational theory in which the shear stresses do not vanish. However, the shear stress in the energy balance can be calculated in the mean by the selection of an irrotational pressure which depends on viscosity.
Here, we have ignored the small deformation in the linear analysis. Suppose that n₁ denotes the unit outward normal at the interface for the lower fluid; n₂ = −n₁ is the unit outward normal for the upper fluid, and t is the unit tangent vector. We will use superscripts to distinguish the "irrotational" and "viscous" parts, and subscripts "1" and "2" for the lower and upper fluids, respectively. The normal and shear parts of the viscous stress will be represented accordingly.
The mechanical energy equations for the upper and lower fluids can be written down, where D₁ and D₂ denote the symmetric part of the rate-of-strain tensor for the lower and upper fluids, respectively. As the normal velocities are continuous at the interface, the sum of (19) and (20) can be combined into a single energy balance. On introducing the two viscous pressure correction terms V₁ and V₂ for the lower and upper sides of the flow region, we can resolve the discontinuity of the shear stress and tangential velocity at the interface. We assume that the boundary layer approximation has a negligible effect on the flow in the bulk liquid, but that it changes the pressure and continuity conditions at the interface. Hence, (22) is modified accordingly. Now, we can obtain an equation which relates the pressure corrections to the uncompensated irrotational shear stresses by comparing (22) and (24). It has been shown by Wang et al. [13] that in linearized problems the governing equation for the pressure corrections takes a simple form; using the normal mode method, its solution can be written down, and at the interface the difference in the viscous pressures follows. The equation of conservation of momentum (18), on including the viscous pressure, can then be rewritten. Here, the irrotational pressure of each fluid (1, 2) is obtained from Bernoulli's equation.
The normal mode technique has been used to find the solution of the governing equations. We have considered the interface elevation in the form of a single Fourier mode, in which the amplitude of the surface wave, the real wave number, and the growth rate appear, and c.c. refers to the complex conjugate of the preceding term. Now, using normal mode analysis and the boundary conditions (30)-(33), the solutions of (5) and (8) can be written down. The contribution of the irrotational shearing stresses is then obtained by solving (25) along with (28), which gives (37).
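Since the explicit normal-mode solutions are not reproduced above, the following generic check (our own, with our variable names and a cosh-type profile; it is not the paper's equations) verifies that a potential of the assumed form satisfies Laplace's equation and the rigid-wall condition at the lower boundary.

```python
import sympy as sp

# Generic check (ours): a normal-mode velocity potential proportional to cosh(k*(z + h1))
# satisfies Laplace's equation and gives no flow through the rigid wall at z = -h1.
x, z, t = sp.symbols("x z t", real=True)
k, h1 = sp.symbols("k h1", positive=True)
A1, omega = sp.symbols("A1 omega")

phi1 = A1 * sp.cosh(k * (z + h1)) * sp.exp(sp.I * (k * x - omega * t))

print(sp.simplify(sp.diff(phi1, x, 2) + sp.diff(phi1, z, 2)))   # 0  (Laplace's equation)
print(sp.simplify(sp.diff(phi1, z).subs(z, -h1)))               # 0  (rigid wall at z = -h1)
```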
Equation (42) can also be written in the form (43). From this expression, it can be concluded that the system is stable when the electric field does not exceed its critical value, and unstable otherwise. The condition for neutral stability can be written as (44). If the fluids are considered to be inviscid, that is, if both viscosities vanish, heat and mass transfer has no effect on the stability criterion. Also, if there is no heat and mass transfer across the interface, the inviscid potential flow (IPF), VPF, and VCVPF solutions predict the same critical wave number.
Here, the vapour fraction, the kinematic viscosity ratio, and the alternative heat transfer coefficient Λ appear as the dimensionless parameters.
The dimensionless form of (40), and the non-dimensional form of (44), can be written accordingly.
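Before turning to the results, recall that the dispersion relation obtained above is quadratic in the growth rate and that the stability condition follows from the Routh-Hurwitz criterion (as also stated in the Conclusion). The sketch below is our own and uses placeholder coefficients, not the actual coefficients of (40) or (42); it only illustrates how the criterion and the marginal state fix stability.

```python
import numpy as np

# Sketch (ours, placeholder coefficients): dispersion relation a0*s**2 + a1*s + a2 = 0.
# For real coefficients with a0 > 0, all roots satisfy Re(s) <= 0 iff a1 >= 0 and a2 >= 0;
# the marginal state a2 = 0 is what yields the critical wave number / critical field.
def is_stable(a0, a1, a2):
    return a0 > 0 and a1 >= 0 and a2 >= 0

def growth_rates(a0, a1, a2):
    return np.roots([a0, a1, a2])

print(is_stable(1.0, 0.4, 0.03), growth_rates(1.0, 0.4, 0.03))    # stable example
print(is_stable(1.0, 0.4, -0.02), growth_rates(1.0, 0.4, -0.02))  # unstable example
```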
Results and Discussions
In this section, we have carried out numerical computations using the expressions presented in the previous sections for a film boiling condition. We have taken vapour and water as the working fluids, identified with phase 1 and phase 2, respectively, such that T₁ > T₀ > T₂. We treat the steam as incompressible since the Mach number is expected to be small. The water-vapour interface is in the saturation condition in the film boiling situation, and the temperature T₀ is equal to the saturation temperature. We have considered representative parametric values of the fluid properties for the analysis. Since the transfer of mass across the interface represents a transformation of the fluid from one phase to another, there is regularly a latent heat associated with the phase change. It is basically through this interfacial coupling between the mass transfer and the release of latent heat that the motion of the fluids is influenced by the thermal effects. Therefore, when there is mass transfer across the interface, the transformation of heat in the fluid has to be taken into account. Neutral curves for the wave number divide the plane into a stable region above the curve and an unstable region below the curve, while neutral curves for the electric field divide the plane into a stable region below the curve and an unstable region above the curve.
The effect of the alternative heat-transfer capillary dimensionless group Λ on the neutral curves for the critical wave number is shown in Figure 2 when the dimensionless electric field intensity equals 1. Here, we have found that if we keep Λ constant and increase the other controlling parameter, the critical wave number is reduced for a fixed value of the vapor fraction; hence, the VCVPF theory predicts longer stable waves. As the alternative heat-transfer capillary dimensionless group Λ increases, the stable region also increases. The coefficient Λ is directly proportional to the heat flux and, therefore, the heat flux has a stabilizing effect on the system. This is the same result as the one obtained by Awasthi [14] for the Rayleigh-Taylor instability with heat and mass transfer in the absence of an electric field. Therefore, we state that the behaviour of the heat flux is not affected by the presence of an electric field. We can explain the effect of heat and mass transfer on the stability of the system through local evaporation and condensation at the interface. Crests are warmer at the perturbed interface because they are closer to the hotter boundary on the vapour side; thus, local evaporation takes place, whereas troughs are cooler and thus condensation takes place. The liquid protrudes into a hotter region, and the evaporation diminishes the growth of the disturbance waves.
The effect of the electric field intensity on the neutral curves for the critical wave number is illustrated in Figure 3. We observe that, for fixed values of the vapor fraction and Λ, the critical wave number decreases as the electric field intensity increases. Therefore, it is concluded that the electric field has a stabilizing effect. If the electric field is present in the analysis, the term contributed by the applied electric field is added on the left-hand side of (47), so that the critical value of the wave number decreases and the system becomes more stable. The concept of polarization can explain the physical mechanism of this phenomenon. The polarization forces due to the differences in permittivities and perturbed velocities have the effect of pushing the disturbance waves and, therefore, the electric field stabilizes the interface. It is also observed from Figure 3 that as the vapour thickness increases, the stable region decreases, and so the vapour thickness plays a destabilizing role. On increasing the vapor fraction, more evaporation takes place at the crests. This additional evaporation increases the amplitude of the disturbance waves and the system becomes destabilized. In Figure 4, the effects of the irrotational viscous pressure on the Rayleigh-Taylor instability with heat and mass transfer are studied. Here, a comparison is performed between the neutral curves of the wave number obtained from the present (VCVPF) analysis and those obtained from the VPF solution when the dimensionless electric field equals 1. We observe that as the value of the heat transfer coefficient increases, the stable region increases in the VCVPF solution in comparison with the VPF solution; this indicates that the effect of the irrotational viscous pressure stabilizes the system in the presence of heat and mass transfer. Figure 5 shows the comparison between the neutral curves of the wave number obtained by the VPF analysis and those obtained by the VCVPF (present) analysis for different electric fields. As the field intensity increases, the critical wave number decreases for both the VPF and VCVPF analyses; however, in the case of the VCVPF solution it decreases faster. Hence, at higher values of the electric field, the VCVPF solution is more stable than the VPF solution.
Conclusion
The effect of a tangential electric field on the Rayleigh-Taylor instability is studied when there is heat and mass transfer across the interface. The viscous correction for viscous potential flow theory is used for the investigation. The dispersion relation is obtained, which is a quadratic equation in the growth rate. The stability condition is obtained by applying the Routh-Hurwitz criterion. A critical value of the electric field as well as a critical wave number is obtained. The system is unstable when the electric field is greater than the critical value of the electric field; otherwise, it is stable. It is observed that the heat and mass transfer has a stabilizing effect on the stability of the system and this effect is enhanced in the presence of an electric field. The heat and mass transfer completely stabilizes the interface against capillary effects even in the presence of an electric field. It is also observed that the tangential electric field increases the stability of the system. The VCVPF solution is more stable than the VPF solution at high electric field intensity as well as high heat transfer. | 4,771.2 | 2014-01-06T00:00:00.000 | [ "Engineering", "Physics" ] |
6d, N=(1,0) Coulomb Branch Anomaly Matching
6d QFTs are constrained by the analog of 't Hooft anomaly matching: all anomalies for global symmetries and metric backgrounds are constants of RG flows, and for all vacua in moduli spaces. We discuss an anomaly matching mechanism for 6d N=(1,0) theories on their Coulomb branch. It is a global symmetry analog of Green-Schwarz-West-Sagnotti anomaly cancellation, and requires the apparent anomaly mismatch to be a perfect square, $\Delta I_8={1\over 2}X_4^2$. Then $\Delta I_8$ is cancelled by making $X_4$ an electric / magnetic source for the tensor multiplet, so background gauge field instantons yield charged strings. This requires the coefficients in $X_4$ to be integrally quantized. We illustrate this for N=(2,0) theories. We also consider the N=(1,0) SCFTs from N small $E_8$ instantons, verifying that the recent result for its anomaly polynomial fits with the anomaly matching mechanism.
6d QFTs have chiral matter, so anomalies provide a useful handle. Gauge anomaly cancellation highly constrains the matter content [2,9,12,16-19]. The analog of 't Hooft anomalies, for global symmetries, usefully constrains the low-energy theory: these anomalies must be constant along RG flows, and on the vacuum manifold, even if the symmetry is spontaneously broken. In the broken case, as in 4d [20], anomaly matching can require certain WZW-type low-energy interactions, to cancel apparent anomaly mismatches. This was discussed for 6d theories in [21], and applied to the case of N=(2,0) theories on the Coulomb branch. We here apply analogous considerations to N=(1,0) theories. Consider a 6d, N=(1,0) theory with a Coulomb branch moduli space of vacua, with φ denoting the real scalar(s) of the tensor multiplets. Let S_origin denote the low-energy theory at φ = 0. Moving to φ ≠ 0, the theory reduces at low energy to its Coulomb-branch description (1.1) [3,4,5].²

² The notation is because it reduces, on an S¹, to a 5d N=1, U(1) vector multiplet.
… remaining interactions in the low-energy theory. We here discuss an anomaly matching mechanism, which cancels ΔI_8 provided that it is a perfect square,

ΔI_8 = ½ X_4² .   (1.2)

More generally, with multiple tensors, we need

ΔI_8 = ½ Ω_IJ X_4^I ∧ X_4^J ,   (1.3)

where the I index runs over the tensor multiplets, and Ω_IJ is a positive definite, symmetric metric on the space of tensor multiplets, which is implicit in the ∧· product in (1.3).
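The perfect-square requirement (1.2) is easy to test symbolically once ΔI_8 is expanded in characteristic classes. The sketch below is our own and uses placeholder numerical coefficients (not those of any particular theory); it checks whether a given quadratic form factors as ½ X_4² and reads off the would-be coefficients of X_4.

```python
import sympy as sp

# Sketch (ours, placeholder coefficients): test whether a given Delta I8 is (1/2) X4^2.
p1, c2R, c2F = sp.symbols("p1 c2R c2F")        # stand-ins for p1(T), c2(R), c2(F)
a, b, c = sp.symbols("a b c", real=True)

delta_I8 = sp.expand(sp.Rational(1, 32) * p1**2 + 2 * c2R**2 + sp.Rational(1, 2) * c2F**2
                     + sp.Rational(1, 2) * p1 * c2R - sp.Rational(1, 4) * p1 * c2F
                     - 2 * c2R * c2F)

X4 = a * p1 / 4 + b * c2R + c * c2F            # ansatz; a, b, c should come out quantized
coeff_eqs = sp.Poly(sp.expand(delta_I8 - X4**2 / 2), p1, c2R, c2F).coeffs()
print(sp.solve(coeff_eqs, [a, b, c], dict=True))
# Solutions a=1, b=2, c=-1 (up to an overall sign flip); an empty list would mean
# Delta I8 is not of the form (1/2) X4^2, and the mechanism could not apply.
```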
The mechanism is analogous to that of [22,23] for canceling anomalies of local symmetries. A reducible gauge anomaly I_8 can be cancelled via an additional tensor multiplet contribution ΔI_8 of the form (1.3).³ This is achieved by making X_4^I into electric / magnetic sources for the tensor multiplet field strengths H^I. Our sign conventions⁴ are such that Ω_IJ is positive definite. The full theory is then gauge anomaly free if I_8 + ΔI_8 = 0.
We apply a similar mechanism to global symmetries; rather than canceling an unwanted I_8 of opposite sign, here the tensor multiplet's ΔI_8 provides the 't Hooft anomaly matching deficit. This is achieved by making X_4 (the · is shorthand for multiple tensors, i.e. the I index in (1.3)) act as an electric / magnetic source for the tensor multiplets, so that H = dB_2 + X_3^{(0)} (1.6). Since X_3^{(0)} in (1.6) is not invariant under global symmetry background gauge transformations, B_2 must also correspondingly transform, such that H is invariant, δH = 0:

δB_2 = −X_2^{(1)} .   (1.7)

³ In [22,23], the H^I also include the tensor from the gravity multiplet, which has opposite chirality from those of the matter multiplets, and correspondingly enters into Ω_IJ with the opposite signature [23]. Here we decouple gravity, so Ω_IJ has a definite signature. We take it to be positive.

⁴ We take matter fermions to contribute positively to I_8, while gauginos contribute negatively.
Then the positive ∆I 8 (1.3) from tensor multiplets can e.g. cancel a negative I 8 gauge anomaly.
Then variation of (1.4) will compensate for the apparent discrepancy from (1.2).
Because B_2 has quantized charges, the coefficients in X_4 must be correspondingly appropriately quantized. The general X_4 can be expanded in characteristic classes as

X_4 = n_grav p_1(T)/4 + n_{SU(2)_R} c_2(R) + Σ_i n_i c_2(F_i) ,   (1.8)

where p_1(T) ≡ ½ tr(R/2π)² is the Pontryagin class for the rigid, background spacetime curvature, and c_2(R) and c_2(F_i) are the Chern classes of the SU(2)_R and F_i flavor symmetry background field strengths. The Chern classes c_2(R) and c_2(F_i) will here always be normalized to integrate to one for the minimal associated instanton configuration in the background gauge fields; as we will discuss, the corresponding statement for p_1(T)/4 is less clear. Such background gauge field instanton configurations are codimension-4 strings⁵, with H charge given by n_{SU(2)_R} or n_i (the i index runs over all global symmetries). These charges must reside in an integral lattice, so there is a quantization condition: n_{SU(2)_R}, n_i ∈ ℤ. We expect that n_grav in (1.8) is also quantized, but are uncertain about the normalization.
Note also that the susy completion of (1.4) will give terms L_eff ∼ −φ F_{μν} F^{μν}, as in [2], now coupling the real scalar φ of the tensor multiplets to the background field strengths.
The outline is as follows. In section 2, we elaborate on the above anomaly matching mechanism. In section 3, we discuss the N = (2, 0) theories from an N = (1, 0) perspective.
In section 4, we review the 6d N = (1, 0) theories associated with small E_8 instantons, and their recently obtained anomaly polynomial [27]. In section 5, we apply the anomaly matching mechanism to the small E_8 instanton theory on its Coulomb branch.
Note added: Just prior to posting this paper, the outstanding paper [28] appeared. It uses essentially the same kind of anomaly matching mechanism as discussed here, to derive new results for anomaly polynomials for many classes of N = (1, 0) theories.
6d 't Hooft anomalies, and a new mechanism for their matching
By the descent procedure [29-32], the anomalous variation of the effective action of a 6d theory is given in terms of the anomaly 8-form I_8:

δS_eff = 2π ∫_{M_6} X_6^{(1)} , with I_8 = dX_7^{(0)} and δX_7^{(0)} = dX_6^{(1)} ,   (2.1)

where δ denotes the variation, M_6 is the 6d spacetime, the subscript on X_6 is the form degree, and the superscript is the order in the gauge or global symmetry variation parameter. Now suppose that the theory has a moduli space of vacua, and the theory at the origin has anomaly polynomial I_8^origin, while the theory away from the origin has a naively different anomaly polynomial I_8^{away,naive}. The naive difference leads to an apparent mismatch

∆I_8 ≡ I_8^origin − I_8^{away,naive} .   (2.2)

The variation of the low-energy effective action must make up for this difference:

δS_eff^{away} = 2π ∫_{M_6} ∆X_6^{(1)} ,   (2.3)

with ∆X_6^{(1)} obtained from ∆I_8 by descent. As an example, consider N = (2, 0) theories on their Coulomb branch, where the needed WZW-type interaction involves Ω_3, with dΩ_3 = φ*(ω_4) the pullback of the volume form on the S^4 Nambu-Goldstone manifold, and ∂M_7 = M_6.
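To make the descent notation concrete, here is a minimal worked example, not taken from the source and with an arbitrary overall normalization, for a single abelian background field strength F = dA with I_8 ∝ c_1(F)^4, written in LaTeX:

% Minimal descent chain for a purely abelian illustration (normalization arbitrary).
\[
  I_8 = \Big(\frac{F}{2\pi}\Big)^{4} = d X_7^{(0)},
  \qquad
  X_7^{(0)} = \frac{A}{2\pi}\wedge\Big(\frac{F}{2\pi}\Big)^{3},
\]
\[
  \delta A = d\lambda
  \;\Longrightarrow\;
  \delta X_7^{(0)} = \frac{d\lambda}{2\pi}\wedge\Big(\frac{F}{2\pi}\Big)^{3}
  = d X_6^{(1)},
  \qquad
  X_6^{(1)} = \frac{\lambda}{2\pi}\Big(\frac{F}{2\pi}\Big)^{3},
\]
\[
  \delta S_{\rm eff} = 2\pi \int_{M_6} X_6^{(1)} .
\]

The non-abelian and gravitational cases work the same way, with the appropriate Chern-Simons forms replacing A ∧ F^3.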
It was conjectured in [21] that c(G) = |G|h G , which fits with the G = SU (N ) cases [34], and also SO(2N ) [35], as derived via M-theory M5 branes and bulk anomaly inflow.
The interaction (2.5) remains even when the global symmetry background is turned off, F_{Sp(2)} → 0. This is related to the fact that the 't Hooft anomaly difference, ∆I_8 ∝ p_2(F_{Sp(2)}), is irreducible (i.e. it includes tr F^4_{Sp(2)}, not just (tr F^2_{Sp(2)})^2). This is similar to the 4d Wess-Zumino-Witten interaction [20] for matching the irreducible 't Hooft anomaly differences of non-Abelian SU(N ≥ 3) global symmetries. Reducible 't Hooft anomaly differences, on the other hand, lead to WZW-type interactions that become trivial when the background symmetry gauge fields are set to zero. That will be the case for the reducible differences (1.2) to be discussed here.
For 't Hooft anomaly discrepancies of the form (1.2) on the Coulomb branch (1.1), the needed compensating variation (2.3) is supplied by (2.6), where we define X_3^{(0)} and X_2^{(1)} via the usual descent notation, as in (2.1):

X_4 = dX_3^{(0)} , δX_3^{(0)} = dX_2^{(1)} .   (2.7)

The variation (2.6) arises from the term (1.4) in the low-energy effective action. Note that a self-dual string's charge Q is quantized, which expresses the compactness of the gauge invariance of B; the factor of 1/2 that appears is from the 6d string's Dirac quantization, eg = (1/2) 2π n. More generally, the lattice of allowed dyonic string charges must be self-dual [37]. The general 4-form X_4 in (1.2) can be expanded as in (1.8), in terms of properly normalized characteristic classes, i.e. c_2(F_G) = λ(G)^{-1} (1/2) tr(F_G/2π)^2, where λ(G) can be computed as in e.g. [38,39]. Concerning the gravitational term: for compact Σ_4 without boundary, ∫_{Σ_4} p_1 ∈ 24Z if Σ_4 is spin (this follows from the spin-1/2 index theorem, since Â = 1 + p_1/24 + ...); for compact Σ_4 that is not necessarily spin, ∫_{Σ_4} p_1 ∈ 3Z. But here we are interested in non-compact Σ_4, or Σ_4 with boundary, where the index theorems include boundary contributions, η, and the quantization conditions are weaker, see e.g. [40]. The Q contribution from n_grav could likewise have boundary contributions. We will not consider the n_grav quantization issue further here. We will see that the E_8 instanton example gives n_grav = 1 with the normalization in (1.8). This differs from an earlier approach, where it was noted that (2.5) can be obtained by taking dΩ_3 to source H_3 with coefficient α_m and ⋆H_3 with coefficient α_e, see also [41]; that seemed to require ∆c/12 = α_e α_m with α_e ≠ α_m, apparently in conflict with the self-duality of H_3, and with unclear quantization of α_{e,m}.
Review: the small E 8 instanton theory, E 8 [N ], and its anomaly polynomial
We will illustrate the anomaly matching mechanism for the case where S_origin is the small E_8 instanton theory E_8[N]. The Coulomb branch corresponds to moving the M5 branes to φ ∼ x^{11} ≠ 0 (the Higgs branch corresponds to dissolving the M5s into E_8 instantons, necessarily at x^{11} = 0). The added free hypermultiplet corresponds to the center-of-mass location of the M5 branes in the x^{6,7,8,9} directions. By considering anomaly inflow, as in [34] but including the effect of the M9 brane, the anomaly polynomial of this theory was obtained in [27]. Here "+f.h." denotes the contribution of the free hypermultiplet. The notation in (4.1) is much as in [27]; our normalization is such that ∫_{Σ_4} c_2(F) = 1 for the minimal instanton configuration.
In this notation, the anomaly polynomial of the N = (2, 0) theory of N M5 branes, keeping only SO(4) ⊂ SO(5), can also be written down, and the difference in anomalies between the LHS and RHS of (5.1) is indeed a perfect square, as required. Moreover, writing this X_4 as in (1.2), the coefficients are indeed integrally quantized (the factors of 1/2 in (5.2) all cancel or combine to 1). The N = 1 case of (5.4) and (5.5) coincides with the N = 1 case of (5.1) and (5.2). More generally, all N tensor multiplets on the RHS of (5.4) participate in the anomaly matching mechanism, hence the overall factor of N in (5.5), with an associated lattice of integral charges.
"Physics"
] |
STATE OF THE ART OF THE LANDSCAPE ARCHITECTURE SPATIAL DATA MODEL FROM A GEOSPATIAL PERSPECTIVE
Spatial data and information have been used for some time in planning and landscape design. For a long time, architects used spatial data in the form of topographic maps for their designs. This method is less efficient, and also less accurate, than performing spatial analysis with GIS. Architects also sometimes accentuate only the aesthetic aspect of their design without taking landscape processes into account, which can make the design unsuitable for its use and purpose. Nowadays, the role of GIS in landscape architecture has been formalized by the emergence of the Geodesign terminology, which starts with the Representation Model and ends with the Decision Model. The development of GIS can be seen in several fields of science that now have an urgent need for 3-dimensional GIS, such as 3D urban planning, flood modeling, or landscape planning. In these fields, 3-dimensional GIS is able to support the steps of modeling, analysis, management, and integration of related data that describe human activities and geophysical phenomena in a more realistic way. Also, by applying 3D GIS and geodesign in landscape design, geomorphological information can be better presented and assessed. Some research mentions that the development of 3D GIS is not yet established, either in its 3D data structure or in its spatial analysis functions. This literature study addresses those problems by providing information on the existing development of 3D GIS for landscape architecture, data modeling, data accuracy, and the representation of data needed for landscape architecture purposes, specifically in the river area.
BACKGROUND
From several definitions of landscape architecture, it can be summarized that landscape architecture is the design, planning, management and arrangement of land which integrates science and art for the benefit of humans, taking into account the mutual interaction between the environment and humans and among humans.
Architects tend to use GIS (Geographic Information System) only for the base map, land use, or visualization, but the function of GIS goes beyond that. GIS provides a dynamic way to represent patterns that are otherwise invisible, as well as contextual relationships throughout the studied object. GIS is able to support the planning cycle in landscape architecture: data capturing for inventory, analysis on a scientific basis, defining objectives, and developing alternative scenarios of future impacts and plans can all be done using GIS (Pietsch, 2012). One of the stages in the design of landscape architecture is spatial analysis. Spatial analysis based on a database can help architects to perform the analysis quickly and scientifically, which means helping architects gain knowledge and understanding of current conditions to be used in objective design, and providing a basis for the stages of planning and design (Xu, 2011).
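As a hedged illustration of the kind of database-driven spatial analysis described above (not taken from the cited works; the file names, the 'class' attribute, the EPSG code and the 100 m buffer distance are all assumptions), a GIS overlay of land-use polygons against a buffered river centreline might look like this in Python:

# Hypothetical example: intersect land-use polygons with a 100 m river buffer.
# Requires the geopandas/shapely stack; input file names are placeholders.
import geopandas as gpd

landuse = gpd.read_file("landuse.shp")          # polygons with a 'class' attribute (assumed)
rivers = gpd.read_file("river_centreline.shp")  # river centreline(s)

# Work in a projected CRS so the buffer distance is in metres (EPSG code is an assumption).
landuse = landuse.to_crs(epsg=32748)
rivers = rivers.to_crs(epsg=32748)

riparian = rivers.buffer(100)                   # 100 m riparian corridor
corridor = gpd.GeoDataFrame(geometry=riparian, crs=landuse.crs)

# Overlay: which land-use classes fall inside the corridor, and how much area each covers?
inside = gpd.overlay(landuse, corridor, how="intersection")
inside["area_ha"] = inside.geometry.area / 1e4
print(inside.groupby("class")["area_ha"].sum())

Such a scripted overlay is repeatable for each alternative design scenario, which is the point made in the cited planning literature.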
2-dimensional (2D) GIS is not able to describe the earth in accordance with, or close to, reality, because the earth is a three-dimensional field. Some fields of science already require 3D GIS, such as 3D urban planning, flood modeling, and landscape planning (Stoter and Zlatanova, 2003), to support the steps of modeling, analysis, management, and integration of related data, which describe human activity and geophysical phenomena more realistically (Breunig and Zlatanova, 2011). A 3D model can improve understanding of the real world since it is easier for everyone to understand, can support better communication of the data since 3D makes it easier to articulate ideas, and can solve 3D problems since some spatial problems can only be solved in 3D. 3D GIS is a 3-dimensional Geographic Information System, which not only describes the real world in a 3-dimensional visualization, but also covers data modelling, geo-objects, structuring, manipulation, and spatial analysis in the 3-dimensional field (Stoter and Zlatanova, 2003).
In this paper, the requirements for building a 3D model in landscape architecture are targeted at the river area. A river has definite environmental, social, cultural and economic values as well. Rivers have many functions, such as providing connections between landscapes and communities, and they also gather people around the same idea of a creative and sustainable environment. Floodplains are susceptible to the dangers of flooding in relation to human and natural activities (Cengiz, 2013). In order to achieve a sustainable landscape in the river area, it is important to take human behavior, flood risk, morphology, and ecological structure into account.
To help understand the modeling that will be used in the landscape design and planning, the advantages of geospatial data for landscape architecture will be discussed.
This paper is written as a preliminary study for research on a Three-Dimensional Geographic Information System for Landscape Architecture in the river area. That research implements a fractal method in the landscape design. The research questions of that study are: what data are needed, what are the data detail requirements for 3D landscape design, which data acquisition methods can be used to achieve a certain data resolution, and how those requirements can be implemented in landscape architecture.
Landscape architecture
Landscape planners often use scenarios as a basis for simulating and assessing possible future landscape configurations (alternative futures). A GeoDesign approach to landscape planning could help planners to develop, alter and evaluate alternative futures more rapidly (Albert and Vargas-Moreno, 2012).
The common stages in landscape planning, as illustrated in Figure 1, are (Widodo et al., 2012):
1. Preparation: formulation of the problem and the research purposes as a first step, initial information collection, and administrative preparation.
2. Data collection: including spatial data and social, economic, and cultural data affecting the surrounding researched environment. This includes field surveying or literature review.
3. Analysis and synthesis: analyzing the collected data.
4. Landscape planning: this stage begins with the preparation of the concept of landscape planning, which is then presented in the form of spatial planning, circulation, activities and facilities. The concept is later developed into the landscape plan in written or drawn form.
The main stages in landscape planning and design, along with the application of GIS, can be seen in Figure 2. GIS can be applied in spatial analysis, visual expression, and the management of spatial data. Spatial analysis based on a database can help architects to perform a quick and scientific analysis, which means helping architects gain knowledge and understanding of current conditions to be used in the design objective, and providing the basis for the planning and design stages (Xu, 2011).
Figure 1 General methodology in landscape planning, modified from (Widodo et al., 2012)
Figure 2 Main steps of landscape planning and design and their application in GIS (Xu, 2011)
Geodesign framework
Geodesign was introduced in 2010. Michael Flaxman and Stephen Irvin describe geodesign as a method which tightly couples the creation of proposals for change with impact simulations informed by geographic contexts and systems thinking, and normally supported by digital technology (Steinitz, 2013). Geodesign is a set of techniques and technologies forming an integrated process for planning a built or natural environment. It is a systematic process of measuring, modeling, interpreting, designing, evaluating, and making decisions. Geodesign includes project conceptualization, analysis, design specification, stakeholder participation and collaboration, design creation, simulation, and evaluation, among other stages (Wheeler, 2010). Geodesign is a new way of thinking about the design process, utilizing site data with software such as a GIS to create urban or landscape designs.
Geodesign is an integration between geospatial technologies such as GIS and design. By utilizing the spatial databases in GIS, geodesign benefits from its ability to acquire and manage geospatial information. GIS also has the ability to analyze geospatial information using its geoprocessing functions. By using a GIS database to generate a 3D model for planning and design, architects are able to evaluate the design better, creating a way to experience the design beforehand, and enabling residents and citizens to become better informed about the planned development to facilitate feedback (Tae-Woo Kim et al., 2010; Szukalski, 2011).
Albert and Vargas-Moreno summarize the basic components of geodesign into three categories (Albert and Vargas-Moreno, 2012):
1. The input process (or the design): this component is the sketching interface, where the design is still in sketches. It allows quick generation of analyzed alternative designs and consists of spatial features with geographical attributes.
2. The evaluation (or the impact): this component consists of sets of spatial information models. At this stage, the potential impact of the input design is assessed, using defined evaluation parameters.
3. The result (or the report): at this stage, the outcomes of the impact evaluation are communicated to the user in an understandable way. The feedback from the user is used as input in an iterative process.
These processes in geodesign are iterative.
Figure 3 The geodesign framework conceived by Carl Steinitz (Steinitz, 1979)
GEOSPATIAL DATA FOR LANDSCAPE
In the analysis phase, the landscape architect collects various data and information, such as natural information (e.g., vegetation, mineralogy, geology, hydrology), infrastructure information (e.g., cadaster, buildings, networks, architecture), and social and economic information (e.g., census, economic and geo-political factors and actors, resources, site history) (Favetta and Laurini, 2006).
These data and information are collected from different sources, such as local government, libraries, the internet, etc. In order to understand the needs and requirements for spatial data in landscape architecture, this chapter explains the data resolution, data acquisition techniques, and data modeling for landscape architecture.
Resolution / Level of Detail Requirement
In landscape design, the landscape unit needs to be assessed in terms of the physical quality, condition and function of the landscape features and the processes within the landscape unit, including landscape, ecological, archaeological and amenity studies.
For design development and assessment, accurate topographic and land-use maps of the area are needed in order to better understand the parameters in the design. Dong et al. studied the evolution and optimization of landscape patterns in order to increase ecological security. They used three TM images from 1990, 2000, and 2010 with a spatial resolution of 30 m as the basis for landscape classification (Dong et al., 2015).
Parmehr et al. processed images recorded with a Ground Sampling Distance (GSD) of 10 cm using the digital photogrammetric system Leica Photogrammetric Suite 9.0 (LPS) to produce detailed designs of buildings, roads, green zones and playgrounds for landscape planning use (Parmehr et al., 2011). Cocco et al. implemented geodesign to evaluate the urban quality of two neighborhoods in Pampulha, Belo Horizonte, Brazil. They evaluated the evolution dynamics of those locations using a multi-criteria analytical approach to explore what could affect the urban quality level and the transformation risk in the area based on their spatial phenomena. The results of the study highlight the role of knowledge as an essential starting point for urban interventions, in order to inform the design with the specific characteristics of the area and the needs of the citizens (Cocco et al., 2015). The data requirements used to represent the processes in the study area are shown in Table 1 (Cocco et al., 2015). Apart from the geodata listed above, the virtual 3D city model can be enhanced by classical georeferenced 2D raster-data sources (e.g., rasterized 2D maps) and vector-data sources (e.g., transportation networks).
Data Acquisition Technique
Architects and planners should be enabled to quickly assess feasibility, errors, or areas of conflict between alternative designs. These factors must be considered before choosing a suitable data acquisition technique (Li and Petschek, 2014). The main data sources for landscape architecture are topographic maps and aerial photographs or satellite images (Sadek et al., 2002; Parmehr et al., 2011). A land survey is the basis for a landscape architecture project. The topographic maps are used to represent, visualize, and show, in a geographic reference system, the buildings, roads or transportation systems, trees, terrain, and landuse/landcover, whereas the aerial photographs and satellite images are used for a better representation than the topographic maps since they are not generalized. Contours and spot heights from topographic maps and aerial photographs are used to generate a Digital Elevation Model (DEM). Terrestrial photographs of an object from multiple viewpoints are needed to construct the 3D model.
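To make the DEM-generation step concrete, the following minimal Python sketch (not from the cited works; the input point file and the 5 m grid resolution are assumptions) interpolates a regular elevation grid from surveyed spot heights or digitized contour vertices:

# Hypothetical sketch: grid a DEM from scattered (x, y, z) points with SciPy.
import numpy as np
from scipy.interpolate import griddata

# points.csv is a placeholder: columns x, y (metres) and z (elevation).
pts = np.loadtxt("points.csv", delimiter=",", skiprows=1)
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

# Build a 5 m target grid covering the survey extent (resolution is an assumption).
res = 5.0
xi = np.arange(x.min(), x.max(), res)
yi = np.arange(y.min(), y.max(), res)
grid_x, grid_y = np.meshgrid(xi, yi)

# Linear interpolation is equivalent to sampling a TIN built on the input points.
dem = griddata((x, y), z, (grid_x, grid_y), method="linear")
print(dem.shape, np.nanmin(dem), np.nanmax(dem))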
Dong et al. used three TM images as the basis for landscape classification (Dong et al., 2015). Sadek et al. and Parmehr et al. used terrestrial photographs that were captured with a conventional photographic technique using a digital camera (Sadek et al., 2002; Parmehr et al., 2011). Another data acquisition technique is the use of a laser scanner.
A laser scanner can obtain 3D data in high resolution rapidly, and landscape planning can benefit from the laser scanning method.
Li and Petschek did an experiment applying a 3D laser scanner in a landscape design project (Li and Petschek, 2014). They found this method made it easier to achieve high-resolution point cloud data for 3D spatial data, although it has some limitations. The method is not recommended for rainy, foggy and snowy weather conditions, or if there are too many moving targets. It is also not recommended for sites covered with many irregular vegetation or objects, because of the effort and time required to delete noise points in the data processing.
In the virtual 3D city model of Berlin researched by Döllner et al., the following geodata sources were used (Döllner et al., 2006):
1. Cadastral Data: the cadastral database delivers the official footprints of buildings and land parcels.
2. Digital Terrain Model: the available grid-based DTMs vary in resolution and extension. A DTM of 20 m resolution builds the framework; a higher-resolution DTM is used for the core part of the virtual 3D city model. In areas of special interest, an explicit 3D model of the terrain surface structure replaces the grid-based DTMs.
3. Aerial Photography: a collection of digital aerial photography is linked to the virtual 3D city model. The photographs can be projected on top of the digital terrain model.
4. Building Models: captured and processed by laser-scanning and photogrammetry-based methods. The buildings are represented at various levels of detail, including block models (LOD-1), geometry models (LOD-2), architectural models (LOD-3), and detailed indoor models (LOD-4).
5. Versions and Variants: a given city object can be updated and, therefore, have multiple versions. In a similar way, a given area can contain different variants of city object collections.
Sheppard observed the impact of using laser scanners for landscape planning. His research mentions several advantages of data obtained by laser scanner for landscape planning, namely in visualization, level of detail, a high level of trust in the data, and the high-tech image (Sheppard, 2004).
There are also consequences of using highly detailed 3D data such as those produced by the laser scanner: it can expand our understanding of environmental perceptions, improve public involvement processes, contribute to more informed designs, and help manage various visual/spatial phenomena of importance to society in certain landscape types (Sheppard, 2004).
Landscape architects can also take advantage of a Mobile Mapping System for their needs. Landscape architects can build a database of GIS shapefiles for the design phase of their project before the data are used in the GIS for mapping the landscape objects. These data can be imported into a spatial database prior to site mapping. Having shapefiles beforehand can provide a smooth continuity of data from the landscape architecture design phase through the management phase (Rybka, 2013).
Table 2 lists the data acquisition techniques and data resolutions used in landscape architecture, especially for spatial data used in the river area.
Different scales of planning require different data and techniques. Raster data are more useful for planning, because large areas are involved and high resolution is not required. The processing of raster data is much faster than that of vector data, especially in map overlay and buffer analysis. On the other hand, vector data are generally used for district and local action area planning because of the need for very high resolution analysis (Rong LIU, 2002).
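A small Python sketch (illustrative only; the class codes, grid size and 30 m cell size are assumptions) shows why raster overlay and buffering are fast: they reduce to array operations on aligned grids:

# Hypothetical raster overlay/buffer on two aligned land-use and river grids.
import numpy as np
from scipy.ndimage import binary_dilation

cell = 30.0                                    # cell size in metres (assumed, cf. TM imagery)
landuse = np.random.randint(1, 5, (400, 400))  # stand-in for a classified raster
river = np.zeros((400, 400), dtype=bool)
river[:, 200] = True                           # stand-in for a rasterized river line

# "Buffer" the river by roughly 90 m: dilate the boolean mask by 3 cells.
buffer_cells = int(90 // cell)
riparian = binary_dilation(river, iterations=buffer_cells)

# Overlay: tabulate land-use classes inside the riparian mask.
classes, counts = np.unique(landuse[riparian], return_counts=True)
for c, n in zip(classes, counts):
    print(f"class {c}: {n * cell * cell / 1e4:.1f} ha")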
Table 2 Data resolutions and their data acquisition techniques
Data Modeling
Landscape architecture can be modeled in 2 dimensions and 3 dimensions, and today it is also quite common to capture the time dimension in models (4 dimensions). It is common to visualize changes in landscape architecture works during different seasons of the year or to see the impact of the design in the future. Since landscape architecture works with living material, no perfect systems have yet been made that would enable the unification of data and easier work with them in the future.
Creating 3D models of landscapes goes beyond visualization purposes; they are also a source of wide-ranging information.
The data model for landscape architecture can be built on a database of spatial data.
In the reconstruction of urban or landscape models, point clouds are the most common and basic data used (Oesau, 2015). Other works propose large city descriptions and offer complementary advantages to street-level representations, in particular fine roof descriptions. Such city descriptions are usually obtained either from airborne data, for reconstructing existing landscapes in 3D, or from urban grammars, in order to artificially create realistic cities (Lafarge and Mallet, 2012). Sadek et al. developed their 3D city model using a modeling technique that is divided into several tasks; a short structural summary is presented by the workflow scheme in Figure 4 (Sadek et al., 2002). For data modeling (construction and validation), 3D topology is needed, relating to the processing and structuring of data into topological primitives according to topological data models.
In order to determine relations between 3D objects, the primitive object relationships that build the 3D objects (3D, 2D, 1D, and 0D) should be examined, which means the topological requirements of 2D and 1D objects have to be determined beforehand (Ghawana and Zlatanova, 2013).
REPRESENTATION MODEL
The needs for 3D modeling for landscape architecture are growing and expanding rapidly in various fields, including urban planning and design, landscape architecture, environmental visualization and many more.
Modeling objects of the real world in the 3-dimensional field is more representative and can be understood more easily visually by planners and designers.
3D city models can represent data that are used in urban applications and/or landscape architecture, including buildings, roads or transportation systems, trees, terrain, and landuse/landcover. A 3D city model is basically a computerized or digital model of a city (Sadek et al., 2002; OGC, 2007). There are several representation models used in 3D planning and design, which are discussed in the following.
CityGML
The City Geography Markup Language (CityGML) is a new and innovative concept for the modeling and exchange of 3D city and landscape models. CityGML is a standardization for interconnected data with different spatial references.
CityGML represents four different common aspects of virtual 3D city models, i.e. semantics, geometry, topology, and appearance, for the representation of 3D urban objects that can be shared over different applications, which helps to make the maintenance of 3D models more cost-effective.
CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. It is an application schema for the Geography Markup Language version 3.1.1 (GML3), the extendible international standard for spatial data exchange issued by the Open Geospatial Consortium (OGC) and the ISO TC211 (OGC, 2007).
CityGML is built upon a modular structure (Figure 5). The vertical modules provide the definitions of the different thematic models, such as building, relief (i.e. digital terrain model), city furniture, land use, water body, and transportation. The horizontal modules (CityGML core, appearance, and generics) define structures that are relevant to, or can be applied to, all thematic modules (Kolbe, 2009).
Figure 5 Modularization of CityGML 1.0.0 (Kolbe, 2009)
CityGML represents four different aspects of virtual 3D city models, i.e. semantics, geometry, topology, and appearance. In CityGML there are five Levels of Detail (LOD), which become more detailed with increasing LOD number (Kolbe, 2009):
1. LOD 0 - regional, landscape: represents the 2.5-dimensional Digital Terrain Model (DTM) draped with an aerial image.
2. LOD 1 - city, region: the buildings are represented as 3-dimensional blocks with flat roofs.
3. LOD 2 - city districts, projects: the buildings have structured roofs and walls.
4. LOD 3 - architectural models (outside), landmarks: the architectural models are more detailed, with detailed walls, roof structures, balconies, and projections; the textures are obtained from high-resolution images.
5. LOD 4 - architectural models (interior): the interior structures of the 3D objects of the architectural model are added.
Different LODs can be mixed in one scene.
Figure 6 Mixing Levels-of-Detail in one Scene (Kolbe and Gröger, 2005)
Included in CityGML are generalization hierarchies between thematic classes, aggregations, relations between objects, and spatial properties. This thematic information goes beyond graphic exchange formats and allows virtual 3D city models to be employed for sophisticated analysis tasks in different application domains like simulations, urban data mining, facility management, and thematic inquiries, which can support the design phase of 3D modelling for landscape architecture (Kolbe and Gröger, 2011).
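As an illustration of how the semantic structure of CityGML can be exploited programmatically (a minimal sketch, not from the cited works; the file name is a placeholder and the namespace URIs are the commonly used CityGML 1.0 ones), the buildings in a model can be listed together with the LODs in which they carry solid geometry:

# Hypothetical sketch: list per-building LOD solids in a CityGML 1.0 file.
from lxml import etree

BLDG = "http://www.opengis.net/citygml/building/1.0"   # CityGML 1.0 building module (assumed)
GML = "http://www.opengis.net/gml"
NS = {"bldg": BLDG}

tree = etree.parse("city_model.gml")                   # placeholder file name
for b in tree.iterfind(".//bldg:Building", namespaces=NS):
    gml_id = b.get(f"{{{GML}}}id")
    lods = [tag for tag in ("lod1Solid", "lod2Solid", "lod3Solid", "lod4Solid")
            if b.find(f"bldg:{tag}", namespaces=NS) is not None]
    print(gml_id, "->", lods or "no solid geometry")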
Esri CityEngine
Esri CityEngine is a three-dimensional (3D) modeling software application developed by the Esri R&D Center Zurich (formerly Procedural Inc.) that is specialized in the generation of 3D urban environments. With its procedural modeling approach, CityEngine supports the creation of detailed large-scale 3D city models (CityEngine, 2016). CityEngine uses procedural modeling methods combined with shape and split grammars for the generation of 3D content from 2D polygons (Muller et al., 2006). It is the tool of choice for smart 3D city modeling in urban planning, architecture, simulations, game development, and film production (Esri CityEngine). It is a way of modeling geometry that is recursive and too tedious to be modeled manually, e.g. plants (a single tree pattern can be used to create an entire forest) and landscapes.
Procedural modeling is modeling by means of shape grammars (CGA shape). These shape grammars use production rules that create more and more detail in an iterative process. In the context of buildings, the production rules first create a crude volumetric model of a building, called the mass model, then continue to structure the facade, and finally add details for windows, doors and ornaments. The advantage of this method is that it creates a hierarchical structure and annotation which can be reused for creating architecture to populate a whole city (Figure 7) (Muller et al., 2006).
Figure 7 Application of CGA shape on a building (Muller et al., 2006)
Some elements are needed when modeling a 3D city using CityEngine, namely:
1. Terrain (heightmap/texture map) and control map layers (images)
2. Street network (automatically/manually created in CityEngine, or imported from DXF, SHP files)
3. CGA rule file
CityEngine allows for various degrees of user control over the city generation, from a semi-automatic mode to a 3D landscape generation based on real data, e.g. by importing GIS data and writing ad hoc CGA rule files that describe the required architectural typology (Piccoli, 2013).
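To convey the flavour of rule-driven refinement without reproducing actual CGA syntax, the following toy Python sketch (an illustration only, not CityEngine code; the dimensions and cell counts are arbitrary) recursively splits a facade rectangle into floors and window cells, mimicking the split-grammar idea described above:

# Toy split grammar: facade -> floors -> window cells (illustrative, not CGA).
def split(extent, axis, sizes):
    """Split an (x, y, w, h) rectangle along axis 0 (x) or 1 (y) into pieces."""
    x, y, w, h = extent
    total = sum(sizes)
    out, offset = [], 0.0
    for s in sizes:
        frac = s / total
        if axis == 0:
            out.append((x + offset * w, y, frac * w, h))
        else:
            out.append((x, y + offset * h, w, frac * h))
        offset += frac
    return out

def facade(extent, floors=4, windows_per_floor=5):
    cells = []
    for floor in split(extent, axis=1, sizes=[1.0] * floors):
        cells.extend(split(floor, axis=0, sizes=[1.0] * windows_per_floor))
    return cells

# A 20 m x 12 m facade refined into 4 x 5 window cells.
for cell in facade((0.0, 0.0, 20.0, 12.0)):
    print(tuple(round(v, 2) for v in cell))

Real CGA rules work analogously but attach semantics (mass model, facade, window) and 3D geometry to each refinement step.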
Visualization and Analysis
Landscape architects are often charged with tackling interdisciplinary design tasks, where visual communication becomes key to demonstrating project outcomes, which is why 3D technologies are very useful in landscape architecture. 3D has the advantage that visualizations are more realistic and can present more complex data to the viewer. Among landscape architects, Google SketchUp is the most popular 3D software for visualization, along with ArcGIS, AutoCAD Civil 3D, and 3D Studio Max (Li et al., 2013).
Current 3D technologies allow landscape architects to integrate various data sets and analyses (e.g. hydrology, visual impact assessment) into their work. Other research used 3D analyses within 3D city models, such as proximity and spread analyses, 3D density, and visibility analysis (Li et al., 2013).
A 3D GIS should be able to perform the following spatial operations (Held et al., 2004); a minimal sketch of operations 3 and 4 is given after this list:
1. Data retrieval, e.g. the latest information on a particular object.
2. Query operations, e.g. retrieving data that meet certain conditions.
3. Spatial analysis and semantic data integration, e.g. classification, measurement, overlay operations.
4. Calculating distance, area, and additionally volume in a three-dimensional GIS.
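As announced above, a minimal Python sketch of the measurement operations (illustrative only; the coordinates and block dimensions are made up) computes a 3D distance and the volume of a simple solid directly from point coordinates:

# Hypothetical 3D measurement: distance between two points and volume of a solid.
import numpy as np
from scipy.spatial import ConvexHull

a = np.array([512300.0, 9307150.0, 12.5])   # arbitrary 3D points (easting, northing, height)
b = np.array([512420.0, 9307090.0, 35.0])
print("3D distance [m]:", np.linalg.norm(a - b))

# Volume of a simple building block given its 8 corner points (10 m x 6 m x 5 m).
corners = np.array([[x, y, z] for x in (0, 10) for y in (0, 6) for z in (0, 5)], dtype=float)
hull = ConvexHull(corners)
print("Block volume [m^3]:", hull.volume)    # expected 10 * 6 * 5 = 300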
THEMATIC MODEL FOR RIVER LANDSCAPE DESIGN
In this chapter, the thematic model of CityGML for the river area is presented in Figure 8. _CityRiverObject is the base class of this CityGML model. It is a subclass of the class _Feature, and all spatial objects inherit the properties from _CityRiverObject. Most thematic classes are (transitively) derived from the basic classes _Feature and _FeatureCollection. These are the basic notions defined in ISO 19109 and GML3 for the representation of spatial objects and their aggregations. Features contain spatial as well as non-spatial attributes.
Within _CityRiverObject there are subclasses that consist of several thematic fields for landscape modeling purposes in the river area: soil type, terrain, transportation, landuse, and climate. In this thematic model, vegetation is derived from landuse and rainfall is derived from climate. The thematic fields were mentioned in the previous sections.
DISCUSSION AND CONCLUSION
The use of 3D GIS is already being developed in landscape architecture. Conventional two-dimensional GIS is not able to provide a realistic and systematic overview of the existing conditions used in landscape designs, so architects have had to use their imagination to determine the existing condition.
The development of the use of geospatial data in landscape design can be seen in the development of geodesign. Geodesign is able to bring landscape design to another level, and it can create responsible and sustainable solutions to problems related to the existing landscape condition.
By utilizing the spatial databases in GIS, geodesign benefits from GIS's ability to acquire and manage geospatial information. GIS also has the ability to analyze geospatial information using its geoprocessing functions. These abilities, together with geodesign, are not enough to describe the real world if it is only represented in 2-dimensional or even 2.5-dimensional spatial data. It needs 3-dimensional spatial data, meaning not only 3D graphic representations but also 3D modeling, so planners are able to perform spatial analysis in 3-dimensional space.
In this paper, several data acquisition techniques related to the data resolution needed for each thematic layer have been described. The thematic layers are adopted from the CityGML core thematic layers, adjusted for landscape design in the riverbank area. The layers were taken from several publications that researched the riverbank area in particular using conventional methods.
In future research, the techniques for designing the landscape should be described, and the advancement of those techniques should be mentioned. The idea of implementing a fractal method in landscape design, especially in 3D form, will be introduced by understanding what kind of geospatial data are needed, what the requirements and constraints are, and what kind of spatial analysis method can be used. The fractal method is used because recent research suggests that human perceptual systems have evolved to process fractal patterning and that we have a visual preference for images with certain fractal qualities (Perry et al., 2008). Many natural forms and processes possess a common ordering characteristic which can be described by fractal geometry. The fractal concept can be used in surface modelling for constructing a TIN model in order to construct a basic DTM. Later on, the DTM, used along with constraints for landscape architecture, can help the architects in their design.
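As a hedged sketch of how a fractal surface could seed a basic DTM (not the method of any cited work; grid size, roughness and random seed are arbitrary), the classic diamond-square / midpoint-displacement idea can be written in Python as:

# Toy diamond-square fractal terrain (illustrative; parameters are arbitrary).
import numpy as np

def diamond_square(n=7, roughness=0.6, seed=0):
    """Return a (2**n + 1) x (2**n + 1) fractal height grid."""
    rng = np.random.default_rng(seed)
    size = 2 ** n + 1
    z = np.zeros((size, size))
    z[0, 0], z[0, -1], z[-1, 0], z[-1, -1] = rng.uniform(0, 100, 4)
    step, amp = size - 1, 50.0
    while step > 1:
        half = step // 2
        # Diamond step: centre of each square = mean of its corners + noise.
        for i in range(0, size - 1, step):
            for j in range(0, size - 1, step):
                z[i + half, j + half] = (
                    z[i, j] + z[i + step, j] + z[i, j + step] + z[i + step, j + step]
                ) / 4.0 + rng.uniform(-amp, amp)
        # Square step: edge midpoints = mean of available neighbours + noise.
        for i in range(0, size, half):
            for j in range((i + half) % step, size, step):
                nbrs = []
                if i - half >= 0: nbrs.append(z[i - half, j])
                if i + half < size: nbrs.append(z[i + half, j])
                if j - half >= 0: nbrs.append(z[i, j - half])
                if j + half < size: nbrs.append(z[i, j + half])
                z[i, j] = sum(nbrs) / len(nbrs) + rng.uniform(-amp, amp)
        step, amp = half, amp * roughness
    return z

dem = diamond_square()
print(dem.shape, dem.min(), dem.max())

Such a synthetic grid could then be triangulated into a TIN and constrained by surveyed breaklines, which is the direction the future research outlined above points towards.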
This is not to be confused with the texture in CityGML's top level class hierarchy in the CityGML OGC standard since it is mentioned in the standard that the use of TexturedSurface is strongly discouraged(Consortium, 2012).
These objects are not specifically modeled yet. Element names without a prefix are defined in the other module. Each field of CityGML's thematic model is covered by a separate CityGML extension module.
"Computer Science"
] |
Characterisation of recombinant GH 3 β-glucosidase from β-glucan producing Levilactobacillus brevis TMW 1.2112
Levilactobacillus (L.) brevis TMW 1.2112 is an isolate from wheat beer that produces O2-substituted (1,3)-β-D-glucan, a capsular exopolysaccharide (EPS), from activated sugar nucleotide precursors by use of a glycosyltransferase. Within the genome sequence of L. brevis TMW 1.2112, enzymes of the glycoside hydrolase families were identified. Glycoside hydrolases (GH) are carbohydrate-active enzymes able to hydrolyse glycosidic bonds. The enzyme β-glucosidase BglB (AZI09_02170) was heterologously expressed in Escherichia coli BL21. BglB has a monomeric structure of 83.5 kDa and is a member of glycoside hydrolase family 3 (GH 3), which strongly favoured substrates with β-glycosidic bonds. Km was 0.22 mM for pNP β-D-glucopyranoside, demonstrating a high affinity of the recombinant enzyme for the substrate. Enzymes able to degrade the (1,3)-β-D-glucan of L. brevis TMW 1.2112 have not yet been described. However, BglB showed only a low hydrolytic activity towards the EPS, which was measured by means of the D-glucose release. Besides, characterised GH 3 β-glucosidases from various lactic acid bacteria (LAB) were phylogenetically analysed to identify connections in terms of enzymatic activity and β-glucan formation. This revealed that the family of GH 3 β-glucosidases of LABs most likely comprises exo-active enzymes which are not directly associated with the ability of these LAB to produce EPS.
Introduction
The exopolysaccharide (EPS) formation by lactic acid bacteria (LAB) has gained increased interest from the food industry in the past decades due to health-promoting effects and the application of EPS as natural viscosifiers and thickening agents (Goh et al. 2005; Korcz et al. 2021; Moradi et al. 2021; Ruas-Madiedo et al. 2005; Zannini et al. 2016). The major advantages are the generally recognised as safe (GRAS) status of EPS-forming LAB, and furthermore an in situ EPS enrichment of food products makes the use of additives (e.g., guar gum or pectin) redundant (Freitas et al. 2011; Velasco et al. 2009; Zannini et al. 2016). EPSs formed by LABs are either homopolysaccharides (HoPS) or heteropolysaccharides (HePS) (Badel et al. 2011; Fraunhofer et al. 2018b; Freitas et al. 2011; Notararigo et al. 2013). β-glucans (consisting solely of glucose monomers) are produced intracellularly from activated sugar nucleotide precursors and, compared to HoPS, have lower yields (Mozzi et al. 2006; Notararigo et al. 2013). Regarding the fermentation of foods, low yields and the degradation of in situ synthesized EPS are critical parameters for industrial applications (De Vuyst et al. 2001). Previous studies described the decrease of EPS concentrations with increasing fermentation periods of LAB, either through enzymatic activity or physical parameters (Cerning et al. 1992; Degeest et al. 2002; Dierksen et al. 1995; Vuyst et al. 1998; Zannini et al. 2016). Degeest et al. (2002) and Pham et al. (2000) reported EPS degradation by cell extracts of Streptococcus thermophilus LY03 and Lacticaseibacillus rhamnosus R. Glycohydrolases or glycoside hydrolases (GH), such as α-D-glucosidase, β-D-glucosidase, α-D-galactosidase or β-D-galactosidase, were found to be involved in EPS degradation, thus reducing the viscosity of the LAB culture broths. The GHs are grouped into more than 170 families, which are classified based on their amino acid sequences. These enzyme families possess hydrolytic activities towards glycosidic bonds of carbohydrates and non-carbohydrate fractions. Furthermore, GHs can be classified into retaining and inverting enzymes depending on their catalytic mechanism.
Inverting enzymes perform nucleophilic substitution and retaining enzymes form and hydrolyse covalent intermediates (Ardèvol et al. 2015; Koshland Jr. 1953; Naumoff 2011). The β-glucosidases of the GH 3 family, for example, retain the anomeric configuration of substrates and frequently have a (β/α)8-barrel structure (Naumoff 2006; Rigden et al. 2003). GH 3 β-glucosidases can act as exo-enzymes, able to hydrolyse terminal, non-reducing β-D-glycosyl residues including β-1,2-, β-1,3-, β-1,4- and β-1,6-linkages and/or aryl-β-glucosides with subsequent β-D-glucose release (Cournoyer et al. 2003; Harvey et al. 2000). It was demonstrated that, in general, the GH 3 family is one of the more abundant GH families in bacterial genomes. Moreover, the bacterial genome size correlated with the presence of this family, which means that smaller genomes (1066 ± 294 open reading frames (orf)) lacked GH 3 enzymes (Cournoyer et al. 2003).
Levilactobacillus (L.) brevis TMW 1.2112 is a wheat beer isolate which produces O2-substituted (1,3)-β-D-glucan, a HoPS. In this study, the genome sequence of L. brevis TMW 1.2112 was screened for GHs by in silico genome mining. One orf (AZI09_02170) was identified as putative β-glucosidase BglB (GH 3). BglB was heterologously expressed, characterised, and analysed for its ability to degrade isolated and purified β-glucan. Since β-glucosidases of LAB were previously described to be involved in EPS degradation (Degeest et al. 2002;Pham et al. 2000), BglB was of interest in this study also considering the β-linked EPS. Furthermore, the enzyme was compared with previously characterized lactic acid bacterial GH 3 β-glucosidases from the literature for a brief overview and to infer relations between the EPS forming and non-forming LAB.
Material and methods
Bacterial strains, plasmids, and cultivation
The EPS-forming wheat beer isolate L. brevis TMW 1.2112 was cultivated in modified Man, Rogosa, and Sharpe medium (mMRS) at pH 6.2 and 30 °C as static cultures, as previously described by Fraunhofer et al. (2017) and Schurr et al. (2013). L. brevis TMW 1.2112 and Pediococcus claussenii TMW 2.340 (isogenic with DSM 14800T and ATCC BAA-344T) were cultivated in a modified semi-defined medium (SDM) at pH 5.5 with 20 g L−1 maltose as sole carbon source for EPS isolation. The isolation was performed according to Bockwoldt et al. (2021), except for the perchloric acid treatment (Dueñas-Chasco et al. 1997).
Escherichia (E.) coli BL21 (StrataGene®) cells and pBAD/Myc-His A (Invitrogen) were used for cloning and expression of the enzyme. Recombinant E. coli cells were grown in lysogeny broth (LB) Lennox medium (pH 7.2) at 37 °C and 200 rpm, or on solid LB medium with 1.5% (w/v) agar. Transformed cells were selected by adding 100 μg ampicillin mL−1 to the LB medium. The pBAD vector was constructed by introducing the appropriate DNA fragment of the β-glucosidase (AZI09_02170) into the NcoI and SalI sites of pBAD/myc-His by Gibson assembly.
Bioinformatic analysis
The previously sequenced genome of L. brevis TMW 1.2112 (Fraunhofer et al. 2018a) was used for similarity analysis of GH 3 by genome mining (Ziemert et al. 2016). The DNA and protein sequences were analysed by BLASTn and BLASTx, respectively (Altschul et al. 1990). Further characterizations of the enzymes and the GH family affiliation were performed by using CAZy (Lombard et al. 2014), functional information of the enzymes by UniProt (Consortium, 2020), and homology modelling was performed by SWISS-MODEL (Waterhouse et al. 2018). Prediction of a putative signal peptide was performed by using SignalP-5.0 (Armenteros et al. 2019).
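As an illustration only, such similarity searches can also be scripted; the following Python sketch uses Biopython's NCBI web-BLAST wrapper (this is not the authors' pipeline, and the use of the BglB protein accession ARN89439, reported later in the Results, as well as the parameter choices, are assumptions):

# Hypothetical scripted similarity search with Biopython's NCBI BLAST wrapper.
# Not the workflow of the cited study; accession and parameters are illustrative.
from Bio.Blast import NCBIWWW, NCBIXML

# blastp of the BglB protein accession (ARN89439) against the nr database.
result_handle = NCBIWWW.qblast("blastp", "nr", "ARN89439", hitlist_size=5)
record = NCBIXML.read(result_handle)

for alignment in record.alignments:
    best = alignment.hsps[0]
    identity = 100.0 * best.identities / best.align_length
    print(f"{alignment.title[:70]}  identity={identity:.1f}%  E={best.expect:.2g}")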
Construction of heterologous expression vector
The GH 3 β-1,3-glucosidase (AZI09_02170) gene was identified from the genome sequence of L. brevis TMW 1.2112 (GenBank accession No.: CP016797). The appropriate DNA sequence was amplified by PCR with Q5 High Fidelity DNA-Polymerase (NEB, Germany) using forward and reverse primers with pBAD overlaps 5′-CGT TTA AAC TCA ATG ATG ATG ATG ATG ATG TTG GCG TAA TAA GGT GTT TGC CCG -3′ and 5′-CGT TTT TTG GGC TAA CAG GAG GAA TTA ACC ATG GAC ATC GAA CGA ACG CTT GCT GAA CTC -3′, respectively. Amplicons were generated by the PCR program as follows: initial denaturation at 98 °C for 30 s, followed by 30 cycles of 10 s at 98 °C, 20 s at 71 °C and 90 s at 72 °C with a final extension at 72 °C for 2 min. The PCR product was purified and integrated into the previously digested pBAD/Myc-His A vector by Gibson assembly (Gibson Assembly® Master Mix, NEB, Germany). The vector was digested using the enzymes NcoI and SalI (NEB, Germany) which simultaneously excised the Myc-region. The recombinant plasmid pBAD_bGLU was transformed into E. coli BL21 by the heat-shock method (Froger et al. 2007).
Expression and purification
Positive clones of E. coli BL21 carrying the vector pBAD_bGLU were screened and selected for enzyme expression. LB medium containing 100 μg ampicillin ml −1 was inoculated with E. coli pBAD_bGLU and incubated at 37 °C and 200 rpm until OD 600 nm ≈ 0.5. The cells were induced with 0.25% L-arabinose (v/v) overnight at 15 °C and 200 rpm. In the next step, the cells were harvested by centrifugation at 3,000 × g for 10 min at 4 °C and resuspended in lysis buffer: 50 mM sodium phosphate, 300 mM NaCl, 5% glycerol, 1 mM phenylmethylsulphonyl fluoride (PMSF), 10 mM β-mercaptoethanol, pH 7.5. Cell disruption was performed using glass beads (Ø 2.85-3.45 mm) and a benchtop homogenizer (FastPrep®-24 MP, MP Biomedical Inc, Germany) in three cycles each 30 s. The cell debris was harvested by centrifugation 17,000 × g for 30 min at 4 °C and discarded. The supernatant including the his-tagged recombinant protein was added to nickel-nitrilotriacetic acid (Ni-NTA) crosslinked agarose resins (SERVA Electrophoresis GmbH, Germany) and purified according to the manufacturer's protocol. The purified fractions were analysed and visualised on 12% sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) gels by staining with Coomassie Brilliant dye Roti® Blue (Carl Roth GmbH + Co. KG, Germany). The protein concentration within the several fractions was determined by Coomassie (Bradford) protein assay kit using bovine serum albumin (BSA) as the standard (Thermo Fisher Scientific, Germany). Imidazole was removed by dialysis against 50 mM PBS buffer pH 6.8 overnight at 4 °C using 3.5 kDa dialysis tubing (SERVA Electrophoresis GmbH, Germany).
In addition, API® ZYM (bioMérieux, Marcy-l'Étoile, France) test strips were used for enzyme characterisation of the cell lysate samples from E. coli BL21 and induced E. coli pBAD_bGLU. The cell pellet of 5 mL culture volume was washed and resuspended in 2.5 mL PBS buffer (pH 7). Cell disruption was done as previously described. The analysis was performed by inoculating each cupule of the test strip with 65 μL of cell lysate and subsequent incubation for 4 h at 37 °C (Gulshan et al. 1990; Martínez et al. 2016). Water was added to the plastic trays to create a humid atmosphere, preventing the enzymes from drying out. The reaction was terminated according to the manufacturer's protocol. Colour changes were read after 5 min using a scale from 0 to 5, where 0 represented no change in colour (0 nM substrate hydrolysed) and 5 represented a clear and strong colour change (≥ 40 nM substrate hydrolysed) and therefore a positive enzyme reaction (Baldrian et al. 2011). Determinations were done using biological duplicates.
Influence of temperature and pH on β-glucosidase activity and stability
The optimal pH range of the recombinant BglB was measured at 37 °C in 50 mM PBS buffer containing 2 mM pNPβGlc, with pH values ranging from 4 to 11, for 20 min. The temperature optimum was determined using 50 mM PBS buffer containing 2 mM pNPβGlc at the optimal pH, incubated for 20 min at temperatures between 10 and 60 °C. The pH stability of the enzyme was determined in 50 mM PBS buffer at pH 4 to pH 11 for 2 h at 37 °C. The effect of temperature on enzyme stability was tested by incubating the enzyme in 50 mM PBS (pH 7) for 2 h at various temperatures from 10 to 60 °C. The relative activities were calculated from the pNP released from 2 mM pNPβGlc, measured at 405 nm with a microtiter plate reader. Determinations were done using biological duplicates.
Kinetic parameters of β-glucosidase
The Michaelis-Menten constant (K_M) and maximum reaction rate (V_max) of the enzyme were determined in 50 mM PBS buffer (pH 7) at 37 °C using pNPβGlc concentrations between 0.01 and 20 mM (Johnson et al. 2011). The increase in absorbance due to released p-nitrophenol was recorded at 405 nm with a microtiter plate reader. The absorbance values recorded during the first 4 min directly after adding the enzyme to buffers containing different pNPβGlc concentrations were used for the calculations. The kinetic constants of the β-glucosidase were calculated using Lineweaver-Burk plots (Lineweaver et al. 1934). Determinations were done using biological duplicates.
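A small Python sketch (illustrative only; the substrate/rate pairs are made up, chosen merely to be consistent with kinetic constants of the order reported in this study, and conversion from absorbance to rate is omitted) shows how K_M and V_max follow from a Lineweaver-Burk, i.e. double-reciprocal, fit:

# Hypothetical Lineweaver-Burk estimation of Km and Vmax from initial-rate data.
import numpy as np

S = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])       # pNPbGlc [mM] (made up)
v = np.array([14.5, 24.0, 41.0, 53.0, 63.0, 69.0, 74.0])   # initial rate [uM/min] (made up)

# Double-reciprocal transform: 1/v = (Km/Vmax) * (1/S) + 1/Vmax.
inv_S, inv_v = 1.0 / S, 1.0 / v
slope, intercept = np.polyfit(inv_S, inv_v, 1)

Vmax = 1.0 / intercept
Km = slope * Vmax
print(f"Vmax ~ {Vmax:.1f} uM/min, Km ~ {Km:.2f} mM")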
Hydrolytic activity against isolated β-glucans
Isolated and purified bacterial β-glucan of L. brevis TMW 1.2112 and P. claussenii TMW 2.340 and curdlan (Megazyme Ltd., Ireland) were dissolved in 50 mM PBS buffer (pH 7) to a final concentration of 1 mg β-glucan mL−1. The β-glucan samples were inoculated with the recombinant β-glucosidase and incubated at 37 °C for 4 h. In addition, negative controls of the dissolved β-glucans were incubated without enzyme addition. Released D-glucose was enzymatically determined by a glucose oxidase/peroxidase assay (GOPOD, Megazyme Ltd., Ireland) according to the manufacturer's protocol, except for adjustments of sample and reagent volumes. The assay was adapted to microtiter plate volumes with 50 μL sample volume and 150 μL of the GOPOD reagent. A standard curve using D-glucose was used to determine the hydrolytic enzyme activity. Determinations were done using biological triplicates.
Neighbour-joining tree of characterized GH 3 β-glucosidases of LAB
The relationships among the GH 3 β-glucosidases were visualized by reconstructing a phylogenetic tree. A phylogenetic tree based on a similarity matrix of amino acid sequences was constructed by the neighbour-joining method (Saitou and Nei 1987) using the BioNumerics software package V7.62 (Applied Maths, Belgium). Bootstrapping analysis with 1000 bootstrap resamplings of the data was undertaken to test the statistical reliability of the topology of the tree.
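As an illustration of the neighbour-joining step only (the study used the BioNumerics package; this Biopython sketch with made-up labels and a made-up distance matrix merely shows the principle, and omits the bootstrap resampling):

# Hypothetical neighbour-joining tree from a lower-triangular distance matrix (Biopython).
import sys
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

names = ["BglB_Lbrevis", "Bgy1_LH8", "Bgl_SK3", "Bgl_Ooeni"]   # placeholder labels
dm = DistanceMatrix(names, matrix=[
    [0.0],
    [0.04, 0.0],
    [0.05, 0.03, 0.0],
    [0.33, 0.32, 0.31, 0.0],
])
tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree, file=sys.stdout)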
Results and discussion
In silico characterization of L. brevis TMW 1.2112 glycoside hydrolases
Several glycoside hydrolases were identified within the genome sequence of L. brevis TMW 1.2112. The bioinformatic analyses revealed, i.a., a GH 3 β-glucosidase (bglB), a GH 30 glycosylceramidase, a GH 65 maltose phosphorylase and a GH 88 d-4,5-unsaturated β-glucuronyl hydrolase. The enzymatic activity of the β-glucosidase (BglB) was characterised. In addition, its putative hydrolytic activity towards bacterial β-glucan was tested.
Characterisation of the GH 3 β-glucosidase gene and its ubiquity in other Lactobacillus strains
The BglB-encoding gene AZI09_02170 (GenBank accession No.: ARN89439) of the beer-spoiling and β-glucan-forming L. brevis TMW 1.2112, which consists of 2256 bp, was annotated as a putative intracellular glycoside hydrolase. The hydrolase, with homology to glycoside hydrolase family 3, encodes 751 amino acids with a molecular mass of 83.5 kDa. Sequence analysis with the BLAST program resulted in similarities to several L. brevis glycoside hydrolases, e.g., L. brevis ZLB004 (GenBank accession No.: AWP47268) with 98% identity, a β-glucosidase-related glycosidase of L. brevis ATCC 367 (GenBank accession No.: ABJ65020) with 96% identity, and two described thermostable β-glucosidases of L. brevis LH8, Bgy1 (GenBank accession No.: BAN07577) and Bgy2 (GenBank accession No.: BAN05876), isolated from kimchi, with 96% similarity. The thermostable β-glucosidases were analysed for the ability to form compound K from ginsenosides (Quan et al. 2008; Zhong et al. 2016a, 2016b). Michlmayr et al. (2010a) described a β-glucosidase of L. brevis SK3, isolated from a starter culture preparation for malolactic fermentation, related to aroma compound formation. Further sequence analysis resulted in a 67% identity with a thermostable β-glucosidase B (LH 8). However, only L. brevis TMW 1.2112 carries the gtf2 gene for β-glucan formation (Fraunhofer et al. 2018a; Michlmayr et al. 2015; Quan et al. 2008). The Bifidobacteria and Li. antri DSM 16041 were isolated from the gastrointestinal tract of humans and are gtf2 negative (Mattarelli et al. 2008; Reuter 1963; Roos et al. 2005). The phylogenetic analysis revealed that GH 3 β-glucosidases appear in LAB of different origins and are not specifically related to the EPS production ability of the strains. In past studies, possible degradation of EPSs by glycoside hydrolases of LABs was observed as decreased EPS yields over fermentation and lowered viscosity, e.g., by Lacticaseibacillus rhamnosus R (formerly Lactobacillus rhamnosus R (Zheng et al. 2020)) and Streptococcus thermophilus LY03 (Cerning et al. 1992; Degeest et al. 2002; Pham et al. 2000; Vuyst et al. 1998; Zannini et al. 2016). However, the lack of hydrolytic enzymes from EPS-forming LABs associated with its degradation has also been described (Badel et al. 2011; Patel et al. 2012).
Expression and purification of recombinant β-glucosidase
Within the sequence of bglB no signal peptide sequence was predicted, and only the stop codon was removed with regard to Ni-NTA affinity purification via the poly-histidine tag encoded within the expression vector. The sequence of bglB was amplified by PCR, integrated into the expression vector pBAD/Myc-His, and expressed in E. coli BL21. To maximize the protein yield, different inducer concentrations and induction temperatures were tested, resulting in an optimum of 0.25% L-arabinose (v/v) at 15 °C overnight (García-Fraga et al. 2015; Sørensen et al. 2005). The intracellularly formed enzyme was purified with Ni-NTA from the crude cell extract. The molecular mass of the enzyme was calculated from the amino acid sequence and resulted in 83.5 kDa, which corresponded to the bands of the elution fractions in the SDS-PAGE gel stained with Coomassie (Fig. 2).
Substrate spectrum
Seven different pNP substrates were used to analyse the specific enzyme activity at 37 °C within 2 h with a microtiter plate reader (Table 1). The results for BglB indicated specificity for β-D-linked glycosides.
Furthermore, the cell lysate of untransformed E. coli BL21 was tested for enzymatic activity using the pNP substrates, which was negative, i.a., for pNPβGlc. A significantly higher specificity of BglB was observed with pNPβGlc compared to the other substrates tested. This was confirmed by API® ZYM analyses, resulting in a strong colour change (≥ 40 nM substrate hydrolysed) and subsequently a positive enzyme reaction. However, a difference was observed for the β-galactosidase activity, which was negative with the API® test and positive using pNPβGal. This might be associated with the different substrate types used in the two analyses. The specificity of β-glucosidases for pNPβGlc is well described in several studies of different bacterial hosts (Chen et al. 2017; Fusco et al. 2018; Méndez-Líter et al. 2017; Michlmayr et al. 2010a, 2010b; Zhong et al. 2016a). Due to the high affinity of the enzyme for pNPβGlc, this substrate was used in the following analyses.
(Excerpt of the substrate scoring table — substrate, reaction, score: Naphthol-AS-BI-β-D-glucuronide, +, 2; β-glucosidase: 6-Br-2-naphthyl-β-D-glucopyranoside, +, 5; β-glucosaminidase: 1-naphthyl-N-acetyl-β-D-glucosaminide, −, 0; α-mannosidase: 6-Br-2-naphthyl-α-D-mannopyranoside, −, 0; α-fucosidase: 2-naphthyl-α-L-fucopyranoside, −, 0. Values are means of triplicates including standard deviations.)
Effects of temperature and pH on the enzyme activity and stability
The pH stability (Fig. 3A) of the recombinant β-glucosidase was analysed over a range of pH 4-11 and resulted in a high stability at pH values between 7 and 9, with ≥ 95% relative activities. Under acidic conditions (pH 4-6) the enzyme stability decreased to < 40%, and the stability values at pH 10 and 11 were similar. The optimum pH for enzyme activity was observed at pH 7. Next to the pH conditions, the enzyme stability at different temperatures (10-60 °C) was determined and displayed its maximum at 37 °C (Fig. 3B). Between 10 and 37 °C the relative activity was ≥ 80%, decreasing to 12% at 60 °C. The temperature optimum for enzymatic activity was measured at 37 °C; temperatures above or below resulted in only ≤ 40% relative activity. The described β-glucosidases of L. brevis SK3 and L. brevis LH8 showed optimal activities at pH 5.5 and 45 °C and at pH 6-7 and 30 °C, respectively. Furthermore, the characteristics of described GH 3 β-glucosidases, including those of O. oeni species, Bifidobacteria and other LAB, were compared (Table 2). This revealed that the temperature optima of β-glucosidases from L. brevis strains were on average lower compared to the thermostable β-glucosidases of Bifidobacteria or O. oeni strains. In general, the pH optima ranged between 4.5 and 7 and the temperature optima between 30 and 55 °C (Michlmayr et al. 2010b, 2015; Zhong et al. 2016b).
Kinetic parameters
The kinetic parameters of BglB were calculated by Lineweaver-Burk plot using pNPβGlc as substrate at various concentrations. The enzyme had a high affinity for the substrate, revealed by a low Km of 0.22 mM. The maximal rate (Vmax) was 77 μM · min⁻¹, kcat was 59.58 s⁻¹, and the catalytic efficiency (kcat/Km) was 8.3 · 10³ s⁻¹ mM⁻¹. The Km value of the β-glucosidase from L. brevis SK3 measured with pNPβGlc was 0.22 mM (Michlmayr et al. 2010a). Further Km values of GH 3 β-glucosidases (Table 2) from LABs ranged between 0.17 mM and 16 mM using pNPβGlc as substrate (Coulon et al. 1998; Sestelo et al. 2004).
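As an illustration of how such parameters are obtained from the double-reciprocal plot, the short sketch below fits Km and Vmax by linear regression of 1/v against 1/[S]. The substrate concentrations and rates are hypothetical example points (generated here from the reported Km and Vmax), not the measured data.

```python
import numpy as np

# Hypothetical example points generated from the reported parameters (Km = 0.22 mM, Vmax = 77 uM/min);
# they only illustrate the fitting procedure and are not the measured data.
S = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])   # substrate concentrations in mM
v = 77.0 * S / (0.22 + S)                        # initial rates in uM/min (Michaelis-Menten)

# Lineweaver-Burk: 1/v = (Km/Vmax) * (1/S) + 1/Vmax, fitted as a straight line
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax = 1.0 / intercept
Km = slope * Vmax
print(f"Km = {Km:.2f} mM, Vmax = {Vmax:.1f} uM/min")
```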
Enzymatic hydrolysis of β-1,3-linked glucan by recombinant β-glucosidase The motivation of this study was to characterise the carbohydrate active enzyme BglB of the β-1,3-linked glucan producing LAB L. brevis TMW 1.2112. The involvement of BglB in the degradation of cell-own EPS was additionally investigated. Three β-glucan isolates, including the cell-own β-glucan of L. brevis TMW 1.2112, were incubated with the purified recombinant enzyme and released D-glucose was quantified (Fig. 4). Curdlan, a linear β-1,3-linked glucan, yielded a negligible amount of free D-glucose after incubation with the enzyme, which was rather a result of dissolving than of enzymatic activity. Furthermore, since curdlan is insoluble in water, this could affect the availability of the polymer for enzymatic degradation (Koumoto et al. 2004; Zhang et al. 2014). Released D-glucose from β-glucan produced by L. brevis TMW 1.2112 and P. claussenii TMW 2.340 was significantly higher; however, the D-glucose concentrations were still low, with a maximum of ~ 8 μg D-glucose · mL⁻¹ (L. brevis TMW 1.2112 β-glucan). The solubility of the isolated bacterial β-glucans was likewise low, which could be caused by the extraction conditions, structure, and degree of polymerization (Bohn et al. 1995; Havrlentova et al. 2011; Virkki et al. 2005). Furthermore, the purification process in some cases affects the structural integrity due to harsh chemicals and physical methods as used in this study, e.g. ethanol precipitation, benchtop homogenizer, and freeze drying with subsequent resuspending (Goh et al. 2005). In addition, L. brevis TMW 1.2112 and P. claussenii TMW 2.340 likewise synthesize high-molecular weight β-glucans similar to those of P. parvulus 2.6R and O. oeni IOEB 0205, with molecular masses of 3.4 · 10⁴ to 9.6 · 10⁶ Da and 8.0 · 10⁴ to ≥ 1 · 10⁶ Da, respectively (Ciezack et al. 2010; Dols-Lafargue et al. 2008; Werning et al. 2014). High-molecular β-1,3-linked glucans are described as insoluble in water (Bohn et al. 1995). Moreover, the degradation of β-glucan is more likely performed by more than one hydrolytic enzyme, especially as the characterized β-glucosidase (AZI09_02170) is an intracellularly expressed enzyme of L. brevis TMW 1.2112. Furthermore, in our previous study, we showed that the decrease in viscosity of L. brevis TMW 1.2112 culture broth could not be explained by the activity of late expressed enzymes including BglB. However, the viscosity decrease indicated the degradation of high-molecular β-glucan, which may have been caused by so far unknown enzymes of this strain (Bockwoldt et al. 2022).
According to the findings of this study and the comparison with GH 3 β-glucosidases from other LAB, BglB seems to be an exo-active enzyme able to hydrolyse terminal, non-reducing β-D-glycosyl residues of substrates (Florindo et al. 2018). This restricted hydrolytic activity could explain the low amounts of D-glucose released from β-glucan. Moreover, the β-glucosidase is most likely active on smaller carbohydrates and not on high-molecular weight β-glucan. However, it might be involved at a later stage of polymer degradation, e.g. after digestion with an endo-glucanase or if (partial) cell lysis occurs (Degeest et al. 2002; Pham et al. 2000). In preliminary experiments, endo- and exo-glucanases of different origin (Trichoderma sp. and Aspergillus oryzae), further including a β-glucosidase from Aspergillus niger, were used for the hydrolysis of the isolated bacterial β-glucan. Among others, the GEM assay (Danielson et al. 2010) was performed and resulted in similarly low D-glucose amounts after enzymatic digestion (data not shown), which again could be associated with the hurdles of β-glucan purification and resuspension.
In conclusion, we have identified and characterised the β-glucosidase BglB of the beer spoiling and β-glucan forming L. brevis TMW 1.2112, an enzyme with a molecular mass of 83.5 kDa which strongly favoured substrates with β-glycosidic bonds and is apparently exo-active. Even though the onset of β-glucan degradation was observed and might be more pronounced after a longer incubation period, the in vivo identification of the enzymes involved in bacterial β-glucan degradation, e.g. by proteomic analysis, is more favourable. Thus, the weak solubility of isolated β-glucan and feasible structural changes are eliminated, and analysis of the enzyme activity under native conditions is enabled. However, given the phylogenetic analysis and characterization of GH 3 β-glucosidases from LABs, it also appears that this enzyme family is not explicitly relevant to EPS degradation.
Data availability Data sharing not applicable.
Code availability Not applicable.
Conflict of interest
The authors have no conflicts of interest to declare.
Ethics approval Not applicable.
Consent to participate Not applicable.
Consent for publication Not applicable.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 6,214.6 | 2022-06-04T00:00:00.000 | [
"Biology",
"Engineering"
] |
Performance Analysis of Multi-User OTFS, OTSM, and Single Carrier in Uplink
In this paper, we develop a multiple access (MA) mechanism where users transmit upsampled and circularly shifted orthogonal frequency division multiplexing (OFDM) signals in the uplink. These signals pass through MA doubly-dispersive channel and get combined at the base station (BS). We show that the composite signal received at the BS forms an equivalent single-user orthogonal time frequency space (OTFS) system. We show that such low-complexity transmission of OFDM symbols in the uplink from low-capability devices yields diversity gain as the OTFS. We extend this MA scheme for orthogonal time sequency multiplexing (OTSM) and block based single carrier (block SC) transmissions, where the composite received signal at the BS forms an equivalent single-user OTSM or block SC received signal depending on the transmission. We also develop successive interference cancellation (SIC) and turbo decoding principles-based receivers for multi-user OTFS, OTSM, and block SC transmissions, and a frequency domain SIC receiver for OFDM. We analyze their complexity with both analytical and simulation methods. Further, we compare the multi-user uncoded and coded performance of OTFS, OTSM, and block SC with multi-user OFDM. We also evaluate their performance under the effects of nonlinear high-power amplifiers in multi-user scenarios.
I. INTRODUCTION
THE dawn of 6G excites several use cases, broadly categorized into the scenarios of eMBB+, mMTC+, and URLLC+ [1], [2], [3]. The eMBB+ use cases, such as holographic telepresence, require high data rates of up to a few Tbps [4]. The mMTC+ scenario describes use cases such as unmanned aerial vehicles (UAVs), smart cities, healthcare, and smart industries, which require high mobility and massive connectivity with support for reduced-capability IoT (Internet of Things) devices [5]. The URLLC+ covers remote robotic surgeries and autonomous driving, which require high reliability and low latency on the order of a fraction of a millisecond [6]. For 6G to support these diversified scenarios, the investigation of waveforms for the air interface, together with an efficient multiple access scheme that supports various mobility conditions, high reliability, and connectivity to a large number of low-power and low-capability devices, is critical.
The waveforms from 2G to 5G have evolved from Gaussian minimum shift keying (GMSK) in 2G, to spread spectrumbased wideband code division multiple access (WCDMA) in 3G, up to the popular orthogonal frequency division multiplexing (OFDM) in 4G.In 5G, a variant of OFDM is used, which has flexible subcarrier bandwidth [7] to handle high Doppler and phase noise, while other contending waveforms such as FBMC, UFMC, GFDM [8] were explored in the past decade.In recent years, a new waveform, namely orthogonal time frequency space (OTFS) [9] has been proposed, which has attracted researchers' attention across the world.
In OTFS, information-bearing quadrature amplitude modulation (QAM) symbols are placed in a two-dimensional (2D) delay-Doppler (de-Do) domain (grid), in contrast to the popular time-frequency (TF) domain used in OFDM. OTFS is known to outperform OFDM by several dB [10], [11], [12] due to its resilience to Doppler effects and its ability to extract the diversity gain. This promises to meet some of the requirements of 6G, such as high mobility and reliability. However, to support various use cases of 6G, where user devices range from high-capability to simple low-cost devices, which are massive in number and characterized by low-rate, low-energy, and low peak-to-average power ratio (PAPR) requirements, efficient multiple access (MA) schemes for OTFS need to be developed [13], which is the goal of this work.
The OTFS scheme presented in [14] multiplexes users' data in the de-Do domain using delay division multiplexing (deDM) and Doppler division multiplexing (DoDM) with guard delay bins and guard Doppler bins.In [15], an interleaved delay-Doppler multiple access (IDDMA) scheme is proposed, where users are allocated interleaved de-Do resource blocks with no guard bins.Interleaved time-frequency multiple access (ITFMA) is proposed in [16], where users are allocated interleaved TF resource blocks.Angle-de-Do domain resource allocation is considered in [17] for massive MIMO scenarios.Recently, time-reversal precoding-based MA schemes [18], [19] have also been developed to improve spectral efficiency and support high data rates, however the schemes presented are applicable for downlink and use channel state information (CSI) at transmitter.
In OTFS, the discrete symplectic Fourier transform (DSFT) is used to transform QAM symbols from the de-Do domain to the TF domain.The unitary transform DSFT acts as orthogonal precoding (OP) for QAM symbols before being placed in the TF domain.In [20], the DSFT is replaced with the Walsh-Hadamard Transform (WHT), while in [21], sparse WHT is used in place of DSFT, both of which have similar error performance as that of OTFS.
The OFDM operation is performed on the TF symbols obtained after the DSFT to generate the time domain OTFS signal.This two-step procedure for converting de-Do symbols to the time domain is simplified by using an IDFT along the Doppler dimension of QAM symbols in the de-Do domain, followed by vectorization of the resultant 2D symbols [12], [22] to obtain the time domain OTFS signal.As the IDFT is a unitary transform, [23] explored replacing IDFT with WHT, and the corresponding scheme is named orthogonal time sequency multiplexing (OTSM), which is reported to perform as OTFS in terms of error probability.In a similar way, the identity matrix may be used in place of IDFT / WHT, and the resultant waveform becomes block-based single carrier (block SC), which is compared with OTFS in [21], [23], [24], [25], and [26].
The peak-to-average power ratio (PAPR) is an important characteristic of waveforms, as it affects the bit error rate (BER), energy efficiency, and link budget in the presence of non-linear effects of high-power amplifiers (HPAs).In the patent [27], a method to reduce the PAPR of an MA scheme with DoDM-based resource allocation is described.The works [28] and [29] analyze the PAPR for OTFS, while [30] studies the effects of non-linear HPAs on its performance.
Receiver signal processing is an integral part of waveform/air-interface design, and hence we analyse receiver performance for MA schemes for OTFS, OTSM, and block SC.The work [31] uses MMSE for OTFS uplink multi-user reception, while [16] employs MMSE as well as maximum likelihood detection (known to be highly complex) based receivers.
The works on MA and multi-user receivers for OTFS and related schemes presented above have some limitations, which are discussed below.The use of guard bins in [14] for OTFS MA results in a loss of spectral efficiency.The MA schemes in [15], [16], [17], and [27] require users to generate an entire OTFS frame for transmission, irrespective of the allocation size of resources, leading to unnecessary high complexity for IoT devices.The MA scheme in [17] requires a larger number of antennas at BS than the number of users, and the scheme is relevant only for massive MIMO scenarios.The OTFS-related waveforms, namely WHT and sparse WHT based OP schemes in [20], [21], and OTSM in [23], are compared with OTFS in only single-user scenarios, while the performance comparison of MA schemes is not available in the literature to the best of the authors' knowledge.Although [21], [23], [24], [25], [26] show the performance of OTFS and block SC, the analysis is limited to single-user scenarios.The PAPR analysis of OTFS and the performance of OTFS under HPA non-linear effects presented in [28], [29], and [30] are also limited to single-user scenarios.The multi-user receivers for OTFS in [16] and [31] were designed by considering ideal transmit and receive pulse shapes, rendering them unrealizable in practical situations.These works present only the uncoded error performance where degradation in the probability of error is reported with an increasing number of users.To the best of the authors' knowledge, the coded performance for OTFS in multi-user scenarios is not yet available in the literature.The works [20], [21], [24], [32], [33], and [34] describe low complexity OTFS receivers for practical pulse shapes; however, they are designed for processing signals from single-user.
Motivated by the potential of OTFS and similar OP-based schemes to be contenders for the air interface of 6G, and considering the limitations of the available literature, we investigate MA schemes for OTFS and related waveforms which are spectrally efficient and equally applicable to low-capability IoT and regular devices. In the mechanism discussed here, one OTFS frame is split into M discrete-time OFDM symbols [35], and a group of such symbols is allocated to a user in the uplink. Each OFDM symbol carries N QAM symbols. This scheme, in contrast, does not require any guard bins as used in [14]. The transmission complexity of a user is limited to the number of OFDM symbols that the user transmits, which is similar to the transmission complexity of 4G, WiFi, and existing 5G systems. We further analyze an MA scheme for OTSM and block SC. We develop successive interference cancellation (SIC) and turbo decoding principle based multi-user receiver structures for practical rectangular pulse shapes, which are used for OTFS, OTSM, and block SC. We also develop a mathematical model for multi-user OFDM and a frequency domain SIC receiver for uplink reception, and compare its error performance with the three waveforms. Finally, we compare OTFS, OTSM, and block SC in terms of PAPR from an uplink perspective and analyze their multi-user performance in the presence of HPA non-linearities using the solid-state power amplifier (SSPA) model [36].
A. Paper Plan
In Section II-A, we introduce the system model, including a brief introduction to OTFS.We relook at OTFS signal generation to derive an MA scheme in II-B.In Section III-A, we describe an MA scheme for OTFS and derive an expression for the multi-user uplink received signal in III-B.MA schemes for OTSM and block SC are developed in III-C.A description of MA for OFDM is given in III-D.In Section IV, we develop SIC and turbo iterative receivers for multi-user reception of OTFS, OTSM, and block SC, and a frequency domain SIC receiver for OFDM.We then compare their complexity using analytical expressions.In Section V, we discuss the simulation results, including a comparison of receiver performance and complexity, as well as the PAPR and HPA non-linear effects on the performance of OTFS, OTSM, and block SC.Finally, we provide our conclusions in Section VI.
B. Notations
Throughout the paper, we use the following notations. Boldface lowercase letters such as x denote vectors, while X and x denote matrices and scalars, respectively. N[a b] is the set of natural numbers in the range between a and b. C^{M×N} denotes the set of all matrices of dimension M × N with complex entries. X(i, j) represents the element in the i-th row and j-th column of the matrix X. x(t) (x[n]) represents a continuous (discrete) time signal as a function of t (n). I_N, F_N, and W_N denote the identity, normalized discrete Fourier transform (DFT), and normalized Walsh-Hadamard matrices of order N, respectively. A zero vector of length N is represented by 0_N. The juxtaposition xy denotes multiplication of x and y. (·)^T and (·)^H represent the transpose and conjugate transpose operations, respectively. vec(X) vectorizes X by stacking all the columns serially, while vec^{-1}_{M×N}(x) produces a matrix of size M × N from the vector x of length MN, which is the inverse of vectorization. If x ∈ C^{N×1}, diag{x} is an N × N diagonal matrix. The operator (·)_N represents the modulo-N operation, which results in an integer between 0 and N − 1.
A. Orthogonal Time Frequency Space
The source bits are channel encoded using low-density parity-check (LDPC) codes. The coded bits are mapped to QAM symbols, which are placed in the de-Do domain over a grid {(l∆τ, k∆ν) : l = 0, 1, . . ., M − 1, k = 0, 1, . . ., N − 1} of size M × N for transmission in each frame. The grid parameters M and N represent the number of delay and Doppler bins, and ∆τ and ∆ν represent the delay and Doppler resolutions, respectively. The bandwidth B and the frame duration T_f are given as B = M∆f and T_f = NT, where ∆f = 1/T. The QAM symbols, denoted as x(l, k), are arranged in matrix form as expressed in (2), according to their placement over the de-Do grid, where row indices represent delays and column indices represent Dopplers.
The generation of the time domain signal corresponding to the symbols in de-Do domain, which is regarded as OTFS modulation, given in discrete-time [37] as where s ∈ C M N ×1 is the transmission vector whose elements are the samples of the OTFS signal s[n] for n = 0, 1, . . ., M N − 1 and G is an M × M diagonal matrix diag{[g(0), g(∆τ ), . . ., g((M −1)∆τ )] T } for a transmit pulse shape g(t) of duration T = M ∆τ .For practical rectangular pulse shapes with unit amplitude, G reduces to an identity matrix I M .Thus, we can write (3) as where S is the delay-time matrix.CP included OTFS signal is transmitted as an analog waveform s(t) using a pulseshape c(t) as, where l cp is the CP length.Then the received time domain signal for a doubly-dispersive channel with an input delay spread function g(t, τ ) from [38] is where v(t) is the continuous time white Gaussian noise with power spectral density (PSD) of N 0 .For a finite P number of propagation paths and one Doppler frequency per path, the g(t, τ ) as in [39], becomes where a p , τ p , and ν p are the complex gain, delay, and Doppler frequency associated with the p th path, respectively.At the receiver, the output of the matched filter with impulse response c * (−t) is sampled at intervals of ∆τ .Thus, we obtain where g[n, l] is given in (91) and w[n] is a sample of filtered noise with variance σ 2 w = N 0 .The proof of ( 9) is given in the appendix.After discarding the CP samples, the received signal y = {y[n]} M N −1 n=0 can be expressed as where , and where H ∈ C M N ×M N is the single-user time domain channel matrix, with Π as a cyclic (forward) permutation matrix of order M N [37], and Now, we explore OTFS signal so as to identify the possible MA scheme that is inherently present in the signal.
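To make the modulation step above concrete, the following minimal numpy sketch generates the discrete-time OTFS signal from a delay-Doppler symbol matrix, assuming a rectangular transmit pulse (G = I_M) so that the delay-time matrix is simply X F_N^H; CP handling and the channel are omitted.

```python
import numpy as np

def otfs_modulate(X):
    # X: M x N delay-Doppler symbol matrix (rows = delay bins, columns = Doppler bins)
    M, N = X.shape
    F_N = np.fft.fft(np.eye(N)) / np.sqrt(N)      # normalized DFT matrix
    S = X @ F_N.conj().T                           # IDFT along the Doppler dimension -> delay-time matrix
    return S.flatten(order='F')                    # column-wise vectorization -> MN time-domain samples

def otfs_demodulate(s, M, N):
    # Inverse operation: reshape into the delay-time matrix and apply the DFT along Doppler.
    F_N = np.fft.fft(np.eye(N)) / np.sqrt(N)
    return s.reshape((M, N), order='F') @ F_N
```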
We observe that the l-th term of (15) is vec(e_l x_l^T), which represents an upsampled and circularly shifted OFDM symbol x_l of length MN samples. Therefore, one may consider the OTFS signal as an orthogonal aggregation of the non-zero samples of M upsampled OFDM symbols, each of which is circularly shifted with an orthogonal delay l ∈ [0, M − 1].
Since OFDM is already used as the signaling mechanism in WiFi, 4G, and 5G, we can consider an MA scheme for OTFS where each user generates such a vec(e_l x_l^T) to form a component of the OTFS signal, so that the transition in technology is minimal. Unlike [15], [16], [17], and [27], a user does not need to generate the complete OTFS signal for transmission. Furthermore, since OFDM technology is mature and is used in IoT devices, including NB-IoT supported by 5G [5], we can extend a similar philosophy to OTFS.
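A small numerical check of this decomposition is sketched below: the OTFS signal of a random delay-Doppler frame is reproduced as the sum of M upsampled and circularly shifted N-point OFDM symbols, one per delay row (rectangular pulses assumed; the dimensions are chosen arbitrarily for illustration).

```python
import numpy as np

M, N = 4, 8
X = (np.random.randn(M, N) + 1j * np.random.randn(M, N)) / np.sqrt(2)
F_N = np.fft.fft(np.eye(N)) / np.sqrt(N)

s_otfs = (X @ F_N.conj().T).flatten(order='F')     # OTFS modulation as in Section II-A

s_sum = np.zeros(M * N, dtype=complex)
for l in range(M):
    x_l = F_N.conj().T @ X[l, :]                   # N-point OFDM symbol carrying delay row l
    up = np.zeros(M * N, dtype=complex)
    up[::M] = x_l                                   # upsample by M
    s_sum += np.roll(up, l)                         # circular shift by the delay index l

assert np.allclose(s_otfs, s_sum)                   # the M shifted OFDM symbols add up to the OTFS signal
```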
Each part of the OTFS signal will go through a different channel corresponding to the link between each user and the base station (BS), in contrast to (10), where the entire OTFS signal (all parts) goes through the same channel. It remains to be seen whether the signals received at the BS combine to form an OTFS signal, which is analyzed in Section III.
III. MULTIPLE ACCESS SCHEMES
In the following subsections, we describe an MA scheme for OTFS in III-A and derive an expression for the combined received signal at the BS in III-B.We further develop an MA scheme for OTSM and block SC in III-C, based on the scheme described for OTFS.Finally, we provide a signal model for an MA scheme with OFDM in III-D.
A. OTFS
The circular delays l = 0, 1, . . ., M − 1, which are the row indices of X in (2), are partitioned into J sets and allocated to J users. The set of delay indices allocated to the j-th user, for j = 1, 2, . . ., J, is denoted as Ω_j, and its cardinality as |Ω_j|. The following orthogonality condition is ensured while allocating delay sets to users.
Each j-th user considers a block of QAM symbols x_j ∈ C^{|Ω_j|N×1} for transmission in a frame. The vector x_j is divided into |Ω_j| vectors of length N, and each of them is assigned a row index l ∈ Ω_j using an operator Ψ_j as in (17), where x_{j,l} = [x_j(l, 0), x_j(l, 1), . . ., x_j(l, N − 1)]^T are the QAM symbols of the j-th user associated with the index l. The j-th user's de-Do plane symbol matrix X_j is then constructed as in (18). The de-Do plane symbol matrix X_MU, formed by aggregating all symbol matrices of the J users, is defined in (19). Since the orthogonality condition in (16) is ensured for the allocation of delays to the users, X_j can be obtained from X_MU as in (20), where D_j is an M × M diagonal matrix given in (21). The j-th user's delay-time matrix is obtained using (5) as in (22), and the j-th user's transmission vector is given in (23). After adding the CP, the samples of s_j are transmitted as an analog waveform s_j(t), following (6).
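A minimal sketch of the per-user transmitter described above is given below, assuming rectangular pulses and a simple row-major mapping for the operator Ψ_j (the exact mapping is not visible in the extracted equations); CP insertion is omitted.

```python
import numpy as np

def user_tx_vector(x_j, omega_j, M, N):
    # x_j: the j-th user's block of |Omega_j|*N QAM symbols; omega_j: allocated delay indices
    F_N = np.fft.fft(np.eye(N)) / np.sqrt(N)
    X_j = np.zeros((M, N), dtype=complex)
    X_j[np.asarray(omega_j), :] = np.asarray(x_j).reshape(len(omega_j), N)  # N symbols per allocated delay row
    S_j = X_j @ F_N.conj().T                                                # delay-time matrix of user j
    return S_j.flatten(order='F')                                           # transmission vector s_j (CP added afterwards)
```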
B. Multi-User Reception at the BS
The signal received from the J users at the BS in discrete time, after discarding the CP and assuming ideal synchronization, is given in (24), where H_j is the time domain channel matrix for the j-th user, as given in (11). This is explained with the help of Fig. 1 for a 4-user scenario. In the figure, users U1, U2, U3, and U4, with channel matrices H_1, H_2, H_3, and H_4, transmit s_1, s_2, s_3, and s_4, respectively. Using (22) and (23), (24) can be written as (25); using (20) we obtain (26), and using the identity vec(BA) = (I ⊗ B) vec(A), we get (27). We then define H_eff in (28), which is a combination of the J channel matrices. H_eff can be referred to as the effective time domain channel matrix of the multiple access channel, and it can alternatively be expressed with a set of MN column vectors as in (29) and (30), where h_{j,n} is the n-th column vector of H_j. Further, we define s_MU in (31); its last two terms are represented by (19) and (23).
The vector s_MU describes an OTFS signal for the symbol matrix X_MU, which is an aggregation of the individual users' transmission vectors. Using (27), (28), and (31), the signal received in the uplink can be expressed as (32). It shows that the upsampled and circularly shifted OFDM symbols from different users, after going through the multiple access channels H_1, . . ., H_J and being combined at the receiver, form an equivalent OTFS system, similar to (10). Eqs. (28) to (32) are illustrated for a 2-user scenario with M = 2, N = 2 in Fig. 2, where noise is ignored. Here, each user transmits one OFDM symbol of two samples with upsampling and circular shifts. We see that two column vectors from H_1 and two from H_2 form H_eff. As H_eff contains column vectors from multiple users' channel matrices, which are drawn from different impulse responses, we hypothesize that H_eff is more diversified than a single-user channel matrix H_j. We verify this in the results section by comparing single-user and multi-user error performance.
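The column-wise structure of H_eff can be sketched as follows: since sample n of s_MU originates from the user that owns delay index (n mod M), column n of H_eff is taken from that user's channel matrix, consistent with (28)-(30). The function below is a hypothetical helper illustrating this construction.

```python
import numpy as np

def build_h_eff(H_list, owner_of_delay, M, N):
    # H_list[j]: MN x MN time-domain channel matrix of user j
    # owner_of_delay[l]: index of the user allocated delay l, for l = 0..M-1
    MN = M * N
    H_eff = np.zeros((MN, MN), dtype=complex)
    for n in range(MN):
        j = owner_of_delay[n % M]        # user that transmits sample n of s_MU
        H_eff[:, n] = H_list[j][:, n]
    return H_eff
```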
C. MA Scheme for OTFS-Related Waveforms
We extend the described MA scheme for OTFS in III-A to OTSM [23] and block SC.These are closely related to OTFS, and have a unified expression for signal generation given below.
where, P is an N × N matrix given for each waveform in Table I.Accordingly, the QAM symbols in X which are assumed to be placed in the de-Do domain for OTFS, are considered to be in the delay-sequency domain for OTSM [23], and the delay-time domain for the block SC.We refer to the operation in (33) as waveform modulation and (34) below as waveform demodulation.
Therefore, we can replace the IDFT (F H N ) matrix in (22) with P to express the j th user transmission vector for all three waveforms.Similarly, P replaces the F H N in the expression for s M U in (31).The expression for the effective channel matrix H ef f in (28) remains identical for all three waveforms.
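Under this unified view, the three waveforms differ only in the N × N matrix P. A minimal sketch, assuming the normalizations used elsewhere in the paper (normalized IDFT, normalized Walsh-Hadamard, identity), is given below; Table I itself is not reproduced here.

```python
import numpy as np
from scipy.linalg import hadamard

def waveform_modulate(X, waveform):
    # X: M x N symbol matrix; waveform in {'otfs', 'otsm', 'block_sc'}
    M, N = X.shape
    if waveform == 'otfs':
        P = (np.fft.fft(np.eye(N)) / np.sqrt(N)).conj().T   # normalized IDFT
    elif waveform == 'otsm':
        P = hadamard(N) / np.sqrt(N)                         # normalized Walsh-Hadamard (N a power of two)
    else:
        P = np.eye(N)                                        # block single carrier
    return (X @ P).flatten(order='F')                        # time-domain transmit samples
```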
D. MA Scheme for OFDM
An M-subcarrier CP-OFDM scheme is considered, where a CP is added to each OFDM symbol. Each j-th user is allocated an orthogonal set of subcarriers Ω̃_j, similar to the allocation of delay sets for OTFS (OTSM and block SC). The time domain received signal y_ofdm ∈ C^{M×1} for all J users at the BS, corresponding to one OFDM symbol after discarding the CP, is given in (35), where H̃_j ∈ C^{M×M} is the j-th user's time domain channel matrix for one OFDM symbol of length M samples, following (11), d_j ∈ C^{M×1} is the j-th user's symbol vector, and w_ofdm ∈ C^{M×1} is the noise vector. After the DFT operation at the receiver, ȳ_ofdm = F_M y_ofdm is expressed as (36), where Λ_j = F_M H̃_j F_M^H is a frequency domain channel matrix, D̃_j is the diagonal matrix for Ω̃_j as given in (21) for OTFS (OTSM or block SC), and w̄_ofdm = F_M w_ofdm. Defining the combined symbol vector in (37) and the combined channel matrix in (38), we rewrite (36) as (39). In this work, the frequency domain SIC receiver described in IV-F is used to process ȳ_ofdm to detect each j-th user's symbol vector d_j.
IV. MULTI-USER ITERATIVE RECEIVERS
We describe two receiver architectures for OTFS, OTSM, and block SC: (1) SIC-based and (2) turbo decoding-based.Before the SIC or turbo iterations begin, both receivers require a common signal processing stage.We use Fig. 3 to describe this signal processing stage.In IV-F, we describe frequency domain SIC receiver for OFDM.It is assumed here that the receiver has perfect knowledge of the channel.The H ef f can be estimated at the BS by placing user-specific pilots either in the de-Do domain or time domain, as described in [40] and [11], respectively, during transmission.
A. MMSE Equalization
The received signal y in (32) is first equalized as ȳ = G y, as given in (40), where G is the MMSE equalization matrix given in (41) as G = H_eff^H (H_eff H_eff^H + σ_w² I_MN)^{−1}.
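A direct (non-sparse) sketch of this equalization step, using the G = H^H (H H^H + σ² I)^{-1} form consistent with the matrix Ψ inverted in Section IV-G, is given below; the paper's implementation exploits sparsity rather than a dense inverse.

```python
import numpy as np

def mmse_equalize(y, H_eff, noise_var):
    MN = H_eff.shape[0]
    Psi = H_eff @ H_eff.conj().T + noise_var * np.eye(MN)   # H H^H + sigma_w^2 I
    G = H_eff.conj().T @ np.linalg.inv(Psi)                 # MMSE equalization matrix
    return G @ y                                            # equalized sequence y_bar
```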
B. Gauss-Seidel (GS) Iterative Detector
The GS iterative detector requires the computations in (42) and (43), where L, D, and U are the strictly lower triangular, diagonal, and strictly upper triangular components of the matrix H_eff^H H_eff, respectively. During each u-th GS iteration, where u ∈ N[1 U], the following steps are taken to obtain the estimate of s_MU (denoted as š_MU^(u)). The iteration begins by performing waveform demodulation on the output of the previous iteration as in (44); we set š_MU^(0) = ȳ, which is the MMSE equalized sequence given in (40). The symbol estimates in X̌_MU^(u) are mapped to the nearest constellation points using hard decisions, and the resulting QAM symbols are waveform modulated as in (45), where D is an operator for the hard decisions. An L2 norm of the difference between y and H_eff š_MU^(u) is calculated as ∆e^(u) in (46). If ∆e^(u) < ∆e^(u−1), š_MU^(u) is retained as in (47); if ∆e^(u) ≥ ∆e^(u−1), the iterations are stopped and we continue with š_MU^(u−1) as the GS detector output ŝ_MU.
For single-user scenarios, zero padding (ZP) in the transmission is suggested for OTFS and OTSM to achieve GS iterative detection with low complexity [23]. In ZP, the last l_max rows of the matrix X are assigned zero row vectors, which ensures zeros in the last l_max positions of every subblock of M samples in the OTFS signal of (15). This helps in avoiding inter-subblock interference at the receiver. To obtain a similar advantage with the MA scheme described for OTFS (OTSM or block SC), the users are forbidden from using the last l_max delays of [0 M − 1] for transmission, as shown in Fig. 4. We consider ZP based transmission in this work. Thus, from (28), the effective channel matrix H_eff is lower triangular with a block diagonal structure as in (48), where each H_eff,p is an M × M lower triangular matrix, for p = 0, 1, . . ., N − 1. Using (48), the received vector y in (32) can be divided into N vectors of length M as y^T = [y_sub,0^T, y_sub,1^T, . . ., y_sub,N−1^T], where y_sub,p ∈ C^{M×1}, s_sub,p ∈ C^{M×1}, and w_sub,p ∈ C^{M×1} are the p-th subblocks of y, s_MU, and w, respectively, as in (49). After the steps (44) and (45) and verifying the condition ∆e^(u) < ∆e^(u−1), the estimate of each s_sub,p in the u-th GS iteration is obtained as in (50), where L_p and D_p are the strictly lower triangular and diagonal components of the matrix H_eff,p^H H_eff,p, respectively, š_sub,p^(u) ∈ C^{M×1} is the p-th subblock of š_MU^(u) in (45), and z_p = H_eff,p^H y_sub,p. The estimate of s_MU for ZP based transmissions in each u-th GS iteration is given in (51).
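For reference, a single plain Gauss-Seidel update for one M × M subblock, solving (H^H H) s = H^H y, is sketched below; the paper's detector additionally feeds back hard-decision symbol estimates between iterations (eqs. (44)-(45)), which is omitted here.

```python
import numpy as np

def gs_update(H_sub, y_sub, s_prev):
    # One Gauss-Seidel iteration on a subblock: solve (D + L) s_new = z - U s_prev
    A = H_sub.conj().T @ H_sub
    z = H_sub.conj().T @ y_sub
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)                 # strictly lower triangular part
    U = np.triu(A, k=1)                  # strictly upper triangular part
    return np.linalg.solve(D + L, z - U @ s_prev)
```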
C. User-Wise Demodulation and Channel Decoding
From (20), (22), and (31), we can decouple the estimate of each j-th user's transmission vector ŝ_j from ŝ_MU as ŝ_j = (I_N ⊗ D_j) ŝ_MU. (52) The j-th user's estimated delay-time matrix is then obtained as in (53), and the estimate of the j-th user's symbol matrix is obtained with the waveform demodulation in (54). The estimates of the QAM symbols associated with delay l are given in (55), and the estimates of all the QAM symbols of the j-th user are gathered into x̂_j as in (56). The soft demodulation provides the bit-level log-likelihood ratio (LLR) values for each element x̂_j(γ) of x̂_j as in (57), where b_{j,γ}^α is the LLR value of the α-th bit of the R bits b_{j,γ}^{R−1} b_{j,γ}^{R−2} . . . b_{j,γ}^0, and S_α^1 and S_α^0 are the sets of all constellation points for which the α-th bit is 1 and 0, respectively. The σ_γ² is the noise power associated with x̂_j(γ), which can be set to 1, as all the symbol estimates have equal noise power.
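A small sketch of the per-symbol soft demodulation of (57) is shown below; the constellation points and their bit labels are passed in explicitly, and unit noise power is assumed as stated above.

```python
import numpy as np

def qam_soft_demod(z, points, labels, noise_var=1.0):
    # z: one equalized symbol estimate; points: complex constellation points;
    # labels: matching list of R-bit tuples. Returns one LLR per bit position.
    d2 = np.abs(z - np.asarray(points)) ** 2
    metrics = np.exp(-d2 / noise_var)
    R = len(labels[0])
    llrs = []
    for alpha in range(R):
        bits = np.array([lab[alpha] for lab in labels])
        llrs.append(np.log(metrics[bits == 1].sum() / metrics[bits == 0].sum()))
    return llrs
```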
The bit-level LLR values are stored in a vector b_j of length |Ω_j|NR. If the coded bits are interleaved before being mapped to QAM symbols in the transmitter, then they are de-interleaved at the receiver. In the case of the turbo receiver, the coded bits are interleaved; therefore, the LLR values are de-interleaved before LDPC decoding to obtain b̃_j. This is shown with a dotted block in Fig. 3. On the other hand, in the case of the SIC receiver, interleaving of coded bits is not performed during transmission. As a result, the de-interleaver block is bypassed for SIC, whereby b̃_j = b_j. Each j-th user's LLR values b̃_j are reshaped into a C_l × L_j matrix C̃_j = [c̃_{j,0}, c̃_{j,1}, . . ., c̃_{j,L_j−1}], where C_l is the code block (CB) length and L_j is the number of CBs transmitted by the j-th user. The total number of CBs from all users is given in (58). The LDPC decoder decodes each column of C̃_j, and the resulting output CBs are stored as columns in another C_l × L_j matrix as in (59), where c_{j,η} is the η-th CB of the j-th user obtained from the LDPC decoder, for η = 0, 1, . . ., L_j − 1. The indices of the correctly decoded code blocks (CCBs) and the wrongly decoded code blocks (WCBs) are noted in an L_j × 1 vector a_j as in (60). The total number of CCBs for all users is given in (61), and the total number of WCBs in (62).
In subsections IV-D and IV-E, we describe the multi-user SIC and multi-user turbo iterative receivers, respectively. These receivers utilize the LDPC output C_j to iteratively detect the CBs which were decoded in error for each user.
D. Multi-User SIC Iterative Receiver
In each q-th SIC iteration, where q ∈ N[1 Q], the bits in the CCBs obtained from the LDPC decoder output C_j^(q−1) of the previous iteration are mapped to QAM symbols, while the bits in the WCBs are mapped to zero symbols for each j-th user as in (63), where M is an operator for QAM modulation, which converts the bits in a given CB c_{j,η}^(q) to a vector of QAM symbols. We set C_j^(0) = C_j and a_j^(0) = a_j, which are obtained from the first stage of signal processing given in (59) and (60), respectively. Vectorizing the resulting matrix gives x̃_j^(q) in (64), which is a reconstructed vector for the block of QAM symbols x_j transmitted by the j-th user. Using (17), we obtain the per-delay vectors x̃_{j,l}^(q) in (65). For cancellation of the interference pattern generated by the known symbols in the time domain, the following operation is performed for OTFS and OTSM.
x̃_{j,l}^(q) = 0_N if P ≠ I_N and ℵ(x̃_{j,l}^(q)) < N, where ℵ is an operator that returns the number of non-zero elements in the given vector. Using (18), (22), and (23), the j-th user's transmission vector is reconstructed as in (67), and the combined reconstructed vector for all J users is given in (68). Let us denote A^(q) as the set of indices (positions) of the non-zero elements in s̃_MU^(q), which are expected to be the same as the corresponding elements of s_MU at the identical positions, and B^(q) as the set of indices of the zero elements in s̃_MU^(q), as in (69) and (70). The size of the set A^(q) (B^(q)) increases (decreases) with the total number of CCBs obtained by the receiver. The interference from previous iterations is cancelled by (71). Substituting (32) in (71) and using (69), we get (72). Accordingly, for channel equalization of y^(q), the channel matrix is updated following [41] as in (73) and (74). The MMSE equalization matrix with the updated channel matrix H_eff^(q) is given by (75), and from (40), we obtain (76). Using equation (52), each ŝ_j^(q) is decoupled from ŝ_MU^(q). User-wise waveform demodulation is then performed following eqs. (53) to (56). The LLR values for the symbol estimates are computed using equation (57), and they are reshaped into the matrix C̃_j. Since each column of C̃_j represents a CB, the channel decoder selectively decodes only those columns (CBs) which were incorrectly decoded in the previous iteration. The columns of C_j^(q), which represent the LDPC decoder output in the q-th iteration, are obtained as in (77). The a_j^(q−1)[η] is then updated to a_j^(q)[η] following equation (60), which is used in the next iteration. Similarly, we update the total number of CCBs and WCBs for all users, denoted by L_c^(q) and L_w^(q), respectively, using equations (61) and (62). As iterations progress, L_c^(q) increases with each iteration, and the size of the set A^(q) (B^(q)) increases (decreases). The channel matrix for equalization in (73) becomes more sparse with each SIC iteration, and the complexity of the MMSE operation, using sparse matrix methods, progressively decreases.
The SIC iterations will continue until all the CBs from all users are turned correct, or a maximum number of iterations is reached, or no new CBs are decoded correctly in the present iteration.The sequence of steps followed by the SIC receiver is given in Fig. 5.
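The core cancellation and channel-update step of each SIC iteration (eqs. (71)-(74)) can be sketched as follows, where the reconstructed vector is non-zero only at the positions in the set A^(q); the selective LDPC decoding and CB bookkeeping around it are omitted.

```python
import numpy as np

def sic_cancel(y, H_eff, s_reconstructed):
    # Remove the contribution of already known (correctly decoded) samples and
    # zero the corresponding columns of the channel matrix before re-equalization.
    known = np.flatnonzero(s_reconstructed)                   # index set A^(q)
    y_residual = y - H_eff[:, known] @ s_reconstructed[known]
    H_updated = H_eff.copy()
    H_updated[:, known] = 0.0                                 # only the undetected samples remain
    return y_residual, H_updated
```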
E. Multi-User Turbo Receiver Using GS Detector
The turbo receiver for multi-user reception, based on [24], is described below and explained with the help of the schematic diagram shown in Fig. 6.In each q th turbo iteration, for q ∈ N[1 Q], it obtains the reconstructed s M U using all output CBs of the LDPC decoder from the previous iteration.The C (q−1) j of each j th user is reshaped into vector form as We set C (0) j = C j , obtained from the first stage of signal processing given in (59).As mentioned in Section IV-C, for the turbo receiver case, the channel coded bits are interleaved before mapping them to a QAM symbol in each j th user transmission.Therefore, for reconstruction, the elements in vector c (q−1) j are interleaved, and the resulting c(q−1) j is used for the QAM modulation as Using ( 65), (67), and (68), we obtain s(q−1) M U , which is applied to GS iterative detector as After the final iteration of the GS detector, which is described by equations ( 44), ( 45), (50), and (51), each ŝ(q) j is separated from the output ŝ(q) M U using (52).User-wise waveform demodulation and QAM soft demodulation are performed using eqs.( 53) to (57).The LLR values b (q) j are de-interleaved, and b(q) j is applied to LDPC decoding.Unlike the SIC receiver, the LDPC decoder decodes all the CBs in each of the turbo iterations.The output CBs C (q) j from the LDPC decoder are used for the reconstruction of s M U in the next iteration.The turbo iterations will continue until all the CBs are correctly decoded, or the maximum number of iterations is reached.
F. Frequency Domain SIC Receiver for OFDM
The frequency domain SIC receiver follows the principle of the time domain SIC receiver developed for OTFS (OTSM or block SC) in IV-D. It operates on a frame of N received OFDM symbols {y_ofdm,k}, k = 0, 1, . . ., N − 1. Each k-th received OFDM symbol is expressed, following (39), as in (81), where Λ_eff,k, d_MU,k, and w̄_ofdm,k represent the combined channel matrix as given in (38), the symbol vector as given in (37), and the noise F_M w_ofdm,k, respectively. In each q-th SIC iteration, where q ∈ N[1 Q], the interference cancellation is performed in the frequency domain for the received OFDM symbols as in (82), where {d̃_MU,k^(q)} are the reconstructed {d_MU,k} in the q-th iteration, obtained using the CCBs from the previous iterations following eqs. (63) to (68). The MMSE equalization is then set as in (83), where Λ_eff,k^(q) is the updated channel matrix obtained by following the procedures of the time domain SIC receiver given in eqs. (73) and (74). Each j-th user's estimated symbol vectors {d̂_j,k^(q)}, given in (84), are then used for soft demodulation, and the resulting LLR values are fed to the LDPC decoder. Decoding is selectively performed on those CBs that were incorrectly decoded in the previous iteration, as is done in the time domain SIC receiver. Similarly, the sequence of operations of the frequency domain SIC receiver follows that of the time domain SIC receiver given in Fig. 5.
TABLE II: COMPARISON OF PER-ITERATION COMPLEXITY BETWEEN THE SIC AND TURBO RECEIVERS
G. Implementation Complexity of Receivers
Table II compares the iteration-wise complexity of the SIC and turbo receivers. The GS detection in the turbo receiver requires UMN(2(l_max + 1) + 1 + 2 log2 N) complex multiplications (CMs) for U GS iterations [23]. On the other hand, the MMSE in SIC involves sparse matrix multiplication and inversion of a sparse matrix Ψ = H_eff^(q) H_eff^(q)H + σ_w² I_MN, as given in eqs. (75) and (76). These operations can be achieved with a complexity of O(MN) and of the order of the number of non-zero elements (NNZ) of Ψ, using sparse matrix multiplication and direct Cholesky factorization methods, respectively [42]. Since the sparsity of H_eff^(q) increases with each iteration, the total complexity of the MMSE operation lies between O(MN) and O(MN(2 l_max + 1)), which is close to the complexity of a GS iteration. Since in each q-th iteration the SIC receiver performs detection only for the WCBs of the previous iteration, the waveform demodulation followed by LLR computation is limited to those samples of ŝ_MU^(q) that correspond to the WCBs. Similarly, the LDPC decoding process involves a distinct number of code blocks: L_w^(q−1) for SIC, whereas the LDPC decoding process in the turbo receiver operates on all code blocks (L). Consequently, the complexity per LDPC decoding iteration, based on min-sum algorithms, is O(d_c L_w^(q−1) C_l) for the SIC receiver and O(d_c L C_l) for the turbo receiver, where d_c is the average column weight of the parity check matrix used for LDPC decoding. While the complexity of LDPC decoding per iteration for the turbo receiver remains constant, the number of LDPC decoding iterations varies with each iteration of the turbo loop. The number of LDPC decoding iterations for each of the receivers is given in Fig. 14 in Section V-C, based on simulations for block error rate (BLER) results. Similarly, the number of WCBs L_w^(q−1) which are input for LDPC decoding in the q-th SIC iteration for OTFS is given in Fig. 15. For the reconstruction of s_MU, the SIC receiver can reuse the reconstructed vector from the previous iterations and enhance it with the CCBs newly detected in the current iteration, which are L_c^(q) − L_c^(q−1) in number. In contrast, the turbo receiver requires reconstruction of the entire s_MU, involving procedures such as bit interleaving, QAM modulation, and waveform modulation.
Complexity of Frequency Domain SIC Receiver for OFDM: The constituent operations of the frequency domain SIC are similar to those of the time domain SIC listed in Table II, except for the MMSE operation. The matrix inversion in the frequency domain MMSE, as given in (83), can be implemented using the LDL^H factorization. To achieve this, Λ_eff is replaced with a banded matrix formed from the main diagonal, D subdiagonals, and D superdiagonals of Λ_eff, as described in [43]. The resulting complexity of the frequency domain MMSE operation for a frame of N OFDM symbols is MN(8D² + 22D + 4) complex operations. We use D = 5 for the complexity comparisons made in V-C using simulations. This MMSE complexity is higher than that of the sparse MMSE operation and the GS iterations, and it dominates the overall complexity of the frequency domain SIC receiver.
V. SIMULATION RESULTS AND PERFORMANCE COMPARISONS
We begin this section with the performance analysis of the MA schemes in terms of uncoded error probability, as discussed in Section V-A.In Section V-B, we present the LDPC forward error correction (FEC) coded performance of the receivers in terms of the block error rate (BLER), which is the ratio of the total number of CBs in error for all users to the total number of CBs transmitted by all users as Lw L .The processing complexity of the three receiver architectures is compared in Section V-C.We compare the PAPR of the OTFS, OTSM, and Block SC, and analyze their performance in the presence of a non-linear HPA effects in Section V-D.
The configuration used for the Monte Carlo based simulation analysis is given in Table III. The Doppler frequency for each path of the EVA channel was generated using the Jakes model, defined as ν_p = ν_max cos θ_p, where θ_p is uniformly distributed over [−π, π] and ν_max is the maximum Doppler shift. The delays for allocation are equally shared among users, and the allocated delay indices are adjacent. However, it is also possible to allocate varying numbers of non-adjacent delays to users while satisfying the condition in (16).
A. Uncoded BER Comparison
The uncoded BER for OTFS, OTSM, and block SC is computed by hard thresholding the LLR values bj before LDPC decoding in the first stage of signal processing, before SIC or turbo iterations begin.For OFDM, the uncoded BER is determined within the first iteration of the frequency domain SIC receiver, where hard thresholding is applied to the LLR values before LDPC decoding.Fig. 7 shows the uncoded BER performance of these waveforms for different numbers of users.In single-user scenarios, OTFS, OTSM, and block SC outperform OFDM.Both OTFS and OTSM exhibit identical performance, while block SC has relatively poorer performance.However, in multiuser scenarios, the uncoded BER performance for OFDM remains unaffected.This is due to the allocation of orthogonal subcarriers and the use of a cyclic prefix for each OFDM symbol, which limits interference between users in both the frequency and time domains.In the case of OTFS, OTSM, and block SC MA schemes, the information-bearing QAM symbols from users interfere in the time domain, leading to an increase in multi-user interference (MUI) with an increasing number of users.Fig. 7 shows a degradation in the uncoded BER with the number of users, similar to the results in [16] for OTFS.Furthermore, we observe that OTFS and OTSM perform similarly in multi-user scenarios as in single-user scenarios.The performance gap between block SC and these waveforms is reduced as the number of users increases.For 64 users, where MUI is more pronounced, OTFS, OTSM, and block SC have similar uncoded BER performance.
Since most broadband wireless systems use FEC codes, thus, before drawing any usable conclusions, it is essential to observe the coded performance of these waveforms, which is discussed in the following subsection.
B. Coded Performance
We compare the coded performance of MA schemes for OTFS, OTSM, and block SC using two iterative receivers developed in IV-D and IV-E, along with the MA performance of OFDM using a frequency domain SIC receiver described in IV-F.For block SC and OFDM with SIC receiver, the coded QAM symbols are randomly interleaved before being placed in their respective delay-time and time-frequency domain.Similarly, during reception, for the block SC and OFDM with the SIC receiver, the QAM symbol estimates are deinterleaved prior to the QAM soft demodulation.
Fig. 8 shows the BLER performance of MA schemes with SIC receiver.We observe that the coded single-user performance of OTFS, OTSM, and block SC are nearly identical, while these outperform OFDM by a significant SNR margin.It is interesting to observe the multi-user performance of the OTFS, OTSM, and block SC is better than their singleuser performance.This is in contrast to their uncoded BER performance shown in Fig. 7.This can be attributed to the conjecture made in Section III-B, i.e. the effective channel matrix H ef f in ( 28) -( 30) has higher diversity compared to the single-user channel matrix H of (11) and the MUI cancellation performed by the SIC receiver.On the other hand, BLER for OFDM degrades in the multi-user scenario where each user receives only a fraction of the bandwidth, limiting the available channel frequency diversity for coded performance.
The performance of the multi-user turbo receiver is shown in Fig. 9 for OTFS, OTSM, and block SC.For the single-user case, it can be observed that the three waveforms have nearly identical performance.In the multi-user scenario, the BLER is improved compared to the single-user case for SNRs above 13 dB.This improvement is due to the weighted subtraction between the received vector y in (50) and reconstructed s(q−1) M U in the turbo loop, which cancels interference similar to the SIC receiver.However, since the generation of s(q−1) M U also involves WCBs from the previous turbo iteration carrying MUI, the noise combined with the MUI reduces turbo receiver performance in noise-dominated scenarios (for low SNRs).This can be observed in the Fig. 9 for SNRs ranging from 11 dB to 13 dB.
We see that the coded performance of block SC for both receivers is comparable to OTFS and OTSM over the timevarying channels.This can be understood by comparing the uncoded and coded error performance of OTFS and block SC under different mobility conditions, as shown in Fig. 10.We observe that the uncoded BER of block SC is unaffected by mobility, while the uncoded BER of OTFS improves with mobility.This is because each information-bearing QAM symbol in block SC is spread across the entire bandwidth, hence the frequency dispersion caused by mobility does not induce any inter-Doppler interference (IDI) between the QAM symbols.However, it lacks time diversity since these symbols are limited to an interval of ∆τ = 1 B .Whereas in OTFS, the QAM symbols are spread across the entire time-frequency domain, resulting in both frequency and time diversity which helps it in performing better than block SC in terms of uncoded BER.We also observe that the coded performance of block SC and OTFS under different mobility conditions is nearly identical, which is in contrast to the uncoded BER results.This is because the FEC encoded blocks spread in the time domain help coded block SC achieve time diversity, in addition to its inherent frequency diversity.Fig. 11 compares the coded BLER of OTFS and block SC under different mobility conditions.We observe that, like the coded BER, the OTFS and block SC have similar BLER performance.
Comparing Fig. 8 and Fig. 9, we can observe that both the SIC and turbo receivers exhibit similar BLER performance in the presented multi-user scenarios. However, it is necessary to examine their performance in scenarios with a high number of users, where the number of CBs per user is very small. In such scenarios, even if the channel condition is good for a user, the receiver will not be able to clear much interference by utilizing the correctly received CBs from that user. This results in high residual interference that cannot be overcome by the diversity available in H_eff. Fig. 12 shows the BLER performance of OTFS and block SC for both the SIC and turbo receivers for scenarios with more than 64 users. In the case of OTFS, we see that up to 72 users, where the number of CBs per user is 4, both receivers provide improved performance. However, the improvement is marginal compared to the 64-user scenario, where the number of CBs per user is 5. A degradation in performance appears for 100 users and 160 users, where the number of CBs per user is 3 and 2, respectively, and it is significantly higher for the turbo receiver than for the SIC receiver. In the case of block SC, we can observe a significant improvement in BLER performance when using the SIC receiver for a high number of users compared to OTFS. Additionally, block SC supports a higher number of users than OTFS. We see improved performance for block SC for up to 160 users. Even for the 246-user scenario, the degradation is minimal, and the performance is still close to that of OTFS with 160 users. However, with the turbo receiver, block SC is only marginally better than OTFS. Furthermore, we observe that across both waveforms, the degradation in performance with a high number of users is significantly less for the SIC receiver than for the turbo receiver. The BLER performance of the SIC receiver remains better than that of the single-user scenarios, providing BLERs of ∼ 10⁻⁵.
C. Receivers Complexity Comparison From Simulations
We compare the complexities of the receivers for the scenario with 64 users. Fig. 13 shows the average number of iterations performed by each receiver per frame under different SNR conditions. We see that the SIC receiver requires fewer iterations than the turbo receiver, and the difference is significant in the low SNR region. For OTFS, at an SNR of 15 dB, the number of turbo iterations is 2.3 times higher than the number of SIC iterations. Fig. 14 shows the average number of LDPC decoding iterations required per CB per frame for the three receivers. We observe that, again, the SIC-based receivers require fewer decoding iterations than the turbo receiver, and the difference is significant at low SNR points. At an SNR of 15 dB, SIC requires 40% fewer LDPC decoding iterations than the turbo receiver. Fig. 15 shows the average number of WCBs L_w^(q−1) that are input for LDPC selective decoding in each SIC iteration for OTFS under varying SNR conditions. We see that as the number of SIC iterations increases, the number of WCBs decreases. From the results it can be said that 6 iterations are sufficient, beyond which the improvement is negligible. Now we compare the total order of complexity per transmit frame for each receiver to process the received signal. This is determined by multiplying the average number of iterations performed by the receivers (Fig. 13) by the average per-iteration complexity of the receivers. The latter term is computed from the expressions given in Table II, along with the average count of LDPC decoding iterations (Fig. 14) and the average count of WCBs L_w^(q−1) for each q-th SIC iteration (Fig. 15). The result is shown in Fig. 16. We observe that the overall complexity order of the time domain SIC is lower than that of the turbo receiver by a factor of ∼ 10 at all SNR points. This is because the per-iteration complexity of SIC gradually decreases as iterations progress, combined with the decreasing average number of SIC iterations with increasing SNR. On the other hand, the frequency domain SIC receiver, which uses non-sparse matrix-based methods for the MMSE operation, has high complexity, comparable to that of the turbo receiver.
Therefore, the time domain SIC receiver has significantly lower complexity than the turbo receiver, while providing similar BLER performance. Having discussed the performance and complexity of the multi-user receivers, we move on to the next section, where we analyze the effects of non-linear HPAs on OTFS, OTSM, and block SC.
D. Effect of HPA Non-Linearities
Fig. 17 shows the complementary cumulative distribution function (CCDF) curves of the PAPR of OTFS, OTSM, and block SC in both single-user and 8-user scenarios. It is observed that OTFS and OTSM have identical PAPR distributions in both cases. In the single-user scenario, OTFS and OTSM have a PAPR of approximately 11.5 dB at the 90th percentile, while block SC has around 3 dB. For 8 users, each user has only 61 IDFTs per transmission, resulting in 61N non-zero samples out of MN samples. Consequently, the 90th-percentile PAPR of the 8-user OTFS case is approximately 1 dB less than that of the single-user case. We now look at the BLER performance of the three waveforms when an SSPA [36] based HPA is used in transmission.
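The PAPR figures quoted above are per-frame statistics; a minimal sketch of how the PAPR of a discrete-time frame and its CCDF can be estimated is given below (the oversampling normally used for accurate PAPR measurement is omitted).

```python
import numpy as np

def papr_db(s):
    # Peak-to-average power ratio of one discrete-time frame, in dB.
    p = np.abs(s) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def papr_ccdf(frames, threshold_db):
    # Empirical CCDF: fraction of frames whose PAPR exceeds the given threshold.
    values = np.array([papr_db(s) for s in frames])
    return np.mean(values > threshold_db)
```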
Fig. 18 shows the BLER performance of the MA schemes for the SIC receiver in the 8-user scenario with the non-linear HPA at an output backoff (OBO) of 4 dB, as well as with an ideal HPA. It can be observed that the BLER performance of block SC with the non-linear HPA is much closer to that of the ideal HPA than the performance of OTFS and OTSM.
Fig. 18. BLER performance comparison using the SIC receiver with and without HPA non-linear effects.
Fig. 19. TD curves for a targeted BLER of 10⁻³ with the SSPA model [36].
As the OBO increases, the gap between the error performance of the MA schemes with non-ideal and ideal HPAs decreases. However, increasing the OBO also decreases the transmit SNR, leading to an increase in total degradation (TD). TD is defined as the sum (in dB) of the OBO and the SNR gap between the performances with ideal and non-linear HPAs, for a given targeted level of performance [46]: TD = (SNR)_non-ideal HPA − (SNR)_ideal HPA + OBO (in dB). (85) Fig. 19 shows the TD versus OBO curves of the three MA schemes using the SIC receiver in the 8-user scenario, for a targeted BLER of 10⁻³. It can be observed that the TD curves for OTFS and OTSM are similar, as their PAPR distributions are close to each other. The TD is minimum for block SC at an OBO of 1 dB, and this minimum is lower than the minimum TD for OTFS and OTSM, which occurs at an OBO of 4 dB, by approximately 2.4 dB. This significant SNR advantage of block SC in practical scenarios makes it an alternative to OTFS and OTSM.
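As a worked example of (85), if a scheme needed 16 dB of SNR with the non-linear HPA against 13 dB with an ideal HPA at an OBO of 4 dB to reach the target BLER (hypothetical numbers, not taken from Fig. 19), its total degradation would be TD = 16 − 13 + 4 = 7 dB.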
VI. CONCLUSION
In accordance with the goal of this work, we have developed a multiple access scheme in which each user generates an FEC-coded, QAM-modulated OFDM signal, upsamples it, circularly delays it (as per the user's allocation), and then sends the signal in uplink transmission to the base station (BS). It is shown through an analytical system model that the composite received signal at the BS has the form of the received signal of an equivalent single-user OTFS system. Further, a user may flexibly use Walsh-Hadamard spreading in place of OFDM, as explained in Section III-C, in which case the signals combine at the BS to produce an effective single-user OTSM received signal. Likewise, within this framework a user may send the signal without any spreading, resulting in an equivalent single-user block SC received signal at the BS. The multi-user SIC and turbo receivers developed in this work are able to extract the diversity gain available in the three waveforms and also provide better coded performance in multi-user scenarios than in single-user scenarios by exploiting multi-user channel diversity. This novel MA approach enables even low-capability devices to enjoy the diversity gain available with OTFS, OTSM, and block SC, which in existing works was limited to devices with high signal-processing capability.
The single-user and multi-user coded BLER of OTFS, OTSM, and block SC are shown to be considerably better than that of OFDM, with OTFS and OTSM exhibiting identical error performance. The multi-user SIC receiver is shown to require much less computational complexity than the multi-user turbo receiver. Block SC, in conjunction with the multi-user SIC receiver, supports the highest number of users while achieving the lowest coded BLER among all waveforms under identical operating conditions. It has a simple signal-generation structure, the lowest PAPR, and the highest resilience to HPA non-linearities.
Thus, based on these observations, block single carrier stands out as a potential candidate waveform for the next-generation air interface, as it provides support for multiple access, power efficiency, low-capability devices, high throughput, and reliability, which are requirements of future 6G systems. Investigations into multi-antenna signal processing are envisioned as a future extension of this work.
where, l max is the channel delay tap corresponding to the maximum excess delay, and a p e j2πνpn ′ ∆τ c o (l∆τ − τ p ). (91)
Fig. 3. The first stage of signal processing for both the SIC and turbo receivers.
Fig. 4. The allocation of row indices to users for ZP-based transmission.
Fig. 8. BLER performance of the MA schemes for the SIC receiver.
Fig. 9. BLER performance of the MA schemes for the turbo receiver.
Fig. 10. Uncoded BER and coded BER of block SC and OTFS under different mobility conditions for the turbo receiver.
Fig. 11. BLER performance of block SC and OTFS under different mobility conditions for the turbo receiver.
Fig. 12. BLER performance of the receivers for a high number of users.
Fig. 13. Average number of iterations for the turbo and SIC receivers.
Fig. 14. Average number of LDPC decoding iterations per CB per frame for the turbo and SIC receivers.
Fig. 15. SIC iteration-wise average number of CBs input for LDPC decoding at varying SNRs for OTFS in a 64-user scenario.
TABLE I. MATRIX P FOR OTFS-RELATED WAVEFORMS | 13,192.2 | 2024-03-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Improving deep learning method for biomedical named entity recognition by using entity definition information
Background Biomedical named entity recognition (NER) is a fundamental task of biomedical text mining that finds the boundaries of entity mentions in biomedical text and determines their entity type. To accelerate the development of biomedical NER techniques in Spanish, the PharmaCoNER organizers launched a competition to recognize pharmacological substances, compounds, and proteins. Biomedical NER is usually recognized as a sequence labeling task, and almost all state-of-the-art sequence labeling methods ignore the meaning of different entity types. In this paper, we investigate some methods to introduce the meaning of entity types in deep learning methods for biomedical NER and apply them to the PharmaCoNER 2019 challenge. The meaning of each entity type is represented by its definition information. Material and method We investigate how to use entity definition information in the following two methods: (1) SQuad-style machine reading comprehension (MRC) methods that treat entity definition information as query and biomedical text as context and predict answer spans as entities. (2) Span-level one-pass (SOne) methods that predict entity spans of one type by one type and introduce entity type meaning, which is represented by entity definition information. All models are trained and tested on the PharmaCoNER 2019 corpus, and their performance is evaluated by strict micro-average precision, recall, and F1-score. Results Entity definition information brings improvements to both SQuad-style MRC and SOne methods by about 0.003 in micro-averaged F1-score. The SQuad-style MRC model using entity definition information as query achieves the best performance with a micro-averaged precision of 0.9225, a recall of 0.9050, and an F1-score of 0.9137, respectively. It outperforms the best model of the PharmaCoNER 2019 challenge by 0.0032 in F1-score. Compared with the state-of-the-art model without using manually-crafted features, our model obtains a 1% improvement in F1-score, which is significant. These results indicate that entity definition information is useful for deep learning methods on biomedical NER. Conclusion Our entity definition information enhanced models achieve the state-of-the-art micro-average F1 score of 0.9137, which implies that entity definition information has a positive impact on biomedical NER detection. In the future, we will explore more entity definition information from knowledge graph.
Background
Biomedical named entity recognition (NER) is a fundamental task of biomedical text mining that identifies biomedical entity mentions of different types in biomedical text. Most biomedical NER studies focus on biomedical text in English. To accelerate the development of Spanish biomedical NER techniques, Martin Krallinger et al. organized a specific challenge for chemical & drug mention recognition in Spanish biomedical text, called PharmaCoNER, in 2019 [1]. Participants were required to recognize the entities in Spanish biomedical text, as shown in Fig. 1.
Biomedical NER is a typical sequence labeling problem, and many state-of-the-art methods have been proposed for it, such as BiLSTM-CRF [2]. Almost all of these methods do not consider the meaning of different entity types, which may benefit biomedical NER. The meaning of each entity type can be represented by its definition. For example, the definition of PROTEINAS in the PharmaCoNER 2019 guideline is: "Las menciones de proteínas y genes incluyen péptidos, hormonas peptídicas y anticuerpos." (Protein and gene mentions include peptides, peptide hormones, and antibodies). In this paper, we explore how to encode entity definition information in two kinds of deep learning methods for NER: (1) SQuad-style MRC methods designed to find a continuous span of entity mentions in a given text for each type, where we use each type's entity definition as the query instead of a naive query generated by simple rules (for convenience, we use MRC to denote SQuad-style MRC in the following sections); and (2) span-level one-pass (SOne) methods that predict entity spans one type at a time, into which we introduce the entity type meaning represented by the entity definition information. The definition information of each type includes the original definition of the type in the guideline and entity mentions in the text, and we compare these variants in the SOne model.
To evaluate the performance of MRC and SOne, we conduct experiments on the PharmaCoNER 2019 corpus. Experiments show that the entity definition information brings improvements to both the MRC and SOne methods, with a gain in micro-averaged F1-score of about 0.003. The MRC method using entity definition information as the query achieves the best performance, with a micro-averaged precision of 0.9225, recall of 0.9050, and F1-score of 0.9137. It outperforms the best model of the PharmaCoNER 2019 challenge by 0.0032 in micro-averaged F1-score.
Related work
The natural language processing (NLP) community has made a great contribution to the development of NER in biomedical text through challenges such as I2B2 (Informatics for Integrating Biology and the Bedside) [3,4], BioCreative (Critical Assessment of Information Extraction systems in Biology) [5,6], SemEval (Semantic Evaluation) [7,8], CCKS (China Conference on Knowledge Graph and Semantic Computing) [9,10] and IberLEF [11]. A large number of methods have been proposed for biomedical NER. Most of them fall into the following three categories: (1) Rule-based methods that extract named entities using specific rules designed by experts; the earlier clinical NLP tools are rule-based systems relying on clinical dictionaries, such as MedLEE [12], KnowledgeMap [13] and MetaMap [14]. (2) Supervised machine learning methods with hand-crafted features, such as Maximum Entropy (ME) [15,16], Support Vector Machines (SVM) [17], CRF [18,19], Hidden Markov Models (HMM) [20,21] and Structural Support Vector Machines (SSVM) [22]. They usually treat NER as a sequence labeling task, which tags a sentence with a label sequence. The common features used in supervised machine learning methods include orthographic information (e.g., capitalization, prefix, suffix and word shape), syntactic information (e.g., POS tags), dictionary information, n-gram information, discourse information (e.g., section information in EHRs) and features generated from unsupervised learning methods [23]. (3) Deep learning methods that can learn features from large unlabeled data without costly feature engineering. Convolutional Neural Networks (CNN) [24], Recurrent Neural Networks (RNN) [25] and Long Short-Term Memory networks (LSTM) [2] have been widely used for biomedical NER and show good performance. Besides the methods mentioned above, there are also other attempts. For example, to tackle the low-resource problem in the biomedical domain, researchers have introduced multi-task learning methods to learn richer information from other tasks, such as NER from other sources, chunking, and POS tagging [26][27][28], and have deployed transfer learning methods to first learn knowledge from related sources and then fine-tune on the target [29][30][31][32][33].
Nowadays, there is an upward trend toward framing NLP tasks in the MRC framework. MRC models [34][35][36] extract answer spans from a context given a pre-defined question. Generally, SQuad-style MRC models can be formalized as predicting the start position and the end position of the answer. Li et al. [37] treat the entity-relation extraction task as multi-turn question answering and propose a unified MRC framework to recognize entities and extract relations. Li et al. [38] propose an MRC method to recognize both flat and nested entities.
Datasets
In this study, all experiments are conducted on the PharmaCoNER 2019 corpus, which was annotated by medicinal chemistry experts according to a pre-defined guideline. The corpus contains 1000 clinical records with 24,654 chemical & drug mentions. It is divided into a training set of 500 records, a development set of 250 records and a test set of 250 records, where the test set was hidden in a background set of 3751 records during the test stage of the competition. In our experiments, we first split each record into sentences using sentence-ending symbols, including '\n', '.', ';', '?', and '!'. About 95% of the sentences are no longer than 230 tokens. The corpus statistics, including the number of records, sentences, and chemical & drug mentions of each type, are listed in Table 1. Note that UNCLEAR mentions were not considered during the competition.
Task definition
Given a sequence X = {x_1, x_2, ..., x_n} of length n, we need to assign a label sequence Y = {y_1, y_2, ..., y_n} to X, where y_i is the possible label of token x_i (e.g., PROTEINAS, NORMALIZABLES, NO_NORMALIZABLES, UNCLEAR).
MRC definition: the sequence labeling problem can be redefined in the MRC framework as follows. For each label type y, its definition information is regarded as a query q_y = {q_0, q_1, ..., q_m} of length m, a sentence X is regarded as the context of q_y, and the span of an entity of type y, x^y_{start:end} = {x_start, x_{start+1}, ..., x_{end-1}, x_end}, is recognized as an answer. The original sequence labeling problem can then be represented by the triple (q_y, X, x^y_{start:end}). The goal of MRC is to find the spans of all entity mentions of all types, given all sentences.
SOne definition: SOne takes the sequence X as input and predicts the spans of all entities one type at a time using a multi-layer pointer network [39]. The number of network layers depends on the number of entity types. For each entity type, we add entity definition information e to enhance SOne by concatenating it to all tokens.
Query generation for MRC
Query generation is critical for MRC, since queries usually contain some prior knowledge (e.g., entity type definitions) about the task. Li et al. [40] introduce and compare various query generation methods, including keywords, Wikipedia, rule-based template filling, synonyms, keywords combined with synonyms, and annotation guideline notes. Their results show that the annotation guideline is the best choice for query generation. Following Li et al. [40], we compare two kinds of query generation: annotation guideline and rule-based template filling. Table 2 shows the generated queries for each type of entity.
Model detail
In this study, we utilize BERT (Bidirectional Encoder Representations from Transformers) [41] as our model backbone. Figure 2 shows the skeleton of the MRC model. Given a query q_y and sentence X, we need to predict the span of every entity of type y, including a start position x^y_start and an end position x^y_end. The model first takes the input {[CLS], q_y, [SEP], X, [SEP]} (1) and encodes it with BERT, where [CLS] and [SEP] are special BERT tokens denoting the whole sentence and the sentence separator, respectively. Let the last-layer output of BERT be H ∈ R^{s×d}, where s is the total length of [CLS], q_y, [SEP], X and [SEP], and d is the dimension of BERT's last-layer output. The model then predicts the probabilities of the start position and end position by applying linear transformations to H with the trainable parameters W_start and W_end and the biases b_start and b_end.
The predicted start index I_start and end index I_end are taken as the positions with the highest predicted start and end probabilities, respectively. We use MRC_rule and MRC_guideline to denote the MRC model using rule-based template filling for query generation and the MRC model using the annotation guideline as the query, respectively.
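A minimal PyTorch sketch of a span-prediction head of the kind described above. The tensor h is a random placeholder standing in for BERT's last-layer output H over the [CLS] q_y [SEP] X [SEP] input, and the exact parameterization (per-position linear scores followed by a softmax over positions) is an assumption for illustration, not the paper's exact equations.

```python
import torch
import torch.nn as nn

class SpanHead(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.w_start = nn.Linear(hidden_dim, 1)  # plays the role of W_start, b_start
        self.w_end = nn.Linear(hidden_dim, 1)    # plays the role of W_end, b_end

    def forward(self, h: torch.Tensor):
        # h: (batch, seq_len, hidden_dim) -> position probabilities (batch, seq_len)
        start_logits = self.w_start(h).squeeze(-1)
        end_logits = self.w_end(h).squeeze(-1)
        return start_logits.softmax(-1), end_logits.softmax(-1)

batch, seq_len, d = 2, 250, 768
h = torch.randn(batch, seq_len, d)          # placeholder for the BERT output H
p_start, p_end = SpanHead(d)(h)
i_start, i_end = p_start.argmax(-1), p_end.argmax(-1)   # predicted I_start, I_end
print("predicted start indices:", i_start.tolist(), "end indices:", i_end.tolist())
```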
Figure 3 shows the skeleton of the SOne model. In this model, we first use BERT to encode the input sentence X as Z ∈ R^{n×d} (i.e., the output of BERT's last layer), and then concatenate the entity definition information representation e ∈ R^{d_e} to all tokens, where d_e is the dimension of the entity definition information representation. We consider three kinds of entity definition information, corresponding to the SOne_w2v, SOne_rule and SOne_guideline variants reported below: entity-mention word embeddings (each entity type is represented by the mean pooling of the word2vec embeddings of its entity mentions), the rule-based template query, and the annotation-guideline definition. The concatenated representation is [Z; E] (2), where E ∈ R^{n×d_e} consists of n copies of e and [] denotes the concatenation operation.
Finally, the SOne model makes the same start-position and end-position predictions as the MRC model. The only difference is that SOne has four input-shared span predictors with the same structure but different parameters, while MRC has four separate span predictors. The overall objective function of MRC and SOne combines the two prediction losses, L = L_start + L_end, where L_start is the start position prediction loss and L_end is the end position prediction loss.
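A minimal sketch of the SOne-style concatenation: the definition representation e is copied n times, concatenated to every token encoding in Z, and scored by one span predictor per entity type. The linear scorer, the layer shapes, and the example gold positions are assumptions for illustration.

```python
import torch
import torch.nn as nn

d, d_e, n_types = 768, 300, 4
predictors = nn.ModuleList([nn.Linear(d + d_e, 2) for _ in range(n_types)])  # start/end scores per type

def sone_scores(z: torch.Tensor, e: torch.Tensor, type_idx: int):
    # z: (n, d) token encodings from BERT; e: (d_e,) entity definition representation
    big_e = e.unsqueeze(0).expand(z.size(0), -1)   # E: n copies of e
    z_aug = torch.cat([z, big_e], dim=-1)          # [Z; E]
    scores = predictors[type_idx](z_aug)           # (n, 2): start and end logits
    return scores[:, 0].softmax(-1), scores[:, 1].softmax(-1)

z = torch.randn(230, d)                            # placeholder for the BERT encoding Z
e = torch.randn(d_e)                               # placeholder for e
p_start, p_end = sone_scores(z, e, type_idx=0)
# L = L_start + L_end, here with made-up gold start/end positions 12 and 15
loss = nn.functional.nll_loss(p_start.log().unsqueeze(0), torch.tensor([12])) \
     + nn.functional.nll_loss(p_end.log().unsqueeze(0), torch.tensor([15]))
print("example span:", p_start.argmax().item(), p_end.argmax().item(), "loss:", float(loss))
```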
Evaluation metrics
The performance of all models is measured by micro-averaged precision (P), recall (R), and F1-score (F1) under the "exact-match" criterion: P = TP / (TP + FP), R = TP / (TP + FN), and F1 = 2·P·R / (P + R), where TP is the number of true positives, FP is the number of false positives, and FN is the number of false negatives.
These measures can be calculated with the evaluation tool [43] released by the organizers of the PharmaCoNER 2019 challenge.
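A minimal sketch of the strict "exact-match" micro-averaged metrics above: an entity counts as a true positive only if its span and type both match the gold annotation. Entities are represented as (start, end, type) tuples; the example annotations are made up.

```python
def micro_prf(gold: set, pred: set):
    """Micro-averaged precision, recall, and F1 under exact span+type matching."""
    tp = len(gold & pred)
    fp = len(pred - gold)
    fn = len(gold - pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {(3, 5, "PROTEINAS"), (10, 12, "NORMALIZABLES"), (20, 21, "NORMALIZABLES")}
pred = {(3, 5, "PROTEINAS"), (10, 13, "NORMALIZABLES")}   # second span has a boundary error
print("P=%.3f R=%.3f F1=%.3f" % micro_prf(gold, pred))
```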
Experiment setting
Following Xiong's work [44], we first train our models on the training set and development set, and then further fine-tune the model for 20 epochs. The maximum sentence lengths of the MRC model and SOne model are set to 250 and 230, respectively; the difference is due to the query in the MRC model. The learning rate of BERT is set to 2e-5, and the batch size of all models is set to 20. The dimension of the entity definition information representation d_e is set to 300. Other parameters are set to their defaults. The code is available at [45].
Performance evaluation
Table 3 presents the results of our proposed MRC and SOne models (lower part) and summarizes some reported results on the PharmaCoNER corpus (upper part). First, the micro-averaged precision, recall and F1-score of MRC_rule and MRC_guideline are 0.9150, 0.9055, 0.9109 and 0.9225, 0.9050, 0.9137, respectively. The results show that MRC_rule and MRC_guideline outperform the baseline model SOne by 0.44% and 0.72% in micro-averaged F1-score, respectively. The reason why MRC_guideline performs better than MRC_rule lies in the expertness of the guideline definition. For the extended SOne models, all kinds of entity definition information representation bring improvements over the baseline SOne: the micro-averaged F1-score of SOne_rule increases to 0.9120, SOne_guideline increases to 0.9128, and SOne_w2v increases to 0.9094. The overall micro-averaged F1-score improvements of the extended SOne models range from 0.29% to 0.63%.
Second, MRC_guideline outperforms all existing systems on the PharmaCoNER corpus, establishing a new state of the art and pushing the micro-averaged F1-score on the benchmark to 0.9137. This amounts to a 0.32% absolute improvement over the top-1 system of the PharmaCoNER 2019 challenge (developed by us, using many hand-crafted features) and a 1% absolute improvement over our previous system without features [44]. We perform a significance test comparing the model without any features with our MRC or SOne model, and the results show that the improvement is significant (t-test, p < 0.05) [46]. This implies that entity definition information has a positive impact on entity recognition.
Third, Table 4 shows the detailed results for each entity type of MRC_guideline and SOne_guideline. Both MRC_guideline and SOne_guideline perform best on NORMALIZABLES and worst on NO_NORMALIZABLES. Although MRC_guideline outperforms SOne_guideline in micro-averaged F1-score, it predicts all NO_NORMALIZABLES mentions incorrectly. The probable reason is that the queries for NORMALIZABLES and NO_NORMALIZABLES are too similar, which may confuse the model. Overall, MRC_guideline outperforms SOne_guideline on micro-averaged precision but is worse on micro-averaged recall. In addition, analyzing all our proposed models, we find that the SOne model can recognize NO_NORMALIZABLES entities while the MRC model cannot, possibly because the concatenation of the entity definition representation is beneficial for types with few samples.
Error analysis
Compared with previous state-of-the-art models, our model can recognize more named entities due to the domain knowledge embedded in the entity definition information. For example, because of the introduction of the PROTEINAS information, our model can recognize "timoglobulina (thymoglobulin)", "protrombina (prothrombin)" and so on, which are missed by previous state-of-the-art models. To visualize the effect of the added domain knowledge, we calculate the cosine similarity of some words based on their word2vec embeddings. For example, the similarity between "protrombina" and "proteínas" is more than 0.5, whereas "protrombina" has a lower similarity with "normalizar" or with the words in the query of the UNCLEAR type.
Although the MRC_guideline model outperforms the other models, it still makes errors, mainly of the following five kinds. (1) About 20% of errors are due to predicted entities not being included in the gold test set; although these predicted entities do appear in the text, such as "vimentina (vimentin)", they are counted as wrong because they are not officially annotated. (2) About 30% of errors are due to the model omitting entities. (3) About 16% of errors occur because the model predicts the correct entity type but the boundary is too long. For instance, the correct entity is "anticuerpos anticitoplasma (cytoplasmic antibodies)", but the model predicts "anticuerpos anticitoplasma de neutrófilo (antineutrophil cytoplasmic antibodies)"; or the correct entity is "hormonas de crecimiento (growth hormones)", but the model predicts "hormonas de crecimiento y antidiurética (growth hormones and antidiuretics)". (4) About 20% of errors occur because the model predicts the correct entity type but the boundary is too short. For example, "tinción de auramina" is wrongly predicted as "auramina (auramine)", "anticuerpos antimembrana basal glomerular (glomerular basement membrane antibodies)" is wrongly predicted as "nticuerpos antimembrana basal (basal membrane antibodies)", and "(Ig)A-kappa" is wrongly predicted as "Ig". (5) About 10% of errors are caused by the model predicting the wrong entity type, and 70% of these occur because the NO_NORMALIZABLES type is mistakenly predicted as NORMALIZABLES, as for "Viekirax", "Tobradex" and "Harvoni".
Conclusion
This paper proposed two kinds of entity definition information enhanced models, MRC and SOne, for biomedical NER. Compared with previous models, our methods do not require hand-crafted features and achieve state-of-the-art performance, with a micro-averaged F1-score of 0.9137 on the PharmaCoNER corpus, indicating that the introduction of entity definition information is effective. In the future, we plan to introduce more effective entity category definition information through domain knowledge graphs and to explore further methods for adding the entity definition information, such as attention mechanisms.
Fig. 1. Examples of the biomedical named entities in Spanish records (NORMALIZABLES entities in green, PROTEINAS entities in blue, NO_NORMALIZABLES entities in yellow and UNCLEAR entities in red; UNCLEAR entities are not included in the final evaluation).
Table 1. Statistics of the PharmaCoNER 2019 corpus.
Table 2. Generated queries for each type of entity; rule-based template example: "¿Qué entidades NO_NORMALIZABLES se mencionan en el texto?" (Which NO_NORMALIZABLES entities are mentioned in the text?)
Table 3. Results on the PharmaCoNER corpus. The method with the highest F1-score among all methods is highlighted in bold; * marks a significant improvement over the model without any feature (t-test, p < 0.05).
Table 4. Detailed results for each entity type of MRC_guideline and SOne_guideline. The methods with the highest F1-scores for each entity type are highlighted in bold. | 4,225.2 | 2021-12-01T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Marek ’ s disease virus challenge induced immune-related gene expression and chicken repeat 1 ( CR 1 ) methylation alterations in chickens
Marek’s disease virus (MDV) challenge induces lymphoma in susceptible chickens. Host genes, especially immune related genes, are activated by the virus. DNA methylation is an epigenetic mechanism that governs gene transcription. In the present study, we found that expression of signal transducer and activator of transcription 1 (STAT1) was upregulated at 10 days post infection (dpi) in MD susceptible chickens, whereas interleukin 12A (IL12A) was elevated in both resistant and susceptible chickens. However, we did not observe MDV-induced DNA methylation variations at the promoter CpG islands (CGIs) in STAT1 and IL12A. Interestingly, the methylation levels at Chicken Repeat 1 (CR1), the transposable elements (TEs) located upstream of two genes, were different between resistant and susceptible chickens. Furthermore, a mutation was identified in the CR1 element near IL12A. The impact of the point mutation in transcriptional factor binding is to be examined in the near future.
INTRODUCTION
Marek's disease (MD) is a lymphoma of chickens caused by infection with Marek's disease virus (MDV). The infectious disease in chickens includes three phases, from the early cytolytic stage at 5 days post infection (dpi), to the latent stage around 7-10 dpi, and then the late cytolytic phase and transformation [1]. Recently, several studies revealed that the expression profiles of immune-related genes were altered after MDV infection in chickens [2][3][4], suggesting that those genes play important roles in innate and adaptive immunity in response to MDV infection.
As is known, viral infection is one of the environmental factors triggering DNA methylation alterations and consequently changing gene expression profiles. DNA methylation frequently occurs at CpG sites, and an increased methylation level at promoter regions is associated with transcriptional silencing [5]. Notably, an association between DNA methylation and gene expression was observed in Marek's disease resistant and susceptible chickens post MDV infection [6,7].
Most CR1 elements are truncated at their 5'UTR and conserved at the 3'UTR [10][11][12]. The distribution of CR1 elements in high-GC-content regions makes them potential targets for DNA methylation [9]. The loss of DNA methylation on repetitive DNA has been associated with cancer [13,14]. However, little is known about the effects of MDV infection on the methylation status of chicken repeat 1, the predominant transposable element in the chicken genome.
This study aimed to uncover MDV-challenge-induced DNA methylation variations in CR1s in MD resistant and susceptible chickens, and their subsequent effects on gene expression. Chicken line 6 3 and line 7 2 , two highly inbred lines of specific pathogen free (SPF) chickens, are both susceptible to MDV infection, but line 6 3 is resistant to MD tumors while line 7 2 is highly susceptible to MD tumors [15]. Our previous results demonstrated that MDV replication was more repressed in infected line 6 3 than in line 7 2 [16]. Cytokines and other immune-related genes were differentially expressed in chickens after MDV infection [17][18][19]. In this study, we found that two important immune genes, signal transducer and activator of transcription 1 (STAT1) and interleukin 12A (IL12A), were activated in line 6 3 and line 7 2 after MDV infection. Instead of changing the methylation level at promoter CpG islands, MDV infection influenced the methylation status of two CR1s near the promoters, which may be associated with STAT1 and IL12A mRNA expression levels. These findings may give new insight into the potential roles of retrotransposons in MD resistance and susceptibility in chickens.
Viral Challenging Experiment and Sample Preparation
SPF chickens from highly inbred lines 6 3 and line 7 2 were sampled for this study.The chickens from each of the two lines were divided into two groups.The infected groups were challenged with 500 PFU of 648A passage 40 viruses intraabdominally, at day 5 after hatch, and were designated as treatment groups.The control groups were not challenged with the MDV.Chickens from both treatment and control groups were euthanized at 10 days post infection and fresh spleen samples were collected.
DNA Preparation and Bisulfite Treatment
Genomic DNA was extracted by using Nucleo Spin kit (Macherey-Nagel) from four samples of each group, and the concentration was measured by Nanodrop.Sodium bisulfite conversion reagents were used to treat 500 ng of DNA (EZ DNA Methylation Golden Kit) using the standard protocol provided by the manufacturer.
Pyrosequencing and Bisulfite Sequencing
PCR primers (Table 1) were designed to amplify multiple CpG sites in specific CGIs and CR1-F at an upstream region of IL12A and STAT1.Pyrosequencing and bisulfite sequencing were applied to detect the methylation levels of STAT1 and IL12A, respectively.For pyrosequencing, we used biotin labeled universal primer in the PCR reaction.The bisulfite PCR included 1 µl of 1:5 diluted bisulfite converted DNA, primers and PCR reagents from Hotstar Taq polymerase kit (QIAGEN) with four biological replicates.The methylation level detection was carried out individually by Pyro Q-CpG system (PyroMark ID, Biotage, Sweden) using 20 µl of PCR products.For bisulfite sequencing, the equal amount of DNA from four MDV challenged or control samples from each chicken line were pooled together, serving as a template for the bisulfite conversion and the bisulfite PCR, and then PCR products were purified (QIAquick Gel Extraction Kit, QIAGEN).The purified PCR products were ligated to PCR ® 2.1 Vector (The Original TA cloning ® Kit, Invitrogen), transformed to DH5α competent cells (ZYMO Research), and screen the white colonies for successful insertions after incubation at 37˚C overnight.Ten white colonies from each group were cultured at 37˚C shaker overnight.The plasmid DNA was isolated using QIAprep ® Miniprep Kit (QIAGEN), and then M13 reverse primer was used to sequence all the samples.
Transcription Factor Binding Sites Prediction
Transcription factor binding sites (TFBS) were predicted from the sequences of the CR1-F elements upstream of STAT1 and IL12A using a web-based tool (http://www.cbrc.jp/research/db/TFSEARCH.html). The threshold score was set to 90 to filter out poorly conserved TFBSs [20]. This algorithm predicts TFBSs based on correlation calculations with binding-site profile matrices and was written by Yutaka Akiyama (Kyoto University).
Table 1. Primers for bisulfite PCR and real-time PCR (including, e.g., a forward primer 5'-GGAGGGTAGAGAGTATAAAAACGG-3').
IL12A and STAT1 mRNA Expression
In this study, we first detected the mRNA levels of STAT1 and IL12A at the latent stage of MDV infection.As shown in Figure 1(a), the transcription of STAT1 was significantly activated in infected line 7 2 , showing an approximate five-fold increases than noninfected chickens (p < 0.01), whereas in line 6 3 , STAT1 was slightly upregulated after MDV infection but not significantly different from its control group (p > 0.05).Moreover, the STAT1 transcriptional levels did not show statistical significance between infected line 6 3 and line 7 2 (Figure 1(a)).However, the expression of IL12A was dramatically elevated in MDV challenged line 6 3 and line 7 2 groups compared to their control groups (p < 0.01).Meanwhile, in infected chickens, the IL12A transcripts were significantly higher in line 6 3 chickens than in line 7 2 chickens (Figure 1(b)).The expression levels of the two genes were not significantly different between the line 6 3 and line 7 2 control groups (Figures 1(a) and (b)).
DNA Methylation at CGIs Upstream of STAT1 and IL12A
To understand how MDV infection induces differential gene expression, DNA methylation at the CpG islands (CGIs) located around transcriptional start sites (TSS) of STAT1 and IL12A were investigated (we named them as STAT1_CGI and IL12A_CGI, respectively) by the pyrosequencing method.The STAT1_CGI overlaps with the 5'UTR and potential promoter region, and contains 6 CpG sites which are 345 bp away from the TSS.We found that all the CpG sites remained hypomethylated (less than 10%) in the control and infection groups of both lines.The methylation changes were less than 1% between the control and MDV infection groups in both lines (Figure 2(a)).The IL12A_CGI, containing 78 CpG sites, covers the promoter region and entire the first exon.The methylation levels of 5 CpG sites, 10 bp upstream of TSS, were measured.The methylation status of the IL12A_CGI was not different between the challenged and control groups of both lines (Figure 2(b)).On average, the IL12A_CGI was significantly less methylated in line 6 3 than that in line 7 2 (Figure 3), and the methylation difference reached 10% in the first CpG site.
DNA Methylation of CR1-F Elements Upstream of STAT1 and IL12A
Since MDV infection-induced DNA methylation alterations were not observed at the promoter CGIs, we traced further upstream sequences of STAT1 and IL12A. Interestingly, two CR1-F elements (referred to as STAT1_CR1_F and IL12A_CR1_F, respectively) are located close to the CGIs and contain several CpG sites. The STAT1_CR1_F element, 636 bp in length, resides around 1.2 kb upstream of the TSS. Moreover, the methylation variations showed different trends at the 4 CpG sites. In line 7 2 , the methylation levels at the first three CpG sites were significantly decreased after MDV infection (p < 0.05 or p < 0.01), and the methylation level at the last CpG site was decreased but not significantly (p > 0.05). In contrast, in line 6 3 , the methylation levels at the first two CpG sites significantly declined by more than 10%, similar to the changes in line 7 2 . At the third CpG site, the methylation level was dramatically increased after MDV infection (p < 0.01), and a slightly increased methylation level was also observed at the fourth CpG site (p > 0.05).
As for the IL12A_CR1_F element, it locates at 732 bp upstream of TSS.Four CpG sites were examined by bisulfite sequencing.As demonstrated in Figure 4(b), the methylation levels were enhanced in this region in infected chickens, from 36.7% to 56.7% in line 6 3 and 50% to 75% line 7 2 , respectively.However, the increased average methylation in infected groups was largely due to the occurrence of hypermethylation at the second and third CpG sites in infected line 6 3 , and first and fourth CpG sites in infected line 7 2 .After further sequencing analysis, we found that 80% of the last CpG site in this element was mutated to AT in line 6 3 birds regardless of infection (Figure 4(c)).The mutation at the CpG sites in line 6 3 resulted in decreasing methylation level of the entire region in line 6 3 compared to line 7 2 .
Transcriptional Factor Binding Site Prediction
To uncover the potential function of the CR1-F element methylation or mutation on gene expression, the transcriptional factor binding sites (TFBS) were predicted using sequences of CR1-F elements upstream of STAT1 and IL12A, respectively.By computational prediction, as shown in Table 2, we found that CR1-F upstream of STAT1 contained two GATA-1 recognition sites, at 360 -370 bp and 578 -593 bp regions.The fourth CpG site was resided in the second putative GATA-1 binding site (Table 2).Comparatively, two putative binding sites were predicted in IL12A.E47 and MZF1 were located at the 16 -20 bp and 117 -124 bp of CR1-F upstream of IL12A, respectively.The putative E47 binding site contains at the first CpG site of the CR1-F element.The mutation, 12 bp away from the predicted MZF1 binding site, did not change the TFBS prediction results (Table 2).
DISCUSSION
Latency is a crucial step to establish permanent immunosuppression in MDV infection [17].It has been proven that the differences in viral load were detectable at 10 dpi in the spleen of infected line 6 3 and 7 2 chickens [2].Therefore, all the spleen samples were collected at 10 dpi to elucidate the influence of MDV infection on host gene expression and DNA methylation.In our previous study, we proved the MDV was successfully challenged in both line 6 3 and 7 2 , and replicated faster in infected line 7 2 than line 6 3 [16].
STAT1, a member of STAT protein family, acts as a transcriptional activator and mediates gene expression in response to pathogens [21].STAT1 transcription was induced by MDV challenge in the early cytolytic phase at 5 dpi [17].Here, we found that its expression was also significantly activated at latency in infected line 7 2 (p < 0.05), but not in infected line 6 3 (p < 0.05).It was reported that IL12A interacts with IL12B to form a cytokine IL12, and stimulates downstream signaling pathways for innate immunity [22].After MDV infection, IL12A was upregulated in the lytic infection (5 dpi) [23].In our study, we found the IL12A mRNA level was continuously elevated at latent stage, further illustrating the importance of IL12A in MD pathogenesis.
DNA methylation fluctuation triggered by virus infection has been well documented [24,25].Recently, distinct DNA methylation patterns were identified in MD susceptible chickens after exposure to MDV [6].In this study, we found that promoter CGI methylation remains either very low or stable after MDV challenge, whereas the methylation status of two CR1-F elements, further upstream of promoter CGIs, was altered upon MDV challenge, agreeing with previous research [26].It has been found that true differential methylation regions were not in the CpG island but in the low CpG density regions near the traditional CpG islands [26].These results suggest that, instead of the CGIs at promoter regions, the methylation level of CR1-F elements might be influenced by MDV infection.
As we have known, about 25% of human gene promoter regions harbor sequences derived from TEs, indicating the potential contribution of TEs to regulatory elements [27].Interestingly, the CR1 also contains putative TFBSs.Two GATA-1 binding sites were predicted in the CR1-F element upstream of STAT1, and their sequences were same as the chicken GATA binding consensus sequence WGATAR, in which W and R refer to A/T and A/G, respectively [28].The regulation of GATA-1 on STAT1 transcription has been proven in mice [29], we thus speculated that GATA-1 might control chicken STAT1 expression with the same manner.Moreover, DNA methylation regulates gene expression by blocking transcription factor binding [30,31].Therefore, we hypothesized that DNA demethylation induced by MDV infection in CR1-F might mediate the GATA-1 binding to the upstream of STAT1 and thereby enhance its expression.The methylation level of the last two CpG sites, where the putative GATA-1 binding site is located, was enhanced in line 6 3 after MDV infection, but declined in line 7 2 .This difference may help us explain the smaller upregulation of STAT1 transcription in line 6 3 than in line 7 2 .The hypothesis will be confirmed in the future.
In contrast, MDV infection was associated with not only the increased methylation level of CR1-F element near IL12A, but also increased IL12A mRNA levels.We reasoned that the increased methylation level may reduce the binding affinity of myeloid zinc finger protein 1 (MZF1) and E47, two transcriptional repressors [32,33], hereby explaining the correlation of increased methylation levels at IL12A_CR1-F element and the activation of IL12A transcription after MDV infection in line 6 3 and line 7 2 .We also observed that IL12A was also more actively transcribed in line 6 3 than line 7 2 regardless of MDV infection.In control groups, IL12A mRNA was about 30% higher in line 6 3 than in line 7 2 although not statistically significant (p = 0.2); in infected groups, line 6 3 had significantly higher IL12A mRNA level than line 7 2 .Correspondingly, the promoter CGI and IL12A_CR1-F element were greatly methylated in line 7 2 compared to line 6 3 despite MDV infection (Figures 3 and 4(b)), implicating that the repression of IL12A transcription was probably mediated by the methylation at both the promoter and CR1-F region.Therefore, the methylation at promoter CGI and upstream CR1-F elements may contribute to transcriptional differences in IL12A in line 6 3 and line 7 2 .Because the methylation on promoter region was stable, we think the MDV infection triggered the methylation alterations at IL12A_CR1-F element, which may be involved in the transcription stimulation of IL12A in MDV infection.Moreover, it has been reported that the increased number of methylated CpG sites at the distal region inhibited the activity of adjacent gene promoter [34].Therefore, we assume that the mutation at CR1 region in line 6 3 may affect the activity of IL12A in line 6 3 .Taken together, the methylation change on IL12A_CR1-F element was most likely involved in the transcription stimulation of IL12A in response to MDV infection in two lines.The differences in genetic and epigenetic aspects, in terms of SNPs and DNA methylation patterns, may comprehensively account for the difference of IL12A mRNA levels between two lines.
Collectively, MDV challenge activated STAT1 and IL12A transcription in the MD resistant line 6 3 and susceptible line 7 2 chickens at the latency.The methylation status at the promoter CpG islands of STAT1 and IL12A were stable after MDV infection.The enhanced expression levels of the two immune-related genes might be mediated by DNA methylation variations at the upstream CR1-F elements, the chicken repetitive DNA sequences.These results indicated that these two CR1-F elements were presumably cis-regulatory sequences and their methylation alterations thereby might be involved in responses to MDV infection.Further work is required to demonstrate the biological functions of the CR1-F ele-ments and the influences of DNA methylation on transcription factor binding affinity and gene expression.
Table 1 footnotes: A: primers for bisulfite PCR (Y stands for C/T); B: primers for qRT-PCR.
Figure 1. The relative mRNA levels of STAT1 (a) and IL12A (b) in non-infected and infected line 6 3 and line 7 2 were quantified by qRT-PCR and normalized to GAPDH (n = 4, mean ± SD). One or two asterisks indicate that the mRNA level was significantly different at p < 0.05 or p < 0.01. L7 2 : line 7 2 ; L6 3 : line 6 3 .
Figure 3. DNA methylation levels at the CpG island (CGI) upstream of IL12A were detected by pyrosequencing (n = 8 from each chicken line, including 4 control birds and 4 MDV-infected birds, mean ± SD). One or two asterisks denote that the methylation level at the CpG site was significantly different at p < 0.05 or p < 0.01. L7 2 : line 7 2 ; L6 3 : line 6 3 .
2.2. Purification and Quantification of mRNA Levels
The spleen samples were immediately stored in RNAlater solution (QIAGEN), and then placed at -80˚C for DNA and RNA extractions.The challenged experiments were conducted in the BSL-2 facility at the USDA-ARS, Avian Disease and Oncology Laboratory at East Lansing, Michigan, USA, following the Guidelines for Animal Care and Use (Revised April, 2005) established by ADOL's IACUC.
Table 2 .
Transcription factor predicted in CR1-F elements.
A: Names of transcription factors; B : Genes located downstream of the CR1-F elements; C : Calculated based on matrix similarity; D : Sequences of predicted TFBS. | 4,286.8 | 2012-07-27T00:00:00.000 | [
"Biology"
] |
Parallel evolution of picobirnaviruses from distinct ancestral origins
ABSTRACT Picobirnaviruses (PBVs) are small, bi-segmented, double-stranded RNA viruses frequently associated with gastrointestinal and recently linked to respiratory infections. Detected in hosts from distant biological kingdoms, debate swirls as to their age and origins and whether they are prokaryotic or eukaryotic viruses. Our evolutionary analysis revealed a contemporaneous emergence for PBV, as PBV-R1&3 species arose ~350 years ago with both segments, whereas the more ancient species, PBV-R2, initially lacked capsid. Integrated phylogenetic reconstruction defined two origins for PBV, determining PBV-R1&3 species descended from Reovirus and PBV-R2 branched with Partitivirus ancestors. These results, coupled with the heterogeneity of Shine-Dalgarno motifs, argue against a prokaryotic origin. Epistatic interactions identified in the RdRp of PBV-R1&3 evidenced the constraints imposed by vertebrate host immunity, whereas its absence in PBV-R2 concurs with its fungal origin. After acquisition of capsid, PBV-R2 increased its adaptive and functional divergences in RdRp domains and the compactness of its RNA structure to enable encapsidation. While their pathogenicity remains an open question, picobirnaviruses likely originated from both fungal and avian hosts: parallel evolution mechanisms have driven the genetic similarities shared among present-day PBV species. IMPORTANCE Picobirnaviruses (PBVs) are highly heterogeneous viruses encoding a capsid and RdRp. Detected in a wide variety of animals with and without disease, their association with gastrointestinal and respiratory infections, and consequently their public health importance, has rightly been questioned. Determining the “true” host of Picobirnavirus lies at the center of this debate, as evidence exists for them having both vertebrate and prokaryotic origins. Using integrated and time-stamped phylogenetic approaches, we show they are contemporaneous viruses descending from two different ancestors: avian Reovirus and fungal Partitivirus. The fungal PBV-R2 species emerged with a single segment (RdRp) until it acquired a capsid from vertebrate PBV-R1 and PBV-R3 species. Protein and RNA folding analyses revealed how the former came to resemble the latter over time. Thus, parallel evolution from disparate hosts has driven the adaptation and genetic diversification of the Picobirnaviridae family.
The presence of Shine-Dalgarno (bacterial ribosome binding) sequences in the 5′UTRs of the capsid and RdRp genes, its relatedness to fungal partitiviruses, and the lack of phylogenetic delineation along host lines argue in favor of the latter (7)(8)(9). However, PBV particles are capable of disrupting biological membranes in vitro, suggesting the capsid can invade animal cells (10). Indeed, the capsid undergoes autocatalytic maturation, a process observed in other non-enveloped animal viruses that activates the virion for entry. Likewise, the capsid projecting (P) domain exhibits low conservation and is exposed, likely subject to antigenic drift as it avoids adaptive immunity or seeks to utilize cellular receptors of different host species (10). A better understanding of the mechanisms driving the emergence of PBV and the evolutionary processes guiding its diversification could shed light on this debate.
The contemporaneous emergence of PBV species is distinguished by their genomic segment composition
A maximum clade credibility (MCC) tree was constructed for RdRp and capsid to determine the evolutionary history of Picobirnavirus. An initial Bayesian evaluation of temporal signal (BETS) yielded strong support for heterochronous over isochronous models for both genomic segments, using either a strict or an uncorrelated relaxed lognormal clock model with constant growth as the prior (Table 1). Log marginal likelihoods calculated using both the path sampling (PS) and stepping-stone (SS) sampling methods with a Bayes factor >3 favored the heterochronous data set (Table 1). Further assessments of molecular clock models and priors by these methods revealed that an uncorrelated relaxed local clock model in combination with a Bayesian skyline plot prior provided the best rankings (Tables S1 and S2). The topological structure of the trees and the temporal (time-stamped) distribution of strains together indicate these are contemporaneous viruses that emerged <700 years ago. Indeed, each tree's ladder-like appearance (e.g., ancestral strains near the root and recent strains at distal tips) and the inferred timescale would not be expected for bacteriophages or prokaryote-linked viruses.
Whereas the capsid phylogeny reflects a more recent, single speciation event, the RdRp phylogeny reveals an older emergence with subsequent diversification into three co-evolving species, in agreement with the genetic analysis presented in reference 11. RdRp bifurcated immediately from the root (Fig. 1, top panel), with PBV-R 2 (blue; tMRCA = 1,347) being the most ancestral lineage. Notably, the emergence of the capsid (tMRCA = 1,590) coincided with the diversification (tMRCA = 1,583, gray node) of the PBV-R 1 (red) and PBV-R 3 (green) species (Fig. 1, bottom panel). This immediately suggests two competing theories to explain their different evolutionary histories. In model 1 (Fig. S1A), the RdRp lineages share a common ancestor, and upon acquisition of the capsid through a reassortment event, selective pressure drove the diversification of PBV into three species. In model 2 (Fig. S1B), the RdRp lineages descended from different ancestors, with PBV-R 1 and PBV-R 3 each possessing a capsid from the time of their emergence, while PBV-R 2 , initially lacking a capsid, acquired this segment later through a duplication and reassortment event with one of these two PBV species by a previously described mechanism (12).
The PBV family is characterized by the convergent evolution of RdRp from two distinct origins
Phylogenetic trees of RdRp proteins from PBV and other segmented dsRNA viruses were constructed to explore these hypotheses (Fig. 2). Surprisingly, PBV-R 1 and PBV-R 3 sequences branched with the Spinareoviridae family, having diversified from a common rotavirus ancestor. PBV-R 2 grouped separately from PBV-R 1 and PBV-R 3 and branched with Partitiviridae family members that infect fungi, with both (PBV-R 2 and Partitiviridae) basal to "PBV-like" viruses employing an alternative, mitochondrial genetic code (9). Thus, whereas PBV-R 1 and PBV-R 3 appear to have a vertebrate reovirus origin, a non-chordate eukaryotic origin is inferred for the PBV-R 2 species. In a similar previous analysis by Knox et al., PBV sequences (genogroups I and II) remained clustered together (13). Figure 2, by comparison, contains more taxa and is derived from a full-length alignment; however, the key factor altering our tree topology is the inclusion of the "PBV-like" strains absent from their analysis (13). Their removal results in all three PBV species clustering together (Fig. S2). Importantly, PBV-like viruses do not possess a capsid, and they closely resemble mitoviruses, which infect the mitochondria of fungi (9,14). Thus, the evidence supports independent evolution from different ancestral starting points converging to acquire similar traits through the diversification of each PBV species (hypothesis model 2). The data suggest that the PBV-R 1 and PBV-R 3 species containing capsids emerged more recently from a vertebrate host, whereas the more ancient PBV-R 2 , lacking a capsid, acquired it from PBV-R 1 and PBV-R 3 . These results align with evidence of co-infections between PBV and reoviruses (rotaviruses), each having similar gastric pathologies, reflecting a close relationship among these viral groups (2).
Epistatic constraints in RdRp highlight differential host-virus interactions
A co-evolutionary analysis was performed to identify epistatic interactions maintaining PBV RdRp protein functionality and the role these constraints play in viral fitness. Clusters of statistically supported co-evolving pairs (Fig. S3) were mapped onto the 3D structure, and the weight of their epistatic interactions was estimated (Fig. 3). The results reflect the stabilizing selection imposed on the virus by the host, as an indication of the likelihood of successful adaptation. For PBV-R 1 , we observed a high degree of interconnectivity between domains, wherein residues of the palm, finger, thumb, and N-terminal domains compensate for one another when mutated to preserve functionality. PBV-R 3 was less constrained than PBV-R 1 , displaying greater intra- versus inter-domain constraints, suggesting the host immune response against the virus plays a less selective role in this process. Once again, PBV-R 2 shows a completely different pattern: there are no intra- or inter-domain constraints and consequently no evidence of a co-evolutionary process. Therefore, based on the co-evolutionary analysis, the evolution of the PBV-R 1+3 proteins was subject to epistatic modifications and appears to have been constrained by host-jumping event(s) to vertebrates (avian, mammalian), whereas the PBV-R 2 protein lacks these constraints. While functional experiments will be required to corroborate these findings, the results again indicate that PBV-R 1+3 and PBV-R 2 have different evolutionary histories.
PBV-R 2 lacked an extracellular virion phase during the first 300 years of its emergence
To further dissect the differences between PBV-R 1 and PBV-R 3 versus PBV-R 2 , we estimated species adaptive divergence over time. A time-heterogeneous phylogeny was obtained from a three-epoch model, wherein kappa values (k n : transition/transversion bias) measured over 50-year windows define each epoch and provide an estimate of the accumulation of mutations during each interval (Fig. 4). Adaptive divergence (solid line) is calculated by dividing Darwinian (episodic) selection (ω 2 = d N /d S ) by the mutational ratio bias (the k n parameters). Positively selected lineages (e.g., ω 2 > 1) during periods in which numerous amino acid-changing transversions accrue (e.g., low k n ) have a higher probability of fixation (fixation bias); hence, their adaptive divergence increases. Upon emergence 300 years ago, PBV-R 1 kappa levels were initially elevated (k 1 = 23.86), an indication of a high rate of mutation accumulation potentially resulting from a host-jump event, but they steadily declined over time (k 2 = 4.08, k 3 = 1.97) as the virus adapted to its new, presumably mammalian host (Fig. 4A). PBV-R 3 shows a similar trend in fixation bias and adaptive divergence (bold line) despite a low mutational ratio bias early on (k 1 = 1.69), a value which could be affected by the low number of available sequences (Fig. 4C). By contrast, PBV-R 2 shows no evidence of a positively selected lineage until late in epoch Q2, some 300 years after its emergence (Fig. 4B). This timeframe coincides precisely with the acquisition of the capsid segment (red arrow; see Fig. 1). Elevated kappa values (e.g., 30.94) without fixation for PBV-R 2 in the first 150 years prior to this event reflect the unconstrained generation of mutations in RdRp in the absence of limits imposed by an RdRp-capsid interaction. These results are again consistent with PBV-R 2 not being under immune pressure and suggest a lifestyle without an extracellular virion phase. Either through complementation or co-infection events, PBV-R 2 acquired this new genomic segment and the capacity for encapsidation. It is still unclear whether PBV-R 2 (or PBV-R 1 and PBV-R 3 ) can achieve mammalian infection on its own or only via co-infection with its natural host, but this event was pivotal in the diversification of this lineage.
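A minimal sketch of the epoch-wise adaptive divergence described above, computed as ω2 divided by κ in each 50-year window. Only the PBV-R 1 kappa values and the early PBV-R 2 value are quoted in the text; all remaining κ and ω2 numbers below are illustrative placeholders.

```python
epochs = ["Q1", "Q2", "Q3"]
kappa = {"PBV-R1": [23.86, 4.08, 1.97],   # kappa values quoted in the text
         "PBV-R2": [30.94, 12.0, 3.5]}     # first value from the text; later values assumed
omega2 = {"PBV-R1": [1.6, 1.8, 2.1],       # hypothetical episodic dN/dS values
          "PBV-R2": [0.4, 0.9, 1.7]}

# Adaptive divergence per epoch: episodic selection divided by mutational ratio bias.
for species in kappa:
    div = [w / k for w, k in zip(omega2[species], kappa[species])]
    print(species, {e: round(d, 3) for e, d in zip(epochs, div)})
```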
Shine-Dalgarno motifs in Picobirnaviridae are unstable, lack species and host demarcations, and are not favored by evolution
We therefore returned to the presence of Shine-Dalgarno (SD) sequences offered as evidence of PBV's prokaryotic origins (5). Should these ribosome-binding signals indeed be functional within a bacterial host, one might anticipate strict adherence to the AGGAGGU consensus, which has previously been reported with a curated set of examples (7). Here, we performed an exhaustive analysis of all available sequences (segment 1, n = 465; segment 2, n = 437) and observe far less conclusive results (Fig. 5).
To begin, either most deposited strains lack this sequence information (e.g., N/A: 5′ end missing) or there is nothing resembling an SD consensus upstream of the coding region.
For the latter case, 27% of 5′UTRs preceding the segment 2 RdRp, 43% of 5′UTRs upstream of segment 1 ORF1, and 80% of intergenic regions upstream of the capsid do not possess this motif. The remaining sequences displayed several variations of the SD-like element (Fig. 5). Stability of the SD consensus is required for it to act as a cis regulator of translation (17). However, we observed considerable variability both in the SD consensus and in the length of the requisite 5 ± 2 nucleotides preceding the ATG start codon (Fig. 5A through C; Fig. S4 and S5). Focusing on RdRp, while one might have expected greater conformity among the more ancestral PBV-R 2 sequences, we failed to observe trends along species lines (Fig. 5D). The proportion of stable SD elements was similar between PBV species, and there was no correlation with the time of diversification, suggesting the acquisition of this sequence was a random event not primed by evolution. Finally, we explored the distribution of SD sequences in terms of reported host and geographic restrictions and again failed to observe a linkage, although this does not preclude infections of bacteria harboring these PBV strains (17). For comparison, we analyzed 109 available segment L sequences encoding RdRp from the Cystoviridae family. The canonical SD motif AGGAGGG was present in 100% of the species analyzed, without nucleotide variation (Fig. S6A and B; Table S14). We explored the secondary structure of the 5′UTR in bacteriophage phi6 and observed the formation of several stem-loops resembling IRES-like structures which stabilize the SD region (Fig. S6C), previously described as an alternative translation strategy for this viral species (reviewed in reference 18). Experimental evidence in agreement with our findings is the inability of PBV to replicate in bacterial culture, although this single example is not conclusive of a lack of potential to do so, and future experiments may prove otherwise (8). The source of this motif in PBV is therefore unclear; however, we hypothesize that, given PBV-R 2 's relatedness to PBV-like viruses and mitoviruses, it could have been acquired through interaction with hosts such as jakobids that carry SD motifs in their mitochondria (19,20).
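A minimal sketch of scanning 5′UTR sequences for SD-like motifs upstream of the start codon, allowing the 5 ± 2 nt spacing noted above. The loose consensus pattern, the spacing window, and the example UTRs are simplifying assumptions, not the exact criteria behind Fig. 5.

```python
import re

SD_LIKE = re.compile(r"AGG[AG]GG")     # loose SD-like element (assumed consensus)

def classify_utr(utr: str, spacing=(3, 7)):
    """Return (has_motif, spacer_ok) for a 5'UTR ending just before the ATG."""
    hits = list(SD_LIKE.finditer(utr.upper()))
    if not hits:
        return False, False
    last = hits[-1]
    spacer = len(utr) - last.end()      # nt between the motif and the start codon
    return True, spacing[0] <= spacer <= spacing[1]

example_utrs = {
    "canonical": "GCUAAGGAGGUUUAAC".replace("U", "T"),   # made-up UTR with an SD-like element
    "no_motif":  "GCAUACCCUAUCCAAC".replace("U", "T"),   # made-up UTR without one
}
for name, utr in example_utrs.items():
    print(name, classify_utr(utr))
```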
Functional divergence directed changes in the RdRp of PBV-R2 to facilitate the capsid-nascent RNA-RdRp interaction
If PBV-R2 indeed acquired its capsid from PBV-R1/R3, hallmark mutations or elements, such as the SD sequence, fixed after a duplication and reassortment process occurring between the two species would be expected. Indeed, the presence of non-segmented PBVs in Himalayan marmots suggests a mechanism by which homologous recombination occurs, mediated by GAAAGG direct repeats in the 5′UTRs of each segment (12). To explore this possibility, a functional divergence assessment was undertaken comparing the RdRp coding sequences. The results indicate that the coefficients of type-I functional divergence (θI) and type-II functional divergence (θII) between the PBV-R1 and PBV-R2 species yielded statistically significant differences (Tables S3 and S4), whereas the other combinations interrogated (PBV-R2/PBV-R3 and PBV-R1/PBV-R3) were not supported. Residues identified by a site-specific posterior probability (Qk > 0.98) distribution analysis that exceeded the cutoff were represented as bold lines (upper panels) and mapped onto the 3D structure to identify which amino acids may have played a role in shifting the ancestral function of PBV-R2 (Fig. 6). Type I residues (left, blue) are highly conserved in PBV-R1 but are variable in PBV-R2, indicating they are under different constraints. The implication is that mutations in PBV-R2 were selected for their ability to acquire the same functionality as in PBV-R1. Based on the RdRp crystal structure from Collier et al. (21), these amino acids are located primarily in motifs B and F, which are known to mediate the interaction of RdRp with capsid during packaging (21).
Type II residues (right, green) are biochemically distinct from one another in PBV-R1 vs PBV-R2 but highly conserved within each protein. These residues may have different properties but the same function, or they can promote differentiation in the function of the proteins. Once again, these amino acids are located in the very same region of RdRp as those detected by type I functional divergence. The results strongly suggest these mutations in PBV-R2 were essential for its ability to interact with capsid and consequently its ability to infect mammals.
The genomic RNA of PBV-R2 evolved to a more stable and compact structure to enable encapsidation
To prevent triggering an immune response, dsRNA viruses need to accomplish replication and transcription within the confines of the capsid (22). A 120-subunit icosahedral architecture is another hallmark of dsRNA viruses (23). These functional and space constraints inevitably restrict the pairing of incompatible capsids and RdRps. The encapsidation process requires the presence of genomic RNA, mediating the indirect interaction between capsid and RdRp through the AU-rich 5′UTR stem loops in both segments (21). During assembly, the 5′UTR is bound by the flexible N-terminal region of the capsid protein (10, 21). RdRp motifs B and F play the most relevant roles in the stability of both template and nascently transcribed RNA, with motif B facilitating strand selection and passing the new (+)RNA to the palm and fingers, and motif F involved in RNA synthesis via the movement of the rNTP. Residues identified by our functional divergence analysis were located in these critical RdRp domains and likely guarantee the correct interaction with the capsid via the RNA intermediate (Fig. 6A). Unlike partitiviruses containing only one genomic segment per virion, PBV particles package both genomic segments, as well as the viral polymerase bound to each (23). Therefore, upon acquisition of a capsid, PBV-R2 would require a shift in the biochemical or charge properties of these functional residues to ensure replication and transcription of two segments and to accommodate the protein-RNA-protein interaction needed for encapsidation. Based upon the time-heterogeneous analysis, we selected contemporaneous strains of all three species and the most ancestral strain of PBV-R2 to compare the RNA secondary structures and free energies (Fig. 6B). The folding pattern of the ancient PBV-R2 had a higher minimum free energy compared with its more recent progeny, with the latter exhibiting a more compact, elongated structure for packaging. Moreover, the key distinction is the evolution away from a self-folding 5′UTR to the complementary base pairing of the 5′UTR and 3′UTR. Remarkably, equivalent structures and free energies were observed in PBV-R1 and PBV-R3 (Fig. 6B), underscoring that the annealing of termini was evolutionarily favored upon acquisition of the capsid.
DISCUSSION
Recent advances in sequencing and genomic surveillance have highlighted the potential for zoonotic transmission of Picobirnavirus, yet determining the true origin of these viruses has remained elusive (5). Approaches using structural biology (10, 21) and morphogenesis (23) have highlighted their resemblance to both the Reoviridae and Partitiviridae families. Here, by combining temporal, statistical, and integrated phylogenetic analyses, we reveal that both sides of the debate appear to be correct. PBV emerged from different ancestral vertebrate and fungal hosts, and it was through parallel adaptation that they converged to become the viral phenotypes of the present day. Parallel molecular evolution is a widespread and important phenomenon in viral species adaptation (24, 25). The absence of capsid from the most ancestral species (Fig. 1), along with the dual emergence of RdRp from different dsRNA viral families (Fig. 2), provided evidence that PBV evolution was being driven by parallel patterns of fitness improvement. The adaptive divergence (also known as selectionism) (26) (Fig. 4) displayed by PBV-R2 upon acquisition of the capsid, combined with the subsequent functional divergence (as evidence of mutationism) (26) of its genomic RNA and RdRp domains to facilitate encapsidation (Fig. 6), also stood as clear examples that this pivotal event shaped PBV evolution. Thus, capsid folding and assembly appear to impose a selective constraint which drove the convergence of Picobirnaviridae family members. In Monttinen et al., structure-based clustering of 120-subunit icosahedral capsids correlated with RdRp phylogenies, which led the authors to surmise that PBV RdRp and capsid genes descended from cystoviruses or reoviruses (27). Indeed, our tree showed this relationship (Fig. 2), and we recall many additional parallels with Reoviridae. The proteolytic maturation of the PBV capsid protein is similar to cleavage in the orthoreovirus capsid protein mu1 (10). The T1 innermost core in reoviruses (e.g., bluetongue virus) resembles the PBV capsid structure and acts as a scaffold to prime assembly of the surrounding T13 capsid needed for cell entry. The five pores in PBV used to release mRNA during transcription also mimic Reovirus. PBV presumably requires a mono-particulate virus (both segments packaged in the same virion) to enable transmission to neighboring cells during an extracellular phase of its life cycle (23). Thus, its viral particle is tightly packed and has the same density as rota-/reoviruses (34-38 bp/100 nm³). By contrast, partitiviruses (fungal) are loosely packed (20 bp/100 nm³), with only one segment per virion, despite multi-segmented genomes (23).
Our dating of PBV emergence roughly 600 years ago is not consistent with the RNA bacteriophage evolutionary hypothesis, since the latter lack a molecular clock (28). Evolutionary rates of 10⁻³ substitutions/site/year, the phylogenetic reconstructions obtained, and the apparent lack of a regulatory role for Shine-Dalgarno signals (Fig. 5) suggest picobirnaviruses are not prokaryotic viruses. In a recent study, Boros et al. (8) claimed functionality of SD sequences in PBVs; however, controls such as mutating or deleting SD sequences from expression cassettes were not performed, and therefore, expression could be due to non-canonical translation-initiation mechanisms in E. coli (e.g., leaky translation [29]). The high degree of SD conservation and IRES-like secondary structure in Cystoviridae, suggesting a cis regulatory role, contrasts with the sporadic and variable presence of the SD and the RNA base pairing patterns observed in Picobirnaviridae (Fig. 6B; Fig. S6).
For PBV-R2, its prior lack of a capsid and its relatedness to PBV-like viruses and mitoviruses certainly suggest these species resided in lower organisms, were not under selective immune pressure, and are comparatively more ancient viruses. Structural and phylogenetic relationships to partitiviruses raise the possibility that PBV-R2 crossed the species barrier from unicellular eukaryotic organisms to vertebrates. By contrast, PBV-R1/R3 phylogeny indicates they originally descended from Reovirus ancestors and may have emerged from a vertebrate host. Our findings point to convergent evolution as having driven the similarities we see shared today among these three PBV species. Interestingly, PBV has not evolved to impose host restrictions, as strains cross vertebrate lines and the same virus is able to infect different species (e.g., pig and human) (30, 31). However, new mutations may alter cellular tropism and drive gastrointestinal strains to emerge as respiratory pathogens (32). As unsampled ancestors get sequenced and new techniques are developed (e.g., a cell culture model), we will inch closer to an answer, but our study suggests that PBV species were derived from distinct host origins and provides a clear example of parallel evolution mechanisms at work.
Conclusions
Our study reveals that PBV species started from two distinct origins and that convergent evolution drove the diversification and evolution of Picobirnaviridae. Epistatic interactions, functional divergences, and positive selection synergistically favored the acquisition of the capsid in the most ancestral species, the pivotal event that shaped the dynamics of the entire family. Their contemporaneous emergence, together with the frequent absence and heterogeneous nature of the SD, which suggests these are nonfunctional evolutionary vestiges, argues against a prokaryotic host. Rather, the timing of the accumulation of mutations in PBV-R2 that promoted encapsidation is consistent with a host-jump event requiring adaptation to an extracellular lifestyle in a vertebrate host. These insights provide new dimensions to the PBV origin and pathogenicity debates.
Sequence data set, alignments, and phylogenetic analysis
Different sequence data sets were created for different purposes in this study, including temporal inference, phylogenetic, divergence, and coevolutionary analyses. Data sets A and B initially contained all Picobirnavirus genus sequences for the capsid and RdRp coding regions, respectively. All sequences utilized in Perez et al. (33) and Berg et al. (3) with defined isolation times were included in these two data sets. Sequences were downloaded from GenBank (http://www.ncbi.nlm.nih.gov/) on 1 August 2021 and filtered as described in Perez et al. (33) (Tables S5 and S6). Data set C was created to infer the phylogenetic relationship among all PBV-related double-stranded RNA viral families. It contains the RdRp coding sequences previously described in Knox et al. (13) as well as those from the PBV-like group (9) omitted in Knox et al. (13). All 640 sequences included in data set C were downloaded from GenBank on 20 January 2022 and filtered as above (Table S7). Finally, data sets D, E, and F contain sequences from each PBV species defined in Perez et al. (33) and were created for the adaptive divergence and coevolutionary analyses (Tables S8 to S10). All data sets for both nucleotide and deduced amino acid sequence alignments (MSA) were created using Multiple Alignment using Fast Fourier Transform (MAFFT) with the E-INS-i option to decrease the gap penalties, as described in Perez et al. (33). Phylogenetic analyses using maximum likelihood (ML) inference were performed as described in Perez et al. (34). Briefly, ML phylogenetic trees derived from either nucleotide or deduced amino acid alignments were computed with the IQ-TREE2 program (35) using the ModelFinder algorithm to select the best-fit model for each data set. Confidence levels for branches and internal nodes were determined by applying 1,000 replicates of both the Shimodaira-Hasegawa approximate likelihood ratio test (36) and ultrafast bootstrapping (37) to each phylogenetic inference. Trees were used as input files for the temporal Bayesian analysis, and were edited and visualized using FigTree and an R script (GitHub repository).
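As a rough illustration of this alignment-and-tree step, here is a hedged Python sketch that shells out to MAFFT (E-INS-i) and IQ-TREE2 with ModelFinder, SH-aLRT, and ultrafast bootstrap replicates; file names and thread settings are placeholders, and the exact flags should be checked against the installed versions.

```python
# Hedged sketch: align a data set with MAFFT (E-INS-i) and infer an ML tree with
# IQ-TREE2 + ModelFinder, SH-aLRT, and ultrafast bootstraps. Paths are placeholders.
import subprocess

fasta_in = "dataset_B_rdrp.fasta"        # placeholder input
aln_out = "dataset_B_rdrp.einsi.fasta"

# E-INS-i preset (generalized affine gap costs, suited to sequences with long gappy regions).
with open(aln_out, "w") as fh:
    subprocess.run(
        ["mafft", "--genafpair", "--maxiterate", "1000", fasta_in],
        stdout=fh, check=True,
    )

# ModelFinder (-m MFP), 1,000 SH-aLRT replicates, 1,000 ultrafast bootstraps.
subprocess.run(
    ["iqtree2", "-s", aln_out, "-m", "MFP",
     "--alrt", "1000", "-B", "1000", "-T", "AUTO",
     "--prefix", "dataset_B_rdrp"],
    check=True,
)
```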
Molecular clock test, Bayesian model and prior selection, and temporal analysis
The presence of a temporal signal in data set A was initially tested by two methodologies: (i) the regression of the distance from the root to each of the tips as a function of the sequence sampling times, known as root-to-tip regression, and (ii) an explicit assessment of the temporal signal using the Bayesian evaluation of temporal signal (BETS) test (38). The former was determined using TempEst (39) and TreeTime (40) to calculate the underlying temporal signal using the heuristic residual mean squared and least squares models, respectively, whereas BETS was conducted using BEAST v1.10.4 (41). BETS statistically contrasts two competing models (one in which the data are accompanied by the actual sampling times [heterochronous] and the other in which the sampling times are constrained to be contemporaneous [isochronous]). Thus, if the heterochronous model improves the statistical fit of the data, the use of a molecular clock to calibrate the data is warranted (38). The estimation of the marginal likelihood (M) of these two competing models (heterochronous and isochronous) was achieved with the log marginal-likelihood (log M) estimators path sampling (42) and stepping-stone sampling (43). The analysis was performed as recently described by Orf et al. (44) with some modifications; briefly, two molecular clocks were tested: a strict clock (SC) and an uncorrelated relaxed lognormal clock (45), with an exponential distribution prior with a mean of 1.0 and a constant coalescent prior using an exponential population size with a mean of 10.0 and an offset of 0.5. The analyses were run using BEAST v1.10.4 (41) with a chain length of 5 × 10⁸ iterations, from which a 10% burn-in was discarded. Thus, four total runs (one for each hypothesis with each clock prior) were conducted per genome segment. In all cases, the marginal likelihood estimation (MLE) using path sampling/stepping-stone sampling was set to a chain length of 5 × 10⁶ with 500 path steps and the default beta distribution of the path steps (46, 47). Statistics from both the path sampling and stepping-stone sampling methods were tabulated to compare the models (Table 1).
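The two checks described here reduce to simple computations once the per-tip root-to-tip distances and the marginal likelihoods are in hand; the sketch below, with placeholder numbers, shows a root-to-tip regression and a log Bayes factor comparison of the heterochronous versus isochronous models.

```python
# Hedged sketch: (i) root-to-tip regression against sampling dates and (ii) a log
# Bayes factor from BETS marginal-likelihood estimates. All values are placeholders.
import numpy as np

# (i) Root-to-tip regression: the slope approximates the substitution rate, and a
# positive slope with a reasonable R^2 suggests a usable temporal signal.
sampling_years = np.array([1975.0, 1988.0, 2001.0, 2010.0, 2019.0])
root_to_tip = np.array([0.012, 0.019, 0.028, 0.033, 0.040])  # substitutions/site

slope, intercept = np.polyfit(sampling_years, root_to_tip, 1)
r2 = np.corrcoef(sampling_years, root_to_tip)[0, 1] ** 2
print(f"rate ~ {slope:.2e} subs/site/year, "
      f"x-intercept (tMRCA) ~ {-intercept / slope:.0f}, R^2 = {r2:.2f}")

# (ii) BETS: log Bayes factor between heterochronous and isochronous models.
log_ml_het = -35120.4   # placeholder path-sampling estimate (dates used)
log_ml_iso = -35168.9   # placeholder estimate (dates constrained as contemporaneous)
log_bf = log_ml_het - log_ml_iso
print(f"log BF (het vs iso) = {log_bf:.1f} -> "
      + ("supports using sampling dates" if log_bf > 5 else "weak temporal signal"))
```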
Likewise, the molecular clock model and the prior that best fit the analyzed data set were selected based on the results of these two estimators (PS and SS), as described in De la Cruz et al. (48). For both genomic segments, an uncorrelated relaxed local clock with a Bayesian skyline plot as a prior was selected (Tables S1 and S2). The initial ML trees estimated by IQ-TREE2 were used as starting trees in the temporal analyses by the Bayesian approach. For both genomic segments, two Markov chain Monte Carlo (MCMC) chains were run over 200 million states using BEAST v1.10.4 (49) with sampling every 20,000 states. Both chains were combined after 10% of states were removed (burn-in step). For the RdRp and capsid sequences, WAG+gamma (four categories) and LG+gamma (four categories) substitution models were used, respectively, together with the previously selected molecular clock and prior. For both segments, we used a CTMC scale prior for the rate and an uninformative uniform distribution for the skyline.popSize parameter, since there is no previous evolutionary information about PBVs. Convergence for both MCMC chains and parameters was assessed using Tracer v1.7 (50).
Adaptive divergence analysis
The adaptive divergence analysis was performed as described by Kistler and Bedford (51) with minor modifications. First, nucleotide alignments for data sets D, E, and F (Tables S7 to S9) were used to infer time-heterogeneous phylogenies using a BICEPS model (52). Three epochs with windows of 50 years were applied to each viral species, with epoch times demarcating when statistical differences were observed in the kappa (κn) values (defined as the transition/transversion bias values [mutational ratio bias]). Two Markov chain Monte Carlo chains were run over 400 million states using BEAST 2.6.3 (53) with sampling every 40,000 states. Both chains were combined after a 10% burn-in, and MCC trees and κn values for each epoch time were obtained. Second, to measure episodic selection, non-synonymous/synonymous mutation rate ratios (dN/dS: ω2) for each internal node of the time-heterogeneous phylogenies were determined as described by Perez et al. (34). Briefly, the CODEML program of the PAML v4.9 software package (54) was used, applying a branch-site model (A and A1) to prespecified branches (hypothesized to be under positive selection [foreground branches]) in comparison with the remaining branches (background branches). The branch-site model was tested under the null (ω along all branches [0 < ω1 < 1 and ω2 = 1]) and alternative (ω2 > 1) hypotheses (54), and both hypotheses were contrasted by likelihood ratio tests (LRTs). The significance of the LRTs was estimated assuming a Chi-square (χ2) distribution with degrees of freedom assigned as the difference in the number of parameters in the two types of models, as previously determined in Perez et al. (34). In all cases, ω2 = 1 indicates that significant differences were not found between the branches (Tables S11 to S13). Finally, the adaptive divergence for each species was determined by combining the ratio (window of 50 years) obtained from the weighted episodic selection with the ratio of mutational bias during the epoch time in which each lineage was assessed (Tables S11 to S13).
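The null-versus-alternative comparison described here is a standard likelihood ratio test; a minimal sketch follows, with placeholder log-likelihoods standing in for values parsed from CODEML output files.

```python
# Hedged sketch of the likelihood ratio test contrasting the CODEML branch-site
# null (omega2 fixed to 1) and alternative (omega2 estimated, > 1 allowed) models.
# The log-likelihoods below are placeholders, not values from this study.
from scipy.stats import chi2

lnL_null = -12345.6   # placeholder: branch-site model A1 (omega2 = 1)
lnL_alt = -12340.1    # placeholder: branch-site model A (omega2 estimated)
df = 1                # difference in the number of free parameters

lrt = 2 * (lnL_alt - lnL_null)
p_value = chi2.sf(lrt, df)
print(f"2*deltaLnL = {lrt:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Foreground branch supported as positively selected (omega2 > 1).")
else:
    print("No significant evidence of episodic selection on the foreground branch.")
```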
Coevolution analysis
Some amino acids (due to their composition and location) can affect the function and stability of a protein more than others (55), and substitutions at these "relevant" positions can occur once compensatory changes take place in the protein (56). Coevolutionary analyses can identify these structural and/or functional interactions (55) and thus were conducted to reveal compensatory substitutions in the RdRp of different PBV species, likely in response to host constraints (immune response and/or adaptation). As the number of sequences available for each species differed considerably (see data sets D, E, and F), the recently developed algorithm BIS2TreeAnalyzer (57) was employed, since it was specifically designed to identify clusters of coevolving residues in alignments with a relatively low number of sequences (58). MSAs of the deduced amino acid sequences of data sets D, E, and F, together with their respective phylogenetic trees, were used as input files. The most ancestral sequences identified in the time-heterogeneous phylogenetic analysis were selected as roots for each tree. Taking into account intra-species variability in the RdRp sequence for each PBV species (33), an alphabet reduction by physicochemical properties was set. A coevolutionary dependency network for each intra- and inter-RdRp domain was constructed based on the parameters described by Champeimont et al. (15), and a weight defined as the sum of interactions (direct and indirect [see Champeimont et al. (15) for definitions]) was summarized and visualized using the R package circlize (59).
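To illustrate the final summarization step, the small Python sketch below aggregates per-residue-pair coevolution weights into a domain-by-domain matrix of the kind rendered as a circular plot; the pairs, domains, and weights are placeholders rather than BIS2TreeAnalyzer output.

```python
# Minimal sketch of summarizing coevolving residue pairs into an inter-/intra-domain
# weight matrix, analogous to what is visualized with circlize in the study.
# Residue pairs, domain labels, and weights are illustrative placeholders.
import pandas as pd

pairs = pd.DataFrame(
    [
        # res_i, res_j, domain_i, domain_j, weight (direct + indirect interactions)
        (101, 220, "fingers", "palm", 3),
        (150, 152, "palm", "palm", 2),
        (300, 480, "thumb", "C-terminal", 1),
    ],
    columns=["res_i", "res_j", "dom_i", "dom_j", "weight"],
)

# Sum of interaction weights per domain pair (the quantity driving link width
# in the circular representation).
matrix = pairs.pivot_table(index="dom_i", columns="dom_j", values="weight",
                           aggfunc="sum", fill_value=0)
print(matrix)
```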
Functional divergence analysis
For functional proteins like RdRp, replacement rates of amino acids are heterogeneously distributed across the entire sequence and linked to natural selection (60). Thus, type I functional divergence uncovers differential rates (acceleration or deceleration) at each amino acid position, known as the "covarion"-type model (61), and type II unveils changes in the physicochemical properties among conserved residues of two different groups, known as the "constant-but-different" mode (62). DIVERGE software (version 3.0) (63) was used to measure type I and type II functional divergence. The coefficients θI and θII among PBV-R1, PBV-R2, and PBV-R3 were estimated to test the statistical support of the analyses (64), and a site-specific posterior probability (Qk) analysis was applied to predict which amino acid residues were crucial for functional divergence. High Qk values indicate an elevated degree of functional constraint or that the change in the residue property at a site is different between the two clusters. Based on the distribution obtained, the critical value of Qk was set to 0.98.
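Selecting candidate residues from that output is a simple thresholding step; the sketch below, with placeholder Qk values, applies the 0.98 cutoff described above.

```python
# Hedged sketch of selecting functionally divergent residues from DIVERGE-style
# output using the site-specific posterior probability (Qk) cutoff described above.
# The table of Qk values is a placeholder, not actual DIVERGE results.
import pandas as pd

qk = pd.DataFrame({
    "alignment_position": [118, 231, 305, 412],
    "Qk_typeI": [0.991, 0.42, 0.985, 0.30],
    "Qk_typeII": [0.12, 0.988, 0.05, 0.981],
})

CUTOFF = 0.98
type_i_sites = qk.loc[qk["Qk_typeI"] > CUTOFF, "alignment_position"].tolist()
type_ii_sites = qk.loc[qk["Qk_typeII"] > CUTOFF, "alignment_position"].tolist()
print("Type I candidate sites:", type_i_sites)
print("Type II candidate sites:", type_ii_sites)
```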
Shine-Dalgarno-like motifs characterization
To determine the role of Shine-Dalgarno-like (SD-like) motifs in Picobirnaviridae, all nucleotide sequences for both segments in the current study (Tables S5 and S6) were individually inspected using SnapGene software (www.snapgene.com) to accurately separate the coding region from the untranslated region. Thus, the 5′UTRs of both segments 1 and 2 were extracted, as well as the upstream untranslated region of the capsid (hereafter denoted as intergenic). All extracted sequences were aligned using BioEdit (65) and visually inspected for the canonical Shine-Dalgarno sequence AGGAGGU, allowing degrees of variability surrounding the core xGGxGGx. The software MetaLogo (66) was used to identify group distribution and stability based on entropy. The R package ggseqlogo was used to identify the distribution proportion of the SD-like element for each segment based on probability (67). Distribution percentages of SD-like sequences were visualized as stacked histograms using the Tidyverse R package (68). Finally, a reconciliation between the composition of the SD-like sequence motifs and the species demarcation, epoch time (determined in the section above), stability measured by entropy (Fig. S6), host restriction, and geographic distribution was mapped using the R package ggtreeExtra (69). For comparison purposes, the dsRNA phage family Cystoviridae was selected to evaluate the presence and conservation of SD-like motifs. From the 608 sequences available in the GenBank database on 21 June 2023, the 109 complete sequences of the L segment encoding the RdRp of the different species of this viral family were downloaded (Table S14). The untranslated region was inspected for the presence of SD-like motifs as described above. In addition, the secondary structure of the 5′UTR of all 62 sequences of bacteriophage phi6, for which the translation mechanism has been previously described, was determined as described below ("RNA secondary structure") in order to visualize a model of the putative role of the SD motif in the translation process.
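A hedged Python sketch of the motif inspection step follows: it scans placeholder 5′UTR sequences for the canonical AGGAGGU consensus or a degenerate xGGxGGx core and records the spacing to the start codon; the sequence names and UTRs are illustrative only.

```python
# Hedged sketch of scanning extracted 5'UTR sequences for a canonical or degenerate
# Shine-Dalgarno-like core (xGGxGGx) and recording its spacing to the start codon.
# Sequences below are placeholders; real input would be the curated UTRs of the study.
import re

CANONICAL = "AGGAGGU"                          # canonical SD consensus (RNA alphabet)
CORE = re.compile(r"[ACGU]GG[ACGU]GG[ACGU]")   # degenerate xGGxGGx core

utrs = {
    "strain_A_seg2": "GUAAGGAGGUUAUCA",   # placeholder 5'UTR ending just before AUG
    "strain_B_seg1": "GUUACUACUCCAUAA",
}

for name, utr in utrs.items():
    match = CORE.search(utr)
    if match is None:
        print(f"{name}: no SD-like element")
        continue
    spacing = len(utr) - match.end()      # nucleotides between motif end and the AUG
    canonical = CANONICAL in utr
    print(f"{name}: motif {match.group()} at {match.start()}, "
          f"spacing to AUG = {spacing} nt, canonical = {canonical}")
```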
RNA secondary structure
The RNA secondary structures for the entire segment 2 (RdRp) of different PBV species were obtained using the RNAfold web server on 28 September 2022. PBV-R2/human/BEL/HPBV959/2010, PBV-R2/Marmot-picobirnavirus/c330624/2013, PBV-R1/Simian/PBV/13R/2009, and PBV-R3/Variants-V39/Gorilla/2015 (accession numbers KU892529, KY928706, KY120186, and KY503004, respectively) were selected as representatives. For PBV-R2, KU892529 was identified as ancestral and KY928706 was selected as contemporaneous by the time-heterogeneous phylogeny analysis. Contemporaneous PBV-R1 and PBV-R3 strains were used for comparison. Also, the secondary structure of the 5′UTR of phi6 (Table S14) was determined. The folding of secondary RNA structures was computed as described by Relova et al. (70). Briefly, MFE parameters, equilibrium base pairing probabilities, and the partition function (PF) were selected to obtain folding patterns. Pairing probabilities were visualized on the RNAfold web server using the forna format (http://rna.tbi.univie.ac.at/forna/forna.html), with colors set by position to facilitate the visualization of the interactions between the 5′UTR and the 3′UTR of the RNA molecule.
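For readers who prefer to reproduce this step locally, the ViennaRNA package (which underlies the RNAfold web server) exposes Python bindings; the sketch below computes the MFE structure and ensemble free energy for a short placeholder sequence, not an actual PBV segment, and assumes the bindings are installed.

```python
# Hedged sketch of computing the minimum free energy (MFE) structure and the
# partition-function (ensemble) free energy with the ViennaRNA Python bindings.
# The input sequence is a short placeholder, not a PBV genome segment.
import RNA  # ViennaRNA package Python bindings

seq = "GGGAAAUCCCGCGGAAACGCGGGAUUUCCC"  # placeholder RNA

fc = RNA.fold_compound(seq)
structure, mfe = fc.mfe()          # minimum free energy structure
fc.exp_params_rescale(mfe)         # rescale Boltzmann factors before the partition function
_, ensemble_energy = fc.pf()       # partition function / ensemble free energy

print(f"MFE structure: {structure}  ({mfe:.2f} kcal/mol)")
print(f"Ensemble free energy: {ensemble_energy:.2f} kcal/mol")
```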
FIG 1 Evolutionary history of Picobirnavirus. Time-calibrated maximum clade credibility trees from both genome segment analyses reveal a more ancestral origin for RdRp (top; tMRCAroot = 1,347) compared with capsid (bottom; tMRCAroot = 1,590). Diversification into the three species defined by Perez et al. (11) is denoted with the tMRCA/95% HPD interval at main internal nodes using the color scheme indicated in the caption legends. The emergence of capsid (bottom) coincided with the diversification of PBV-R1 and
FIG 3 Coevolutionary events in RdRp of Picobirnavirus. BIS2 analysis was performed as described by Champeimont et al. (15) to identify amino acid pair interactions with statistical significance. For all three PBV species (A-C), coevolved residues were mapped as spheres onto the 3D structure of PBV-R1 (PDB: 5i62). Domains of the RdRp are colored as described in reference 16, with the N- and C-terminal domains in yellow and magenta, respectively, the fingers in blue, the palm in red, and the thumb subdomain in green. Residue pairs and their sequence positions are shown beneath each 3D structure. The circular representation depicts the intra- and inter-domain interactions for each epistatic residue. The width of these links indicates the frequency of coevolutionary events as estimated by the post hoc summarized weight (see matrices in Fig. S3).
FIG 5 Distribution and variability of Shine-Dalgarno motifs on both Picobirnavirus genome segments. (A-C) Logo illustration of the nucleotide substitution rates expressed as the probability of each nucleotide base per position (top), with the combined distribution (%) of each sequence on both segments (light gray) as well as by individual species (11) (bottom). (D) Temporal phylogenetic tree based on the coding region of segment 2 (RdRp) reconciled by the SD-like motif (tips) distribution. Species demarcation (inner ring), epoch time of emergence (second ring), stability of the SD-like sequence (determined by the entropy of each position [Fig. S5]) (third ring), host restriction (fourth ring), and geographic distribution (outer ring) are all indicated in the phylogeny.
FIG 6 Functional divergence of the RdRp of PBV species. (A, top) Plots of amino acid residues with statistical support in RdRp that were changed by the action of functional divergence type I (in blue) and type II (green) (see Tables S7 and S8). Numbers in gray represent the amino acid position in the alignment, and those in bold indicate positions after eliminating gaps, relative to the PBV-R2 reference sequence (accession number GU968924). Residues exceeding the posterior Bayesian statistics cutoff, Qk > 0.98 (red dashed line), were labeled with dark-blue (type I) and dark-green (type II) bars. (bottom) Schematic of the 3D structure of human PBV-R1 RdRp (PDB ID: 5i62) with domains colored as in Fig. 3. Coinciding positions in PBV-R1 are given with reference to accession number GU968924. Blue spheres labeled with positions and domain motifs indicate type I functional divergence sites, and green spheres indicate type II functional divergence sites. (B) RNA secondary structures for the entire RdRp sequence from representative PBV strains were obtained using RNAfold. The Bayesian Integrated Coalescent Epoch PlotS (BICEPS) model identified PBV-R2/human/BEL/HPBV959/2010 (KU892529) as an ancestral strain and PBV-R2/Marmot-picobirnavirus/c330624/2013 (KY928706) as a contemporaneous strain. PBV-R1 and PBV-R3 sequences were analyzed for comparison. Minimum free energy (MFE) values for the RNA secondary structures are listed, and coloring of the RNA molecule ends facilitated the visualization of the interactions between the 5′UTR (blue) and the 3′UTR (red).
TABLE 1 BETS analysis comparing data fitting for two models: heterochronous (het) and constrained (iso: isochronous)
Circulating Th22 cells, as well as Th17 cells, are elevated in patients with renal cell carcinoma
T-helper (Th) 22 cells serve an essential role in different types of tumors and autoimmune diseases. No research has been conducted to study the role of Th22 cells in the pathogenesis of renal cell carcinoma (RCC). We aimed to evaluate the prognostic value of circulating Th22, Th17, and Th1 cells in RCC patients. Thirty-two newly diagnosed RCC patients and thirty healthy controls were enlisted in the research. Their peripheral blood was collected, and the frequencies of circulating Th22, Th17, and Th1 cells were detected by flow cytometry. Plasma IL-22 concentrations were examined by an enzyme-linked immunosorbent assay (ELISA). Quantitative reverse transcription-polymerase chain reaction (RT-PCR) was used to identify the mRNA expression levels of aromatic hydrocarbon receptor (AHR) and RAR-associated orphan receptor C (RORC) in peripheral blood mononuclear cells (PBMC). Compared with the healthy control group, the frequency of circulating Th22 and Th17 cells and concentrations of plasma IL-22 were significantly increased in RCC patients. However, there was no significant difference in the frequency of Th1 cells. A positive correlation between Th22 cells and plasma IL-22 levels was found in RCC patients. Also, there was a significant positive correlation between Th22 and Th17 cells in RCC patients. An up-regulated expression of AHR and RORC transcription factors were also observed in RCC patients. As tumor stage and grade progressed, the frequencies of Th22 and Th17 cells and the level of plasma IL-22 significantly increased. Meanwhile, there was a positive correlation between Th22 and Th17 cells and RCC tumor stage or grade. Furthermore, patients with high Th22 or Th17 cells frequency displayed a decreased trend in survival rate. Our research indicated that the increased circulating Th22 and Th17 cells and plasma IL-22 may be involved in the pathogenesis of RCC and may be involved in the occurrence and development of tumors. Th22 cells, plasma IL-22, and Th17 cells may be promising new clinical biomarkers and may be used as cellular targets for RCC therapeutic intervention.
Introduction
Renal cell carcinoma (RCC) represents 2-3% of all cancers worldwide and has a worldwide estimated incidence of approximately 300,000 cases and mortality of 129,000 deaths every year [1][2][3]. Different factors, such as cigarette smoking, obesity, asbestos exposure, and regular use of analgesics, are thought to promote RCC development, but its etiology is still unknown [4][5][6]. Although surgical nephrectomy is effective in treating RCC patients who have no evidence of metastases, controlling RCC that is at an advanced-stage remains a challenge due to the lack of reliable biomarkers [7,8]. Therefore, identifying reliable markers for early detection or use as a prognostic factor is essential.
Studies have shown that cellular immunity plays a vital role in the pathogenesis of RCC [9,10]. The CD4 + T cell subset reflects the immune function state and is critical in the maintenance of homeostasis and tumorigenesis [11]. CD4 + T cells can differentiate into many active subsets, including T-helper (Th) 1, Th2, Th17, and Treg cells. Previous evidence has suggested that these Th subsets are associated with the Ivyspring International Publisher pathogenesis and progression of some solid tumors [12][13][14][15]. Th22 cells are a newly identified subset of CD4 + T cell that is characterized by the secretion of IL-22, but not IFN-γ or IL-17 [16,17]. Th22 cells are similar to Th17 cells in the expression of chemokine (C-C motif) receptor 6 (CCR6) and CCR4. Compared with Th17 cells, Th22 cells express CCR10 but do not express CD161 [18]. Furthermore, Th22 cells differ from Th17 and Th1 cells in that they have low level expression of the Th1 and Th17 transcription factors T-bet and RORC, while AHR is considered to be the main transcription factor of the Th22 subset [19]. Additionally, under conditions of coexisting IL-6 and tumor necrosis factor-α (TNF-α), naive CD4 + cells can differentiate into Th22 subtypes [17]. Based on the above, Th22 cells represent a particular and terminally-differentiated Th subset [16].
The main effector cytokine of Th22 cells is IL-22. IL-22 exerts its biological effects by a heterodimeric transmembrane receptor complex composed of IL-22R1 and IL-10R2, as well as subsequent Janus kinase-signal transducers and activators of transcription (JAK-STAT) signaling pathways, consisting of STAT-3, Jak1, and Tyk2 [20]. Recent studies showed that Th22 cells and IL-22 might participate in the pathogenesis of some tumors, such as multiple myeloma [21], gastric cancer [22], hepatocellular carcinoma [23], and colorectal cancer [24]. These studies suggest a possible role of Th22 cells in tumorigenesis, and it may represent a new type of tumor biomarker. As we all know, no data are available on the role of Th22 cells or IL-22 in the pathogenesis of RCC. Our study investigated the prevalence of Th22, Th17, and Th1 cells in peripheral blood of patients with RCC. The relevant quantitative mRNA expressions of transcription factors such as AHR and RORC were also investigated. Furthermore, we evaluated the associations of Th22 cells and IL-22 with different tumor stages or grades of RCC to identify their potential predictive and prognostic importance. Physical examination and computed tomography (CT)/ magnetic resonance imaging (MRI) was used for clinical staging. Tumors were classified and staged according to the American Joint Committee on Cancer's tumor-node-metastasis (TNM) classification system (Edition 8). Stages I-II and III-IV were regarded as early and advanced stages of the disease, respectively. The tumor grade was assigned according to the WHO/ISUP grading system. The critical clinical information of the RCC patients is given in Table 1. Thirty healthy controls (17 males and 13 females; age range, 23-49 years; median age, 28 years) were also enrolled in our research. This study was approved by the Medical Ethical Committee of Qilu Hospital of Shandong University. According to the Declaration of Helsinki, informed consent was obtained from all patients before participating in this research.
Flow cytometric analysis of Th22, Th17, and Th1 cells
Measurements of intracellular cytokines by flow cytometry were used to reflect cytokine-producing cells. In short, heparinized peripheral whole blood (400 µL) was incubated with an equal volume of Roswell Park Memorial Institute (RPMI)-1640 medium for 4 h at 37°C in 5% CO2 in the presence of 25 ng/mL of phorbol myristate acetate (PMA), 1.7 mg/mL of monensin, and 1 mg/mL of ionomycin (all from Alexis Biochemicals, San Diego, CA, USA).
Monensin was used to inhibit intracellular transport mechanisms and caused cytokines to accumulate in the cells. PMA and ionomycin are pharmacological T-cell activators that mimic the signals produced by the T-cell receptor (TCR) complex, with the advantage of stimulating antigen-specific T cells of any specificity. Because CD4 tends to be downmodulated when cells are activated with PMA, we defined CD4 + T cells by staining with APC-Cy7-conjugated anti-CD3 and PE-Cy7-conjugated anti-CD8 monoclonal antibodies (i.e., CD4 + T cells were identified as CD3 + CD8 − lymphocytes). After incubation for 20 minutes at room temperature in the dark with the monoclonal antibodies, the cells were fixed, permeabilized, and stained with PE-conjugated anti-IL-17, FITC-conjugated anti-IFN-γ, and APC-conjugated anti-IL-22 monoclonal antibodies. All antibodies were from eBioscience (San Diego, CA, USA). Isotype controls were used to ensure correct compensation and to confirm the specificity of the antibody staining. Stained cells were analyzed by flow cytometry using a FACSCalibur cytometer equipped with CellQuest software (BD Bioscience PharMingen, San Jose, CA, USA).
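As a rough illustration of how these gates translate into the reported frequencies, the hedged sketch below computes Th22/Th17/Th1 percentages from a toy table of exported events; the column names and boolean gating are placeholders, not the study's actual gating template.

```python
# Hedged sketch of quantifying Th22 / Th17 / Th1 frequencies from exported flow
# cytometry events, assuming each event row carries boolean gates already set
# against isotype controls. Columns and values are illustrative placeholders.
import pandas as pd

events = pd.DataFrame({
    "CD3": [True, True, True, True, False],
    "CD8": [False, False, True, False, False],
    "IL22": [True, False, False, False, False],
    "IL17": [False, True, False, False, False],
    "IFNg": [False, False, False, True, False],
})

cd4_like = events["CD3"] & ~events["CD8"]           # CD3+CD8- surrogate for CD4+ T cells
th22 = cd4_like & events["IL22"] & ~events["IL17"] & ~events["IFNg"]
th17 = cd4_like & events["IL17"]
th1 = cd4_like & events["IFNg"] & ~events["IL17"]

denominator = cd4_like.sum()
for name, gate in [("Th22", th22), ("Th17", th17), ("Th1", th1)]:
    print(f"{name}: {100 * gate.sum() / denominator:.1f}% of CD3+CD8- T cells")
```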
Quantitative real-time PCR analysis of transcription factors
Total RNA was isolated by TRIzol (Invitrogen, USA) according to the manufacturer's instructions. The PrimeScript RT reagent kit (Perfect Real Time; Takara) was used for reverse transcription reactions in line with the manufacturer's instructions. The reverse transcription reaction was performed at 37°C for 15 minutes and then at 85°C for 5 seconds. Real-time quantitative PCR was done on a Roche Applied Science LightCycler® 480 II real-time PCR system (Roche Applied Science) conforming to the manufacturer's recommendations. The real-time PCR reaction included, in a final volume of 20 μL, 1 μL of cDNA, 10 μL of 2× SYBR Green Real-Time PCR Master Mix, and 1 μL each of the forward and reverse primers. The primers for RORC, AHR, and the endogenous control β-actin were as follows: RORC: forward 5′-TTT TCC GAG GAT GAG ATTGC-3′ and reverse 5′-CTT TCC ACA TGC TGG CTACA-3′; AHR: forward 5′-CAA ATC CTT CCA AGC GGC ATA-3′ and reverse 5′-CGC TGA GCC TAA GAA CTG AAA G-3′; β-actin: forward 5′-CCT TCC TGG GCA TGG AGT CCT G-3′ and reverse 5′-GGA GCA ATG ATC TTG ATC TTC-3′. PCR products were analyzed by melting curve analysis and agarose gel electrophoresis to confirm product size and ensure that no by-products were formed. The results for the targets were expressed relative to β-actin transcripts as the internal control. All experiments were repeated three times.
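The "expressed relative to β-actin" step corresponds to a simple ΔCt transformation; the hedged sketch below applies the 2^-ΔCt form to placeholder Ct values (the study's raw Ct data are not shown here).

```python
# Hedged sketch of relative expression from qPCR Ct values using the 2^-deltaCt form
# (target normalized to the beta-actin internal control); Ct values are placeholders.

def relative_expression(ct_target: float, ct_reference: float) -> float:
    """2^-(Ct_target - Ct_reference), i.e., expression relative to the housekeeping gene."""
    return 2.0 ** -(ct_target - ct_reference)

samples = {
    # sample: (Ct AHR, Ct RORC, Ct beta-actin)
    "RCC_patient_01": (27.9, 28.3, 18.5),
    "healthy_ctrl_01": (29.1, 29.6, 18.4),
}

for name, (ct_ahr, ct_rorc, ct_actin) in samples.items():
    print(f"{name}: AHR = {relative_expression(ct_ahr, ct_actin):.4f}, "
          f"RORC = {relative_expression(ct_rorc, ct_actin):.4f} (relative to beta-actin)")
```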
IL-22 enzyme-linked immunosorbent assay
Peripheral blood samples were taken from a forearm vein into heparin-anticoagulant vacutainer tubes. Plasma was obtained from all subjects by centrifugation and stored at -80°C for determining the concentration of cytokine IL-22. Plasma IL-22 concentration was determined using the quantitative sandwich enzyme immunoassay technique in line with the manufacturer's instructions.
Statistical analysis
The results are expressed as the median (range) or mean ± SEM (standard error of the mean). Comparisons between the two groups were analyzed by the Wilcoxon rank-sum test. In correlation analyses, Spearman's correlation test was used for non-normally distributed data, and Pearson's correlation test was used for normally distributed data. Patient overall survival was defined as the time interval between the date of surgery and the date of death or last follow-up. Deaths due to causes other than RCC were not included in the death record. The cumulative survival time was analyzed using the Kaplan-Meier method, and the log-rank test was used to compare group differences. All statistical tests were performed using GraphPad Prism 6.0 software (GraphPad, San Diego, CA, United States) and SPSS (version 17.0; SPSS, Inc., Chicago, IL, USA). P<0.05 was considered statistically significant.
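For orientation, here is a hedged Python sketch of the same battery of tests on synthetic placeholder data, using scipy for the rank-sum and correlation tests and the lifelines package for the Kaplan-Meier/log-rank comparison (the study itself used GraphPad Prism and SPSS).

```python
# Hedged sketch of the comparisons described above: Wilcoxon rank-sum (Mann-Whitney U)
# between patients and controls, Spearman correlation, and a Kaplan-Meier/log-rank
# comparison with the lifelines package. All numbers are illustrative placeholders.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
th22_rcc = rng.normal(2.0, 0.5, 32)       # % Th22 cells, RCC patients (placeholder)
th22_ctrl = rng.normal(1.2, 0.4, 30)      # % Th22 cells, healthy controls (placeholder)
il22 = th22_rcc * 10 + rng.normal(0, 2, 32)

print("Wilcoxon rank-sum p =", mannwhitneyu(th22_rcc, th22_ctrl).pvalue)
print("Spearman rho, p =", spearmanr(th22_rcc, il22))

# Survival comparison between high and low Th22 groups (placeholder follow-up data).
high = th22_rcc >= np.median(th22_rcc)
time = rng.uniform(6, 24, 32)             # months of follow-up
event = rng.integers(0, 2, 32).astype(bool)

kmf = KaplanMeierFitter()
kmf.fit(time[high], event[high], label="high Th22")
print("Median survival (high Th22 group):", kmf.median_survival_time_)

result = logrank_test(time[high], time[~high], event[high], event[~high])
print("Log-rank p =", result.p_value)
```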
Elevated AHR and RORC mRNA in RCC patients
We tested the related transcriptional factors of Th22 and Th17 cells by RT-PCR. The results showed that there was a higher level of AHR mRNA in the RCC patients than the healthy controls (0.47 ± 0.07% vs. 0.23 ± 0.05%, *P<0.05; Fig. 4A). Furthermore, the RORC mRNA level in the RCC patients was also higher than in the healthy controls (0.38 ± 0.06% vs. 0.18 ± 0.03%, *P<0.05; Fig. 4B). The results obtained confirmed the flow cytometry and ELISA data.
The prognostic value of circulating Th22, Th17 and Th1 cells for the overall survival of RCC patients.
We evaluated the prognostic value of high and low levels of circulating Th22, Th17 and Th1 cells on the overall survival of RCC patients. There was a tendency for a decreased trend in the 24-month survival rate in the high percentage Th22 cell group compared to low percentage Th22 cell group, although this difference was not statistically significant (Fig. 8A). A similar survival rate trend was obtained when comparing patients with high versus low Th17 cell percentages (Fig. 8B). However, no similar trend of survival rate was found in patients between high versus low Th1 cell percentage level (Fig. 8C).
Discussion
RCC was considered an immunologically sensitive cancer more than 30 years ago. Given that RCC tends to spread to distant sites, a reliable prognostic marker is required. It has been reported that some CD4 + T cells are essential for tumorigenesis and involved in different human cancers [25][26][27]. Our research evaluated the probable role of preoperative circulating Th22 cells and the related Th subsets levels in RCC.
As expected, our study showed a higher proportion of circulating Th22 and Th17 cells in RCC patients compared with healthy controls. These results indicate that Th22 and Th17 cells may be related to T-cell-mediated immunity in RCC patients. Th22 cells have recently been defined as CD4 + IFN-γ − IL-17 − IL-22 + cells. It is a newly identified independent Th subset compared to the well-known Th1 and Th17 subsets [16]. Th22 cells are thought to have an impact on certain tumors and autoimmune diseases. As we all know, little data is available on the role of Th22 cells in RCC. In our study, we found a significant increase in the frequency of Th22 cells in RCC patients compared to healthy controls, suggesting that Th22 cells may have a potential pathogenic role in RCC.
IL-22 is a member of the IL-10 cytokines family and is secreted mainly by activated Th22 cells [17]. The role of IL-22 in cancer progression has been recognized in some epithelial cancers, such as breast and lung cancer. When IL-22 is released by immune cells, it can act on cancer cells to promote tumor growth, aggressiveness, and treatment resistance [28][29][30]. Our study showed a significantly higher plasma IL-22 level in the RCC patients than the healthy controls. Furthermore, a significant correlation was found between the plasma IL-22 levels and Th22 cells in RCC patients, but no association in the healthy controls. A positive relationship between plasma IL-22 concentration and the frequency of Th22 cells in RCC may be explained by the fact that Th22 cells are a significant subset of IL-22 producing T cells, accounting for 37% to 63% of all IL-22-producing T cells [20].
Our current research also showed that Th22 cells increased with tumor stage and grade in RCC. Meanwhile, we observed a positive correlation between Th22 cells and tumor stage or grade. These data suggested that Th22 cells may be associated with tumor development and progression in RCC. We further studied AHR, the critical transcription factor directing Th22 lineage commitment, and found that the expression of AHR mRNA was increased in RCC patients. All of the above indicated that the Th22 subset was positively correlated with the occurrence and development of RCC and may promote tumor progression and affect the prognosis of RCC patients.
There has been controversy concerning the role of circulating Th17 cells in human tumor immunity [31]. Our research showed that the frequency of Th17 cells in RCC patients was not only increased but also associated with both tumor stage and grade. We also found a significant positive correlation between Th22 and Th17 cells in patients with RCC. These results indicated that in the occurrence and development of RCC, abnormal differentiation of Th22 and Th17 cells may be induced in the same way.
RORC is the main transcription factor that plays a significant role in directing the Th17 lineage and modulates the polarization of Th22 cells [18]. We observed a significant increase in RORC expression in RCC patients. Th17 and Th22 polarization require the transcription factors RORC and AHR. Also, Th22 differentiation requires the transcription factor AHR [32][33][34]. Similarly, RORC and AHR are involved in IL-22 production [33,34]. Consequently, these associations may prime Th22 and Th17 cells and prompt a positive correlation between Th22 and Th17 cells in RCC patients.
Studies have suggested that Th1 cells play an anti-tumor role by mediating cellular immunity, activating CD8 + CTL cells, and promoting reproduction by secreting IL-2 and IFN-γ [35,36]. Therefore, we tested the frequencies of circulating Th1 cells to investigate its possible role in RCC patients. However, no significant difference was found in Th1 cell levels between RCC patients and healthy controls. Generally, Th1-mediated cellular immunity was thought to be related to early tumors with a shift toward other Th immune responses in advanced tumors [37]. Onishi's study demonstrated that there was a change in the effective response from Th1 to Th2 with increasing stage of RCC [9]. Although our research showed that the frequencies of Th1 cells were lower in RCC patients than in healthy donors, the difference was not significant.
Furthermore, Th1 cells showed no sign of correlation with Th22 cells in RCC patients. Also, no significant association was found between Th1 cells and tumor stage or WHO/ISUP grade in RCC. The specific function of Th1 cells in the tumor pathogenesis of RCC needs further investigation.
We also assessed the clinical value of Th22, Th17, and Th1 cells in the overall survival of RCC patients. Our study found that patients with high Th22 or Th17 cells frequency displayed decreased survival rates. Although this difference was not statistically significant, it may be clinically relevant in the long run.
In summary, our study demonstrated that the frequencies of circulating Th22 and Th17 cells were elevated in RCC patients compared with healthy controls. Furthermore, circulating Th22 and Th17 cells showed a significant association with advanced RCC tumor stage and high WHO/ISUP grade.
Given that there are few prognostic factors for RCC, circulating Th22 and Th17 cells might be useful clinical markers for assessing tumor diagnosis and progression of RCC. Further investigations on the functions of Th22 and Th17 cells in RCC patients may be conducive to designing novel therapeutic interventions in the future.
A breath of fresh air in microbiome science: shallow shotgun metagenomics for a reliable disentangling of microbial ecosystems
Next-generation sequencing technologies allow accomplishing massive DNA sequencing, uncovering the microbial composition of many different ecological niches. However, the various strategies developed to profile microbiomes make it challenging to retrieve a reliable classification that is able to compare metagenomic data of different studies. Many limitations have been overcome thanks to shotgun sequencing, allowing a reliable taxonomic classification of microbial communities at the species level. Since numerous bioinformatic tools and databases have been implemented, the sequencing methodology is only the first of many choices to make for classifying metagenomic data. Here, we discuss the importance of choosing a reliable methodology to achieve consistent information in uncovering microbiomes.
During the past two decades, the evolution of DNA sequencing technologies has allowed the gathering of a vast amount of genetic material, laying the foundation to study complex microbial communities, also called microbiomes. At the dawn of the metagenomic classification era, it was necessary to distinguish each taxon based on its 16S rRNA gene sequence to unveil the composition of the bacterial communities inhabiting specific environments [1]. However, for many years, a significant portion of the microbiome has been ignored using this approach, such as archaea, fungi, protists, and viruses [Figure 1]. Nonetheless, to date, 16S rRNA microbial profiling is still a widely used methodology to dissect the composition of bacterial communities. To make up for its weaknesses, it is usually complemented by additional sequencing steps, e.g., internal transcribed spacer sequencing for fungal community identification [2]. Another weakness of this methodology is the depth of results, which relies on the in silico generation of operational taxonomic units (OTUs) or amplicon sequence variants (ASVs). While OTUs are usually used to classify the sequencing outputs at the bacterial family or genus level, ASVs claim to reach classification at the species level. Unfortunately, using short-read sequencing targeting one or two variable regions of the 16S rRNA gene is not enough to reach classification at the species level for all microorganisms. For example, variable regions between microorganisms can reach very high similarity values in both pathogenic bacteria, such as Escherichia coli and Shigella spp. [3], and commensal bacteria, such as species of the genus Bifidobacterium [4]. Thus, caution is necessary in the interpretation of 16S rRNA profiles when blindly using bioinformatic tools. Nowadays, longer sequencing read lengths have been achieved, improving the accuracy of species detection by covering the complete length of the 16S rRNA gene. For example, using Oxford Nanopore Technologies, the reconstructed ASVs will improve the resulting microbial profiling with respect to the same analysis performed using short-read sequencing systems such as Illumina technology. In the same fashion, PacBio single-molecule real-time (SMRT) technology is also capable of full-length 16S rRNA gene sequencing, and it has been proposed as an alternative approach to target all nine variable regions of the 16S rRNA gene [5].
However, long-read sequencing technology cannot counteract other issues related to 16S rRNA gene profiling, such as the different numbers of rRNA loci distributed among genomes of the same genus and among taxa of the same species. Data normalization procedures are usually applied to balance the identified amount of rRNA, resulting in approximations of the actual abundance of each microbial taxon that may over- or under-estimate the real microorganism abundance. Besides, the amplification protocol of metagenomic marker-based profiling may favor the amplification of contaminants, a notion that should not be underestimated in the interpretation of the results [6]. Moreover, the PCR amplification protocol represents a significant source of bias, generating PCR artifacts such as chimeras and heteroduplex molecules [7]. Furthermore, long-read technologies such as Oxford Nanopore and SMRT technology display a higher error rate compared to short-read sequencing systems, representing a serious issue for reliable taxonomic assignment of microorganisms. It is now crucial to provide metagenomic datasets that can be compared in follow-up projects by the scientific community. Metagenomic projects will benefit from including microbial profiles previously analyzed by other groups to validate their results and compare microbiomes retrieved from other environments/conditions. Furthermore, the re-analysis of DNA sequences from previous experiments, so that they can be compared with new metagenomic datasets, also allows gathering a number of samples that could not otherwise be collected in a single study. Unfortunately, data obtained through different 16S rRNA gene profiling studies are not easy to compare due to the absence of a consensus standard in 16S rRNA microbial profiling protocols. In this context, so many different primers aiming at amplifying different variable regions are used that it becomes difficult to distinguish actual changes in the profiled samples from problems related to the different specificities of distinct amplification methodologies [8].
Based on the limitations of the short reads achieved in 16S rRNA gene profiling assays, alternative DNA sequencing strategies have been proposed to achieve more reliable information and avoid misclassification of microbes forcing re-analysis. Thus, the DNA sequencing of the whole microbial community present in a biological sample, a procedure that is also called shotgun metagenomics, has been used to remove the amplification of marker genes, with the consequent reconstruction of a complete microbiome and the generation of data that are easy to compare between different datasets [9]. The main advantage of this approach is the ability to obtain the microbial composition of a microbiome in a single DNA sequencing step, including the makeup of bacteria, archaea, protists, and fungi [Table 1]. Furthermore, based on the sequencing depth, the taxonomic classification of the sequenced reads is only a fraction of the information that can be acquired. Chromosomal sequence reconstruction and functional annotation of the microorganisms harbored in the biological samples are clear examples of how shotgun metagenomics can be more informative than metagenomic analysis based on the amplification of microbial marker genes. On the other hand, shotgun metagenomic sequencing is more expensive than 16S rRNA gene profiling. Thus, it is understandable that small research groups interested in screening microbial communities alone continue to choose 16S profiling due to their low budget, especially bearing in mind that it is crucial to have an adequate number of samples to achieve solid results based on statistical significance. Nonetheless, the computational power required to analyze shotgun metagenomics data is much greater than that for 16S rRNA gene profiling, and advanced bioinformatic skills are necessary to manage the analysis steps. However, under specific circumstances, even shotgun metagenomics may not detect certain microorganisms in challenging samples, such as sub-dominant microorganisms or samples dominated by a large amount of host DNA in host-related environments. In these circumstances, a DNA filtering step or a targeted DNA approach is mandatory [10]; otherwise, an even deeper shotgun sequencing is necessary, increasing the costs of these analyses. In this context, hybridization capture targeting the 16S rRNA gene, or other molecular markers, could be a complementary strategy to explore the microbial community at the species level [11]. An alternative methodology named shallow shotgun metagenomic sequencing has recently been developed to overcome the cost issue of deep shotgun metagenomics, focusing on sequencing a smaller amount of DNA from metagenomic samples [12]. Using the latter approach, the cost of the analysis is reduced and aligned with that of 16S rRNA microbial profiling, around 80 USD instead of hundreds of USD for deep shotgun sequencing. Notably, such shallow metagenomics is filling the gap between shotgun and 16S rRNA gene sequencing without losing the ability to retrieve a reliable taxonomic classification at the species level for each microorganism. In fact, it has been shown that the sequencing of 100,000 short reads, the depth usually used for shallow shotgun metagenomics, is an appropriate sequencing depth for classifying the microbial community at the species level with solid statistical significance, instead of sequencing millions of reads as in deep shotgun metagenomics [13]. Furthermore, shallow and deep shotgun metagenomic data can be shared within the scientific community to provide a feasible way to better compare public data. In this context, standardized metagenomic data can be used for in silico comparisons between multiple experiments, also called meta-analyses, to gain insights into environmental dynamics among a huge number of samples that cannot otherwise be collected in a single study. Nonetheless, the implementation of pipelines and systems able to process shotgun data is essential to have a reliable overview of each microorganism inhabiting the sample.
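To make the "100,000 reads" figure tangible, here is a hedged Python sketch that subsamples a FASTQ file down to that depth to emulate a shallow shotgun run; the file names are placeholders, and for production use a streaming tool (e.g., seqtk) would be more memory-efficient.

```python
# Hedged sketch of simulating a shallow shotgun run by randomly subsampling a FASTQ
# file down to ~100,000 reads (the depth discussed above). Paths are placeholders.
import gzip
import random

def subsample_fastq(path_in: str, path_out: str, n_reads: int = 100_000, seed: int = 42) -> None:
    # Load all records (FASTQ records are 4 lines each); fine for a sketch,
    # but large files would call for reservoir sampling instead.
    with gzip.open(path_in, "rt") as fh:
        records = []
        while True:
            block = [fh.readline() for _ in range(4)]
            if not block[0]:
                break
            records.append(block)
    random.seed(seed)
    chosen = random.sample(records, min(n_reads, len(records)))
    with gzip.open(path_out, "wt") as out:
        for block in chosen:
            out.writelines(block)

# Example (placeholder file names):
# subsample_fastq("sample_R1.fastq.gz", "sample_R1.shallow.fastq.gz")
```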
Many tools have been developed to classify shotgun sequencing data using different alignment strategies. For example, the Basic Local Alignment Search Tool (BLAST) is one of the most sensitive metagenomic alignment methods and, consequently, one of the most used software packages for DNA searches. On the other hand, BLAST is also computationally intense, resulting in time-consuming analyses. Thus, many tools aiming at profiling shotgun metagenomics data use different approaches to increase the speed of execution of analyses, such as searching for identical portions of DNA sequences (k-mers) or reducing the computational load with a marker-based classification. However, it has recently been proven that the use of a database composed of microbial marker genes does not provide a complete and accurate picture of microbiome complexity [14,15]. This is correlated with the misclassification of a large portion of the sequenced DNA, which cannot be classified if it does not explicitly belong to the unique genes of classified microorganisms. Thus, it is essential not to implicitly trust the profiling of such tools, since very clean profiles showing few microorganisms may only summarize the actual complexity of the analyzed microbiome. In a sense, the currently ongoing competition to provide the fastest methodology to classify microbiomes can jeopardize the ability of the developed bioinformatics approaches to provide an accurate and reliable overview of the actual microbial biodiversity residing in a biological sample.
Another fundamental element in the classification of shotgun metagenomic data is the completeness of the database used to infer the microbial classification. If the database is filled with misclassified sequences, the output of the analysis will not be reliable. Furthermore, as mentioned above, if the database lacks many bacterial species, the resulting microbial profile will underestimate the actual complexity of that microbiome. Moreover, the need for a continuous and proper update of the databases used in metagenomic analyses should not be underestimated. Microbial taxonomy is in continuous evolution, and many changes can occur within a few months, amounting to a genuine revolution in the classification of microorganisms. In this perspective, we would like to encourage the scientific community to investigate poorly characterized microbiomes through culturomics experiments to gain access to the genome sequences of novel microbial species not yet discovered. It has been shown that unknown microorganisms, also referred to as microbial dark matter, can easily be found in unexplored environments, such as rural human populations, exotic animals, soils, and waters [10,16]. A fundamental step in uncovering the complexity of microbiomes is to retrieve genomic sequences that have not yet been classified, for example, by sequencing the DNA of putative novel species identified through peptide mass fingerprints obtained by matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) [17]. Limitations of MALDI-TOF MS technology are related to similarities between organisms and to databases with a limited number of spectra, leading to poor discrimination between species. Nevertheless, applying MALDI-TOF MS to the discovery of novel species is useful for enriching the database with additional spectra and for isolating these putative unknown microbial species. Thus, a constant update of microbial taxonomy is crucial to provide the reference genomes that will uncover the genuine complexity of microbial biodiversity in future metagenomic assays. This will also provide an instrument to re-analyze the vast amount of sequencing data collected over the last two decades.
To summarize, it is nowadays essential to provide reliable metagenomic data that can be analyzed with comprehensive bioinformatics tools and, at the same time, compared with other studies. The shotgun metagenomic methodology provides the complete repertoire of microbial DNA within a sample, and, to reduce costs, a shallow approach can be applied without affecting the quality of the profiling results. It is also crucial to choose an adequate bioinformatics tool associated with a solid database that is progressively updated to minimize the number of misclassified microorganisms in the analysis. Additionally, shotgun metagenomic sequencing can be coupled with flow cytometry assays or qRT-PCR, or supported by synthetic chimeric DNA spikes added directly to the environmental samples, allowing estimation of the bacterial load of the analyzed biological sample. Relative abundances assessed by bioinformatic pipelines can then be converted into absolute values, unveiling microbiome dynamics that cannot otherwise be uncovered with standard profiling.
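A minimal sketch of this last point, converting relative abundances into absolute values once an external estimate of total microbial load is available (from flow cytometry, qRT-PCR, or a DNA spike-in); the sample names and numbers below are invented for illustration.

```python
import pandas as pd

def to_absolute(rel_abund: pd.DataFrame, total_load: pd.Series) -> pd.DataFrame:
    """Scale relative abundances (taxa x samples) by per-sample total load."""
    rel = rel_abund.div(rel_abund.sum(axis=0), axis=1)   # re-normalise defensively
    return rel.mul(total_load, axis=1)

rel = pd.DataFrame({"sample1": [0.6, 0.4], "sample2": [0.2, 0.8]},
                   index=["Bacteroides", "Prevotella"])
load = pd.Series({"sample1": 1.2e9, "sample2": 4.0e8})   # assumed cells per gram
print(to_absolute(rel, load))
```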
Figure 1. Schematic representation of the methodologies based on 16S rRNA gene microbial profiling and shotgun metagenomics. | 2,888.2 | 2022-02-25T00:00:00.000 | [
"Biology"
] |
The Apple and the Ear, Grasping Sounds in Space—A Theory of Sound Localization
Sensing the direction of origin of a sound in space has long been attributed to the delay between arrival times at the two ears. This now-discredited two-dimensional theory was put to rest by the observation that a person deaf in one ear can locate sounds in three-dimensional space. We present here a new theory of sound localization that has the required three-dimensional measurement. It is a theory that interprets the well-researched biological structure of the mammalian cochlea in a new and logical way, leading to a deeper understanding of how sound localization functions.
INTRODUCTION
The localization of sounds in space has long been explained by the difference in arrival times at the two ears. This is a reasonable, but overly simplistic, explanation. A review of the current thinking on arrival-time localization by Li discusses issues of accuracy and utility [1]. The two ears provide, at most, a two-dimensional localization of a sound in space. A two-dimensional measurement can only locate a sound source in a single plane, such as the horizontal plane. It is well known, and obvious to the casual observer, that mammals can locate sounds in three-dimensional space [2]. To understand this ability of mammals to locate a sound in three-dimensional space, we must look for a mechanism that has three elements, rather than the two elements of the opposing ears.
The nail in the coffin of the two ear theory of sound localization is the observation that profound deafness in one ear does not result in the loss of the ability to locate sounds in three dimensional space [3,4]. This ability to locate a sound in three dimensional space with one ear can be easily experienced with a simple maneuver. Just disable one ear temporarily with an ear plug, or with a finger, and listen to a sound a moderate distance away. In my experience, it is easy to locate sounds with one ear.
The localization of sounds in space has an obvious value for a free-ranging animal's hunting and predator-avoidance behavior, just as the ability to grasp an apple and pluck it from a tree is a necessary skill for Eve to fulfill her role.
This note equates these two behaviors, the grasping of an apple and the grasping of the origin of a sound in space, and suggests a parallel in the mechanisms that underlie both. It takes three fingers to grasp an apple. Similarly, it takes three independent acoustic measurements to grasp the origin of a sound. This is the task the cochlea is set up to perform (Figure 1).
ANALYSIS
Looking at the cochlea, we see that there are three rows of outer hair cells, which send out no afferent signals. Each row of outer hair cells has an independent system of efferent innervation [6]. Overlooking these three rows of outer hair cells is a row of inner hair cells that send afferents to higher centers. The vibration that an inner hair cell sees is a composite of three simultaneous components of the traveling wave that passes down the three rows of outer hair cells. Each row of outer hair cells can modify its propagation velocity under the control of that row's efferent innervation. The efferent innervation controls the membrane potential of the outer hair cell. The membrane potential controls the lengthening and shortening of the outer hair cell, thus stiffening or loosening the loading of that row, which affects the propagation velocity in that row [7,8]. We now have a mechanism for lining up three components of an acoustic wave with the outer hair cells and a way of sensing, with the inner hair cells, when the appropriate match is achieved.
The question now is "what are these three components" of the acoustic wave that travels down the cochlear membrane?
They are illustrated in Figure 3, which represents the head of a primitive mammal. The three acoustic signals are: 1) The direct signal that arrives at the ear straight from the sound source.
2) The reflection from the wet surface of the nose. A reflection from a solid surface occurs with a 180 degree phase inversion.
3) The reflection from the entrance to the oral cavity (the open mouth) which will be in phase with the arriving acoustic wave.
In many small mammals, the areas of the animal's face between the ear and the reflective surfaces on the snout are covered in fur. Fur keeps the animal warm, but it is also a poor sound reflector, which serves to dampen unwanted sound and allows a cleaner reflection from the wet nose, the oral cavity, and possibly the eye.
Let's pause here to look at the nature of these two types of reflection.
REFLECTION FROM A SOLID SURFACE
A solid surface cannot support the variations in pressure presented by an arriving acoustic wave. In order to meet the boundary condition of no pressure change, the arriving wave must be canceled by a reflected wave of opposite phase, i.e., a reflection with a 180-degree phase inversion.
REFLECTION FROM THE OPEN END OF AN ACOUSTIC CAVITY
Reflection from the open end of an acoustic quarter-wave cavity (think of blowing across the open end of a beer bottle) is a little more complicated. The arriving signal enters the open end and travels to the bottom of the cavity. If the cavity is a quarter wavelength of the incoming vibration, there will be a 90-degree phase shift when it reaches the cavity bottom. The wave then reflects off the bottom, which is a solid surface, with a 180-degree phase shift. The reflected wave is shifted by a further 90 degrees by the trip back to the open end of the cavity. All these phase shifts add up to 360 degrees. The emerging (reflected) wave is therefore in phase with the incident wave but delayed by one full period of the incoming wave. This delay creates the illusion that the reflection originates one wavelength in front of the opening of the oral cavity.
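The phase bookkeeping described above can be written compactly as follows, with L the cavity depth, λ the wavelength, and c the speed of sound; this is only a restatement of the argument under the idealized quarter-wave geometry:

$$ L = \frac{\lambda}{4}, \qquad \underbrace{90^\circ}_{\text{down the cavity}} + \underbrace{180^\circ}_{\text{solid bottom}} + \underbrace{90^\circ}_{\text{back to the opening}} = 360^\circ, \qquad \Delta t = \frac{\lambda}{c}, $$

so the reflected wave is in phase with the incident wave but delayed by one period, equivalent to an apparent origin one wavelength in front of the opening.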
The ear is in effect looking with three ears spaced far enough apart to allow triangulation. Since only one ear is required to perform this triangulation we have an answer as to how people deaf in one ear can locate sounds in space.
DEDUCTIONS
When Eve goes for that apple, signals are sent by motor neurons to the muscles that move the fingers until tactile signals say that the apple is contacted. There are three independent feedback networks, one per finger, that act simultaneously to ensure that each finger provides the correct force to accomplish that fateful task of plucking the apple.
In the mammalian ear the efferent signals to the outer hair cells are sent by neurons derived from motor neurons. They are in effect "motor neurons" driving what are three acoustic fingers that are reaching out to grasp a sound in space.
Since the advent of the cochlear implant, which elicits neural activity along the cochlea, it is increasingly important to understand the fine structure of how sound vibrations travel through and are processed by the organ. This paper aims to expand our understanding of the complex behavior of the cochlea.
In mammals the vibrations, which produce a traveling wave, contain temporal components from reflections originating at different parts of the head, with different delays based on the distance of the reflecting surface from the eardrum. The particular sound that is received at the eardrum is the sum of the original wave and the reflections from prominent reflecting surfaces on the animal's head. This sound vibration now has a temporal dimension that sweeps across the extended surface of hair-cell sensors in the inner ear. The sensors are able to simultaneously sense both the original wave and its delayed reflections as the wave travels across the extended sensory surface and are thereby able, it is speculated, to infer the direction of the sound's origin. In lower animals, such as turtles and reptiles, reflections originate on parts of the body, requiring the animal to keep its body very still while listening. In mammals, however, the acoustic reflections required for sound localization originate from points on the head, allowing the animal to keep its head focused on the sound while leaving the body free to move.
It is the job of the auditory part of the brain to institute the necessary feedback functions to control propagation velocity of each row of outer hair cells independently. It must be able to independently adjust the propagation velocity of each row to allow it to achieve simultaneity in the part of the spectrum of interest. By keeping track of the propagation velocities that are required to achieve simultaneity, the location of an incoming sound can be determined.
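To make the triangulation idea concrete, the sketch below estimates the direction of a far-field source from the extra path lengths contributed by reflectors at known positions on the head. It is only a geometric illustration of the argument, not a model of cochlear processing, and all positions and numbers are invented.

```python
import numpy as np

# Far-field geometry: for a reflector at position r (metres, relative to the
# eardrum), the reflected path exceeds the direct path by roughly
#     delta = |r| - r . u,
# where u is the unit vector pointing from the head towards the source.
# With three reflectors, u can be recovered from the measured extra paths.
reflectors = np.array([[0.06, 0.03, -0.01],    # "wet nose"
                       [0.05, 0.00, -0.04],    # "open mouth"
                       [0.04, 0.05,  0.02]])   # possibly the eye
u_true = np.array([0.6, 0.7, 0.39])
u_true /= np.linalg.norm(u_true)

deltas = np.linalg.norm(reflectors, axis=1) - reflectors @ u_true  # "measured"

A = reflectors
b = np.linalg.norm(reflectors, axis=1) - deltas                    # equals r . u
u_est, *_ = np.linalg.lstsq(A, b, rcond=None)
u_est /= np.linalg.norm(u_est)
print(np.round(u_est, 3), np.round(u_true, 3))                     # should agree
```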
Occam's Razor ("the simplest explanation is the best") [9] does not seem to hold for the theory presented here, which piles complexity upon complexity to solve a very difficult problem. Yet looking at Figure 2 we see a clean, direct design that can carry the spirit of Occam's Razor.
CONFLICTS OF INTEREST
The author declares no conflicts of interest regarding the publication of this paper. | 2,158.8 | 2018-11-28T00:00:00.000 | [
"Physics"
] |
The evolution of SPHIRE-crYOLO particle picking and its application in automated cryo-EM processing workflows
Particle selection is a crucial step when processing electron cryo-microscopy data. Several automated particle picking procedures have been developed in the past, but most struggle with non-ideal data sets. In our recent Communications Biology article, we presented crYOLO, a deep-learning-based particle picking program. It enables fast, automated particle picking at human levels of accuracy with low effort. A general model allows the use of crYOLO for selecting particles in previously unseen data sets without further training. Here we describe how crYOLO has evolved since its initial release. We have introduced filament picking, a new denoising technique, and a new graphical user interface. Moreover, we outline its usage in automated processing pipelines, an important advancement on the horizon of the field.
overall context of the particles. Therefore, the approach enables highly accurate picking; for example, it does not select particles on the carbon film and can specifically pick particles attached to liposomes. In addition, a pretrained, generalized model allows the selection of particles in previously unseen data sets with high accuracy.
Recent evolution of crYOLO
Since the release of crYOLO we have improved the software by modifying the network architecture, adding new functionalities, and increasing its usability. In particular, we have integrated a new method for denoising micrographs to increase the signal-to-noise ratio for improved particle detection. By default, crYOLO uses a standard low-pass filter for denoising. However, this method requires parameters to be set manually and has inherent limits. To enable automated denoising, we therefore implemented the recently introduced neural-network-based approach noise2noise 3 in a new tool called JANNI, which can be chosen in crYOLO as an alternative denoising method. We pretrained JANNI on movies from various cryo-EM data sets and used it to denoise previously unseen data sets (Fig. 1). JANNI might be especially helpful for data sets with a low signal-to-noise ratio.
Another important new functionality of crYOLO is filament picking. Owing to their structure, the picking of filaments poses a challenge and is often not supported by automated particle picking procedures. Optimally, only single filaments are selected, and positions where filaments cross or overlap are omitted. In the case of helical specimens, the boxes should be placed along the filament at a distance according to its helical rise to allow the use of helical reconstruction procedures 4 . The new filament picking procedure initially follows the general workflow of crYOLO. In a post-processing step, it uses the picked particles as support points to trace the filaments. The boxes are then placed along the filaments at a distance defined by the user (Fig. 2).
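The geometric part of this procedure can be sketched as follows: box centres are interpolated along the traced filament at a user-defined spacing. This is an illustration of the idea, not crYOLO's internal code; the coordinates and spacing are invented.

```python
import numpy as np

def boxes_along_filament(points, spacing):
    """Place box centres along an ordered filament trace (N x 2 support points)
    at a fixed spacing, e.g. one helical rise expressed in pixels."""
    points = np.asarray(points, dtype=float)
    seg = np.diff(points, axis=0)
    arc = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    targets = np.arange(0.0, arc[-1], spacing)       # arc lengths of box centres
    x = np.interp(targets, arc, points[:, 0])
    y = np.interp(targets, arc, points[:, 1])
    return np.column_stack([x, y])

trace = [(100, 100), (160, 130), (230, 135), (300, 180)]   # picked support points
print(boxes_along_filament(trace, spacing=25.0))
```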
CrYOLO now offers the possibility to improve an existing model, which is an advantage when fine-tuning a general model on a specific data set. In this case, only the last few layers of the network are retrained while the previous layers are kept fixed. This effectively reduces the amount of training data needed to improve a working model. A major advantage of this approach is a substantial speed-up along with reduced GPU memory consumption.
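Conceptually, this fine-tuning corresponds to freezing all but the last layers of a pretrained network and retraining only those, as in the generic Keras sketch below. This is an illustration of the strategy, not crYOLO's actual API, and the loss used here is a placeholder for the real detection loss.

```python
import tensorflow as tf

def fine_tune(model: tf.keras.Model, n_trainable_last: int = 3, lr: float = 1e-4):
    """Freeze all but the last n layers of a pretrained model and recompile it."""
    for layer in model.layers[:-n_trainable_last]:
        layer.trainable = False
    for layer in model.layers[-n_trainable_last:]:
        layer.trainable = True
    # "mse" is a placeholder; a picking network would use its own detection loss.
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model
```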
With the evolution of crYOLO, more options have become available to the user, which increases the complexity of the command line interface. Therefore, we most recently added a new graphical user interface, which makes crYOLO more accessible for new or less technically oriented users (Fig. 3).
Impact of crYOLO
CrYOLO has found widespread use, and since its release in mid-2018 already >15 structures have been solved with the support of crYOLO.
Fig. 1 Example micrographs denoised by JANNI. XaxAB toxin 32 and Tc toxin 33 without denoising (a, c) and with denoising (b, d), respectively. Details for the sample and grid preparation can be found in ref. 32 for the XaxAB toxin and in ref. 33 for the Tc toxin. Scale bars: 50 nm.
In addition, crYOLO was made available through the SBGrid software collection 25 , enabling easy access to crYOLO for groups without advanced computational facilities. CrYOLO was also integrated into COSMIC 26 , a web platform for cryo-EM data processing via cloud computing. Very recently, Li et al. 27 used crYOLO in a user-free preprocessing pipeline. This shows that crYOLO has been broadly used by other groups and proven flexible enough for a wide variety of applications.
The general model and automated processing
Since crYOLO provides a generalized model, it is the optimal particle selection software to be integrated into an automated cryo-EM single-particle analysis procedure. The general model of crYOLO was pretrained on >60 different data sets, including proteins of various sizes and shapes. This allows it to pick previously unseen particles not included in the training data set. To this end, crYOLO is a crucial part of our software package SPHIRE, which we are optimizing to be used in a completely automated fashion 28 . In Scipion, crYOLO is supported for the construction of intelligent workflows 29 . A recent integration of crYOLO into the automatic pipeline of Relion 30 is successfully used at the Electron Bio-Imaging Center (eBIC) at Diamond Light Source 31 .
Whereas the generalized model offers great opportunities for automated processing, limitations remain. The amount of data used for the general model is still limited and might be biased towards the set of proteins used for the initial training. The general model is also not able to distinguish between intact and dissociated or fragmented particles in the same sample. This requires additional training to fine-tune the general model with particles manually picked from a few micrographs. A drawback is that this requires manual intervention and is therefore not suitable for automated processing. A better strategy is to automatically fine-tune the general model based on 2D classification, where particles representing similar views are grouped together, aligned, and averaged.
During 2D classification, broken particles will be separated from intact particles. The latter ones will then be used to train a crYOLO model or fine-tune the general model.
Optimally, a fully automated pipeline would also include a deep-learning-based 2D class selection tool. Our group is currently developing such software, which we call Cinderella. While it is still under development, it is already publicly available and successfully integrated into SPHIRE 28 . Cinderella provides a pretrained general model and is able to separate 2D classes into good and bad classes.
In the future, a combination of Cinderella and crYOLO will allow automated feedback loops to improve the picking quality in an iterative manner. With these tools at hand, we believe that real-time automated processing, even for challenging data sets, is within reach.
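Such a feedback loop could look, in outline, like the pseudocode below. All function names are hypothetical placeholders standing in for the corresponding steps (a picker such as crYOLO, 2D classification, a class-selection tool such as Cinderella, and fine-tuning of the picking model).

```python
# Hypothetical helper names; the concrete commands depend on the pipeline in use.
def iterative_picking(micrographs, general_model, n_rounds=2):
    model = general_model
    for _ in range(n_rounds):
        particles = pick_particles(micrographs, model)     # e.g. crYOLO prediction
        classes = classify_2d(particles)                    # 2D class averages
        good = select_good_classes(classes)                 # e.g. Cinderella-style scoring
        model = fine_tune(model, training_particles=good)   # retrain the last layers
    return pick_particles(micrographs, model)
```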
Data availability
All data supporting the findings of this study are available from the corresponding author on reasonable request. | 1,548.4 | 2020-02-11T00:00:00.000 | [
"Computer Science"
] |
Polystyrene/Montmorillonite Nanocomposites: Study of the Morphology and Effects of Sonication Time on Thermal Stability
Polymer nanocomposites of a polystyrene matrix containing 10 wt% of organo-montmorillonite (organo-MMT) were prepared using the solution method with sonication times of 0.5, 1, 1.5, and 2 hours. Cetyltrimethylammonium bromide (CTAB) was used to modify the montmorillonite clay after saturating its surface with Na ions. Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM), and transmission electron microscopy (TEM) were used to characterize the montmorillonite before and after modification by CTAB. The prepared nanocomposites were characterized using the same analysis methods. These results confirm the intercalation of PS in the interlamellar spaces of organo-MMT, with a very small amount of exfoliation of the silicate layers within the PS matrix for all samples at all studied sonication times. The thermal stability of the nanocomposites was measured using thermogravimetric analysis (TGA). The results show a clear improvement, and the effects of sonication time are noted.
Introduction
Recently, nanocomposites such as polymer-layered silicate nanocomposites have become effective alternatives to conventional polymer composites in many applications. These systems may be classified according to the dispersion of the clay in the polymer. In general, two types of polymer-layered clay nanocomposites can be obtained, intercalated and exfoliated, the latter also being known as delaminated nanocomposites. Nanocomposites are defined as composites in which the dispersed particles are in the nanometer range in at least one dimension [1,2]. The thickness of a layer in layered clay is approximately 1 nm, and the lateral dimensions of these layers may vary from 300 Å to several microns or even larger, depending on the particular silicate. The layers organize to form stacks with a regular van der Waals gap, which is called the interlayer [3]. Intercalates are obtained when a single polymer chain is located between silicate layers in a manner that increases the layer spacing while attractive forces between the layers maintain regularly spaced stacks. Exfoliates are obtained when the layer spacing is increased sufficiently to overcome the interactions between the layers, which are then randomly dispersed in a continuous polymer matrix [4]. The preparation methods of nanocomposites can be classified into three categories according to the starting materials and processing techniques: intercalation polymerization, which includes the intercalation of one or more suitable monomers and subsequent polymerization [5-8]; polymer intercalation by the solution method [9-11]; and polymer intercalation by the melt method [12]. Melt intercalation is advantageous because no organic solvents are needed, which makes this method environmentally benign, and because processing techniques such as extrusion and injection molding can be adopted. The solution method, however, gives good control over the homogeneity of the constituents, which helps in understanding the intercalation process and the nanocomposite morphology. It also leads to a better understanding of the structure and dynamics of the intercalated polymers in these nanocomposites, which can provide molecular insight and lead to the design of materials with desired properties [13]. Therefore, in this work, a local montmorillonite clay is modified for use as filler and is used to prepare PS/organo-MMT nanocomposites by the solution method. The effect of the sonication time applied during the preparation process on the thermal stability of the PS/organo-MMT nanocomposites is studied. The morphology of the clay and the nanocomposites is characterized using FTIR, XRD, and TEM.
Washing and Saturation of Montmorillonite.
To prepare the MMT suspension, 31 mL of distilled water was added to 150 g of MMT and shaken for 24 h. Afterwards, the suspension was allowed to sit for 30 min. Distilled water was added to the separated MMT suspension, and the mixture was shaken for 48 h. The mixture was allowed to sit for 24 h. Afterwards, the MMT suspension was separated and saturated by shaking with 0.5 mol/L NaCl solution for 24 h. This step was repeated five times. The sediment was washed with distilled water to remove excess salt, which was confirmed by a negative chloride ion test using AgNO3. The saturated montmorillonite was dried at 105 °C and ground. The montmorillonite was stored in well-stoppered bottles in desiccators over CaCl2. The samples of Na-saturated montmorillonite were labeled Na-MMT.
Modification of Montmorillonite
The organo-MMT was prepared using the ion exchange method. A cetyltrimethylammonium bromide (CTAB) solution was used to modify Na-MMT, using a CTAB concentration twice the cation exchange capacity (CEC = 80 meq/100 g) of the montmorillonite. Ten grams of Na-MMT was dispersed in 1000 mL of deionized water, and the dispersion was vigorously stirred overnight. Approximately 7.3 g of CTAB was dissolved in 100 mL of deionized water and slowly added to the Na-MMT suspension. The mixture was magnetically stirred for 24 h at room temperature [14]. The organo-MMT particles were separated by centrifugation at ambient temperature and repeatedly washed to remove excess Br−. The product was dried in a vacuum oven at 80 °C. The sample was labeled organo-MMT.
PS/MMT Nanocomposites.
Ten percent by weight of organo-MMT was added to 20 mL of toluene. The suspension was stirred magnetically for 1 h and sonicated for 1 h before PS (2 g) was added. The resulting mixture was magnetically stirred for 1 h and further sonicated for 0.5, 1, 1.5, or 2 h to study the effect of sonication time. The mixture was cast into Petri dishes. The thin films obtained were removed from the glass plates after 24 h at room temperature, once the solvent had evaporated. This procedure was performed at room temperature and ambient pressure. The nanocomposites obtained were labeled PS/organo-MMT. A virgin polystyrene sample was prepared for comparison by mixing it with toluene using a magnetic stirrer. This sample was labeled PS.
Characterization and Measurements.
Particle size distribution measurements of the Na-MMT and organo-MMT suspensions were performed using a Malvern Zetasizer ZS, ZEN3500 (UK). Aqueous suspensions were prepared as follows: 0.1 g of clay sample was dispersed in distilled water, and the suspension was kept in an ultrasonic bath for 5 minutes in order to break up the agglomerates, resulting in fine, colloidal particles dispersed in water. The structures of the Na-MMT and organo-MMT powders and of the prepared nanocomposite films were investigated using Fourier-transform infrared (FTIR) spectroscopy (FT-IR Spectrometer 1000, Perkin Elmer). The structure of the nanocomposites was monitored using X-ray diffraction (XRD) with a RIGAKU Ultima-IV diffractometer (Japan); intensity data were collected in the 2θ range of 0°-25° at a step of 0.02° and a 2 s count time using Cu-K radiation. The surface morphology of the prepared organoclays was examined using a JEOL JSM-6360LV scanning electron microscope. Small amounts of the dried powders (approximately 0.01 g) were placed on sticky carbon tape on standard Al mounts and then sputter coated with a thin conductive layer of gold.
Transmission electron microscopy (TEM) images were recorded to investigate the morphology and inner structure of the PS/organo-MMT nanocomposite samples. TEM images were recorded on a JEOL JSM-6060LV transmission electron microscope at an accelerating voltage of 100 kV. For TEM observation of the clays (Na-MMT and organo-MMT), small drops of dilute suspensions of 0.1 g of clay in 5 cm3 of doubly distilled water were placed on Cu mesh grids coated with a thin carbon film. The grids were air-dried and then briefly placed in a 60 °C oven to ensure complete drying prior to insertion into the instrument. The nanocomposite film samples were microtomed at room temperature with a diamond knife using a Reichert ultramicrotome in order to obtain 80 nm thick sections. Microtomed sections were transferred from water onto 200 mesh copper grids and used without staining. The thermal degradation of the PS/organo-MMT nanocomposites was determined using thermogravimetric analysis (TGA) with a Perkin-Elmer analyzer. Thermograms, using approximately 10 mg of sample, were recorded from 25 °C to 800 °C at a heating rate of 10 °C min−1 under nitrogen flow.
Size Distribution of Montmorillonite Suspensions.
Particle size distribution (PSD) analysis is a measurement designed to determine the size and range of a set of particles in a representative material. Particle size was determined using dynamic light scattering measurements. Figures 1 and 2 show the size distribution histograms of the Na-MMT and organo-MMT suspensions. The Zetasizer software uses algorithms to extract the decay rates for a number of size classes to produce a size distribution. The x-axis shows the distribution of size classes, whereas the y-axis shows the relative intensity of the scattered light. The particle size distribution of the Na-MMT suspension was wide (three peaks); its distribution is therefore very heterogeneous. For Na-MMT, 88.2% of the particles were 748.5 nm, 6.4% were 105.4 nm, and 5.4% were 426.3 nm. The organo-MMT suspension shows one peak with a narrow size distribution in the histogram, which may correspond to a single type of size distribution. This result revealed that the organo-MMT particles are highly monodispersed, indicating that they have the same size and shape, with an average diameter of 1298 nm obtained at 100% intensity. The increase in size was attributed to an aggregation of particles because of the interaction of the positively charged CTAB molecules with the negatively charged clay particles. The average size and intensity are presented in Table 3.
Infrared Absorption Spectra
3.2.1. Infrared Absorption Spectra of Montmorillonite. The infrared absorption spectra (FT-IR) of Na-MMT and organo-MMT were recorded in the region from 400-4000 cm−1. The full FT-IR spectra of the samples are presented in Figure 3. For the Na-MMT spectrum, the absorption band at 3624 cm−1 is due to the stretching vibrations of structural OH groups coordinated to Al-Al pairs, the complex broad band at approximately 1032 cm−1 corresponds to Si-O stretching, and the 528 and 467 cm−1 bands are related to Al-O-Si, Si-O-Si, and Si-O deformations [15,16]. Adsorbed water molecules result in a broad band at 3447 cm−1, corresponding to the H2O stretching vibrations, and the absorption band of the H2O bending vibration is at 1639 cm−1 [17]. For the organo-MMT spectrum, several absorption bands appeared at different positions. The asymmetric stretching vibration of the structural OH groups of Na-MMT at 3626 cm−1, the symmetric stretching vibration of the hydroxyl groups at 3430 cm−1, and the in-plane bending vibration of H-O-H at 1656 cm−1 correspond to the structural change from hydrophilic to hydrophobic character [18,19]. The stretching band of the OH groups at 3624 cm−1 (Na-MMT) shifted to 3626 cm−1 in the organo-MMT sample. This slight shift implies the replacement of H2O molecules from the galleries when CTAB is adsorbed. Similarly, the in-plane bending vibrations of the OH groups are characterized by a broad band at 1639 cm−1; this band shifted to 1656 cm−1 in the FT-IR spectrum of the organo-MMT sample, which indicates an intercalation of surfactant molecules between the silica layers of the montmorillonite. The broad band observed at 3447 cm−1 corresponds to the stretching vibrations of the structural and free OH groups; this band shifted to 3430 cm−1 in organo-MMT. The sharp peaks that appear at 2922 and 2822 cm−1 are characteristic of CTAB, and these peaks are indicative of intermolecular attractions between adjacent alkyl chains of CTAB in the Na-MMT galleries [20,21]. A small peak at 915 cm−1, belonging to the in-plane stretching vibrations of the Si-O bonds of the tetrahedral silica layers, shifted to 911 cm−1. All of these changes point to the organophilic modification of Na-MMT by CTAB in the prepared organo-MMT sample [19,22-26]. Table 4 shows the band assignments for Na-MMT and organo-MMT.
3.2.2. Spectra of PS and the PS Nanocomposites. The infrared absorption spectra (FT-IR) of PS and the PS/organo-MMT nanocomposites after different sonication times were recorded in the region from 400-4000 cm−1, as shown in Figure 4. The infrared spectrum of PS features bands at 3066, 3025, 2922, 2851, 1666-1945, 1491-1599, 1188-1368, 1026, 698-756, and 543 cm−1; the salient absorption bands and their assignments are listed in Table 5. The FTIR spectra of the PS/organo-MMT nanocomposites at all applied sonication times clearly exhibit the characteristic absorption bands attributable to PS and organo-MMT. All PS bands appeared in these spectra with slight shifting. New bands that appeared in the regions of 3625, 1027, and 466 cm−1 were attributed to OH stretching of structural hydroxyl groups (Al-OH), Si-O stretching, and bending vibrations of organo-MMT, indicating the presence of organo-MMT in the PS matrix, where the polymer chains were inserted between the layers of the organo-MMT by secondary valence forces.
X-Ray Diffraction of Montmorillonite
XRD analysis provides information on changes in the interlayer spacing of the clay layers. The formation of an intercalated structure should result in a decrease in the 2θ value, indicating an increase in spacing, whereas the formation of an exfoliated structure usually results in the complete loss of registry between the clay layers, such that no peak can be observed in the XRD trace [4]. Thus, the Na-MMT and organo-MMT samples were studied by WAXD measurements in the range 2θ = 1-10°. Figure 5 shows the XRD patterns of Na-MMT and organo-MMT. Upon intercalation of CTAB, the basal spacing increased as expected. In the XRD pattern of Na-MMT, the characteristic (001) reflection of montmorillonite corresponds to a basal spacing of 12.4 Å, and in organo-MMT this peak was observed at a lower angle than in Na-MMT. The spacing increased from 12.4 Å for Na-MMT to 19.6 Å for organo-MMT. This increase in spacing provides evidence supporting the exchange of the interlayer sodium by CTA+ ions. Figure 6 illustrates the probable mechanism of adsorption of CTAB molecules onto the montmorillonite surface layers, as well as the relationship between CTAB adsorption and the structure of the adsorption layer as influenced by the charge distribution of the clay surface. Initially, when the clay is treated with a CTAB concentration below 0.7 CEC, CTAB adsorption occurs by cation exchange (Figure 6(a)) [27], which causes nonuniform interlayer swelling. As the CTAB concentration approaches the CEC of the clay (0.8-1.0 CEC), selective intercalation of CTAB develops along the silicate layers (Figure 6(b)). When the CTAB concentration increases beyond the CEC of the clay (>1.0 CEC), CTAB adsorption may occur predominantly through hydrophobic bonding, which increases the interlayer spacing to approximately 40 Å (Figures 6(c) and 6(d)) [27]. The result obtained for organo-MMT is consistent with this mechanism, with the (001) spacing shifting directly from 12.4 Å in Na-MMT to 19.6 Å in organo-MMT (see Table 6).
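The basal spacings quoted above are related to the peak positions through Bragg's law, as in the short sketch below. The Cu-Kα wavelength of 1.5406 Å is assumed here, consistent with the Cu radiation stated in the experimental section.

```python
import numpy as np

CU_K_ALPHA = 1.5406  # angstrom, assumed Cu-K-alpha wavelength

def d_spacing(two_theta_deg, wavelength=CU_K_ALPHA):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = np.radians(np.asarray(two_theta_deg, dtype=float) / 2.0)
    return wavelength / (2.0 * np.sin(theta))

def two_theta(d_angstrom, wavelength=CU_K_ALPHA):
    """Inverse relation: peak position in degrees 2-theta for a given spacing."""
    return 2.0 * np.degrees(np.arcsin(wavelength / (2.0 * d_angstrom)))

# Approximate (001) peak positions for the two basal spacings reported above:
print(two_theta(12.4), two_theta(19.6))   # roughly 7.1 and 4.5 degrees 2-theta
```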
Table 7: The XRD data of the organo-MMT and PS/MMT nanocomposite spectra.
X-Ray Diffraction of Nanocomposites.
X-ray diffraction (XRD) measurements were also applied to analyze the structural characteristics of PS and the PS nanocomposites prepared at different sonication times. Figure 7 shows the XRD patterns of PS and the PS/organo-MMT nanocomposites, and Table 7 lists the 2θ and d-spacing values. For PS, only a broad amorphous peak, appearing within the range 2θ = 18-22°, can be observed.
For the PS/organo-MMT nanocomposites, it is clear from the figure that some diffraction peaks of organo-MMT and the broad peak of PS appear in the spectra of all nanocomposite samples. The diffraction peak of montmorillonite seen in the organo-MMT spectrum appears at a smaller angle, with increased spacing, in all the PS/MMT nanocomposite spectra, and the angle is approximately the same for all samples (see Table 7). The increase in spacing indicates that intercalated nanocomposites were formed. It was also noticed that sharp peaks appear in the amorphous PS region with increasing sonication time, which may indicate that the crystalline structure of the polymer was positively affected. An optimal sonication time of 1 h was found to produce nanocomposites with maximum peak intensity. A sonication time of 0.5 h is not sufficient for the ordering and regularity of the polymer chains between the interlayers of the organo-MMT, whereas a sonication time of 1 h resulted in a region with crystalline structure (Figure 7). When the sonication time increases beyond one hour, the polymer chains may enter between the clay layers with random orientation owing to the instability of the clay structure. However, the effect of sonication on clay-polymer nanocomposites still needs further study to identify the regularity of the polymer chains between the clay layers.
Transmission Electron Microscopy (TEM).
Transmission electron microscopy (TEM) provides an actual image of the clay layers, which permits identification of the morphology of the nanocomposites [4]. Thus, TEM was another method used to obtain evidence for the formation of the intercalated structure of the prepared nanocomposites. Figure 8 shows TEM images of PS/organo-MMT after 1 h of sonication at a variety of magnifications. As shown in these images, the clay platelets are still stacked on each other (see the darkest areas). In addition, the clay layers have expanded, with varied interlayer spacings such as 2.15, 2.56, and 2.6 nm, whereas a few individual clay layers are well distributed in the polymer matrix. From these images, it can be concluded that intercalated structures and a very small amount of exfoliated structures were formed, which confirms and agrees with the XRD results.
Scanning Electron Microscopy (SEM)
3.5.1. SEM of Montmorillonite. To determine the shape, size, and morphology of the particles, SEM was used. In Figure 9, micrographs of the raw MMT, Na-MMT, and organo-MMT powders are presented. It is clear that the particles of all samples are irregularly shaped and contain many edges of different sizes. It is known that these factors play an important role in the interactions between filler and matrix and in interfacial adhesion. Regarding the morphology of the samples, the images show that the original Na-MMT had massive and curved plates. Compared to the morphology of Na-MMT, the montmorillonite modified with CTAB showed significant changes in morphology: there were many aggregated particles, and the plates became flat.
SEM of Nanocomposites.
To observe the dispersion of the particles, the nanocomposites after 1 h of sonication were examined using SEM, and the images are presented in Figure 10. Incorporation of 10% organo-MMT into the polystyrene matrix resulted in a region with spherulitic texture (lamellar structure). This texture could be related to the crystallinity of the polymer. Spherulite-like crystals with well-defined impinged boundaries were observed in PS/organo-MMT after 1 h of sonication compared with PS. Good dispersion of this percentage of MMT throughout the PS matrix under sonication could lead to a nucleation effect and increase the degree of crystallinity. This result strongly agrees with the XRD results.
Thermogravimetric Analysis (TGA)
3.6.1. TGA of Montmorillonite. Weight loss due to the formation of volatile products during degradation at high temperature was monitored as a function of temperature. The data obtained from the TGA analysis included the temperature at which 10% degradation occurred, which is a measure of the onset temperature of degradation [4]. Figure 11 shows the TG curves as well as the corresponding derivative (DTG) curves of Na-MMT and organo-MMT. For Na-MMT, thermal decomposition occurred in two steps; for organo-MMT, it took place in three steps. The first step in the TG graph of Na-MMT, up to 110 °C, was related to the dehydration of water molecules adsorbed in pores and between the silicate layers during heating; the first endothermic peak appeared at an onset of 66 °C in the DTG graph, and the corresponding mass loss was 13% at 110 °C. For organo-MMT, the dehydration peak in the DTG graph was smaller, and the corresponding mass loss was 2%.
The second step of the TG curve of organo-MMT began at approximately 200-216 °C and ended at approximately 300-350 °C, with the maximum mass loss rate occurring at an onset of 229 °C. At the end of this stage, the mass loss of organo-MMT was 13%. These losses were attributed to the thermal degradation of the alkyl tails (-CH2) and ammonium heads (-N(CH3)3) [18]. In the third TG step, which extended up to 600 °C, the rate of thermal degradation slowed, and the decomposition was attributed to the remaining alkyl chains. These findings verify the successful bonding of CTAB to the montmorillonite surface and the replacement of the OH groups [28-31]. The residues at 600 and 450 °C, roughly 12% for Na-MMT and 45% for organo-MMT, respectively, were also distinctive, reflecting the permanent structural modification of the clay, although it retained its inorganic character.
3.6.2. TGA of Nanocomposites. The thermal stability of the nanocomposites and of virgin PS was studied by TGA. Figure 12 and Table 8 show the TG mass loss curves and corresponding derivative (DTG) curves of PS and of a series of PS/organo-MMT nanocomposites prepared at various sonication times (0, 0.5, 1, 1.5, and 2 h). For the PS sample, the curve shows a single degradation step, the decomposition of the polymer backbone, with an onset temperature of 388.07 °C. The thermal decomposition of all PS/organo-MMT samples exhibits a single weight-loss step from 450 to 500 °C. All samples show an increase in the onset temperature of degradation relative to virgin PS, indicating enhanced thermal stability. From the DTG curves, the maximum rate of mass change for the PS/organo-MMT nanocomposites appeared at 457, 460, 453, and 457 °C for 0.5, 1, 1.5, and 2 h of sonication, respectively. The improvement in the thermal stability of the PS/organo-MMT nanocomposites is explained as follows. First, the organo-MMT acts as a mass transport barrier and insulator between the polymer and the superficial zone where polymer decomposition takes place [32,33]. Second, it can be attributed to the restricted thermal motion of the polymer localized in the galleries [34]. Third, there is the increase in the crystallinity of PS suggested by the XRD and SEM results. However, there is little difference between the onset readings of the nanocomposites at different sonication times, which indicates that the sonication time does not play an important role in the thermal decomposition of the nanocomposites.
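A minimal sketch of how the quantities discussed here (onset temperature and the DTG peak) can be extracted from a TG record, using the 10% mass-loss convention mentioned in Section 3.6.1; the tangent-intersection construction is an alternative that is not shown.

```python
import numpy as np

def dtg_and_onset(temp_c, mass_pct, loss_pct=10.0):
    """Return the DTG curve, a 10%-mass-loss onset estimate, and the DTG peak."""
    temp = np.asarray(temp_c, dtype=float)
    mass = np.asarray(mass_pct, dtype=float)        # starts near 100 %, decreases
    dtg = np.gradient(mass, temp)                    # % per deg C (negative on loss)
    # temperature at which the sample has lost loss_pct of its mass
    onset = np.interp(100.0 - loss_pct, mass[::-1], temp[::-1])
    t_peak = temp[np.argmin(dtg)]                    # temperature of maximum loss rate
    return dtg, onset, t_peak
```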
Conclusions
Nanocomposites containing locally produced polystyrene and modified local montmorillonite were prepared. The modification of the montmorillonite was achieved using CTAB at a concentration of 2.0 CEC. The preparation of PS/organo-MMT using the solution method with different sonication times was tested. The results of the FT-IR, XRD, and SEM analyses have shown the following.
(i) The modification of the montmorillonite surface by CTAB successfully intercalates the surfactant into the clay gallery.
(ii) Using sonication during the preparation of the PS nanocomposites gives good miscibility between the organo-MMT and the PS matrix.
(iii) The PS/organo-MMT nanocomposites are of the intercalated type.
(iv) An improvement in the thermal stability of the PS/organo-MMT nanocomposites is noted.
(v) The difference in sonication time applied during the preparation stage is not an important factor for the thermal stability of the PS nanocomposites.
Lastly, studying a wider range of sonication times during the preparation stage with different materials is recommended, to give a clearer picture and a sounder judgment regarding this factor.
Table 3: The particle size distributions of Na-MMT and organo-MMT.
Figure 1: The particle size distribution of the Na-MMT suspension.
Figure 2: The particle size distribution of the organo-MMT suspension.
Table 4: The infrared absorption data of Na-MMT and organo-MMT. Recoverable entries (assignment, Na-MMT, organo-MMT, shift in cm−1): OH stretching of structural hydroxyl groups (Al-OH), 3624, 3626, 2; OH stretching of water, 3447, 3430, −17; asymmetric stretching of CH2 (in CTAB), -, 2922, -; symmetric stretching of CH2 (in CTAB), remainder of the table not recoverable.
Figure 3: FT-IR spectra of Na-MMT and organo-MMT.
Figure 4: FT-IR spectra of PS and PS/MMT nanocomposites at different sonication times.
Figure 5: X-ray diffraction patterns of Na-MMT and organo-MMT.
Table 5: Positions and assignments of the IR vibration bands of PS.
Figure 7: XRD patterns of organo-MMT, PS, and PS/MMT nanocomposites at different sonication times.
Figure 10: SEM images of PS/organo-MMT after 1 h of sonication time at two magnifications.
Figure 11: TGA curves of Na-MMT and organo-MMT.
Figure 12: TGA curves of virgin PS and its nanocomposites at different sonication times.
Table 8: TG data of PS and PS/organo-MMT nanocomposites at different sonication times. | 5,527.2 | 2013-01-01T00:00:00.000 | [
"Materials Science"
] |
Creep and fracture of warm columnar freshwater ice
This work addresses the time-dependent response of 3 m × 6 m floating edge-cracked rectangular plates of columnar freshwater S2 ice by conducting load control (LC) mode I fracture tests in the Aalto Ice Tank of Aalto University. The thickness of the ice plates was about 0.4 m and the temperature at the top surface about −0.3 °C. The loading was applied in the direction normal to the columnar grains and consisted of creep/cyclic-recovery sequences followed by a monotonic ramp to fracture. The LC test results were compared with previous monotonically loaded displacement control (DC) experiments on the same ice, and the effect of the creep and cyclic sequences on the fracture properties is discussed. To characterize the nonlinear displacement–load relation, Schapery's constitutive model of nonlinear thermodynamics was applied to analyze the experimental data. A numerical optimization procedure using the Nelder–Mead (N-M) method was implemented to evaluate the model functions by matching the displacement record generated by the model to that measured in the experiment. The accuracy of the constitutive model is checked and validated against the experimental response at the crack mouth. Under the testing conditions, the creep phases were dominated by a steady phase, and the ice response was
Introduction
Understanding the deformation and fracture processes of columnar freshwater ice is important in many engineering problems. For example, freshwater ice sheets fracture when in contact with ships, river ice fractures during interaction with bridge piers, and thermal cracks form in lakes and reservoirs. Deformation and fracture processes of freshwater ice are highly dependent on temperature, strain rate, sample size, grain type, and grain size. Qualitatively, high temperature and low strain rate lead to viscous behavior and ductile fracture; low temperature and high strain rate lead to elastic behavior and brittle fracture (Gharamti et al., 2021). However, quantitatively these relations are not well known.
As the response of freshwater ice is time-dependent, a general constitutive model should incorporate elastic (immediate and recoverable), viscoelastic (or delayed elastic; time-dependent and recoverable), and viscoplastic (time-dependent and unrecoverable) components (Jellinek and Brill, 1956; Sinha, 1978). The importance of each component depends on the problem studied. For example, thermal deformations of ice in dams can have a timescale of a few days, and creep behavior dominates. In ice-structure interaction problems, the timescale of interest is often seconds to hours, so all three components of deformation need to be modeled.
This paper reports results from laboratory experiments which were conducted to study the time-dependent response and fracture of columnar freshwater ice. The work is directly relevant to a number of practical problems (Ashton, 1986) but also has general relevance in ice research, as it studies coupled creep and fracture in a quasi-brittle material. Unless only short timescales are involved, where only the elastic response is relevant, the creep deformations must be modeled to obtain the true fracture behavior. In materials with time-dependent properties, the fracture and creep responses are coexistent.
Phenomenological laws are classified into two groups. The first group consists of empirically based relations (Sinha, 1978; Schapery, 1969). Their equations relate macroscopic variables: stress/load, strain/displacement, and time. They do not contain state variables that describe the internal state of the material and are valid only for constant stress/load. The functions in these models can easily be calibrated to simulate the experiments. The second group of phenomenological models starts from physically based models involving internal state variables (dislocation density, internal stresses reflecting hardening, etc.); these develop differential equations for the evolution of the variables with time and quantify their dependence on stress, temperature, and strain (Le Gac and Duval, 1980; Sunder and Wu, 1989, 1990; Abdel-Tawab and Rodin, 1997). These models provide insights into the microscopic mechanisms taking place, and the state variables describe the deformation resistance offered by changes in the microstructure of the material. However, they require a proper identification of the deformation mechanisms.
The effect of time-dependent loading on the strength of freshwater ice has been examined in the literature. Subjecting freshwater ice to cyclic loading apparently leads to a significant increase in the tensile, compressive, and flexural strength and the fracture toughness of that ice (Murdza et al., 2020; Iliescu et al., 2017; Iliescu and Schulson, 2002; Jorgen and Picu, 1998; Rist et al., 1996; Cole, 1990). On the other hand, no detailed investigation of the effect of creep and cyclic loading on the fracture properties of freshwater ice has been conducted in the past.
Laboratory experiments were conducted to measure the time-dependent response and fracture behavior of 3 m × 6 m floating edge-cracked rectangular plates of columnar freshwater S2 ice, loaded in the direction normal to the columnar grains. The ice studied was warm; the temperature at the top surface of the samples was about −0.3 °C. Compared to earlier studies with freshwater ice, the samples were large (3 m × 6 m) and very warm. A program of five load control (LC) mode I fracture tests was completed in the test basin (40 m square and 2.8 m deep) at Aalto University. Creep/cyclic-recovery sequences were applied below the failure loads, followed by monotonic ramps leading to complete fracture of the specimen. The LC results were compared with the fracture results of monotonically loaded displacement control (DC) tests of the same ice (Gharamti et al.).
The constitutive modeling used in this paper was presented by Schapery (1969) and applied to polymers. Schapery's model belongs to the first phenomenological group and originates from the theory of nonlinear thermodynamics. This study presents the first attempt to use Schapery's model for freshwater ice. The choice of this model for freshwater ice is motivated by the fact that it was successfully applied to saline ice (Schapery, 1997; Adamson and Dempsey, 1998; LeClair et al., 1999, 1996) with encouraging results. The model accurately described the deformation response during load/unload applications over varying load profiles.
The experiments in this study aim to assess the time-dependent nature of warm columnar freshwater S2 ice. In particular, the study aims to examine (1) the extent to which the elastic, viscoelastic, and viscoplastic components contribute to the ice deformation as defined through the crack mouth opening displacement; (2) the effects of the testing conditions on the creep stages (primary/transient and steady state/secondary) present in the ice; (3) the effects that creep and cyclic sequences have on the fracture properties, i.e., failure load and crack growth initiation displacements; and (4) the ability of Schapery's nonlinear constitutive model to predict the experimental response.
The rest of the paper is structured as follows. In Sect. 2, a description of the experimental setup, testing conditions, and the applied loading profile is presented. Section 3 introduces Schapery's model, which is used to analyze the experiments. In Sect. 4, the experimental and model results are summarized and analyzed. Section 5 concludes the paper.
The experiments were conducted at an ambient temperature of −2 °C. The ice was columnar freshwater S2 ice with a mean grain size of 6.5 mm (Fig. 2b). The temperature at the top surface was about −0.3 °C, as shown in Fig. 2a. An edge crack of length A0 (A0 ≈ 0.7 L) was cut and tip-sharpened in each ice specimen. The response of the ice was monitored using a number of surface-mounted linear variable differential transducers (LVDTs). LVDTs were placed at five different locations along the crack to measure the crack opening displacements directly. Figure 1 labels these positions as CMOD, COD, NCOD1, NCOD2, and NCOD3 for the crack mouth, an intermediate crack position, 10 cm behind the initially sharpened tip, 10 cm ahead of the tip, and 20 cm ahead of it, respectively. A hydraulically operated device was inserted in the mouth of the pre-crack to load the specimen horizontally, in the direction normal to the columnar grains, with a contact loading length of 150 mm, denoted by D in Fig. 1. The tests were load controlled by a computer-operated closed-loop system that also recorded the displacement measurements. Creep/cyclic-recovery sequences were applied below the failure loads, followed by monotonic ramps leading to complete fracture of the specimen. The loading rate used is similar to that used in earlier sea ice studies (LeClair et al., 1999; Adamson and Dempsey, 1998) and thus allows comparison of these two materials. The global crack propagation path was straight through the gauges. A detailed description of the experimental setup, ice growth, microstructure, and fractographic analysis is provided in Gharamti et al. (2021).
Creep-recovery and monotonic loading profile
In two tests, ice specimens were subjected to creep-recovery loading followed by a monotonic fracture ramp. The creep-recovery sequences consisted of four constant load applications separated by zero-load recovery periods. Each sequence was composed of alternating load/hold and release/recovery periods. Creep phases were applied at load levels of 0.4, 0.8, 1.2, and 0.4 kN, as given by the loading signal in Fig. 3a. The loads were chosen low enough to avoid crack propagation and failure of the specimen. Each load-hold-unload was applied in the form of a trapezoidal wave function to avoid an instantaneous load jump and drop; the load was applied in approximately 10 s and released in approximately 10 s. The slopes of the wave on loading and load release were 0.04, 0.08, and 0.12 kN s−1 for the 0.4, 0.8, and 1.2 kN load levels, respectively. Once at the desired hold level, the load was kept constant for a predetermined time interval. The load intervals were multiples of the hold interval for the 0.4 kN load level, t1 = 126 s. For the 0.8 and 1.2 kN load levels, the time interval was doubled and quadrupled: 2 t1 = 252 s and 4 t1 = 504 s, respectively. The four zero-load recovery periods separating the creep load periods were also a function of t1. Three recovery periods were held at the zero-load level for 5 t1 = 630 s, while the last recovery period was maintained for a longer interval of 10 t1 = 1260 s.
Immediately following the creep and recovery loading sequences, the specimen was loaded monotonically to failure on a load-controlled linear ramp. The ramp up to the peak load and the unloading were each applied over an interval of t1.
Cyclic-recovery and monotonic loading profile
In three tests, ice specimens were loaded with cyclic-recovery sequences followed by a fracture ramp, as shown in Fig. 3b. The cyclic-recovery loading consisted of three sequences, each composed of four load fluctuations, at the levels of 0.4, 0.8, and 1.2 kN. Each cyclic sequence continued for a constant time interval t2 = 480 s. The slopes of the wave on loading and load release were 1/150, 1/75, and 1/50 kN s−1 for the 0.4, 0.8, and 1.2 kN load levels, respectively. The 0.4, 0.8, and 1.2 kN cyclic load periods were followed by zero-load recovery periods of 1.25 t2 = 600 s, 1.25 t2 = 600 s, and 2.5 t2 = 1200 s, respectively.
At the completion of the cyclic-recovery loading sequences, the specimen was loaded to failure by a monotonic linear ramp. The ramp up to the peak load and the unloading were each applied over an interval of 0.25 t2 = 120 s.
Nonlinear time-dependent modeling of S2 columnar freshwater ice
The model applied in this section to characterize the nonlinear viscoelastic/viscoplastic response of S2 columnar freshwater ice was presented by Schapery; it was used to model the time-dependent mechanical response of polymers in the nonlinear range under uniaxial stress-strain histories (Schapery, 1969). Schapery's stress-strain constitutive equations are derived from nonlinear thermodynamic principles and are very similar to the Boltzmann superposition integral form of linear theory (Flügge, 1975). Schapery's model represents the material as a system of an arbitrarily large number of nonlinear springs and dashpots. The equations in this section are presented in terms of load and displacement instead of the original stress-strain relations. The notation of the original equations in Schapery (1969) is modified to bring out the similarity between all the equations in the paper.
When the applied loads are low enough, the material response is linear. For an arbitrary load input P = P(t) applied at t = 0, Boltzmann's law approximates the load by a sum of a series of constant load inputs and describes the linear viscoelastic displacement response of the material using the hereditary integral in a single-integral constitutive equation. The Boltzmann superposition principle states that the sum of the displacement outputs resulting from each load step is the same as the displacement output resulting from the whole load input. If the number of steps tends to infinity, the total displacement is given as

\delta(t) = C_0 P(t) + \int_0^t C(t - \tau) \frac{dP}{d\tau} d\tau, (1)

where C_0 is the initial, time-independent compliance component and C(t) is the transient, time-dependent component of compliance.
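As a concrete illustration of this hereditary integral, the short Python sketch below evaluates the linear viscoelastic displacement for an arbitrary load history by superposing the responses to discrete load increments. The power-law transient compliance and all numerical constants are placeholder assumptions for illustration only; they are not values fitted in this study.

```python
import numpy as np

def boltzmann_displacement(t, P, C0, C_transient):
    """Discretized hereditary integral (Eq. 1):
    delta(t_i) = C0 * P(t_i) + sum over load increments dP_j of C_transient(t_i - t_j) * dP_j."""
    dP = np.diff(P, prepend=0.0)               # incremental load steps, including the initial jump
    delta = np.zeros_like(P, dtype=float)
    for i, ti in enumerate(t):
        delta[i] = C0 * P[i] + np.sum(C_transient(ti - t[:i + 1]) * dP[:i + 1])
    return delta

# Placeholder material functions (assumed, for illustration only)
C0 = 2.0e-8                                              # instantaneous compliance, m/N
C_transient = lambda s: 1.0e-9 * np.maximum(s, 0.0)**0.3  # transient compliance C(t), m/N

t = np.arange(0.0, 600.0, 1.0)                # time axis, s
P = np.where(t < 300.0, 400.0, 0.0)           # 0.4 kN (400 N) step load released at t = 300 s
delta = boltzmann_displacement(t, P, C0, C_transient)
```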
Turning now to nonlinear viscoelastic response, Schapery developed a simple single-integral constitutive equation from nonlinear thermodynamic theory, with either stresses or strains entering as independent variables (Schapery, 1969). Using load as the independent variable, the displacement response under isothermal and uniaxial loading takes the following form:

\delta(t) = g_0 C_0 P(t) + g_1 \int_0^t C(\psi - \psi') \frac{d(g_2 P)}{d\tau} d\tau, (2)

where C_0 and C are the previously defined components of the Boltzmann principle, and \psi and \psi' are the so-called reduced times defined by

\psi = \int_0^t \frac{dt'}{a_P} and \psi' = \int_0^\tau \frac{dt'}{a_P}, (3)

and g_0, g_1, g_2, and a_P are nonlinear functions of the load. Each of these functions represents a different nonlinear influence on the compliance: g_0 models the elastic response, g_1 models the transient response, g_2 models the loading rate, and a_P is a timescale shift factor. These load-dependent properties have a thermodynamic origin. Changes in g_0, g_1, and g_2 reflect third- and higher-order stress dependence of the Gibbs free energy, and changes in a_P are due to similar dependence of both entropy production and the free energy. These functions can also be interpreted as modulus and viscosity factors in a mechanical model representation. In the linear viscoelastic case, g_0 = g_1 = g_2 = a_P = 1, and Schapery's constitutive Eq. (2) reduces to Boltzmann's Eq. (1). Equation (2) contains one time-dependent compliance property from linear viscoelasticity theory, C, and four nonlinear load-dependent functions, g_0, g_1, g_2, and a_P, which reflect the deviation from the linear viscoelastic response and need to be evaluated.

Schapery's model uses experimental data to evaluate the material property functions in Eq. (2). Lou and Schapery outlined a combined graphical and numerical procedure to evaluate these functions (Lou and Schapery, 1971). In their work, a data-reduction method was applied to evaluate the properties from the creep and recovery data. Papanicolaou et al. proposed a method capable of analytically evaluating the material functions using only limiting values of the creep-recovery test (Papanicolaou et al., 1999). Numerical methods are also employed and are the most commonly used techniques; they are based on fitting the experimental data to the constitutive equation (LeClair et al., 1999). In the current study, a numerical-experimental procedure is adopted. An optimization procedure is applied using the Nelder-Mead (N-M) method (Nelder and Mead, 1965) to back-calculate the values that achieve the best fit between the model and the experimental data. To avoid multiple fitting treatments of data and account for the mutual dependence of the functions, the properties were determined from the full data. This avoided errors that may result from separating the data into parts and estimating the functions independently from different parts.

Schapery later updated his formulation (Schapery, 1997). He added a viscoplastic term to account for the viscoplastic response of the material and stated that the total compliance can be represented as the summation of elastic, viscoelastic, and viscoplastic components. Adamson and Dempsey applied Schapery's updated constitutive equation to model the crack mouth opening displacement of saline ice in an experimental setup similar to the current study (Adamson and Dempsey, 1998). The theory represents the displacement at the crack mouth (\delta_CMOD) as the sum of elastic, viscoelastic, and viscoplastic components:

\delta_{CMOD} = \delta^{e}_{CMOD} + \delta^{ve}_{CMOD} + \delta^{vp}_{CMOD}. (4)

In the above equations, \psi and \psi' are defined in Eq. (3). g_0, g_1, g_2, g_3, and a_P are nonlinear load functions to be determined. The coefficients C_e, C_ve, and C_vp are the elastic, viscoelastic, and viscoplastic compliances, respectively. Schapery's equation was developed for uniaxial loading.
The response of the test specimen is dominated by the normal stresses in the direction normal to the x axis, ahead of the crack (Fig. 1). This stress state can be approximated as uniaxial in the same way as in beam bending; the stress is uniaxial tension at the crack tip and then changes linearly. Thus, Schapery's equations are used to analyze the experimental data. A few assumptions are applied at this point and are based on the choices made in Adamson and Dempsey (1998). For ice, the elastic displacement is linear with load; this immediately leads to g_0 = 1. Schapery stated that g_1 = a_P = 1 if the instantaneous jump and drop in the displacement are equal (Schapery, 1969). Examination of the current data shows that this condition is not valid, and the functions need to be evaluated. Accordingly, approximations for the remaining load functions are employed, together with Eq. (3). The viscoelastic compliance is assumed to follow a power law in time with a fractional exponent n. Incorporating each of these conditions, the total displacement is expressed as in Eq. (11), where \delta_CMOD, P, and t are in meters, newtons, and seconds, respectively. It follows from Eq. (11) that two unknown parameters (C_e and C_vp), one unknown constant (\kappa), and five unknown exponents (a, b, c, d, and n) need to be determined.

As previously mentioned, the problem is optimized with the N-M technique by minimizing the objective function F given by the difference between the model and the data:

F = \lVert M - D \rVert_2, (12)

where M_i and D_i refer to the CMOD values given by the model (Eq. 11) and the experimental data, respectively, and \lVert \cdot \rVert_2 is the Euclidean norm of a vector. N is the number of data points (≈ 2 × 10^6 points). The components of the total displacement were computed and optimized using MATLAB. A positive constraint was applied to the model variables. Initial guesses of the exponents on the load and time functions were assumed based on previous work on saline ice. The optimized values were then obtained by comparing the model response and the experimentally measured response over the full length of the test up to crack growth initiation. This problem is typically called a least-squares problem when using the Euclidean norm. It is a convex problem because F is a convex function and the feasible set is convex. Thus, the optimization algorithm will converge to the global optimal solution.

As mentioned earlier, Schapery's model originated from thermodynamic theory. The model is not physically based, and its parameters are not linked to the microstructural properties of the ice (dislocation density, grain size, etc.). In addition, the analysis does not account for the formation of a fracture process zone in the vicinity of the crack tip. Schapery's formulation models the experimental response until crack growth initiation and does not account for crack propagation.
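Although the original MATLAB routines are not reproduced here, the following minimal Python sketch illustrates the same kind of Nelder-Mead least-squares fit of a displacement model to measured CMOD data. The model argument is a placeholder, and enforcing positivity by searching in log-space is simply one convenient option, not necessarily the constraint handling used by the authors.

```python
import numpy as np
from scipy.optimize import minimize

def objective(params, t, load, data, model):
    """Eq. (12): Euclidean norm of the model-data CMOD difference, F = ||M - D||_2."""
    M = model(params, t, load)
    return np.linalg.norm(M - data)

def fit_nelder_mead(model, x0, t, load, data):
    """Back-calculate model parameters by minimizing F with the Nelder-Mead simplex method.
    Positivity is enforced here by optimizing in log-space (an assumed, convenient choice)."""
    f = lambda log_params: objective(np.exp(log_params), t, load, data, model)
    res = minimize(f, np.log(np.asarray(x0, dtype=float)), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-12, "fatol": 1e-12})
    return np.exp(res.x), res.fun
```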
Experimental and modeling results
This section presents the results measured and computed for the LC tests. The current results are compared with the fracture results of monotonically loaded DC tests of the same ice and same specimen size (3 m × 6 m) (Gharamti et al., 2021). The main aim is to elucidate the effect of the creep and cyclic sequences on the fracture properties.
Effect of the creep and cyclic sequences on the fracture properties
Table 1 shows the measured and computed parameters for the LC experiments. Pmax is the measured peak load, which is also the failure load. tf represents the time to failure, computed from the fracture ramp. CMOD is measured at crack growth initiation. ĊMOD indicates the displacement rate at the crack mouth and is obtained by dividing CMOD by the failure time. Similarly, NCOD1 (see Fig. 1) represents the displacement at crack growth initiation near the initially sharpened crack tip.
ṄCOD1 indicates the displacement rate in the vicinity of the tip and is obtained by dividing NCOD1 by the failure time.
Figure 4 gives the results of the peak load Pmax, crack mouth opening displacement CMOD, and near-crack-tip opening displacement NCOD1 as a function of the loading time for the DC tests (Gharamti et al., 2021) and the current LC tests. In these subplots, first-order power-law fits were applied to the data of the DC tests. The LC values lie above, below, and along the DC fit. No clear effect of creep and cyclic loading on the fracture properties was detected.
Figure 5a and b show the experimental load versus the crack opening displacement at the crack mouth for the DC and the LC tests, respectively. Figure 5c displays a magnified view of the fracture ramp of the LC tests. Comparing the DC and LC tests indicates that, for tests with comparable loading rates, the failure loads were similar. Therefore, in these experiments, the creep and cyclic sequences had no influence on the failure load.
Table 1 presents several elastic moduli for each test. The elastic moduli were calculated from the load-CMOD record following Sect. 4 of Gharamti et al. (2021). For the creep tests (RP15 and RP16), this procedure was repeated for the four creep cycles, resulting in E1, E2, E3, and E4, and for the fracture ramp, resulting in Ef. Similarly, for the cyclic tests (RP17, RP18, and RP19), the moduli calculation was done for the last cycle of each cyclic sequence, giving steady-state moduli E1, E2, and E3, and for the fracture ramp, resulting in Ef. Some of the values are missing because the initial portion of the associated load-CMOD curve was very noisy. The load-CMOD records for the creep/cyclic sequences and the fracture ramps were similarly linear upon load application, as shown by the loading slopes in Fig. 5c and Fig. 6a and b. This linearity justifies the choice of g_0 = 1 in the elastic CMOD component in Eq. (5).
Table 1 in Gharamti et al. (2021) presents the elastic modulus (ECMOD) calculated at the crack mouth for the DC tests; ECMOD is similar to Ef in Table 1 here, and both values lie within the same range. Therefore, the creep and cyclic sequences preceding the fracture ramp did not affect the pre-peak load-CMOD behavior. However, the sequences affected the post-peak response, as can be seen in Fig. 5b, which displays a longer decay behavior than Fig. 5a. The gradual decay of the load reflects the time dependency in the behavior of freshwater ice.
Ice response under the testing conditions
Figure 7 shows the experimental results for RP16: the applied load and the crack opening displacements at the crack mouth (CMOD), halfway along the crack (COD), and 10 cm behind the tip (NCOD1) (see Fig. 1). Similarly, Fig. 8 shows the experimental response for RP17. The time-dependent nature of the ice response is evident. A complete load-CMOD curve was obtained during loading and unloading for each test of Table 1, indicating stable crack growth.
It is clear from Figs. 7b and 8b that the CMOD, COD, and NCOD1 displacements were composed mainly of elastic and viscoplastic components. No significant viscoelasticity was detected in the displacement-time records for any of the tests. The primary (transient) creep stage was almost absent or instantaneous. The load sequences were characterized by a non-decreasing displacement rate at all levels. The displacement-time slope was linear and constant, indicating that the secondary/steady-state creep regime dominated during each load application. Although the recovery time was longer than the loading time, ≥ 1.25 t1 (creep test, Fig. 3a and Sect. 2.2) and ≥ 1.25 t2 (cyclic test, Fig. 3b and Sect. 2.3), the recovery (unload) phases consisted mainly of an elastic recovery (instantaneous drop) and unrecovered viscoplastic displacement. The behavior as observed resembles the response of a Maxwell model composed of a series combination of a nonlinear spring and a nonlinear dashpot (Fig. 7c). There is no delayed elastic recovery, but there is the elastic response and a permanent deformation. Figure 6a and b support the same analysis. Unlike the viscoelastic response (Fig. 6c), which displays no residual displacement in the loading and unloading hysteresis diagram, the current load-CMOD plots showed large permanent displacement after each loading cycle. This indicates that the response of columnar freshwater S2 ice in these tests was overall elastic-viscoplastic.
Nonlinear modeling analysis
The nonlinear theory, outlined in Sect. 3, was used to analyze the experiments. Modeling the viscoelastic term (the second term of Eq. 11) proved to be very challenging. Instead, a simplified version was modeled by setting a_P = g_2 = 1. The results of the initial optimization trials confirmed the previous analysis; the viscoelastic component \delta^{ve}_{CMOD} had no effect on the final fit between the data and the model. The optimization algorithm fine-tuned \kappa (Eq. 11) to a very small number (10^-18), indicating that the best model-data fit is attained when the viscoelastic term goes to zero.
The final optimization runs were carried out by considering the elastic and viscoplastic components (the first and last terms of Eq. 11) only. This resulted in two parameters, C_e and C_vp, and one exponent, c, that need to be optimized. The converged optimization results are given in Table 2: C_e, C_vp, and c. For all the tests, the percent reduction of the objective function exceeded 95 %, and about 110 iterations were needed to reach convergence. A value of c = 1 for the viscoplastic load function provided the best fit between the model and the experiment at all load levels over the total experimental time up to the peak load. The final compliance values of the elastic and viscoplastic components were in the ranges 1.8-3.8 × 10^-8 m N^-1 and 0.2-1 × 10^-10 m N^-1 s^-1, respectively.

Figures 9 and 10 give the model results, obtained using Eqs. (4)-(10), and the experimental results for experiments RP16 and RP17, respectively. Figure 9a and b show the measured load and the load applied to the model, and the measured CMOD-time record compared to the response of the model, respectively, for RP16. Figure 10 shows similar plots for experiment RP17. Test RP17 showed an excellent model-experiment fit for the three cyclic-recovery sequences over the load and unload periods. The model succeeded in following the increasing and decreasing load levels and the corresponding recovery phases. The experimental response for the creep-recovery test RP16 appeared to generally conform to the model results, but the model overestimated the recovery displacement in the first two cycles. It is unclear to the authors why the model did a better job in fitting the cyclic-recovery than the creep-recovery sequences. This probably hints at some mechanisms that took place in the creep-recovery tests and are not accounted for by Schapery's model. Schapery's model has been tested for creep-recovery sequences of saline ice with an increasing load profile (Schapery, 1997; Adamson and Dempsey, 1998; LeClair et al., 1996, 1999). This is the first application of the model with a load profile of increasing and decreasing load levels (Fig. 3).
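For illustration, the retained two-parameter model can be sketched in code as below. The viscoplastic term is written here as a hereditary integral of the load raised to the exponent c, an assumed form chosen to be dimensionally consistent with the reported units of C_vp (m N^-1 s^-1 with c = 1); the exact expression used by the authors is not reproduced in this excerpt, and the parameter values are only order-of-magnitude placeholders from Table 2. Such a function could be passed to the fitting sketch shown earlier.

```python
import numpy as np

def cmod_elastic_viscoplastic(params, t, P, c=1.0):
    """Simplified elastic-viscoplastic CMOD model (assumed form):
    delta(t) = Ce * P(t) + Cvp * integral_0^t P(tau)**c dtau."""
    Ce, Cvp = params
    dt = np.gradient(t)                          # local time steps, s
    viscoplastic = np.cumsum((P ** c) * dt)      # hereditary (non-recoverable) term
    return Ce * P + Cvp * viscoplastic

# Order-of-magnitude placeholders taken from the converged ranges reported in Table 2
params = (2.5e-8, 5.0e-11)                       # Ce in m/N, Cvp in m N^-1 s^-1
```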
Considering the fracture ramp, Schapery's nonlinear equation succeeded in modeling the monotonic displacement response up to crack growth initiation perfectly well for all the tests. As previously mentioned, the model does not account for crack propagation, so modeling was applied until the peak load. The model was also successful in predicting the critical crack opening displacement values at the failure load. In this study, Schapery's constitutive model is tested for the first time for freshwater ice. The match between the model and the measured data, especially for the cyclic-recovery tests, provides firm support for the ability of Schapery's constitutive model to describe the time-dependent response of columnar freshwater S2 ice up to crack growth initiation.

Figure 11a and b show the contribution of each individual model component, elastic and viscoplastic, to the total CMOD displacement, for RP16 and RP17, respectively. As mentioned earlier, the elastic and viscoplastic components account for the total deformation. For RP16, the viscoplastic component dominated over the elastic component. For RP17, the elastic and viscoplastic components contributed equally to the total displacement.
The applicability of the proposed model and the fitted parameters is limited to the studied ice type, geometry, specimen size, ice temperature, and the current testing conditions.
Variation in the operating conditions will change the dominant deformation mechanisms and the ice behavior; accordingly, new model parameters are needed to adapt to the new response.
Discussion
Interestingly, the ice behavior in the current study differs from previous experimental creep and cyclic work on freshwater ice. A large delayed elastic or recoverable component has been previously observed. Several researchers performed creep experiments on granular freshwater ice at lower temperatures (Mellor and Cole, 1981, 1982, 1983; Cole, 1990; Duval et al., 1991) and reported considerable recovery. Duval conducted torsion creep tests on glacier ice at a similar testing temperature of −1.5 °C (Duval, 1978). When unloaded, the ice exhibited creep recovery. According to his analysis, during loading the internal stresses opposing the dislocation motion increase; upon unloading, the movement of dislocations produces the reversible deformation and is caused by the relaxation of internal stresses. Sinha (1978, 1979) studied columnar-grained freshwater ice and concluded that the high-temperature creep is associated with grain boundary sliding. Cole developed a physically based constitutive model in terms of dislocation mechanics (Cole, 1995) and quantified two mechanisms of anelasticity: dislocation and grain boundary relaxations. He demonstrated that the increased temperature sensitivity of the creep properties of ice within a few degrees of the melting point is due to a thermally induced increase in the dislocation density (Cole, 2020). The question then arises as to why the warm columnar freshwater ice tested here showed no significant delayed elastic effect and why the microstructural changes were mainly irreversible upon unloading.
The measured ice response is a novel result for any type of ice. It is important to emphasize that, in comparison with earlier freshwater ice studies, the tested samples are very warm and large. Viscoelasticity normally arises from elastically accommodated grain boundary sliding. Upon loading, internal stresses build up at local stress concentrations in the grain boundary geometry (triple points and grain boundary ledges). Assuming there is no microcracking, the growing stress impedes further grain boundary sliding and causes sliding in the reverse direction, giving rise to the recoverable component after unloading. However, in the present case, the measurements showed that the grain boundary sliding produced permanent deformation. Several possible reasons can be discussed, related to the ice temperature, the microstructure, and nonlinear mechanisms in the process zone.
Concerning the effect of temperature: the warmer the temperature, the more liquid on the grain boundaries. The high homologous test temperature (top ice surface ≈ −0.3 °C) causes liquidity on the grain boundaries (Dash et al., 2006). The intergranular melt phase on the grain boundary renders the ice a two-phase polycrystal and significantly influences the creep and recovery response. In fact, grain boundary sliding then consists theoretically of two processes: (1) the sliding of grains over one another and (2) the squeezing in/out of the liquid between adjacent grains (Muto and Sakai, 1998). The shear behavior of the liquid film is a function of its properties (thickness and amount). The presence of this liquid at the triple points and the boundary acted as a resisting obstacle for the grains to shear and deform back to their original form, causing the viscoplastic deformation.
The microstructure (grain size, crystalline texture) could be another contributing factor. Sinha (1979) developed a nonlinear viscoelastic model, incorporating the grain size effect, to describe the high-temperature creep of polycrystalline materials. He concluded that the delayed elastic strain exhibits an inverse proportionality with grain size. This suggests that the grain size (3-10 mm, Fig. 2b) of the ice samples is coarse enough not to produce any measurable viscoelastic deformation under the testing conditions. It is also probable that, for this grain size, there were not enough local concentration points to arrest the grain boundary sliding and drive the recoverable and reverse sliding. In addition, Gasdaska (1994) discussed that regularly ordered and packed microstructures limit the amount of sliding and rearrangement and lead to less anelastic strain. The ice growth in the Aalto Ice Tank was very controlled and resulted in a homogeneous ice sheet.
Knauss presented a thorough review of the time-dependent fracture models available to date (Knauss, 2015). The essence of these models is modeling the behavior in a finite cohesive/process zone attached to the traction-free crack tip. The one-parameter fracture mechanics encompassed by the apparent fracture toughness is not applicable (Dempsey et al., 2018). It is believed that the mechanisms taking place in the process zone play an influencing role in the current tests. The nonlinearity in the fracture zone relieved the internal stresses that would ordinarily accommodate the grain boundary sliding and generate some viscoelastic deformation upon unloading. Thus, any microstructural damage that occurred during loading manifested as permanent deformation at the end of the test.
It is noteworthy that the earlier studies used test sizes smaller than the plate size used here. It was shown in the DC fracture tests (Gharamti et al., 2021) that scale had an effect at the tested loading rates. It is probable that the specimen size influenced the time-dependent deformation of freshwater ice. The current tests suggest that, for the large sample size and the kind of ice studied (very warm freshwater ice) under the loading applied, viscoelasticity is not an important deformation component. The experimental results support this prediction, but more tests are needed to draw more general conclusions. All the abovementioned factors might have contributed to the measured elastic-viscoplastic response. However, the question of which factor influenced the behavior most is an important research question that requires more testing programs. Testing the effect of each factor separately requires a set of experiments that considers this factor while keeping all the other conditions fixed.
Conclusions
In the present work, five 3 m × 6 m warm freshwater S2 ice specimens were tested under creep/cyclic-recovery sequences followed by a monotonic ramp. The temperature at the top surface was about −0.3 °C. The tests were load controlled and led to complete fracture of the specimens. The purpose of this study was to examine the time-dependent behavior of freshwater ice using a joint experimental-modeling approach.
In the experimental part, the tests aimed to (1) measure and examine the time-dependent response of columnar freshwater S2 ice through the applied creep/cyclic-recovery sequences and (2) investigate the effect of creep and cyclic sequences on the fracture parameters/behavior through the monotonic fracture ramp. The current tests were compared with other monotonically loaded tests of the same ice. The results showed that the creep and cyclic sequences had no clear effect on the failure load and the crack opening displacements at crack growth initiation. The ice response under the testing conditions was overall elastic-viscoplastic. The loading phases displayed an instantaneous transformation from the primary (transient) stage to the steady-state regime, which resulted in permanent (unrecoverable) displacement. The conducted experiments provided a novel observation of the time-dependent behavior of freshwater ice. Although the delayed elastic component has been reported as a major creep component in freshwater ice, no significant viscoelasticity was detected in this study. Several factors were discussed as possibly contributing to the observed behavior: the very warm columnar freshwater ice, liquidity on the grain boundaries, the large sample size, the coarse grain size, and nonlinear mechanisms in the fracture zone. Testing the effect of each factor on the ice response requires a different set of experiments that varies this factor only while keeping the other conditions fixed.
In the modeling part, Schapery's nonlinear constitutive model was applied to the displacement response at the crack mouth. The elastic-viscoplastic formulation succeeded in predicting the experimental response of columnar freshwater S2 ice over the applied loading profile up to crack growth initiation. The model parameters were obtained via an optimization procedure using the N-M method by comparing the model and experimental CMOD values.
The proposed model parameters are valid only for the studied ice type, geometry, specimen size, ice temperature, and the range of applied load experienced in the experiments. Schapery's model was selected in this study because it is able to capture the sort of time-dependent behavior known to occur in ice and provides a simple and expedient way to help understand the observed behavior. A more thorough analysis with a physically based approach is left for the future.
Figure 1. Specimen geometry. Edge-cracked rectangular plate of length L, width H, and crack length A0.
Figure 2. (a) Temperature profile. Each data point represents the average of measurements taken at the same depth of different ice cores throughout the 1-month duration of the test program. (b) Grain size distribution. Each data point is measured from one thin section.
Figure 3. Loading consisting of (a) creep-recovery and (b) cyclic sequences followed by a monotonic fracture ramp. The number above each segment indicates the duration in seconds.
Figure 4. Experimental results for the (a) peak load Pmax, (b) crack mouth opening displacement CMOD, and (c) near-crack-tip opening displacement NCOD1 at crack growth initiation, as a function of time to failure tf for the monotonically loaded DC tests (Gharamti et al., 2021) and the creep/cyclic and monotonically loaded LC tests.
Figure 5. Measured load versus CMOD for the (a) DC tests (Gharamti et al., 2021), (b) LC tests, and (c) LC tests up to the peak load.
Figure 6. Load versus CMOD over (a) the creep-recovery cycles for RP15 and (b) the cyclic-recovery sequences for RP17. (c) Schematic illustration of the hysteresis load-displacement diagram. The whole hysteresis loop area is the energy loss per cycle. The dashed area is the part of that total that is due to the viscoelastic mechanism; the rest is due to viscous processes.
Figure 7. Experimental results for RP16. (a) Load at the crack mouth; see Fig. 1. (b) Displacement-time records. (c) Load-displacement record. (d) Typical response of a Maxwell model, consisting of a nonlinear spring and nonlinear dashpot, to a constant load step.
Figure 9. Experimental and model results for RP16. (a) Load at the crack mouth; see Fig. 1. (b) CMOD-time records.
Figure 10. Experimental and model results for RP17. (a) Load at the crack mouth; see Fig. 1. (b) CMOD-time records.
Figure 11. Contribution of each individual model component to the total CMOD displacement for (a) RP16 and (b) RP17.
Table 1. Measured experimental data and computed results for the LC tests.
Where Beta is going – case of Viet Nam hotel, airlines and tourism company groups after the low inflation period
Tourism, airline, and hotel industries can be strongly affected by environmental and social risks. The Vietnam hotel, entertainment, airline & tourism industries are growing fast, contributing much to economic development, and have been affected by inflation. This paper measures the volatility of market risk in the Viet Nam hotel, entertainment, airline & tourism industries after the low inflation environment (2015-2017). The main motivation is the important role of these companies and their system in Vietnam's economic development and growth in recent years, which always goes together with risk potential and risk control policies. This research paper aims to determine how much the market risk of Vietnamese hotel, entertainment, airline & tourism firms increased or decreased during the post-low-inflation environment 2015-2017. First, by using a quantitative method combined with comparative data analysis, we find that the risk level measured by equity beta mean values in the hotel, entertainment, airline & tourism industries is acceptable, as these values are lower than (<) 1. Then, one of the major findings is the comparison between the risk level of the hotel industry during the post-low-inflation period 2015-2017 and those of the airline & tourism industries. In fact, the research findings show that the market risk level of the entertainment industry, one kind of financial risk, is the highest among the three groups, whereas the risk fluctuation in the airline & tourism industry is the highest. Finally, this paper provides some ideas that could give companies and the government more evidence for establishing their governance policies. This is a complex task, but the research results warn that market risk needs to be controlled better during the post-low-inflation period 2015-2017. Our conclusion recommends some policies and plans to deal with it.
Introduction
Over many recent years (from 2006 until now), the Viet Nam hotel, entertainment, airline & tourism market has been evaluated as one of the active markets, with a certain positive effect on the economy, and has become one of the vital players in the financial system of the nation.
The Vietnam economy experienced acceptable inflation (Exhibit 1) over a long period (2009-2014), and inflation reached a low rate of 0.6% in the year 2015. High inflation may harm the whole economy in general and the hotel, entertainment, airline & tourism sectors in particular, whereas low inflation may stimulate the local economy by reducing borrowing interest rates. This is why we would like to see what the real market risk level of this sector in Vietnam is during the post-low-inflation environment, i.e., the years 2015-2017.
This study calculates and determines whether the market risk level during the post-low-inflation time (from 2015) has increased or decreased, compared across the three industries.
The paper is organized as follows: after the introduction come the research issues, literature review, conceptual theories, and methodology. Next, Section 3 covers the main research findings/results. Section 4 gives some risk analysis, Section 5 presents the discussion and conclusion, and the policy suggestions are in Section 6.
Research issues
The scope of this study embraces the following issues. Issue 1: Does the risk level of hotel, entertainment, and tourism firms during the post-low-inflation period 2015-2017 increase or decrease considerably, especially under the debt leverage impact, as shown by the asset beta measure? Issue 2: Because Viet Nam is an emerging and immature financial market and the stock market is still in its starting stage, does the dispersed distribution of beta values become large in the three industries, especially under the debt leverage impact, as shown by the asset beta measure?
Hypothesis for testing:
Because the stock market and financial market in Vietnam are still young, the market risk level of hotel, entertainment, and tourism companies can be high.
Literature review
The leverage and risk level of firms show a certain correlation. First of all, Martin and Sweder (2012) pointed out that incentives embedded in the capital structure of banks contribute to systemic fragility, and so support the Basel III proposals towards less leverage and higher loss-absorption capacity of capital. Najeb (2013) suggested a positive relationship between efficient stock markets and economic growth, both in the short run and the long run, and found evidence of an indirect transmission mechanism through the effect of stock market development on investment. Yener et al. (2014) found evidence that unusually low interest rates over an extended period of time contributed to an increase in banks' risk. Mohamad et al. (2014) showed that by applying both ROA and ROE in the performance equation, financial risk is significant. Furthermore, by considering financial performance in the risk equation as endogenous, both ROA and ROE are significant. The implication of this result is that the inverse relation of financial risk and performance cannot be avoided; hence, the commercial banks together with the bank supervisors should make a trade-off between risk and performance.
Next, Emilios (2015) mentioned that bank leverage ratios are primarily seen as a microprudential measure intended to increase bank resilience. Yet in today's environment of excessive liquidity due to very low interest rates and quantitative easing, bank leverage ratios should also be viewed as a key part of the macroprudential framework. As such, it explains the role of the leverage cycle in causing financial instability and sheds light on the impact of leverage restraints on good bank governance and allocative efficiency. Atousa and Shima (2015) found that the econometric results indicate that life insurance sector growth contributes positively to economic growth. Shevyakova et al. (2019) stressed the impact of the tourism industry on the economic development of a country. Then, Gunarathna (2016) revealed that financial leverage correlates positively with financial risk; however, firm size negatively affects financial risk. Aykut (2016) suggested two main findings: (i) credit risk and the foreign exchange rate have a positive and significant effect, but the interest rate has an insignificant effect on banking sector profitability; (ii) credit and market risk have a positive and significant effect on conditional bank stock return volatility. Then, the results of Mojtaba and Davoud (2016) show that public banks are more successful in using risk management tools compared with private banks; a more meaningful relationship has been found between financial risk management tools and shareholder wealth in public banks.
Riet (2017) mentioned that after the euro area crisis had subsided, the Governing Council of the ECB still faced a series of complex and evolving monetary policy challenges. As market volatility abated but deflationary pressures emerged, the main task from June 2014 became to design a sufficiently strong monetary stimulus that could reach market segments that were deprived of credit at reasonable costs and to counter the risk of a too prolonged period of low inflation. Hami (2017) showed that inflation has a negatively significant effect on financial depth and also a positively significant effect on the ratio of total deposits in the banking system to nominal GDP in Iran during the observation period. Last but not least, Lubos et al. (2018) confirmed that entrepreneurs who started their business because of money perceived the effects of crisis on their company's financial risk more intensely.
Finally, Chizoba et al. (2018) revealed that the inflation rate had a positive but insignificant effect on the insurance penetration of the Nigerian insurance industry. The implication is that this macroeconomic variable (inflation) increases the level of insurance penetration in the Nigerian insurance industry, but the increase is not significant. Miguel et al. (2018) found a consistently negative and nonlinear effect of price increases on financial variables; in particular, it is statistically significant in the full sample of countries, significant in developing countries, and insignificant in developed countries. Marcelo (2018) observed that the use of unrealistic assumptions (the Modernist perspective) in risk management increases model risk and is thus not suitable for risk model estimation. However, the absolute lack of measurement of the Postmodernist paradigm can be too radical in the sense that, in the practical field, there is a crucial need for quantitative information to enable financial institutions and investors to protect their investments.
Conceptual theories
Positive sides of low inflation: low (not negative) inflation reduces the potential for economic recession by enabling the labor market to adjust more quickly in a downturn, and it reduces the risk that a liquidity trap prevents monetary policy from stabilizing the economy. This explains why many economists nowadays prefer a low and stable rate of inflation. It helps investment, encourages exports, and prevents an overheated (boom) economy. The central bank can use monetary policies, for instance increasing interest rates to reduce lending and control the money supply, or the Ministry of Finance and the government can use tight fiscal policy (high taxes) to achieve low inflation.
Negative sides of low inflation: it leads to low aggregate demand and economic growth, recession potential, and high unemployment. Production becomes less vibrant. Low inflation makes real wages higher; workers can thus reduce the supply of labor and increase rest time. On the other hand, low product prices reduce the motivation to produce. The central bank might consider using monetary policy to stimulate economic growth during a low-inflation environment. This means that an expansionary monetary policy can be used to increase the volume of bank loans to stimulate the economy.

There are various ways to classify risks. For instance, business risk can be categorized into market risk, credit risk, and operational risk. In banking operations, market risk includes interest rate risk, liquidity risk, and foreign exchange rate risk.
On the other hand, risks can be classified into two types: systematic risk (such as market risk) and unsystematic risk. Systematic risk, known as market risk or volatility or undiversifiable risk, affects the overall market. It cannot be avoided totally by diversification, but only managed by using an asset allocation strategy. If you want to know the market risk, you can estimate beta (this study uses two beta calculations: equity beta and asset beta, under the debt leverage impact). Another example of systematic risk is interest rate risk, which affects the whole market and all stocks.
A beta equal to 0 means the stock price is uncorrelated with the market. A negative beta (less than 0) means the stock price moves opposite to the market index. A beta equal to 1 means the portfolio moves in the same direction as the market and is sensitive to market risk. A beta higher than (>) 1 means there is more volatility; the portfolio moves in the same direction as the market and is very sensitive to market risk. A beta between 0 and 1 means less volatility, and the stock price moves in the same direction as the market index. Beta is a popular measure of market risk, which cannot be eliminated by diversification due to its nature, but it can be insurable. Investors can only reduce a portfolio's exposure to systematic risk by sacrificing expected returns.
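To make the beta measure concrete, a minimal Python sketch of the traditional covariance estimate of equity beta from periodic returns is given below; the return series are placeholders for illustration only and are not data from the studied firms.

```python
import numpy as np

def equity_beta(stock_returns, market_returns):
    """Traditional covariance estimate: beta = Cov(r_stock, r_market) / Var(r_market)."""
    r_s = np.asarray(stock_returns, dtype=float)
    r_m = np.asarray(market_returns, dtype=float)
    return np.cov(r_s, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)

# Placeholder weekly returns (illustrative only)
r_stock  = [0.010, -0.004, 0.020, 0.003, -0.012]
r_market = [0.008, -0.002, 0.015, 0.004, -0.010]
beta = equity_beta(r_stock, r_market)   # < 1 here: less volatile than the market index
```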
On the contrary, unsystematic risk, known as diversifiable risk, nonsystematic risk, or residual risk, is the specific risk of each industry, firm, or security. For instance, risk coming from competitors in the market and from market share will affect a business and its profit. This kind of risk might be reduced via a diversification strategy, so it is also called controllable risk. Unsystematic risk normally happens due to internal factors (e.g., employees, industry regulation changes, manipulation of financial statements) which are associated with that business only and affect a single stock or segment.
Risk can also be divided into various groups: market risk (due to risk factors such as interest rates, foreign exchange, or stock prices), market liquidity risk (a real example is the real estate market in Vietnam during the financial crisis of 2007-2009), funding liquidity risk (unexpected outflow of funds), credit risk, and operational risk (such as processing risk, IT system risk, legal risk, human resource risk, reputational risk, information risk, and tangible asset risk).
Financial and credit risk in the banking system can increase when the financial market becomes more active and bigger, especially with more influence from international linkages. Hence, central banks, commercial banks, hotel, entertainment, airline & tourism firms, and the government need to organize data to analyze and control these risks, including market risk.
For the hotel, entertainment, and tourism industry, high inflation may harm these companies, cause higher losses, and increase operational costs. In the case of low inflation, interest rates may fall, which is not a benefit for the investment portfolio. Hence, risk assessment and control mechanisms are necessary for these firms to reduce such losses.
Methodology and data
We use data from the stock exchange market in Viet Nam (HOSE and HNX) during the post-low-inflation time 2015-2017 to estimate systemic risk results. We perform both fundamental data analysis and financial techniques to calculate equity and asset beta values.
We use a quantitative research method to collect and gather quantifiable data from the stock market and analyze the data with the mathematical techniques of calculating equity beta variance and asset beta variance during the period 2015-2017. This sampling method helped us a lot given the availability of data from the live stock market in the public domain. We choose the quantitative method because it is objective and investigational in nature.
We select a sample of 26 listed firms in three (3) industries or groups of companies: the hotel, entertainment, and tourism sectors. Then, equity beta is estimated by using the traditional covariance formula, and we estimate asset beta under the impact of leverage. We also make a comparison of equity and asset beta values in these three (3) industries and calculate and analyze the gap between groups. We choose a cross-industrial survey and sampling on the grounds that these three industries are linked together in a whole financial system. This is, in fact, simple random sampling, but we also pay attention to selecting the key players in each of the three industries. The sample size will reflect and represent the target market.
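The beta calculations can be sketched as follows in Python. The unlevering step uses the Hamada relation, a common way to obtain asset beta under debt leverage; it is shown here as an assumption, since the exact unlevering formula is not reproduced in this paper, and the input betas and leverage figures are placeholders.

```python
import numpy as np

def asset_beta(equity_beta, debt, equity, tax_rate=0.0):
    """Unlever equity beta under debt leverage.
    The Hamada relation below is a common choice and is an assumption here,
    since the paper does not reproduce its exact unlevering formula."""
    return equity_beta / (1.0 + (1.0 - tax_rate) * debt / equity)

def group_stats(betas):
    """Max, min, mean, and variance of beta within one industry group."""
    b = np.asarray(betas, dtype=float)
    return {"max": b.max(), "min": b.min(), "mean": b.mean(), "var": b.var(ddof=1)}

# Placeholder equity betas and leverage for one group (illustrative only)
hotel_equity_betas = [0.02, 0.15, -0.05, 0.40]
hotel_asset_betas  = [asset_beta(b, debt=50.0, equity=100.0) for b in hotel_equity_betas]
print(group_stats(hotel_equity_betas), group_stats(hotel_asset_betas))
```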
From our beta calculation and comparison, we can draw a picture of the whole market risk of the Vietnam hotel, entertainment, airline & tourism industries. Hence, we can answer the research questions or issues on how much the market risk in each company group increases or decreases, and later we can determine whether the above hypothesis is true or false. Then, the research results can be generalized for the whole market.
Last but not least, government macroeconomic data are also collected and presented in four exhibits. This helps us to see the macro picture of the Vietnam economy during the post-low-inflation environment and over a long time (10-year periods). Our quantitative data are shown in tables, charts, and graphs to make them easy to understand. In summary, the quantitative method is mainly used because it helps to collect data quickly and concisely with reliable and accurate results. When we conduct this research, the numbers present an honest and accurate picture of the research and are less time-consuming to process; this helps eliminate bias so that the results of this study are fair. In the data analysis section, we also combine interpretation of the data results with a descriptive analytical method.
Finally, we use the results to suggest policies for these enterprises, relevant organizations, and the government.
Main results

General data analysis
We get some analytical results from the research sample of 12 listed firms in the airline & tourism market, 8 hotel firms, and 8 entertainment companies, with live data from the stock exchange.
Empirical Research Findings and Discussion
In the section below, the data used are from a total of 28 listed hotel, entertainment, and tourism companies on the VN stock exchange (HOSE and HNX mainly). Different groups are created, and a comparison of the calculated risk data among the 3 groups has been made. Market risk (beta) under the impact of debt includes: 1) equity beta; and 2) asset beta. We model our data analysis as in Figure 1 below. The table above shows that no firm in the airline & tourism group has a beta higher than 1. The gap between the max and min values is 1.358, which is higher than that of the hotel industry.
Both equity beta max value and equity beta mean value are lower than 1, which is acceptable in this industry.
Asset beta max, asset beta var, and asset beta mean values have been decreasing, as shown in the table above; this reflects the impact of debt leverage in reducing the risk level. Table 1 above shows that no firm has beta values > 1, and Table 2 shows that the beta mean values are acceptable (< 1).
We summarize the data in Chart 1 above as follows: for the airline and tourism industry, unlike the hotel industry, the market risk volatility has been only slightly decreasing during the post-low-inflation environment (2015-17), as shown by the equity beta var in the chart, whereas the risk level (equity and asset beta mean) decreased considerably. We also see that there is a big gap between the equity beta max values in the crisis (1.207 and 1.084) and those in the post-low-inflation time (0.654 and 0.488).
B. Hotel industry during the post-low-inflation environment: Table 3 above shows that no company has a beta higher than 1, and the asset beta values have decreased. The gap between the max and min values is 0.399, which is lower than that of the tourism group. Both the equity beta max and equity beta mean values are lower than 1, which is acceptable in this industry.
Also, the asset beta var has decreased considerably, as shown in the table above; this reflects the impact of debt leverage in reducing the risk level. Table 3 above shows that no firm has beta values > 1, and Table 4 shows that the beta mean values are small and negative.
We summarize the data in the chart above as follows: for the hotel industry, the market risk level has been reduced during the post-low-inflation environment (2015-17), as shown in the chart, while the risk fluctuation has decreased considerably (equity and asset beta var). We also see that there is a big gap between the equity beta max values in the crisis (0.978 and 0.415) and those in the post-low-inflation time (0.015 and 0.015).
Entertainment industry during the post-low-inflation environment:
The gap between the max and min values is 0.589, which is lower than that of the tourism industry. We also see that the equity beta max value and equity beta mean value are < 1, which is acceptable in this industry. The asset beta max, asset beta var, and asset beta mean values have decreased considerably, as shown in Table 5 above; this reflects the impact of debt leverage in reducing the risk level. Table 5 also shows that no firm has beta values > 1, and Table 6 shows that the beta mean values are acceptable (< 1). We summarize the data in Chart 3 above as follows: for the entertainment industry, unlike the hotel industry, the market risk level has been decreasing during the post-low-inflation environment (2015-17), as shown by the equity beta mean in the chart, while the risk fluctuation has been reduced (equity beta var). We also see that there is a big gap between the equity beta max values in the crisis and those in the post-low-inflation time.

Based on the calculation results above, we analyze the data as follows: first, the chart shows that the equity beta mean values in the airline & tourism and hotel industries are lower than that of the entertainment industry, which means a lower risk level. Then, as shown by the equity beta var, the risk volatility in the hotel industry is the lowest, while that in the airline & tourism industry is the highest.
Risk analysis
Inflation can negatively affect market capitalization, but low inflation could be beneficial to economic recovery and might have benefits for the financial system, as investors can perform more transactions. However, the Vietnam inflation rate in 2015 was at a low but still acceptable level, in a context where many developed economies also reached low rates.
Furthermore, as the Vietnam financial system has been becoming more active and bigger in size, there will be potential risk, especially in a context where the global impact from international financial markets has become bigger. There are several factors affecting the market risk level and its fluctuation, including, but not limited to: instability of the entire financial market due to a global financial or economic crisis or catastrophic events, and fluctuations and volatility in interest rates, foreign exchange rates, or stock prices.
Discussion for further research
We can continue to analyze the risk factors behind the observed risk results (as analyzed above) in order to recommend suitable policies and plans to control market risk better. Also, the role of risk management and of risk managers needs to be developed further.
Specifically, the Vietnam stock market has been established and developed from 2005-2006 until now; it has gained a lot of operational experience with many newly established companies, and some bankruptcies as well. Our analysis stated that the risk level of the hotel, entertainment, airline & tourism group has been decreasing, but risk management tools always need to be enhanced to prevent losses such as those that happened in the financial crisis of 2007-2009. Vietnam hotel, entertainment, airline & tourism companies can reduce risk by using reinsurance contracts and improving risk management practices, performing good contract appraisal, or improving customer service to receive and evaluate customer awareness and client feedback so that they have proper plans to reduce customer complaints.
For all three (3) industries, hotel, entertainment, airline & tourism companies all need to enhance their corporate governance structures, mechanisms, and standards in order to reduce risk. Vietnam hotel, entertainment, airline & tourism firms, as in other developing and developed countries, need to adapt to international corporate governance standards which are standardized and recommended by many international organizations such as the ADB, OECD, IFC, WB, ECODA, CFA, etc. In addition, these service firms should also pay attention to technology, processes and, especially, to people or human resources, in order to train them in risk management tools and practices to reduce business risk. Establishing a risk management team will help to manage all market risk, credit risk, and operational risk. This risk management team, with management accountability and an experienced supervisory board, will bring together risk management model assessment, technology expertise, and regulatory experience. To put it another way, the need for risk management and corporate governance has been increasing since the financial crisis of 2007-2009. The roles of the risk team, compliance officer, internal control (self-control), and audit committee need to be clarified further in the management system. In some specific cases, some companies might even consider hiring a third-party firm (for example, a law firm) to perform risk management activities. Not only do hotel, entertainment, airline & tourism firms take care of operational risks, technology-driven change, and a higher level of competition, but they also manage financial risks. The fundamental step is to quantify market risk or financial risks with a risk management model which is cost-effective and analyzes or involves risk factors. Therefore, it is necessary to consider and evaluate both the benefits and drawbacks of implementing cost-saving risk functions. Another thing to consider is the biases that happen and affect the decision-making process in many hotel, entertainment, airline & tourism companies; hence, we need to reduce bias when making decisions by using debate techniques to recognize biases and then eliminate them to achieve a fair and true decision. For better and more transparent processes to eliminate financial risks, hotel, entertainment, airline & tourism firms can also implement ISO 9001 standards to build up their operational processes for all functions and departments. Financial risk could be considered one of the core parts of strategic planning.
Market risk or systematic risk can be insured against or reduced through hedging techniques. The meaning of hedging is similar to insurance, i.e., to hedge and reduce losses when an unexpected event or bad scenario happens in the future. For instance, investors might buy and use options to hedge risk, or reduce the risk of a stock or portfolio when the price of the underlying asset goes down. Another method to manage market risk is to use modern portfolio theory to identify investor risk tolerance and then build an optimal portfolio by using statistical measures to examine the correlation between assets and between risk and returns. Using statistical techniques and software, one can construct an efficient frontier, which shows the relationship between risk and return. Portfolio managers, investors, and firms might consider using hedging techniques to manage and reduce their exposure to risk. Hedging, i.e., using financial instruments or derivatives such as options and futures, helps to reduce losses rather than to make money, and one has to pay a premium or cost of hedging (this is the price of hedging). Our discussion of risk factors, the risk management framework, and the risk management model here might be applied, and might hold true, for several developing countries in which the central bank and the banking system play a major and leading role in corporate restructuring and which have a young, newly established, and active stock market. As for investment strategy, it depends on the risk attitude of each investor when choosing a portfolio based on the risk level measured by beta values. For instance, risk-averse investors may prefer stocks with a beta of less than 1 so that they will reduce losses when the market declines sharply. On the other hand, risk takers might prefer stocks with a higher beta, aiming for higher profits.
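As a simple arithmetic illustration of the hedging idea, the sketch below compares an unhedged stock position with one protected by put options (a protective put); all prices, position sizes, and the option premium are purely illustrative assumptions.

```python
def protective_put_value(price_at_expiry, shares, strike, premium_per_share):
    """Portfolio value of a stock position hedged with one put per share (protective put)."""
    put_payoff = max(strike - price_at_expiry, 0.0)
    return shares * (price_at_expiry + put_payoff - premium_per_share)

# Illustrative numbers only: 1,000 shares, hedged with a strike-18 put costing 0.5 per share
for price in (25.0, 20.0, 14.0):
    unhedged = 1000 * price
    hedged   = protective_put_value(price, 1000, strike=18.0, premium_per_share=0.5)
    print(price, unhedged, hedged)   # downside is floored near 1000 * (18 - 0.5)
```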
As we can see from Exhibit 1, the risk management plan and scheme must be set in the context of the Vietnamese economy, which has kept inflation under control for many years (4-5%) and achieved an annual GDP growth rate of more than 5% (see Exhibit 2). In the broader picture of the local economy, loan growth has also declined slightly and been held at about 16% (see Exhibit 3), while lending rates have tended to fall and the gap between deposit and borrowing rates has narrowed since 2017.
Conclusion and policy suggestion
In general, the hotel, entertainment, airline & tourism sector in Vietnam has contributed significantly to economic development and to a GDP growth rate of more than 6-7% in recent years (see Exhibit 2). The above analysis shows that most of the risk measures (equity beta maximum, mean, and variance) decrease under the impact of leverage during the post-low inflation period. However, these three groups of companies in Vietnam need to keep strengthening their corporate governance systems, structures, and mechanisms, as well as their competitive advantage, to control risk better. For instance, the sector should consider proper measures and plans for managing bad scenarios in the future. Another option is to increase productivity while reducing management and operational costs. It is time for hotel, entertainment, airline & tourism companies to set a dedicated budget for risk management practices and a risk management team, rather than merely foreseeing risks and the opportunities to capitalize on them. Many large corporations organize not only an ALM committee and an audit committee but also a risk management committee (which may cover, or be linked with, the human resources, IT, legal, compliance, and public relations departments) within their corporate governance structure to foresee, assess, and manage market risk, credit risk, and operational risk. They also need to clearly define responsibilities, tasks, and roles between divisions,
codes of conduct, and ethical guidelines, and set clear reporting lines and control measures at headquarters, group, and branch levels. We can continue to expand the discussion of risk governance in the hotel, entertainment, airline & tourism sector in order to standardize the risk management framework and build up organizational characteristics.
This research paper provides evidence that the potential market risk decreased under the impact of debt leverage in the 2015-2017 post-low inflation period (see again Chart 1 for the equity and asset beta mean values), while Exhibit 3 suggests that the credit growth rate rose in 2016 and decreased slightly in the following years (2017-2018). This means the local economy is trying to keep credit growth under reasonable control; however, risk factors need to be analyzed more carefully to reduce market risk further. Additionally, the central bank and other governmental bodies need to evaluate the positive impact of debt leverage and continue to issue suitable credit programs and loan packages to the various economic sectors, both to stimulate economic growth and to reduce market risk.
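The leverage impact on beta mentioned above is commonly analyzed by converting observed equity betas into asset (unlevered) betas. The paper does not state which formula it uses; the sketch below applies the standard Hamada relation, asset beta = equity beta / (1 + (1 - tax rate) * D/E), with hypothetical input values, purely for illustration.

```python
def asset_beta(equity_beta: float, debt: float, equity: float, tax_rate: float) -> float:
    """Unlever an equity beta with the Hamada relation (illustrative assumption)."""
    return equity_beta / (1.0 + (1.0 - tax_rate) * debt / equity)

# Hypothetical figures for a listed tourism firm.
b_equity = 1.15             # observed (levered) equity beta
debt, equity = 40.0, 60.0   # book values of debt and equity
tax = 0.20                  # corporate tax rate

print(f"asset beta: {asset_beta(b_equity, debt, equity, tax):.2f}")
# A lower asset beta indicates that part of the measured market risk
# comes from financial leverage rather than from the business itself.
```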
This research paper generates quantitative results on market risk in order to warn a specific industry when there is a sharp increase in market risk level or volatility. From this, hotel, entertainment, airline & tourism companies can continue to measure and control their risk level more rationally. They can build their own risk management models to evaluate and measure market risk and other risks periodically, for example annually. Good risk management means identifying, assessing, and mitigating risk.
Last but not least, since the results show that the risk level became lower under the impact of leverage in the post-low inflation period, the government and relevant bodies such as the Ministry of Finance and the State Bank of Vietnam need to consider proper financial policies (including a combination of fiscal, monetary, exchange rate, and price control policies) aimed at reducing and controlling risk, thereby helping the stock market, the three groups above, and the whole economy become more stable in the next development stage. These firms also need policies that encourage SME development and provide capital to participate in global supply chains.
The 2007-2009 global financial crisis has passed, but a series of corporate scandals and bankruptcies still offer lessons about failures in the risk management and corporate governance frameworks and structures of financial institutions and corporations. It is time for hotel, entertainment, airline & tourism firms to enhance their standards and mechanisms and to adopt new perspectives in management, corporate governance, and leadership. The tourism and hotel industries in Vietnam need to develop in depth rather than in breadth, with safety and environmental protection in mind.
Finally, this study opens new directions for further research on risk control policies in the hotel, entertainment, airline & tourism sector as well as in the whole economy: for instance, how rising inflation or deflation affects the risk level of the industry, and how much inflation is acceptable for the financial system and for economic development. | 7,183.4 | 2020-03-30T00:00:00.000 | [ "Business", "Economics" ] |
Technology-enhanced pre-instructional peer assessment: Exploring students' perceptions in a Statistical Methods course
There has been strong interest among higher education institutions in implementing technology-enhanced peer assessment as a tool for enhancing students' learning. However, little is known about how to use a peer assessment system in pre-instructional activities. This study therefore explores how technology-enhanced peer assessment can be embedded into pre-instructional activities to enhance students' learning. The study was an explorative descriptive study that used a qualitative approach to attain this aim, collecting students' perceptions of the intervention through a questionnaire, students' reflections, and interviews. The results suggest that technology-enhanced pre-instructional peer assessment helps students prepare for the acquisition of new content and becomes a source of motivation for improving their learning performance in the following main body of the lesson. A set of practical suggestions is also proposed for designing and implementing technology-enhanced pre-instructional peer assessment.
Introduction
There has been strong interest among higher education institutions in implementing peer assessment as a tool for enhancing students' learning. Indeed, the growth of computer technology has played a significant role in expanding peer assessment applications in various educational settings (Yang & Tsai, 2010). This is also the case in mathematics learning. Mathematics education researchers have shown substantial evidence of the benefits of technology-enhanced peer assessment for students' learning (Chen & Tsai, 2009; Peter, 2012; Willey & Gardner, 2010). Specifically, Tanner and Jones (1994) posit that peer assessment helps students reflect by reviewing the work of others and recalling their own work.
The reflection process through which students recall their existing mental context is a fundamental component of learning (Lee & Hutchison, 1998; van Woerkom, 2010; Wain, 2017). This process of reflection therefore meets the purpose of pre-instructional activities, in which students are expected to link their prior knowledge with the new content to be learned (Dick, Carey, & Carey, 2015). On this rationale, it is reasonable to stimulate the reflection process by conducting peer assessment in pre-instructional activities. However, little in the literature shows peer assessment being used in pre-instructional activities, although Scott (2017) used simulated peer assessment to improve numerical problem-solving skills as a prerequisite for learning Biology. The questions and solutions used in Scott's study were not genuine student work but were constructed by the researcher. The present study therefore tries to shed light on how to embed technology-enhanced peer assessment into pre-instructional activities to enhance students' learning. It investigates students' perceptions in an attempt to portray students' learning.
Technology-Enhanced Peer Assessment
In understanding peer assessment, this study refers to the definition proposed by Topping (1998), who defined peer assessment as a process in which students measure the learning achievement of their peers. In the process, students take two different roles, namely assessor and assessee. As assessors, they evaluate and, in many cases, provide feedback on the work of their fellow students. As assessees, they receive marks and feedback on their work and may act upon them.
Recent studies have found that peer assessment has positive impacts on students' learning. Several studies demonstrate that peer assessment can benefit students in the assessment task itself, i.e. in the quality of the assessments they provide (Ashton & Davies, 2015; Gielen & De Wever, 2015; Jones & Alcock, 2014; Patchan, Schunn, & Clark, 2018). Furthermore, peer assessment also affects students' acquisition of knowledge and skills in the core domain. In their study, Hwang, Hung, and Chen (2014) show that peer assessment effectively promotes students' learning achievement and problem-solving skills. Gains in learning achievement have also been shown in a Statistics class (Sun, Harris, Walther, & Baiocchi, 2015). One possible explanation for these benefits is the exposure to peers' work: when students view their peers' work, they compare and contrast it with their own alternative solutions, and this process of comparing and contrasting has the potential to facilitate learning (Alfieri, Nokes-Malach, & Schunn, 2013; Reinholz, 2016).
Even though peer assessment has a number of advantages in facilitating learning, it also raises several issues. The major concern is its validity and reliability (Cho, Schunn, & Wilson, 2006). In his review, Topping (1998) found disagreement on the degree of validity and reliability of peer assessment: some studies report high validity and reliability (Haaga, 1993; Stefani, 1994; Strang, 2013), while others report otherwise (Cheng & Warren, 1999; Mowl & Pain, 1995). However, issues of validity and reliability can be reduced by providing students with assessment rubrics (Hafner & Hafner, 2003; Jonsson & Svingby, 2007), since rubrics make expectations and criteria explicit.
Another issue regarding peer assessment systems is administrative workload (Hanrahan & Isaacs, 2001). When implementing peer assessment in their class, instructors should at least manage the students' submission, assessment, and grading evaluation. Fortunately, these functions can be administered using technology (Kwok & Ma, 1999). Technology can be used to record and assemble the results of scoring and commentary efficiently, and it also enables the teacher to provide immediate feedback based on automated score calculation.
In the spirit of making the most of peer assessment's benefits and addressing its problems, peer feedback can accompany the peer assessment process. In peer feedback, students discuss performance and standards with one another (Liu & Carless, 2006). They comment on or annotate the draft or final assignments of their peers to give advice for improvement. When feedback comes with grading, it can be used to explain and justify the grade. It can also be used to pose thought-provoking questions, whose presence can foster the assessees' reflection on their assignments.
Pre-instructional Activity: Theory and Practice
From the instructional design perspective, Gagné, Briggs, and Wager (1992) posit that instruction should be designed systematically to affect students' development. Thus, instructional activities should be designed to facilitate students' learning. One major component of these activities is the pre-instructional activities, which are done before formal instruction begins and which are important for motivating students, informing them of the learning objectives, and stimulating recall of prerequisite skills. This study will not discuss all of the pre-instructional activities in theoretical depth. Instead, it briefly presents examples of pre-instructional activities that appear in the literature.
Pre-instructional activities can follow different strategies, and this also applies to mathematics learning. Loch, Jordan, Lowe, and Mestel (2014), in Calculus of Variations and Advanced Calculus classes, use screencasts to help students revise prerequisite knowledge of calculus techniques. Further, some scholars (Jungić, Kaur, Mulholland, & Xin, 2015; Love, Hodge, Corritore, & Ernst, 2015) use peer instruction as a pre-instructional strategy. The lesson introduction can also be done by simply telling students the prerequisites or testing them on entry skills (Conner, 2015).
Method
This study was explorative descriptive research employing a qualitative approach to explore how technology-enhanced peer assessment can be embedded into pre-instructional activities to enhance students' learning. The following sections give details of the research setting, data collection, and data analysis.
Research Setting
The research was conducted at a private university in Yogyakarta, Indonesia, to investigate students' perceptions of the peer assessment system in a Statistical Methods class. The class was conducted in a multimedia laboratory in which each student had a computer to assist them in learning statistics. The author was the instructor of the class. The class utilized Exelsa, a Moodle-based learning management system developed by the university, for course administration. In Exelsa, students can access learning materials, post to a forum and discuss a certain topic with their peers, submit their assignments, and assess and give feedback on their peers' work. The class was conducted biweekly, with 24 meetings of instruction, one midterm exam meeting, and one final exam meeting. Each meeting consisted of 100 minutes of learning activities.
In three of the twenty-four meetings, the class began with a peer-assessment activity, so students had to submit their assignments before the class started. The assignments used in peer assessment were on the topics of one-way and two-way ANOVA. They were done individually and required Microsoft Excel and SPSS Statistics for processing and analyzing the real data given in the problems. More details of the assignments are described in the Findings section.
The peer assessment system used in this study was the workshop module (Dooley, 2009) provided by the LMS. The peer assessment takes place during the pre-instructional activities. The system has five phases: setup, submission, assessment, grading evaluation, and reflection. In the setup phase, the instructor sets the introduction, provides submission instructions, and creates an assessment form. After all of the components are set up, the instructor can activate the submission phase, in which students can submit and edit their assignments and, optionally, attach a note to them. However, students can only submit and edit their assignments before the class starts.
Right after the class started, the instructor activated the assessment phase. In this phase, each student was randomly assigned to review the assignments of two peers, so each student also had two assessors. In reviewing their peers' assignments, students used a rubric to obtain a more objective assessment. The grading strategy used in the peer assessment system is the "number of errors" strategy, in which students grade each criterion by answering a yes or no question and can optionally provide comments on the criterion. After all of the assignments were reviewed, the instructor switched on the grading evaluation phase, in which the submission and assessment grade of each student were calculated automatically. In the end, students could directly see their score and the feedback provided by their peers and reflect on it; this last step is the reflection phase. The peer assessment process can be seen in Figure 1.

The data collection process in this study was conducted between May and June 2018 and was carried out in three phases. In the first phase, the researcher asked the students to write a reflection about their learning experiences in the course. The researcher prompted the students to use the Gibbs reflective model (Gibbs, 1988). One learning experience that students had to reflect on was their experience in the peer assessment activity. This phase of the data collection process was administered through the LMS.
In the second phase, a questionnaire adapted from Brindley and Scoffield (1998) was used to examine students' perceptions of peer assessment. The questionnaire consists of three sections: the first asked for the students' personal data, the second asked about their perceptions of peer assessment, and the last invited students to assess how useful the peer assessment process was. The second phase took place in the week right before the final exam and was administered through a Google form.
The third phase consisted of interviewing three students about their general opinions of the learning process. The three students were purposively chosen to represent different levels of achievement. They were interviewed together so that they would feel comfortable, since the interviewer was their lecturer. The interview was recorded with the students' approval to prevent data loss.
In addition, the logs of the three peer assessment activities were generated and downloaded from the LMS. These log files record the students' activities in the peer assessment system. Once downloaded, the log data were sorted in Microsoft Excel to determine how long each student took to complete the assessment tasks. The data were also used to find the total time frame of the assessment phase in each meeting.
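The paper sorts the logs in Excel; as a purely illustrative alternative, the following Python/pandas sketch computes, for each student, the time spent on the assessment task from hypothetical log records. The column names `user`, `event`, and `timestamp` are assumptions for this sketch and are not the actual Moodle log schema.

```python
import pandas as pd

# Hypothetical workshop log: one row per event.
logs = pd.DataFrame({
    "user": ["S01", "S01", "S02", "S02"],
    "event": ["assessment viewed", "assessment submitted",
              "assessment viewed", "assessment submitted"],
    "timestamp": pd.to_datetime([
        "2018-05-14 08:05", "2018-05-14 08:19",
        "2018-05-14 08:06", "2018-05-14 08:31",
    ]),
})

# Duration per student: time from first viewing to submitting the assessment.
durations = (
    logs.sort_values("timestamp")
        .groupby("user")["timestamp"]
        .agg(lambda t: (t.iloc[-1] - t.iloc[0]).total_seconds() / 60)
        .rename("minutes_on_assessment")
)
print(durations)
print(f"mean duration: {durations.mean():.2f} minutes")
```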
Data Analysis
Data from the questionnaire and the logs were analyzed using descriptive statistics. Students' responses to each questionnaire item were described as proportions or mean values, whereas the log data were described as mean values for each meeting. Data from the students' reflections and the interview were examined and categorized by the researcher. The categories were derived from Wen and Tsai's (2006, pp. 33-34) study, i.e. positive attitude, online attitude, understanding-and-action, and negative attitude. The data were labeled with the corresponding codes and analyzed using the Atlas.ti package (for more information about conducting qualitative data analysis with Atlas.ti, see Friese, 2014).
Research Participants
In total, 34 students were enrolled in the author-taught course under study: eight male and 26 female students. Most students were in their junior year, with only five students from the senior year. All of the students were prospective mathematics teachers.
Pre-instructional Activities Profiles
Three meetings utilized technology-enhanced peer assessment in the pre-instructional activities. At the beginning of each session, the instructor informed the students of the learning objectives to be achieved and linked those objectives to the previous assignments. The students were then asked to assess their peers' work through the LMS. During the peer assessment process, the instructor moved around the classroom, observed students' progress on the assessment task, provided guidance if necessary, and answered questions as they arose. After the peer assessment process was complete, the instructor gave the students the opportunity to reflect on the scores and feedback they received. This activity marked the end of the pre-instructional activities.
The assignments to be submitted before each meeting were as follows. The first meeting required students to submit an assignment on one-way ANOVA, which asked them to investigate whether there is a difference in the mean height of football players across positions, i.e. forward, midfielder, defender, and goalkeeper. For this assignment, the instructor provided real data obtained from various sources. In this meeting, two students did not submit their assignment, and two other students submitted their assignment but did not attend the class.
In the second meeting, students had to submit a one-way ANOVA problem from the accompanying textbook (Bluman, 2012, p. 632). The problem asked them to determine the most effective method of lowering blood pressure by comparing the mean blood pressure of individuals in three samples categorized by the method they followed. The peer assessment process in this meeting differed slightly from the previous one: in the assessment phase, the students first had to assess an example submission provided by the instructor, as assessing practice, before assessing their peers' work. Three students did not submit their assignment in this meeting.
In the third meeting, students had to submit their assignment on the 'Car Crash Test Measurements' problem from the accompanying textbook (Triola, 2012, p. 643). In this problem, the students were instructed to test for an interaction effect between car type and car size. One student did not submit the assignment in this meeting, and three students submitted their assignment but did not attend the class.
The mean duration of the assessment tasks carried out by all students in each meeting was calculated and is reported in Table 1. On average, the period from the opening to the closing of the assessment phase was 43.50 minutes. The table reveals a sharp decrease between the mean durations of the first and second assessment tasks in each meeting. The decreasing trend also held in the second meeting, when the students first reviewed an example assessment: they reviewed the example in nearly half an hour (26.83 min), the first peer's work in almost a quarter of an hour (12.69 min), and the second peer's work in just over six minutes (6.72 min).
Students' Perception
To investigate the students' perceptions of peer assessment, this study employed both quantitative and qualitative data. The quantitative data were obtained from the questionnaire, while the qualitative data were obtained from the students' reflections, the questionnaire, and the interview.
The questionnaire showed that most of the students (86.21%) in this study had previous experience of peer assessment. Approximately three out of four students also perceived the necessity of assessing their peers. Further, only 27.59% of the students fully understood what was expected of them when reviewing their peers' work, whereas the rest had a moderate understanding. In other words, all students understood, at least to some degree, what was expected of them in the assessment tasks.
Four questionnaire items were rating-scale questions used to explore the students' perceived easiness, fairness, pressure, and benefit of peer assessment. The means of the students' responses to these items are shown in Table 2. The students gave a high rating to the fairness and responsibility of their marking (M = 4.07) and to the benefits of peer assessment they received (M = 4.21). With regard to the grading task, they tended to report difficulties in assessing their peers' work (M = 3.24), and they were under moderate pressure when doing the assessment task (M = 3.03). The sources of this pressure varied: more than half came from their role (62.07%), almost a third from their experience (31.03%), and the rest from their peers (6.90%).
The students' written reflections and interview were also used to examine the students' perceptions. The perceptions were grouped into four defined categories and are presented in Table 3:

Table 3. Students' perceptions by category (counts in parentheses)
- Positive attitude: … (2), Engaging (1), Motivating (1)
- Online attitude: Anonymity (1), Efficiency (1), Transparency (1)
- Understanding-and-action: Grading strategy (8), Action for improvement (7), Assessment criteria (2)
- Negative attitude: Credibility (15), No feedback (6), Underestimating self-ability (3)

The main theme of the students' statements was the helpfulness of peer assessment in enhancing their learning. Regarding this theme, students stated that peer assessment enables a reflective process, viz., reflecting on the mistakes pointed out by peers as well as reflecting on and reviewing their
own work, which they compared and contrasted with their peers' work. Second, the students perceived the peer assessment process as a tool for knowledge building, since they had to review their own knowledge when assessing others. They added that assessing their peers encouraged them to discuss with their friends when they were indecisive about their assessment, and this discussion led them to construct new knowledge in order to provide marking and feedback on the assessment task. Third, the students thought that the peer assessment process develops their evaluative judgment skills regarding their own work and that of others when they provide feedback to peers. Finally, the process of reviewing peers' work gives critical understanding and develops higher-level learning skills, such as analyzing and evaluating. Quotations from five students that reflect these benefits of peer assessment are given below:
In my opinion, the peer assessment is useful. (It is) because it encourages me to review my own works if there is a mismatch between my own works and peers. So, (I) learned twice at once regarding the works. (S6)

… because I don't know (it is right or wrong) … I ask for help to my friend and found that my insight was improved. (S15)
This (peer) assessment was good to provide feedbacks to peers' works as well as to be responsible with my marking. (S31)

(Peer assessment) help us to think critically in assessing friends' works. (S12)

… we also must evaluate the answer of our friends which indirectly makes us reviewing the topics so that we can know/analyze where the friends' mistakes are. (S29)

Assessment credibility is another major theme of the students' perceptions of peer assessment. On the one hand, the students agreed that peer assessment gives the instructor other perspectives for providing more accurate grading and timely feedback. On the other hand, they also questioned their peers' ability to assess their work: peer assessors may make inaccurate assessments if their own work is inaccurate, since assessors often refer to it when undertaking the assessment task. Underrating one's own ability also becomes a source of credibility issues; when students feel incompetent in the subject-specific tasks, they are afraid of not being able to provide appropriate judgments. Reliability is the students' next concern. They found that their assessors gave different grades on the same item and hence questioned their peers' understanding of the rubric criteria given by the instructor. The following student statements relate to the credibility of peer assessment.
Peer assessment is very useful as if the instructor makes an error on assessment, it can be remedied by peers' grading. (S8)

… However, the peer assessment doesn't work optimally when the assessor lacks understanding on what being assessed. (Moreover) the accuracy of each student's assessment is different from one another. (S34)
… Maybe the assessors' opinions are different from each other, since there are two friends that get different scores although their answers are more or less the same. (S19)

The students thought that feedback is an important component of peer assessment. Corrective feedback provided by peers helped the students identify errors in their work, whereas suggestive feedback was useful for making improvements later on. The importance of feedback was also reflected in the students' responses when they did not receive it: they believed that the assessors' task was not only to give marks but also to provide constructive comments. Some of the students' comments regarding the importance of feedback are as follows.
The one who said 'no' should also comment. It is a constructive thing for us (to know).

Other peer assessment aspects did not escape the students' attention. With regard to the number-of-errors grading strategy, they felt that it provided few options for marking peers' work; instead of answering yes or no for each criterion, they would prefer a rating-scale strategy. Nevertheless, they thought that the peer assessment process can facilitate student discussion as well as student-instructor interaction. Other benefits of technology-enhanced peer assessment were also noted: students stated that the assessment model was transparent and efficient as well as engaging and motivating.
Discussion
The aim of this study was to explore how technology-enhanced peer assessment can be embedded into pre-instructional activities to enhance students' learning. This paper interprets the students' perceptions in an effort to investigate their learning experiences. In general, the results show that technology-enhanced peer assessment holds significant promise as an effective pre-instructional strategy: the learning benefits provided by peer assessment meet the purpose of the pre-instructional strategy.
The findings of the present study show that the process of assessing and commenting on the work of others facilitates the students' learning. This finding is in line with the results of prior studies of peer assessment (Hanrahan & Isaacs, 2001; Sun et al., 2015). One possible explanation comes from a comparative thinking perspective (Alfieri et al., 2013; Silver, 2010). When students review peers' work, they compare and contrast it with their own; if they doubt their own work, they ask others or the instructor for help. This process of comparing and contrasting helps them rehearse their own understanding, which prepares them to gain new knowledge related to it.
The findings also suggest that peer assessment stimulates reflective thinking that drives action for improvement. Similar to the results of other studies (Davies & Berrow, 1998; Liu, Lin, Chiu, & Yuan, 2001), the peer assessment process leads the students to think critically and to reflect on the quality of their own work compared with that of others. This evaluative process helps the students devise a plan for improving their learning products later on. As feedback receivers, the students also take advantage of the feedback to enhance their learning. In other words, peer assessment can become a source of students' motivation for improving their learning performance in the commencing main body of a lesson (Jenkins, 2005).
The study also shows the importance of feedback in students' learning. As a salient element of peer assessment, peer feedback facilitates students in taking an active role in their learning (Liu & Carless, 2006). When students provide corrective feedback on their peers' work, they develop an objective attitude in conducting their assessment task (Nicol & Macfarlane-Dick, 2006). Through providing suggestive feedback, the students think critically about the drawbacks of their peers' work even when the work is correct (Chi, 1996). As feedback receivers, the students use their peers' comments to improve their work. Moreover, peers' comments can potentially spark cognitive conflict when they contradict the student's prior knowledge. From the socio-cognitive perspective, cognitive conflict is fundamental in facilitating students' learning when it is successfully resolved (Nastasi & Clements, 1992).
However, the results of this study also reveal resistance to peer assessment. Many students in this study had negative attitudes toward the fairness of peer grading, a result that can also be found in the literature (Cheng & Warren, 1999; Davies, 2000; Liu & Carless, 2006). The negative perceptions come from the students' skepticism about the expertise of their fellow students: even when a rubric was provided, the students thought that some of their peers were not really fair in their marking. Another issue arose from the grading strategy used in the assessment task. The correct/not-correct dichotomy into which students had to categorize their peers' work is considered inflexible (Sheatsley, 1983). The students wanted a more flexible grading strategy in order to be more confident in assessing their peers.
Last but not least, the study has several limitations to be considered. The first relates to its exploratory design in investigating the students' learning experience; future studies with larger samples and longer periods are needed to verify the evidence found here. Second, this study focuses only on implementing peer assessment; comparative studies are needed to compare the effectiveness of peer assessment with other pre-instructional strategies, such as advance organizers and overviews. Finally, design-based studies could contribute to the literature by producing peer assessment designs that optimize the learning transition from the lesson introduction to the main body of the lesson.
Conclusion and Suggestions
The contribution of this study is to show the potential of technology-enhanced peer assessment to be used in pre-instructional activities. The results suggest, in general, that technology-enhanced pre-instructional peer assessment helps students prepare for new content acquisition in the following lesson. Peer feedback was also found to play a significant role in the peer assessment process in facilitating students' learning.
Based on the findings of the present study, the author proposes a set of suggestions for designing and implementing technology-enhanced pre-instructional peer assessment. First, training should be provided so that students can give and manage feedback, and act upon it, effectively. Second, discussions between students and the instructor about the assessment criteria are needed to improve students' understanding of what is to be assessed in their fellow students' work; if necessary, the instructor can also invite students to help develop the assessment criteria. Third, the instructor should monitor students' attitudes toward the grading strategy, in order to judge its suitability for the students, the tasks, and the learning context. Finally, the instructor should use the features of the assignment used in peer assessment (e.g., its content and context) as a link to the commencing main body of the lesson. | 6,412 | 2018-12-22T00:00:00.000 | [ "Education", "Computer Science" ] |
Quantitative analysis of non-equilibrium systems from short-time experimental data
Estimating entropy production directly from experimental trajectories is of great current interest but often requires a large amount of data or knowledge of the underlying dynamics. In this paper, we propose a minimal strategy based on the short-time Thermodynamic Uncertainty Relation (TUR), by means of which we can simultaneously and quantitatively infer the thermodynamic force field acting on the system and the (potentially exact) rate of entropy production from short-time experimental trajectory data. We benchmark this scheme first on an experimental colloidal particle system where exact analytical results are known, before studying the case of a colloidal particle in a hydrodynamic flow field, where neither analytical nor numerical results are available. In the latter case, we build an effective model of the system based on our results. In both cases, we also demonstrate that our results match those obtained from another recently introduced scheme. Thermal fluctuations play a crucial role in non-equilibrium phenomena at microscopic length scales, making it challenging to analyse and interpret experimental data. Here, the authors demonstrate that the short-time thermodynamic uncertainty relation inference scheme can estimate the entropy production rate for a colloidal particle in time-varying potentials and with background flows determined by the presence of a microbubble.
Non-equilibrium thermodynamics at microscopic length scales is dominated by a fascinating range of phenomena 1, where thermal fluctuations play a crucial role. These phenomena can now be observed in great detail experimentally, thanks to the availability and scope of current microscopic manipulation techniques 2. The interpretation and quantitative analysis of the available experimental data are, however, lagging behind these advances, mostly because the vast majority of these systems are too complicated to model without making several approximations 3, despite having far fewer degrees of freedom than their macroscopic counterparts. Even when it is possible to build such simplified models, these are usually still too complicated to solve, except sometimes by numerical analysis of specific systems, which however lacks general insight 4. Other factors can also make a system hard to solve, such as the presence of a background flow, for which the spatial dependence of the flow velocity needs to be known by solving the corresponding Navier-Stokes equation; usually a difficult task, especially for unsteady flows 5,6. In the face of all these challenges, a relevant question is whether it is at all possible to gain any precise quantitative information about a complex non-equilibrium system directly from experimental data, bypassing the first step of either having a known model to compare with or building in simplifying assumptions about the system.
Not surprisingly, this question has aroused a lot of recent interest. Broadly speaking, measurements from experiments can be used to obtain general information about the system, such as identifying that detailed balance is broken and hence the system is out-of-equilibrium 7-9 (not always obvious for microscopic systems such as at the cellular level), or to obtain more specific properties of the system such as the rate of dissipation of energy (equivalently the rate of entropy production) 10-17, the average phase-space velocity field 7,18,19 related to the so-called thermodynamic force field 20,21, or the microscopic forces driving the system 16,19,22. The motivation for such studies is that if quantitative information about the system can be directly obtained from experimentally observed quantities, then this understanding can be used for building more realistic and experimentally validated models of the system of interest 7,23,24.
A very informative quantity about a non-equilibrium system is the rate of entropy production. This quantity not only signals, when it is non-zero, that the system is out of equilibrium, but also provides a quantitative measure of how far from equilibrium the system is and of the irreversibility of its dynamics 25-27. In the context of microscopic machines 28, a quantification of the amount of energy dissipated directly provides information about engine efficiencies 29-31 and prescriptions for obtaining optimal operating conditions 32. The value of the entropy production rate can also be used to obtain information-theoretic quantities of interest 33, or even information about hidden degrees of freedom 34. The entropy production rate is also a very robust quantity to measure from the experimental point of view, since it is not strongly affected by conversion-factor errors in measuring particle positions, as we remark later.
The entropy production rate can be obtained directly from experimental data, at least for systems where it is understood that the underlying dynamics is Markovian, by several means. These include utilizing the Harada-Sasa equality 10 that involves a spectral analysis of trajectory data 35,36, determining the average steady-state current and steady-state probability distribution from the data 11, determining the time-irreversibility of the dynamics 27,37-41 and, relatedly, determining estimators for the ratio of forward and backward processes directly from the data 14,42,43. Recent approaches 16,19 also advocate first inferring the microscopic force field, from which the entropy production rate can be deduced.
An alternative strategy to direct estimation is to set lower bounds on the entropy production rate 44-48 by measuring experimentally accessible quantities. One class of these bounds, for example those based on the thermodynamic uncertainty relation (TUR) 3,48-51, have been further developed into variational inference schemes, which translate the task of identifying entropy production into an optimization problem over the space of a single projected fluctuating current in the system 15,52-54. Recently, a similar variational scheme using neural networks was also proposed 55. Compared with some of the other trajectory-based entropy estimation methods, these inference schemes do not involve the estimation of probability distributions over phase space; rather, they usually only involve means and variances of measured currents. Hence they are known to work better in higher-dimensional systems 15. In addition, it has been proven that such an optimization problem gives the exact value of the entropy production rate in a steady state, as well as the exact value of the thermodynamic force field in the phase space of the degrees of freedom we can measure, if short-time currents are used 52-55. However, these methods have not yet been tested against experimental data, to the best of our knowledge.
Here we test the short-time TUR scheme against the challenges posed by experimental setups involving colloidal particles in time-varying potentials with (possible) background flows. In order to benchmark the scheme, we first test it in a setup where the entropy production rate of the system can be predicted analytically for any set of parameters. For this setup, we test our predictions against both analytical results and another recently proposed numerical scheme, namely stochastic force inference (SFI) 19. After this benchmarking exercise, we apply our scheme to a modified system for which the underlying model is both unknown and hard to estimate. Though there is no theoretical value to compare with in this case, the short-time TUR's predictions are again in perfect agreement with those of the SFI technique 19. These results motivate modeling this system in terms of coupled Langevin equations with two free parameters. We demonstrate that such a model does indeed capture the experimental observations, hence demonstrating the usefulness of these schemes in modeling complex scenarios.
Results and discussions
Model. Our results apply to systems with a continuous state space but a finite number of degrees of freedom, described by overdamped Langevin equations of the type

$$\dot{X}_\mu = F_\mu(X) + \sqrt{2D}\,\xi_\mu(t). \qquad (1)$$

Here μ = 1, …, d labels the degrees of freedom of the system and we use ⋅ to refer to the Ito convention. F_μ(X) is a function of X, but not an explicit function of time t; ξ_μ is a d-dimensional white-in-time noise such that ⟨ξ_μ(t) ξ_ν(t′)⟩ = δ_{μν} δ(t − t′), where ⟨·⟩ denotes averaging over the statistics of the noise. The corresponding Fokker-Planck equation for the probability distribution function P is given by

$$\partial_t P = -\partial_\mu\left(F_\mu P - D\,\partial_\mu P\right) \equiv -\partial_\mu j_\mu, \qquad (2)$$

where repeated indices are summed over and j_μ is the probability current. In the steady state ∂_t P = 0. The total rate of entropy production σ can be obtained as 11,39

$$\sigma = \frac{k_B}{D}\int dX\,\frac{j_\mu(X)\,j_\mu(X)}{P(X)}, \qquad (3a)$$

where

$$\mathcal{F}_\mu(X) = \frac{k_B\, j_\mu(X)}{D\,P(X)} \qquad (3b)$$

is called the thermodynamic force field 15. Overdamped Langevin equations are excellent descriptions of colloidal particle systems. Even for systems where the Langevin equation is not known, the fact that such a description exists in principle is all that is needed in order to apply Eq. (3a) and obtain σ by determining the current and the steady-state probability density directly from the time-series data 11,15. Another approach is to first infer the terms in the Langevin equation, F_μ and D 16,19, and use Eq. (3a) to obtain σ. These methods can be applied directly to data obtained from tracking the system or even by using tracking-free methods in image space 16.
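To make the overdamped Langevin description concrete, the following minimal Python sketch (not part of the original paper) integrates a one-dimensional Langevin equation with a linear force using the Euler-Maruyama method; the force, diffusion coefficient, and time step are arbitrary illustrative values.

```python
import numpy as np

def euler_maruyama(force, D, x0, dt, n_steps, seed=0):
    """Integrate dx/dt = F(x) + sqrt(2D) xi(t) with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(2.0 * D * dt))
        x[k + 1] = x[k] + force(x[k]) * dt + noise
    return x

# Illustrative parameters: harmonic force F(x) = -k x, in units where gamma = 1.
k_trap, D, dt = 1.0, 0.1, 1e-3
traj = euler_maruyama(lambda x: -k_trap * x, D, x0=0.0, dt=dt, n_steps=100_000)
print("trajectory variance:", traj.var(), "(expected D/k =", D / k_trap, ")")
```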
Short-time TUR approach. In this paper, we demonstrate an alternative method for the simultaneous determination of both the entropy production rate and the thermodynamic force field from experimental data, using the recently introduced short-time thermodynamic inference relation 52-54. Our method is built on an exact result obtained in refs. 52-54:

$$\sigma = \lim_{\Delta t \to 0}\; \max_{J}\left[\frac{2\, k_B\, \langle J\rangle^2}{\Delta t\, \mathrm{Var}(J)}\right], \qquad (4)$$

where k_B is the Boltzmann constant and J is a weighted scalar current constructed from the non-equilibrium stationary state as shown below. The notation 〈⋅〉 stands for an ensemble average. The current that maximizes the term within the square brackets is J ∝ ΔS_tot. Here Δt is the short-time interval over which the mean and variance of the current are evaluated 52; in this work, it also coincides with the sampling interval of the trajectory. As for the ordinary TUR 56, our result too holds for any X that is even under time reversal. The equality in (4) holds only when X includes all degrees of freedom of the system; if not, the RHS of (4) gives a lower bound. The proof presented for Eq. (4) in ref. 52 was based on exact results for non-trivial models, where it was shown that Eq. (4) is a consequence of the fluctuations of ΔS_tot becoming Gaussian in the Δt → 0 limit. Later, in refs. 53 and 54, Eq. (4) was rigorously proved for overdamped diffusive processes.
Let us now discretize X in time with time interval Δt: X^k ≡ X(kΔt). We use latin indices as superscripts for the discrete time labels, and the Einstein summation convention is applied to the greek indices. For a given function d(X) we can now define a time-discretised scalar current constructed from the steady-state trajectory,

$$J^k = \frac{1}{2}\left[d_\mu(X^{k+1}) + d_\mu(X^k)\right]\left(X_\mu^{k+1} - X_\mu^{k}\right). \qquad (5)$$

Any such current, when substituted into the expression inside the square brackets of Eq. (4), can be shown to give a lower bound σ_L ≤ σ. In addition, for a special value d = d*, J ∝ ΔS_tot and σ_L = σ. The algorithm we use, which obtains this d* and σ through a maximization procedure, is as follows (a minimal code sketch of the current construction and of the maximization is given after the list):

1. We first obtain a time series of experimental data: X^k.

2. To be able to perform the maximization, we use a set of basis functions ψ_m(X), m = 1, …, M, in the space spanned by X such that d(X) = ∑_{m=1}^{M} w_m ψ_m(X), where w_m ∈ ℝ^d are the parameters to be optimized. We use two sets of basis functions, Gaussian and linear, and generate all our results in both bases for comparison.
3. Maximize Eq. (4) to obtain σ. This maximization is done using a numerical optimizer: we start with an initial guess for w_m, calculate the time series J^k, construct the expression within the square brackets in (4), and then maximize over w_m to obtain σ, as well as the set of values w*_m such that d* = ∑_{m=1}^{M} w*_m ψ_m(X) maximizes Eq. (4). The maximizing current J* is constructed from d* using Eq. (5) and, in addition, can be shown to be proportional to ΔS_tot 52.
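The sketch below (not from the paper) illustrates the current construction of Eq. (5) and the maximization of step 3 in Python. It assumes a Gaussian basis with fixed, hand-picked centers and widths and uses a generic Nelder-Mead optimizer, whereas the paper uses a particle-swarm optimizer; the synthetic drift-diffusion trajectory stands in for experimental data, and for that simple model the exact entropy production rate is v²/D in units of k_B per unit time.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_basis(x, centers, width):
    """psi_m(x) = exp(-|x - c_m|^2 / (2 width^2)); x: (N, dim), centers: (M, dim)."""
    diff = x[:, None, :] - centers[None, :, :]
    return np.exp(-np.sum(diff**2, axis=-1) / (2.0 * width**2))

def estimate_sigma(x, dt, centers, width, kB=1.0):
    """Maximize 2 kB <J>^2 / (dt Var(J)) over coefficients w of d = sum_m w_m psi_m."""
    n_basis, dim = centers.shape[0], x.shape[1]
    psi = gaussian_basis(x, centers, width)          # basis values along the trajectory
    dx = x[1:] - x[:-1]                              # trajectory increments

    def neg_bound(w_flat):
        d_vals = psi @ w_flat.reshape(n_basis, dim)  # d(X) evaluated along the trajectory
        mid = 0.5 * (d_vals[1:] + d_vals[:-1])       # midpoint rule, as in Eq. (5)
        J = np.einsum("ki,ki->k", mid, dx)           # scalar current time series
        return -2.0 * kB * J.mean() ** 2 / (dt * J.var(ddof=1))

    res = minimize(neg_bound, np.ones(n_basis * dim), method="Nelder-Mead",
                   options={"maxiter": 5000})
    return -res.fun, res.x.reshape(n_basis, dim)

# Illustrative run on a synthetic drift-diffusion trajectory (not experimental data).
rng = np.random.default_rng(2)
dt, D, v = 1e-3, 0.1, 0.5                            # sampling step, diffusion, drift
steps = rng.normal(v * dt, np.sqrt(2 * D * dt), size=(20_000, 1))
x = np.cumsum(steps, axis=0)
centers = np.linspace(x.min(), x.max(), 6).reshape(-1, 1)
sigma_est, w_star = estimate_sigma(x, dt, centers, width=(x.max() - x.min()) / 6)
print("estimated entropy production rate:", sigma_est, "kB per unit time")
print("exact value for this model (v^2/D):", v**2 / D, "kB per unit time")
```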
Furthermore, the thermodynamic force is proportional to the d* that maximizes (4) 52-54. Hence, by solving an optimization problem in which the RHS of Eq. (4) is maximized in the space of all currents, we obtain σ as the optimal value, as well as its conjugate thermodynamic force field, 𝓕 = c d*, where the proportionality constant can be fixed using Var(J*) = 2〈J*〉 at Δt → 0 52, as c = 2〈J*〉/Var(J*). We note that, for any set of basis functions ψ_m(X), m = 1, …, M, which gives an adequate representation of d(X), an analytic solution to the maximization problem is known 54. This solution gives a deterministic estimate of σ as

$$\sigma = \frac{2 k_B}{\Delta t}\, b_m\, (\Xi^{-1})_{mn}\, b_n,$$

where b and Ξ are the mean vector and covariance matrix of the elementary currents built, through Eq. (5), from the individual basis functions (one current per basis function and per degree of freedom). Further, the optimal coefficients can be directly computed without any optimization as w*_m ∝ (Ξ^{-1})_{mn} b_n. Repeated indices are summed over as before. Numerically, this involves the inversion of the matrix Ξ. On the one hand, if d, N, and M are not very large, this deterministic scheme is faster than a numerical optimization algorithm and does not get stuck in local maxima. On the other hand, numerical optimization schemes can in principle simultaneously handle the optimization of the parameters of the basis functions; this is discussed in some detail in ref. 53. In addition, numerical optimization schemes have also been extended to systems driven in a time-dependent manner 57, where it is as yet unclear how the deterministic scheme will perform. In this work, we implement the numerical optimization scheme using a particle-swarm optimizer; a brief introduction to the algorithm is given in the "Methods" section. We note that refs. 53,54 have already demonstrated the feasibility of the scheme described here on numerical data. Here we test this scheme instead on controlled experimental setups.
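A minimal sketch of the deterministic estimate described above, assuming the same midpoint current construction as before and a user-supplied set of basis functions; the (2 k_B/Δt) bᵀ Ξ⁻¹ b expression is the standard closed-form maximizer of the quadratic-over-quadratic objective and is shown here for illustration only, with a linear basis and a synthetic drift-diffusion trajectory.

```python
import numpy as np

def deterministic_sigma(x, dt, basis, kB=1.0):
    """Deterministic short-time TUR estimate: sigma = (2 kB / dt) b^T Xi^{-1} b.

    x: trajectory of shape (N, dim); basis: callable mapping (N, dim) -> (N, M).
    Each pair (basis function, coordinate) defines one elementary current.
    """
    psi = basis(x)                                   # (N, M)
    dx = x[1:] - x[:-1]                              # (N-1, dim)
    mid = 0.5 * (psi[1:] + psi[:-1])                 # midpoint basis values, (N-1, M)
    J = (mid[:, :, None] * dx[:, None, :]).reshape(len(dx), -1)  # (N-1, M*dim)
    b = J.mean(axis=0)                               # mean of each elementary current
    Xi = np.cov(J, rowvar=False)                     # covariance matrix of the currents
    w_star = np.linalg.solve(Xi, b)                  # optimal coefficients (up to a constant)
    sigma = 2.0 * kB / dt * b @ w_star
    return sigma, w_star

# Illustrative run on a synthetic drift-diffusion trajectory.
rng = np.random.default_rng(3)
dt, D, v = 1e-3, 0.1, 0.5
x = np.cumsum(rng.normal(v * dt, np.sqrt(2 * D * dt), size=(20_000, 1)), axis=0)
linear_basis = lambda y: np.hstack([np.ones_like(y), y])   # psi_1 = 1, psi_2 = x
sigma, w = deterministic_sigma(x, dt, linear_basis)
print("deterministic estimate:", sigma, "  exact (v^2/D):", v**2 / D)
```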
Colloidal particle in a stochastically shaken trap. To test the inference scheme, we first apply it to an experimental problem for which the rate of entropy production is known from theory 58-61: a colloidal particle in a stochastically shaken optical trap. This model was first experimentally studied in ref. 62. We study it again in order to understand the limitations posed by experimental setups for our inference scheme, as well as to test and benchmark the scheme on a system where the results are known.
We trap a polystyrene particle in an optical trap; further details of how the experiment is performed may be found in the "Methods" section. We modulate the position of the center of the trap, λ(t), along a fixed direction x on the trapping plane perpendicular to the beam propagation (+z). The modulation is a Gaussian Ornstein-Uhlenbeck noise with zero mean and covariance ⟨λ(0)λ(s)⟩ = Aτ₀ exp(−|s|/τ₀), i.e.,

$$\dot{\lambda}(t) = -\frac{\lambda(t)}{\tau_0} + \sqrt{2A}\,\eta(t), \qquad (10)$$

where η is Gaussian, has zero mean, and is white-in-time. The correlation time τ₀ is held fixed for all our experiments. Note that Aτ₀ can be interpreted as an effective temperature 63. The dynamics of the colloidal particle is well described by an overdamped Langevin equation,

$$\dot{x}(t) = -\frac{K}{\gamma}\left[x(t) - \lambda(t)\right] + \sqrt{2D}\,\xi(t), \qquad (11)$$

where K is the spring constant of the harmonic trap, γ is the drag coefficient, ξ is the thermal noise, D = k_B T/γ is the diffusion coefficient of the particle, and T the temperature of the medium. The noise ξ is also Gaussian, zero mean, and white-in-time, and is mutually independent of the noise η in Eq. (10). Equations (10) and (11) together define the model we call the Stochastic Sliding Parabola. Starting from arbitrary initial conditions for x and λ, the system reaches a non-equilibrium steady state whose probability distribution function and current are known exactly 58 and can be written in terms of two dimensionless parameters θ and δ, as can the rate of entropy production and the thermodynamic force field for this model. In Fig. 1, we compare these exact results to the outcome of the inference algorithm applied to numerically generated data for this model. Different sets of time-series data were generated by varying the noise amplitude ratio θ, i.e. by varying A while keeping the other parameters fixed. In Fig. 1a, we show the trajectories of the system in the (x, λ) space for three different θ values. In Fig. 1b, we see that the inference algorithm predicts a value σ_L that is lower than the true value σ at the beginning, but gets very close to the true value after a relatively modest number of steps.
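A minimal simulation sketch of the Stochastic Sliding Parabola defined by Eqs. (10) and (11), useful for generating synthetic (x, λ) trajectories like those in Fig. 1a; the parameter values below are illustrative and are not those of the experiment.

```python
import numpy as np

def simulate_ssp(K_over_gamma, D, A, tau0, dt, n_steps, seed=0):
    """Euler-Maruyama integration of the Stochastic Sliding Parabola, Eqs. (10)-(11)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps + 1)
    lam = np.zeros(n_steps + 1)
    for k in range(n_steps):
        xi, eta = rng.normal(size=2)
        x[k + 1] = x[k] - K_over_gamma * (x[k] - lam[k]) * dt + np.sqrt(2 * D * dt) * xi
        lam[k + 1] = lam[k] - lam[k] / tau0 * dt + np.sqrt(2 * A * dt) * eta
    return x, lam

# Illustrative parameters in dimensionless units, not the experimental values.
x, lam = simulate_ssp(K_over_gamma=10.0, D=1.0, A=0.5, tau0=0.5, dt=1e-3, n_steps=200_000)
print("Var(lambda) =", lam.var(), "(expected A*tau0 =", 0.5 * 0.5, ")")
print("Var(x) =", x.var())
```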
As the algorithm is run longer on these numerically generated data, σ_L saturates to a value very close to the actual one. The inference algorithm also simultaneously gives an optimal field d*(x) which is very similar to the thermodynamic force field 𝓕(x) expected from theory (see Supplementary Fig. S1 in Supplementary Note 1). From Eq. (14), it is clear that σ increases linearly with θ, or equivalently with the parameter A. Figure 1c illustrates that the inference algorithm captures this behavior accurately. Since we are limited by the finite resolution of the time series in probing the Δt → 0 limit of Eq. (4), the inferred value of the entropy production generally differs from the exact value by an O[Δt] term. For this model, this correction can also be computed analytically using expressions previously obtained in ref. 61, where σ_Δt denotes the result one gets from Eq. (4) for a fixed value of Δt. Notice that the O[Δt] correction increases with the value of θ; the inferred values of σ indeed lie between these two limits. Next, we tested the algorithm on experimentally generated data for the same model. In the experiments, we varied A from 0.1 to 0.35 in units of (0.6 × 10⁻⁶)² m² s⁻¹ (corresponding to θ varying from 0.22 to 0.77), while the other experimental parameters, such as the trap stiffness and the bath temperature, were assumed to be constant for the entire length of the experiment. In reality, however, the laser used to trap the particle is prone to power fluctuations, and there can also be minor changes in the bath temperature due to heating caused by the long exposure to the laser. For large values of θ, we also expect non-harmonic effects to be significant, since the particle explores the peripheral regions of the trap 64. We comment in the following paragraph on the implications of these fluctuations for our results; an immediate consequence, however, is that the theoretically predicted values of Eq. (12a) can only be used as a reference. We therefore benchmark our results by comparing them with values obtained by applying the stochastic force inference (SFI) scheme recently proposed in ref. 19, which gives an independent estimate of both σ and the force fields. Experiments for individual parameter sets were carried out for a duration of 100 s, with the particle position sampled at 10 kHz. Only about two-thirds of the available experimental data were used; the remaining third was discarded because of uncontrolled experimental errors. Each of the analyzed 100 s data sets was further divided into 12.5 s long patches, on which the inference algorithm was then tested. In Fig. 2, we present the results of the analysis of the experimental data. The dark-blue dashed line corresponds to the theoretically predicted value, Eq. (12a), of the entropy production rate for the model given by Eqs. (10) and (11) with the given parameters. The blue line is the entropy production for a slightly modified model obtained by analyzing the data with SFI and calculating the drift and diffusion terms from it (see Supplementary Note 2 for details). The region between the red dashed lines corresponds to the error bar set by the variation in the model parameters (namely the drift and diffusion coefficients) across experiments, as quantified by the SFI analysis. The data points are the results of our inference algorithm as well as of the SFI scheme.
As is evident, our inference scheme predicts exactly the same or very similar values for σ as the SFI algorithm, for all values of A.
The predictions of the inference scheme and SFI also match for the thermodynamic force (Fig. 3a, b): the optimal field d*(x), which we obtain as an outcome of our inference algorithm, matches 𝓕̂(x), the thermodynamic force 𝓕(x) estimated from the trajectory data by means of the SFI technique 19. We conclude that our inference algorithm infers the correct entropy production value, as well as the correct thermodynamic force field, for the experimental data, since a completely independent and different technique gives the same results.
A colloidal particle trapped near a microbubble. We now demonstrate how our scheme performs in estimating the rate of entropy generation for the case where the mechanical force on the colloidal particle is not known. For this purpose, we study a particle trapped in the vicinity of a microscopic bubble of size 20-22 μm. We have already used this experimental setup to study one or more microbubbles with colloidal particles moving in the liquid in a different context 65,66. The microbubbles are nucleated on a liquid-glass interface. The surface is pre-coated with linear patterns of an MB-based soft oxometalate (SOM) material. When we focus a laser beam on any region along this pattern, the SOM material gets intensely heated and a microbubble forms. The top of the bubble is colder than its bottom, where it is anchored to the interface. As the surface tension is a function of temperature, the variation of the surface tension along the surface of the bubble sets up a Marangoni stress, driving a flow along the surface of the bubble. Marangoni flow around freely floating bubbles under a temperature gradient has been studied both experimentally 67 and analytically 68. The additional complexity here is the presence of the bottom surface, on which the flow must satisfy no-slip boundary conditions. The flow around the bubble in this setup is not yet known in detail, although an approximate description, valid if we are not too close to the bubble, has been developed 66, as shown in Fig. 4. This flow drags the trapped colloidal particle and changes its steady-state probability distribution (Fig. 5a, b). Since the flow streamlines are directed towards the bubble, we expect that these will confine the trapped particle more than in the case without the bubble. This is indeed the case, as we show later. We expect that the underlying description of the particle is still an overdamped Langevin equation, including a flow velocity field u(x). However, the quantification of this flow field is rather difficult, even numerically, as argued above. As a result, we have a system where the details of the microscopic description and forces are unknown. Our inference scheme, on the other hand, is easily applicable even in this context.
At the level of the non-equilibrium trajectories of the system, we see that there is a qualitative difference from the case without the bubble. First, the particle is more confined in the trap along the x direction when there is a bubble in the vicinity (see Fig. 5b), as mentioned earlier. This confinement is caused both by the flow towards the bubble (as shown in Fig. 4), which gets balanced at the confined position by the opposing force of the confining potential, and by the reduced fluctuations close to the bubble due to proximity effects 69. Further statistical analyses also reveal weaker non-equilibrium currents (see Supplementary Note 3 and Supplementary Fig. S2 for details). Consistent with these observations, on applying the inference algorithm, we observe that the value of σ is substantially reduced in the presence of the bubble. The corresponding entropy production rates estimated are σ = 244.68 k_B s⁻¹ for the no-bubble case and σ = 7.66 k_B s⁻¹ for the case with the bubble. We also find that the thermodynamic force, estimated using the inference scheme, is significantly reduced along the x direction, and the force field is less tilted along that direction as compared to the case without the bubble, as shown in Fig. 5c.

Fig. 2 The short-time inference scheme tested on experimental data. Test of our inference algorithm on experimental data for different values of the parameter A (or θ), where A is the amplitude of the noise in the Ornstein-Uhlenbeck process defined in Eq. (10) and θ is defined in Eq. (13). The dark-blue dashed line corresponds to the theoretical value given by Eq. (14). The squares and triangles correspond to the entropy production rate σ estimated from the experimental data using our thermodynamic uncertainty relation (TUR)-based inference scheme (Eq. (4)) with a Gaussian basis and a linear basis, using Δt = 0.1 ms. The error bars correspond to averages over eight independent realizations of duration 12.5 s. The circles correspond to σ estimated using the stochastic force inference scheme (SFI) 19 for the whole 100 s data set, and the error bars for these correspond to a self-consistent estimate of the inference error that the SFI provides 19. The blue line corresponds to σ predicted by a model obtained from SFI (Supplementary Note 2), and the red dashed lines correspond to error bars for this SFI-based model. The parameters used in the experiment are as follows: corner frequency of the harmonic trap f_c = 135 ± 10 Hz, relaxation time of the Ornstein-Uhlenbeck process τ₀ = 0.0025 s, temperature of the aqueous medium T = 298 K, and sampling interval Δt = 0.0001 s.
To further analyze the effect of the bubble, we performed another experiment in which we trapped the particle at different distances from the bubble. As we move to a distance d ~ 1.5r (where r is the radius of the microbubble) from the surface of the bubble, we see that the inferred value of σ approaches the value the system would have had in the absence of the bubble. This is demonstrated in Fig. 6.
The significance of the inferred value of σ has to be discussed in the light of these findings. In the case without the bubble, it is exactly the total heat dissipated to the environment as a consequence of maintaining the system in a non-equilibrium steady state (by shaking the trap). In the case with the bubble, however, this is not the case. We present a possible mathematical description of this situation as an overdamped Langevin equation with space-dependent diffusion and damping terms in an unknown flow field u(x). Since the trap constrains the particle motion on scales that are at least two orders of magnitude smaller than the distance to the bubble, u(x) is further assumed to be a constant u_d at a distance d from the surface of the bubble. σ calculated from this model reproduces the values we find from the experimental data, independent of u_d, purely as a consequence of the space-dependent diffusion and damping term and the two fitting parameters a and b. This is demonstrated in Fig. 6. As we discuss in Supplementary Note 4, however, there is another component of the entropy production, related to the work that the flow does against the confining potential 42,70. This component, which does indeed depend on the value of u_d, is not estimated by our inference scheme, because u_d is a field (corresponding to the velocities of the molecules of the thermal bath) that is odd under time reversal, and for such fields the TUR does not hold 3,56,71-73. Hence, we expect that the values of σ we find close to the bubble are underestimates of the true value. We elaborate on this point in Supplementary Note 4.

Fig. 3 The thermodynamic force fields obtained from the inference scheme. Thermodynamic force field obtained as the optimal field d*(x, λ) using Eq. (7) (shown in black) compared to F̂(x, λ) (shown in blue), which is the thermodynamic force field obtained using the stochastic force inference technique 19.
Mathematical model. The colloidal system in the presence of the bubble, and consequently the flow u_d, can be simulated using an overdamped Langevin equation with space-dependent diffusion and damping terms, as described above. Here the parameters a and b can be tuned to match the experimental data. In particular, 1/b stands for a characteristic length scale below which the flows created by the bubble are significant. When the distance of the trapped particle from the bubble is much greater than 1/b, we expect that the expressions will match the case without the bubble. Using a trial-and-error approach, we obtained the fit parameters a = 282.743 and b = 1/3 μm⁻¹. We remark that, as we did for the case without the bubble, the SFI technique could be used to model this case as well, since it explicitly gives the drift and diffusion terms. These are, however, particularly susceptible to erroneous estimates of a conversion factor, which is needed to obtain the particle trajectory data in units of nm. We expand on this issue in the "Methods" section as well as in Supplementary Note 2. This error can be thought of as assigning wrong units to the affected phase-space coordinates. Since σ is a sum over all phase-space coordinates, its evaluation is not affected by such an error, unlike other quantities such as forces, diffusion terms, and the thermodynamic force. Another way to understand this is to note that σ quantifies the irreversibility of the dynamics, which again is clearly not affected by a choice of units. Hence our model for the setup with the bubble only tries to reproduce the value of σ as a function of distance.

Fig. 5 The colloidal system in the presence of the bubble. a The microbubble-colloidal particle system. b System trajectories without (red) and with (green) the bubble in the neighborhood of the colloidal particle. We see that the colloidal particle is strongly confined in the presence of the bubble. c The thermodynamic force field computed as the optimal field d*(x) (Eq. (7)) without the bubble (red) and in the presence of the bubble (green). The corresponding entropy production rates estimated are σ = 244.68 k_B s⁻¹ for the no-bubble case and σ = 7.66 k_B s⁻¹ for the case with the bubble. The parameters used in the experiment are as follows: corner frequency of the harmonic trap f_c = 57 ± 3 Hz, relaxation time of the Ornstein-Uhlenbeck process τ₀ = 0.025 s, temperature of the aqueous medium T = 298 K, sampling interval Δt = 0.0001 s, and amplitude of the Ornstein-Uhlenbeck noise A = 0.3 (0.6 × 10⁻⁶)² m² s⁻¹.
Conclusion
In conclusion, we have experimentally tested a simple and effective method, based on the thermodynamic uncertainty relation [52-54], for inferring both the rate of entropy production σ and the corresponding thermodynamic force fields in microscopic systems in non-equilibrium steady states. We have confirmed that an entirely independent method, SFI 19, gives the same answers in all the situations we have studied, hence adding weight to the physical significance of our findings. We have also carried out an extensive investigation of the convergence properties of our code as several parameters or hyperparameters are varied, as well as a comparison with the SFI algorithm (see Supplementary Note 5, which includes Supplementary Figs. S3-S5, and Supplementary Note 6, which includes Supplementary Figs. S6 and S7). Our short-time inference scheme does not need any model in order to be applicable. However, we can use our findings to come up with plausible models, which give the same σ values for a range of parameters, even in cases where modeling the system from first principles is complicated. In this regard, it would also be interesting to perform a systematic study of the different algorithmic schemes available for modeling complex non-equilibrium systems, with a focus on their advantages and disadvantages when applied to experimental data.
Experimental systems that would be particularly interesting to study are molecular motors or other cellular processes. Recently, ref. 74 tried to quantify the activity of a cell by measuring the power spectral density of the fluctuations of the position of a phagocytosed micron-sized bead inside a cell. As it is possible to also trap such beads inside a cell with optical tweezers 74 , this too could be a very interesting system to study. Finally, in other recent work 57 , it has been demonstrated that inference schemes of this kind can also be made to work for non-stationary non-equilibrium states, further diversifying the scope of this class of techniques.
Methods

Experiment
A single colloidal particle in a stochastically shaken trap. The experimental setup consists of a sample chamber placed on a motorized xyz-scanning microscope stage, which contains an aqueous dispersion of spherical polystyrene particles (Sigma-Aldrich) of radius r = 1.5 μm. The sample chamber consists of two standard glass cover-slips (of refractive index ~1.52) on top of one another. The thickness of the chamber is kept at 100 μm by applying double-sided sticky tape in between the cover-slips. The aqueous immersion is made out of double-distilled water at room temperature, which acts as a thermal bath. A single polystyrene particle is confined by an optical trap, which is created by tightly focusing a Gaussian laser beam of wavelength 1064 nm by means of a high-numerical-aperture oil-immersion objective (×100, NA = 1.3) in a standard inverted microscope (Olympus IX71). The trap is kept fixed at a height h = 12 μm from the lower surface of the chamber in order to avoid spatial variation in the viscous drag due to the presence of the wall. The corner frequency of the trap (f_c) is set to 135 Hz. For the first set of experiments, the center of the trap is modulated (λ(t)) using an acousto-optic deflector, along a fixed direction x in the trapping plane, perpendicular to the beam propagation (+z). The modulation is a Gaussian Ornstein-Uhlenbeck noise with zero mean and covariance ⟨λ(s)λ(t)⟩ = A τ₀ exp(−|t − s|/τ₀). The correlation time τ₀ is held fixed for all our experiments. We determine the barycenter (x, y) displacement of the trapped particle by recording its back-scattered intensity from a detection laser (wavelength 785 nm, co-propagated with the trapping beam) in the back-focal-plane interferometry configuration. The measurement is carried out using a balanced-detection system comprising high-speed photodiodes 75, with a sampling rate of 10 kHz and a final spatial resolution of 10 nm. In all cases, the trap parameters, including the conversion factors for the trajectory data, were calibrated by fitting the probability distribution of the particle position in thermal equilibrium to the Boltzmann distribution P(x) = (2πDτ)^(−1/2) exp(−x²/2Dτ), where τ is the relaxation time in the trap, given by τ = 1/(2πf_c). We assume that the diffusion constant D has the room-temperature value D = 1.645 × 10⁻¹³ m² s⁻¹. It is important to note that we assume that the trap parameters, as well as the conversion factor for the trajectory data, are unaffected when the Ornstein-Uhlenbeck modulation is turned on. However, in practice, the trap parameters can indeed be altered by small amounts over long durations of measurement (~100 s), primarily due to the power fluctuations of the trapping laser. Further, the probe particle also moves in the y and z directions in the trap, which we have not measured here. These factors led to issues that prevented us from producing an exact replica of the theoretical model in the experiment. However, we have taken these limitations into account in our analysis, as detailed in Supplementary Note 2.
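To illustrate this calibration step, the following minimal sketch (our own, not the authors' code) converts a raw detector trace to physical units by matching the equilibrium position variance to the Boltzmann prediction Dτ; the variable names and the synthetic input trace are hypothetical.

import numpy as np

# Physical constants taken from the text above
D = 1.645e-13                    # diffusion constant, m^2/s (room temperature)
f_c = 135.0                      # trap corner frequency, Hz
tau = 1.0 / (2 * np.pi * f_c)    # trap relaxation time, s

# Hypothetical raw detector signal in volts (here: synthetic Gaussian data)
rng = np.random.default_rng(0)
raw_signal_volts = rng.normal(scale=0.02, size=1_000_000)

# Equilibrium Boltzmann distribution: P(x) ~ exp(-x^2 / (2 D tau)), so Var(x) = D*tau
# in physical units. The conversion factor c (m/V) is chosen so that the converted
# trace has exactly this variance.
c = np.sqrt(D * tau / np.var(raw_signal_volts))
x_metres = c * raw_signal_volts

print(f"conversion factor: {c:.3e} m/V")
print(f"calibrated position std: {np.std(x_metres) * 1e9:.1f} nm "
      f"(target {np.sqrt(D * tau) * 1e9:.1f} nm)")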
In the second set of experiments, i.e., those with the microbubble, we employ a coverslip that is pre-coated with a polyoxometalate material 76,77 absorbing at 1064 nm as one of the surfaces of the sample chamber (typically the bottom surface), and proceed to focus a second 1064 nm laser on the absorbing region. A microbubble is thus nucleated, the size of which is controlled by the power of the 1064 nm laser 76. Typically, we employ bubbles of size between 20 and 22 μm. Note that the sample chamber also contains the aqueous dispersion of polystyrene particles. We trap a polystyrene probe particle at different distances from the bubble surface and modulate the trap center in a manner similar to the experiments without the bubble. The particle is trapped at an axial height corresponding to the bubble radius. The other experimental procedures remain identical to the first set of experiments. However, an important additional step here is the determination of the distance of the particle from the bubble surface. This we accomplish by using the pixels-to-distance calibration provided in the image acquisition software for the camera attached to the microscope, which we verify by measuring the diameters of the polystyrene particles in the dispersion (the standard deviation of which is around 3%, as specified by the manufacturer), achieving very good consistency. Note that we obtain a 2-d cross-section of the bubble, as demonstrated in Fig. 5a, and are thus able to determine the surface-to-surface separation between the bubble and the particle with an accuracy of around 5%. During the experiment, we also ensure that the bubble diameter remains constant by adjusting the power of the nucleating laser; indeed, the bubble diameter is seen to remain almost constant for the 100 s that we need to collect data for one run of the experiment.
As opposed to the previous setting, the particle is now trapped in a region where it experiences: (1) a temperature gradient created by the laser beam used to generate the microbubble, (2) the microscopic flow generated by the bubble, which affects the particle trajectory, and (3) Faxén-like corrections to the viscous drag coefficient of the surrounding fluid due to the proximity of a wall 78, which is the bubble surface in this case. Now, since the trap parameters, particularly the conversion factor for the trajectory data, are determined for the equilibrium setting, it is clear that there could be significant deviations from those in the non-equilibrium configuration produced by the presence of the bubble in close proximity to the trapped probe particle. It is also clear that the previous procedure for obtaining the trap parameters will not work in this case. This severely hampers any exact modeling, and as a result we concentrate on obtaining only the value of σ and its variation as a function of the distance to the bubble, both of which are robust against the above errors.
Numerical algorithm. Our aim is to maximize a cost function C, which is a function of a set of parameters w. We use a particle-swarm optimization algorithm 79 to achieve this. A domain is chosen and N_p particles are initialized in that domain. The kth particle follows Newtonian dynamics in which its velocity is incremented by a stochastic acceleration A_k and its position by its velocity. Here ω_k and V_k are the position and velocity vectors of the kth particle, and A_k is a stochastic function that depends on the positions of all the particles. Different variants of this algorithm use different A. The simplest, which is the one we use, is called the Original PSO. Let us first define the following:
• The kth particle carries an additional vector P_k, which is equal to the ω_k for which the value of the function C, as observed by the kth particle, was maximal in its history.
• At any point in time, let G denote the position of the particle in the whole swarm for which the function has the maximum value.
The function A is given by A_k^α = W_1 U_1^α (P_k^α − ω_k^α) + W_2 U_2^α (G^α − ω_k^α) (Eq. (19)), where the Greek indices run over the dimensions of space. W_1 and W_2 are two weights. The two terms in Eq. (19) push the particle in two different directions: one towards the point in its history where the particle found the function to be maximal, and the other towards the point where the swarm finds the maximum value of the function at the present time. These terms are multiplied by two random vectors U_1 and U_2, of the same dimension as the space, each of whose components is an independent random number uniformly distributed between zero and unity. We keep track of the highest value of the function seen by the swarm and also the location of that point. There are two major advantages over standard gradient-ascent algorithms: first, it does not require evaluation of the gradient of the function, and second, it can be parallelized straightforwardly. All the numerical results reported in this paper are obtained using this algorithm. We implement this optimization scheme using the open-source PYSWARM package in Python 80, with default choices for the hyperparameters.
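To make the update rule concrete, here is a minimal sketch of the Original PSO loop as described above. It is our own illustration rather than the PYSWARM implementation, and the cost function, domain, and hyperparameter values in the usage example are hypothetical.

import numpy as np

def pso_maximize(cost, bounds, n_particles=30, n_steps=200, w1=1.5, w2=1.5, seed=0):
    # Original PSO: maximize `cost` over a box-shaped domain `bounds` = (lower, upper).
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size

    omega = rng.uniform(lo, hi, size=(n_particles, dim))   # positions omega_k
    vel = np.zeros_like(omega)                              # velocities V_k
    p_best = omega.copy()                                   # personal bests P_k
    p_val = np.array([cost(x) for x in omega])
    g_best = p_best[np.argmax(p_val)].copy()                # global best G
    g_val = p_val.max()

    for _ in range(n_steps):
        u1 = rng.uniform(size=(n_particles, dim))           # random vectors U1, U2
        u2 = rng.uniform(size=(n_particles, dim))
        accel = w1 * u1 * (p_best - omega) + w2 * u2 * (g_best - omega)
        vel += accel                                         # "Newtonian" velocity update
        omega += vel                                         # position update
        vals = np.array([cost(x) for x in omega])
        improved = vals > p_val                              # update personal bests
        p_best[improved], p_val[improved] = omega[improved], vals[improved]
        if vals.max() > g_val:                               # update global best
            g_val, g_best = vals.max(), omega[np.argmax(vals)].copy()
    return g_best, g_val

# Hypothetical usage: maximize a simple concave function in 2D
best_x, best_val = pso_maximize(lambda x: -np.sum((x - 0.3) ** 2),
                                bounds=([-1, -1], [1, 1]))
print(best_x, best_val)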
Implementation of the algorithm. Here we describe how we applied this algorithm to numerical/experimental data. We generate numerical data using first-order Euler integration of Eqs. (10) and (11) with a time step of Δt = 0.0001. In either case, we generate many copies of trajectories of length 12.5 s and construct the cost function in Eq. (4) using Eqs. (5) and (6). We have tried two different choices of basis functions to construct d(X). The first is a Gaussian basis, in which we represent d(X) as a sum of M Gaussian functions. Making use of the spatial symmetry of the problem, we assume d(X) to be an antisymmetric function, with d(−X) = −d(X), which reduces the dimensionality of the problem by a factor of 2. Here M is the number of Gaussian functions, and b_i are the variances of the Gaussians in the x and λ directions. The centers of the Gaussians (x_m, λ_m) are placed equally spaced in a rectangular region enclosing the data. Both M and b_i are hyperparameters. We used M = 16 and b²_{x/λ} = {x/λ}_max/30. Secondly, we have also tried a linear basis (motivated by prior knowledge of the linearity of the system), in which d(X) is a linear function of the coordinates. In Supplementary Note 6, which includes Supplementary Figs. S6 and S7, we study the dependence of the output of the algorithm, for both basis functions, on Δt, the length of the time-series data, N_p, and M.
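For concreteness, the following sketch (our own illustration, not the authors' code) generates synthetic trajectories by first-order Euler integration of a stochastically shaken harmonic trap and evaluates a short-time TUR-style cost for a linear-basis current. The explicit forms of Eqs. (4)-(6), (10), and (11) are not reproduced in this text, so both the model equations and the cost function below are assumed standard forms, and the parameter values are assumptions based on the experimental numbers quoted above.

import numpy as np

# Assumed shaken-trap model (standard form):
#   dx/dt   = -(x - lam)/tau + sqrt(2 D) xi_x(t)
#   dlam/dt = -lam/tau0      + sqrt(2 A) xi_lam(t)
dt, n_steps = 1e-4, 125_000              # 12.5 s of data sampled at 10 kHz
tau, tau0 = 1.0 / (2 * np.pi * 135.0), 0.0025
D, A = 1.645e-13, 0.25 * (0.6e-6) ** 2   # assumed diffusion and noise amplitude, m^2/s

rng = np.random.default_rng(1)
x = np.zeros(n_steps)
lam = np.zeros(n_steps)
for i in range(n_steps - 1):             # first-order Euler integration
    x[i + 1] = x[i] - (x[i] - lam[i]) / tau * dt + np.sqrt(2 * D * dt) * rng.normal()
    lam[i + 1] = lam[i] - lam[i] / tau0 * dt + np.sqrt(2 * A * dt) * rng.normal()

def tur_cost(w, X, dt):
    # Assumed generic short-time TUR cost: 2 * <j>^2 / (dt * Var(j)), a lower bound
    # on sigma (in units of k_B per second), with a linear-basis current d(X) = W X.
    W = np.asarray(w, float).reshape(2, 2)
    dX = X[1:] - X[:-1]
    mid = 0.5 * (X[1:] + X[:-1])                     # Stratonovich midpoint
    j = np.einsum('ni,ij,nj->n', dX, W, mid)         # current increments dX . (W mid)
    return 2.0 * j.mean() ** 2 / (dt * j.var())

X = np.stack([x, lam], axis=1)
# The cost is then maximized over w, e.g. with the PSO routine sketched earlier:
# w_opt, sigma_est = pso_maximize(lambda w: tur_cost(w, X, dt),
#                                 bounds=([-10] * 4, [10] * 4))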
Since we have used a finite amount of data to construct the cost function, it will be prone to statistical errors. Therefore, we independently maximize the cost function for different 12.5 s long data sets, and take their mean value as the optimized estimate of σ. We show the value of sigma inferred (σ L ) as a function of the number of steps in the optimization algorithm for different 12.5 s long data sets in Fig. 7.
Data availability
The data used to produce the results in this paper is available at https://doi.org/10.6084/m9.figshare.14176664.

Fig. 7 Inference for different copies of trajectories. The plot shows the inferred value of the entropy production rate σ_L as a function of the number of steps in the optimization algorithm, using the linear basis. The data is shown for different data sets, all spanning 12.5 s in total length, generated numerically for the same parameter choices as in Fig. 1b. The black dashed line corresponds to the theoretical estimate of the entropy production rate σ for this parameter choice.

| 10,168.2 | 2021-12-01T00:00:00.000 | [ "Physics" ] |
Improving Machining Performance for Deep Hole Drilling in the Electrical Discharge Machining Process Using a Step Cylindrical Electrode
The performance of electrical discharge machining for drilling holes decreases with machining depth because conventional flushing and electrodes cannot completely eliminate debris particles from the machining area. In this study, a modified electrode with a step cylindrical shape, providing self-flushing in the electrical discharge machining process, was designed to improve machining performance for deep hole drilling. The experimental results for the step cylindrical electrode showed that the material removal rate increased by approximately 215.7%, 203.8%, and 130.4%, and the electrode wear ratio decreased by approximately 47.2%, 63.1%, and 37.3%, when compared with a conventional electrode, for diameters of 6, 9, and 12 mm, respectively. In addition, the gap clearance and the concavity of the side wall of the drilled hole were reduced with the step cylindrical electrode. The limited height of the electrode flank increased the escape area through which debris was removed from the machining area, and the reduced secondary sparking on the side wall of the electrode resulted in a shorter machining time.
Introduction
Electrical discharge machining (EDM) is a modern machining method for materials that are difficult to machine with conventional methods [1-3] as a result of high hardness, high wear resistance, and other special properties [3-9]. EDM removes work material from the workpiece using the high temperature produced by spark discharges to erode the workpiece [3,8,10]. EDM is also known as spark machining, which involves spark-eroding material from a conductive workpiece while it is submerged in a dielectric medium [11-15]. The principle of the EDM process is the controlled removal of material using an electrode as the cutting tool: the electrode conducts electricity well and moves toward the workpiece while the release of the electric current flowing through the sparking region within the machining gap is controlled [1,3]. EDM is a non-contact process in which sparking occurs in cycles of discharge (on-time) and pause (off-time); the ratio of these periods is called the duty factor [1,16]. EDM erodes the workpiece to produce the finished part with the desired shape and surface quality [17]. The EDM spark process is an eroding mechanism carried out under a dielectric liquid, which prevents short circuits in the discharge system and thus interruptions in the machining process. Flushing the system with dielectric fluid removes debris after discharge [18]. The spark discharge conducted in the dielectric fluid is suitable for machining most conductive materials, regardless of the hardness of the workpiece, because the cutting tool (electrode) and the workpiece do not contact each other [19,20]. Therefore, the quality of the surface after EDM processing is high and EDM can be used with conductive materials [3,20]. The obtained surface texture properties depend on the EDM parameters, such as discharge current, discharge voltage, pulse duration, pulse interval, and flushing method [21,22].
Two related problems remain to be addressed: the behavior of the spark discharge after each cycle has finished, and the removal of debris by flushing dielectric fluid through the machining gap so as to maintain consistent electrical conditions [3,23]. The material removal rate is an important consideration; altering the machining gap can increase the material removal rate as well as improve the surface finish [24]. The EDM process has been widely used for manufacturing molds and dies, as well as in the aerospace and automotive industries [25-30]. However, deep hole machining or deep hole drilling often leads to undesirable surface quality and geometry problems. Debris removal efficiency is hampered by the difficulty of eliminating debris, and residual debris that returns to the conductor during secondary sparking tends to be recast onto the wall of the hole, resulting in a rough internal wall [31-33] and an undesirable shape of the drilled hole. Therefore, many researchers have studied how to improve the deep hole machining process. Nevertheless, poor machining performance, in terms of debris removal, surface quality, and machining accuracy, usually occurs in deep hole drilling because of the difficulty of gap cleaning and debris removal from the sparking area [34-36]. This leads to low production efficiency. Deep hole machining and small-sized holes in the EDM process are therefore important targets for improving process performance, and various parameters need to be studied [37-39].
The improvement in machining performance in several distinct areas has been thoroughly researched. Wang et al. [20] studied the effects of electrode jump height and speed on the movement of debris and bubbles during electrical discharge machining. They showed that a large electrode jump height (mm) and jump speed (mm/min) can draw clean oil (fresh dielectric fluid) into the bottom of the machining gap, resulting in more stable machining. Munz et al. [40] investigated the influence of the flow rate of side flushing of dielectric fluid on debris removal and process performance. They reported that increasing the dielectric flow improved decontamination of the machining gap, flushing away gas bubbles and debris more rapidly, which improved the machining performance in terms of material removal rate. Dwivedi et al. [41] improved the machining performance of the EDM process using a rotating cylindrical electrode. They found that the rotating electrode increased the material removal rate by 41% and reduced surface roughness by 12% compared with a stationary electrode. Teimouri and Baseri [42] used a rotational magnetic field and a rotary electrode in the EDM process. They confirmed that the enhanced flushing of debris from the machining gap increases machining performance. Chuvaree et al. [34,43] investigated the dimensional accuracy of an EDM deep hole using a multi-hole interior flushing electrode. They reported that the improved flushing technique achieves a tighter gap tolerance and a higher material removal rate than the conventional flushing method. Nadeem et al. [44] examined how to improve the performance of EDM through relief-angled tool designs on tungsten carbide material. The performance of the relief-angled electrodes was found to be significantly better than that of a conventional cylindrical tool. Chuvaree et al. [45] studied the effect of multi-aperture inner flushing on the characteristics of an EDM deep hole. They showed that multi-aperture inner flushing achieved a higher material removal rate, shorter machining time, and lower gap clearance than the side flushing of a conventional electrode. Moreover, various parameters affect the material removal rate during EDM, and the flushing method also plays an important role, especially in the case of hole formation [44]. Melted debris accumulates on the internal walls of the drilled hole and forms a solidified recast layer [31,42]. The EDM machining conditions depend on several factors, such as the current, the duty factor, and the gap flushing efficiency [46-48]. Based on previous research, optimizing the machining conditions by adjusting the jump height can make machining more stable. However, a large electrode jump height implies a low sparking time, which results in a longer production time. In addition, modifying EDM equipment is a complicated process and increases the machining cost. In this work, the shape of the electrode was designed to remove debris from the machining gap. The aim of this study was to examine the influence of a self-flushing electrode (cylindrical step shape) for EDM deep hole drilling on the improvement of machining performance, in terms of machining time, material removal rate (MRR), electrode wear ratio (EWR), the quality of the machined surface (Ra), and the gap clearance of the drilled hole.
Experimental Materials
The schematic diagram of the experimental setup is shown in Figure 1a. The experimental testing was carried out on an electrical discharge machine (CNC EDM 430, Aristech, Taichung City, Taiwan). The work material was a ground plate of plastic mold steel, AISI P20, a pre-hardened steel used to produce plastic injection molds; its composition is shown in Table 1. The workpiece, in the form of a pair of steel plates each 10 mm thick, was mounted in the working tank of the machine by a vise. The alignment of the workpiece along the y-axis of the machine was checked with a dial test indicator. The electrode was mounted on the head of the machine, and the alignments in the x-axis and y-axis along the z-axis were controlled to within ±0.005 mm over a z-axis travel of 50 mm. The drilling positions were located on the parting line of the workpiece: the electrode was touched on both sides of the edge of the workpiece and then moved to the center of the drilling hole. The depth of the drilled hole was limited to 50 mm, as shown in Figure 1b. Distance and alignment were controlled by a dial test indicator (resolution 0.001 mm, accuracy ±0.003 mm, Mitutoyo Corporation, Kanagawa, Japan). The machining time and the depth of the drilled hole were recorded directly to calculate the MRR. The EWR was evaluated at the end of the machining tests. The quality of the machined surface was measured as the arithmetic mean roughness (Ra) using a roughness tester (MarSurf PS1, MAHR, Göttingen, Germany), and the gap clearance along the depth of the drilled hole was measured using an optical microscope (STM6, OLYMPUS, Tokyo, Japan).
Electrode Design
In this work, electrical discharge machining was performed with conventional flushing using a copper electrode; the properties of the electrode are shown in Table 2. The conventional electrode was cylindrical in shape (CE), as shown in Figure 2a. The newly designed electrode shape was created by modifying the flank in the region close to the sparking area (step cylindrical electrode, SCE), which enables self-flushing in the electrical discharge machining process for deep hole drilling, as shown in Figure 2b. With this design, we expected the step shape of the electrode to increase the escape area for bubble flows and for the debris removed from the sparking area, leading to improvements in machining performance. Both sets of electrodes were produced on a computer numerical control turning machine (PC TURN50, EMCOTRONICS, Austria). The conventional electrode consists of a shank and a body, as shown in Figure 2a. The newly designed electrode consists of a shank, a neck, and a body, as shown in Figure 2b. The electrode diameter in the experiments was varied among 3, 6, 9, and 12 mm, and the total length of every electrode was 103 mm, controlled to within ±10 µm. The workpiece was designed as a splice of a pair of plates, which is shown in detail in Figure 2c. The machining test used to evaluate the machining performance of each electrode design was repeated three times, each time with a new electrode.
Experimental Conditions
The experimental conditions used to investigate the influence of the new electrode design on machining performance are summarized in Table 3. The experimental tests were performed in oil dielectric fluid (DIEL MS 7000, TOTAL), and side flushing was supplied at a specified pressure through a nozzle to ensure continuous recirculation of the dielectric fluid in the EDM tank. According to the parameters optimized for the highest material removal rate [39,49] and characteristic surface appearance, the discharge current was fixed at 0.25 A/mm² for electrodes of all sizes. With a positive-polarity electrode, the pulse-on and pulse-off times were set to 150 µs and 2 µs, respectively.
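As a small worked example of the timing parameters above, the following sketch (our own; the formula is the commonly used duty-factor definition rather than one quoted from this paper) computes the duty factor for the 150 µs / 2 µs pulse setting.

def duty_factor(pulse_on_us, pulse_off_us):
    # Common definition: duty factor (%) = t_on / (t_on + t_off) * 100
    return pulse_on_us / (pulse_on_us + pulse_off_us) * 100.0

# Pulse settings from Table 3: 150 us on, 2 us off
print(f"duty factor = {duty_factor(150.0, 2.0):.1f} %")   # approximately 98.7 %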
Results and Discussion
This study focused on the improvement in machining performance for deep hole drilling of AISI P20 in the electrical discharge machining process using a step cylindrical electrode. Therefore, we examined the influence of the modified electrode on machining performance, which was evaluated using the MRR and EWR. The quality of the machined hole was examined with respect to the surface roughness (Ra) and gap clearance.

Machining Time and Material Removal Rate

Figure 3a shows the obtained results of machining time versus depth of the drilled hole for the conventional electrode (CE). We observed that the slope of the testing curve increased slightly at shallow depths, but the slope became steeper with machining depth for every size of electrode. In addition, a steeper slope was obtained for the smaller electrode. This can be explained as follows: as the depth of the drilled hole increases, it becomes more difficult to eliminate debris particles from the sparking area by conventional flushing (side flushing), because of the lower flow of dielectric fluid at greater depth and through a smaller area. This is confirmed by the photograph of the particles accumulated on the bottom of the machined hole in the experimental test, as shown in Figure 4. An electrode diameter of 3 mm was limited to drilling a hole 7.292 mm deep for the conventional electrode (CE) and 9.358 mm deep for the step cylindrical electrode (SCE), due to the accumulation of debris particles in the dielectric fluid leading to the generation of secondary sparks and concavity in the hole walls [39,49]. In the case of an electrode diameter of 6 mm, debris particles accumulated and increased the thickness of the recast layer, leading to the electrode moving back at a drilled-hole depth of 33.893 mm.

Figure 3b shows the experimental results for the step cylindrical electrode (SCE). Machining time increased proportionally with machining depth for electrode diameters of 6, 9, and 12 mm. This is because the step cylindrical electrode (large diameter and short length at the end of the electrode) pressed dielectric fluid into the sparking area when the electrode jumped down. The debris particles and bubbles produced during sparking were eliminated from the sparking area when the electrode jumped up, as they were able to float out of the sparking area through the greater escape area at the side wall between the electrode and the hole (the neck of the electrode is smaller than its end). This led to a cycle of self-flushing and the ability to maintain stable conditions throughout the sparking process. This is confirmed by the photograph of the side spark (secondary spark) on the electrode shown in Figure 5. Damage on the side surface occurred along the whole length of the conventional electrode (CE), whereas the secondary spark on the step cylindrical electrode (SCE) is found only on the flank of the electrode (body electrode). In addition, the elements on the surface of the electrode are analyzed and discussed in Section 3.4.
The material removal rate (MRR) was calculated from the volume of work material removed and the machining time, as described in Equation (1) [50]:

MRR = (M_w1 − M_w2) / (ρ_w × t),    (1)

where MRR is the material removal rate (mm³/min); M_w1 and M_w2 are the workpiece weights (g) before and after machining, respectively; ρ_w is the density of the workpiece (7.78 g/cm³); and t is the machining time (min). The experimental results of the material removal rate are shown in Figure 6. We found that the material removal rate for both the conventional electrode (CE) and the step cylindrical electrode (SCE) increased with the diameter of the electrode. This was because debris became difficult to remove from the sparking area with the smaller electrode, due to the extremely small gap clearance, and some debris accumulated on the machining surface under the action of gravity [51]. Therefore, the sparking process was impeded, extending the machining time, which led to a decrease in the material removal rate for the small electrode. The decreased area of the secondary spark on the side of the electrode and the elimination of debris particles from the sparking area caused by the modified electrode shape improved the material removal rate. The machining performance of the step cylindrical electrode clearly increased by approximately 215.7%, 203.8%, and 130.4% compared with the conventional electrode for electrode diameters of 6, 9, and 12 mm, respectively. However, the material removal rate of the step cylindrical electrode with a diameter of 3 mm was only slightly higher than that of the conventional electrode. This can be explained by the height of the flank (the height at the end of the electrode) for small electrodes limiting the ability of debris particles and bubbles to flow out from the sparking area.
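As a quick illustration of Equation (1), the following sketch (our own, with hypothetical weighings) computes the MRR from the before/after workpiece weights, converting the removed volume from cm³ to mm³.

def material_removal_rate(m_w1_g, m_w2_g, machining_time_min, rho_w_g_cm3=7.78):
    # MRR in mm^3/min from workpiece weights (g) before/after machining.
    # The removed volume (m_w1 - m_w2)/rho_w is in cm^3, so multiply by 1000 for mm^3.
    removed_volume_mm3 = (m_w1_g - m_w2_g) / rho_w_g_cm3 * 1000.0
    return removed_volume_mm3 / machining_time_min

# Hypothetical example: 2.5 g of workpiece removed in 60 min of machining
print(f"MRR = {material_removal_rate(155.0, 152.5, 60.0):.2f} mm^3/min")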
Electrode Wear Ratio (EWR)
The electrode wear ratio (EWR) is defined as the volume of the electrode removed by erosion wear relative to the volume of the workpiece removed in the machining process, as defined in Equation (2) [50]:

EWR = [(M_t1 − M_t2)/ρ_t] / [(M_w1 − M_w2)/ρ_w] × 100,    (2)

where EWR is the electrode wear ratio (%); M_t1 and M_t2 are the electrode weights (g) before and after machining; M_w1 and M_w2 are the workpiece weights (g) before and after machining; and ρ_t is the density of the copper electrode (8.96 g/cm³). The results showed that the electrode wear ratio of the step cylindrical electrode (SCE) was lower than that of the conventional electrode (CE) for electrodes of all sizes, as shown in Figure 7. Both the SCE and the CE showed a nonlinear electrode wear ratio for the electrode diameter of 3 mm. This is because the poor removal of machined debris by the flushing system led to debris blending into the dielectric [52] and being deposited on the machining surface under the action of gravity, forming a protective film [51]. In addition, the electrode wear ratio of the step cylindrical electrode decreased by approximately 35.0%, 47.2%, 63.1%, and 37.3% compared with the conventional electrode for diameters of 3, 6, 9, and 12 mm, respectively. This is because the improved flushing resulting from the electrode design eliminated debris particles more effectively and shortened the machining time by reducing secondary sparks, which concentrated the discharge energy in the sparking area. From these results, we concluded that electrode wear was improved by self-flushing in the machining process with the step cylindrical electrode.

Figure 7. Electrode wear ratio of holes drilled with the conventional electrode (CE) and the step cylindrical electrode (SCE).
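Similarly, a minimal sketch of Equation (2) (our own, with hypothetical weighings) expresses the electrode wear ratio as the eroded electrode volume relative to the removed workpiece volume.

def electrode_wear_ratio(m_t1_g, m_t2_g, m_w1_g, m_w2_g,
                         rho_t_g_cm3=8.96, rho_w_g_cm3=7.78):
    # EWR (%) = (eroded electrode volume) / (removed workpiece volume) * 100
    v_electrode = (m_t1_g - m_t2_g) / rho_t_g_cm3   # cm^3
    v_workpiece = (m_w1_g - m_w2_g) / rho_w_g_cm3   # cm^3
    return v_electrode / v_workpiece * 100.0

# Hypothetical example: 0.15 g of electrode lost while 2.5 g of workpiece was removed
print(f"EWR = {electrode_wear_ratio(98.00, 97.85, 155.0, 152.5):.1f} %")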
Quality of the Drilled Hole
In this work, the quality of the drilled hole was evaluated based on machining accuracy, by means of the gap clearance, and on the surface roughness of the machined hole. Figure 8 shows the results for the gap clearance between the wall of the electrode and the machined hole along the machining depth. A larger gap clearance was found when larger electrodes were used. This is because the higher material removal rate caused more debris particles to be blended into the dielectric fluid, leading to accelerated secondary sparking on the side wall of the electrode; this was particularly evident in the case of the conventional electrode (CE) due to the small area for debris escape. Besides, the higher discharge current used for larger electrodes has the potential to generate violent secondary sparking on the wall of the drilled hole. The results for the modified electrode with the step cylindrical design showed that the gap clearance was reduced compared with the conventional electrode. The average gap clearance decreased by approximately 44%, 30%, and 29% for electrode diameters of 6, 9, and 12 mm, respectively. The concavity of the wall of the hole drilled with the step cylindrical electrode (SCE) also decreased. This is because the neck design of the electrode increased the escape area, allowing the debris particles to flow out easily from the machining area. As a result, fewer secondary sparks occurred on the wall of the machined hole. The concavity of the drilled wall for the electrode diameter of 3 mm was non-uniform because the poor removal of machined debris with the small electrode induces a high concentration of debris particles, which are electrostatically polarized, resulting in secondary sparking on the sides of the electrode and workpiece [52]. In addition, poor escape of debris particles from the sparking area could occur when the depth of the hole drilled with the conventional 6 mm electrode exceeded 25 mm, resulting in an irregular gap clearance at the end of the drilled hole.
Figure 9 shows the obtained results for surface roughness. We found that the roughness of the machined surface increased with the size of the electrode, due to the higher discharge currents employed for larger-diameter electrodes. However, the averaged roughness of the surface machined with the step cylindrical electrode was higher than that for the conventional electrode by approximately 54.35%, 67.95%, and 42.43% for electrode diameters of 6, 9, and 12 mm, respectively. This can be explained by the limited height of the electrode flank leading to a decreased secondary-spark area. The secondary spark on the side wall then caused more damage to the machined surface, whereas a large secondary-spark area resulted in less damage because the energy in the sparking area was less concentrated. This finding is confirmed by the profiles of the machined surface along the drilled hole, shown in Figures 10 and 11 for the conventional electrode (CE) and the step cylindrical electrode (SCE), respectively. The mean height of peaks (R_pm) and the mean depth of valleys (R_vm) [53] in the profile of the surface machined with the step cylindrical electrode are higher than those for the conventional electrode, and the roughness of the machined surface decreased with decreasing electrode diameter. This can be explained by the high sparking current for the large electrode, which leads to greater damage on the machined surface. The high efficiency of particle removal and the reduction in the sparking area with the step cylindrical electrode led to a high energy concentration in the sparking area and resulted in a higher roughness of the machined surface.
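To make the roughness metrics concrete, here is a simplified sketch (our own, not the MarSurf evaluation) that estimates Ra together with a mean peak height and a mean valley depth from a sampled profile by splitting it into five sampling lengths; the exact standardized definitions of R_pm and R_vm differ in detail, and the example profile is synthetic.

import numpy as np

def roughness_metrics(z, n_segments=5):
    # Simplified Ra, mean peak height (R_pm-like) and mean valley depth (R_vm-like)
    # relative to the mean line, averaged over n_segments sampling lengths.
    z = np.asarray(z, float) - np.mean(z)             # reference to the mean line
    ra = np.mean(np.abs(z))
    segments = np.array_split(z, n_segments)
    r_pm = np.mean([seg.max() for seg in segments])   # average per-segment peak height
    r_vm = np.mean([-seg.min() for seg in segments])  # average per-segment valley depth
    return ra, r_pm, r_vm

# Hypothetical profile (in micrometres) sampled along the drilled-hole wall
rng = np.random.default_rng(2)
profile_um = rng.normal(scale=2.0, size=5000)
ra, r_pm, r_vm = roughness_metrics(profile_um)
print(f"Ra = {ra:.2f} um, mean peak = {r_pm:.2f} um, mean valley = {r_vm:.2f} um")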
The influence of the electrode dimensions on the quality of the machined surface can be explained by the gap clearance increasing with the size of the electrode: more debris blended in the dielectric induces secondary sparks between the side wall of the electrode and the workpiece. When the step cylindrical electrode was employed in the machining test, the gap clearance decreased because the neck of the electrode allowed debris to escape easily from the machining area. However, the limited body of the electrode (3 mm high) induced a high sparking energy in the machining area and secondary sparks on the side wall of the drilled hole, which is related to the increased material removal rate and roughness of the machined surface [54]. Figures 12 and 13 show the results of the analysis of the alloy elements mapped on the surface of the copper electrode and the average contents of copper, carbon, and iron in the regions marked in Figure 5 for the conventional and step cylindrical electrodes, respectively. The weight percentage (%wt) of alloy elements was averaged by EDX, analyzing at least three positions in the different regions of the electrode surface, as shown in the SEM images. These positions included an area where the surface is bright and clear, an opaque surface, and the surface over the boundary joint area. Figure 12a shows the elemental mapping in the region close to the end of the electrode (EDX1 in Figure 5a). Carbon (C) of approximately 52.99%wt and iron (Fe) of approximately 35.01%wt accumulated in this region, more than in the region farther from the end of the electrode (EDX2 in Figure 5a), which, as shown in Figure 12b, contained approximately 44.57%wt carbon and 27.81%wt iron.
This is evidence of severe, repeated sparking between the electrode and the debris accumulated around the wall of the drilled hole, leading to higher carbon and iron accumulation on the surface of the electrode. However, the content of copper (Cu) of approximately 26.29%wt on the step cylindrical electrode (EDX3 in Figure 5b), as shown in Figure 13a, was higher than the copper content of approximately 12.00%wt on the conventional electrode in the same region (Figure 12a). This can be explained by the reduction in secondary sparking occurring on the side surface of the electrode due to the removal of debris and bubbles during machining, which easily escaped from the sparking area. In addition, the increased escape area together with the increased gap distance in the machining process when using the step cylindrical electrode led to the elimination of secondary sparking, resulting in low damage and alloy accumulation on the surface of the electrode (Fe ≈ 0.09%wt, C ≈ 40.30%wt, and Cu ≈ 57.61%wt), as shown in Figure 13b. These results confirmed that the machining performance of the electrical discharge machining process for deep hole drilling can be improved by using a step cylindrical electrode.
Conclusions
In this work, a modified electrode was used to provide self-flushing in electrical discharge machining, and its effect on the machining performance of deep hole drilling was investigated. The findings can be summarized as follows: 1. The machining time increased with the machining depth for every type of electrode.
However, the slope of the machining time versus depth curve for the step cylindrical electrode was lower than that for the conventional electrode because the neck design of the electrode increased the area for debris particles and bubbles to escape from the machining area while improving dielectric flushing in the sparking area. These effects led to a material removal rate improvement of approximately 13.6%, 215.7%, 203.8%, and 130.4% for electrode diameters of 3, 6, 9, and 12 mm, respectively. 2. The electrode wear ratio of the step cylindrical electrode was lower than that of the conventional electrode because the neck design of the electrode reduced secondary sparking on the side wall of the workpiece, concentrating and limiting the sparking areas to the bottom and flank face of the electrode. This led to a decrease in tool wear due to the reduction in machining time. | 9,762.2 | 2021-02-26T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
A Study on an Application System for the Sustainable Development of Smart Healthcare in China
The purpose of this study is to explore ways to apply information technologies such as big data, the Internet of Things (IoT), the mobile internet, and so on to the healthcare industry. By analyzing the impact path of such high-tech on healthcare, it proposes an application system for the sustainable development of so-called “smart healthcare”. It identifies the influencing factors of smart healthcare from three different perspectives: society, economy, and environment. It then constructs an indicator system containing 14 factors and comprehensively analyzes the importance and dependence of the factors by using the Fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) and Interpretative Structural Modeling (ISM) methods based on a multi-level hierarchy model. Using this theoretical framework, the paper reveals the hierarchical path of the sustainable development of intelligent healthcare as a whole, which starts from the social level, combines the social and economic levels to achieve a balanced development of benefits, and finally brings the benefits to the environmental level. Based on this finding, the paper develops a sustainable application system for intelligent medicine at three levels: government, enterprise, and user. The development of the sustainable application system for smart healthcare can provide theoretical guidance for model application evaluation.
I. INTRODUCTION
With economic growth, in general, there has been a growing interest in quality of life. Among many factors affecting quality of life, healthcare has become one of the main issues in people's lives. The importance of healthcare as a key determinant in quality of life is being recognized not only by individuals or industries but also by societies or nations. Thus, building a comprehensive healthcare system may not just give great benefits to the individuals' physical or mental health and quality of life, but also bring driving force to the harmonious, stable and orderly development of economy and society [1]. However, with the continuous increase of elderly population, the situation is not that optimistic. In some countries such as
China, Korea and Japan, the aging population leads to a rapid increase in medical expenses. The current economic recession worldwide is also causing national governments to adjust their resource allocations to reduce public health budgets. Against this background, the issue of the sustainable development of healthcare is increasingly gaining importance and should be addressed urgently at both the civil and national levels. It includes the improvement of the efficiency of healthcare infrastructure, the rational allocation of resources, and the coordinated and orderly development of the medical ecology, which will result in continuous development in the healthcare field as a whole. Recently, the advent and advancement of information technologies such as big data, the Internet of Things (IoT), cloud computing and artificial intelligence are comprehensively transforming the healthcare system into a more efficient, convenient, and personalized one [2]. Thus, it has become a key issue to organize a smart healthcare system from the perspectives of individuals, industries and governments and eventually realize the sustainable development of the system.
Existing literature on the new concept of smart healthcare lacks the perspective of sustainability, with most studies focusing on the unilateral construction of technology, society, environment or economy [3]. Some scholars discuss the application and promotion of the IoT, cloud services, website platforms and other means of smart healthcare services from the perspective of technological innovation [4]. There is still a long way to go for the development and introduction of new technologies, and many challenges are emerging along the way [2]. Some studies approach the construction of intelligent healthcare from the perspective of the environment [5], [6], whereas Govind explores the economic impact of smart healthcare, pointing out that smart healthcare has a significant optimization effect on resource allocation [7]. Still others, from the social perspective, explore the attraction of smart healthcare to professionals and the upgrading of the knowledge system [8]. Most of the prior studies emphasize only intelligent technology in a certain field and do not consider its application to the social and economic environment [9], [10]. Moreover, they lack a systematic and theoretical integration framework to provide guidance for the sustainable development of smart healthcare [11].
Based on the triple bottom line theory and the perspective of sustainability, this paper sets up a sustainable development system of smart healthcare which combines social, environmental and economic aspects. Combining artificial intelligence with these three aspects will improve the sustainable development of smart hospitals [12]. Combined with the theory of ecological efficiency, a systematic integration framework is constructed to provide guidance for the sustainable development of smart healthcare. The integration of the healthcare application system with an Internet of Things platform will make it more intelligent, more sustainable, more reliable and efficient, and lower in carbon emissions [13].
Udo and Jansson point out that sustainable development is multi-dimensional and multi-disciplinary, with significant complexity and uncertainty [14]. It is also difficult to use traditional methods to deal with the large amount of fuzzy information arising in the sustainable development of hospitals. Therefore, this study introduces fuzzy sets to provide a new way of thinking and processing for hospital decision-making. Based on the hospital selection mechanism, it also provides a visual analysis of the hierarchy and interrelationships among the indicators by using the Decision-Making Trial and Evaluation Laboratory (DEMATEL) and Interpretative Structural Modeling (ISM) methods [15]. Finally, incorporating the smart city agent theory [16] and stakeholder theory [17], this study constructs a multi-agent coordinated development system that comprehensively considers the social, economic, technological and environmental aspects.
We can draw the following conclusions from this research. First of all, solving social-level problems is the first step in the construction of the framework for the sustainable development of smart healthcare. Next, based on a supply chain system which is influenced by the social level, the economic level of smart healthcare should be constructed. Third, the social and economic benefits derived from the above process should be taken into account in the continuous improvement of environmental problems. Finally, based on the ISM model, this paper establishes the intelligent healthcare application system at three levels: government, company, and user. The research results of this paper provide an important theoretical reference and a practical contribution for the sustainable development of smart healthcare.
II. LITERATURE REVIEW
A. SUSTAINABLE DEVELOPMENT OF HEALTHCARE
Since the establishment of the United Nations Conference on the Human Environment in 1972, the sustainable development of healthcare has made great progress at the local, national, regional and international levels. However, the famous Brundtland Commission Report points out the unequal development of environment, economy and society [18]. Many researchers have also noticed the negative impact of healthcare on the environment [19], thus focusing on how to reduce this negative environmental impact [20].
Sustainable development may involve the process of institutional reform to align institutions with current and future needs [18]. Previous studies began to systematically implement impact mitigation to assess hospital practice [20]. Operators of Health Care Information Systems (HCIS), such as hospitals and local healthcare institutions, are responsible for leading the way to sustainable development, including the fair provision of care and the prevention of unnecessary treatment, which will improve efficiency and reduce the impact on the environment. A study from Italy shows that knowledge capital promotes the transformation of medical institutions toward sustainable development and encourages further investigation of strategic plans for sustainability within HCIS [21].
In addition, the Health Hospital Initiative provides tools and resources to promote sustainable development. The Global Reporting Initiative provides sustainability reporting to help organizations measure, understand and communicate the impact on key sustainability issues. The Dow Jones Sustainability Index (DJSI) is an index that evaluates the sustainability of organizations and was developed to help organizations recognize the new opportunities and risks of global sustainability [8]. Therefore, exploring sustainable indicators and assessment tools is the first step towards sustainable HCIS.
B. INTELLIGENT HEALTHCARE
With the development of Internet technology, interconnection and exchanges of medical data have become more intensively needed by individuals and organizations alike. As information technology and intelligence are constantly infiltrating the field of healthcare, the concept of ''smart healthcare'' has gradually emerged accordingly [3].
Intelligent medicine originates from the concept of the ''smart planet'' put forward by IBM. IBM's vision of the smart planet is based on instrumentation, interconnection and intelligence, which provides a way to improve productivity, efficiency and responsiveness for industry, infrastructure, processes, cities and society. Intelligent medicine refers to the construction of an interactive platform for medical information sharing based on electronic health records and the comprehensive use of IoT, internet, cloud computing, big data and other technologies, so as to realize the interaction of patients, medical institutions, medical personnel and medical equipment, and to intelligently match the needs of the medical biosphere. Intelligent medicine, or smart healthcare, is the cross-application of information technology and life science, and is a large health system for medical treatment, rehabilitation, nursing and pension, involving medical services, public health, medical security, drug supply security, health management and other aspects.
The continuous development of the IoT and the internet provides technical support for medical informatization. Dimitrov expounds the impact of big data on the field of healthcare in the United States from various perspectives and points out that the combination of big data and the healthcare industry will produce many potential values [22]. Problems in disease diagnosis and epidemic prevention can also be analyzed and predicted by big data medical treatment. The healthcare information system can promote data integration and information sharing, and support the establishment of an intelligent medical system. The application process of a healthcare information system can be divided into: information exchange, organizational cooperation, process reengineering, health organization management optimization and organizational culture construction. It can contribute to the establishment of local, regional and central health data coordination and exchange mechanisms.
C. INDEX INTERPRETATION
1) SMART SOCIETY
Intelligent medicine has a great demand for talent in various fields such as theoretical research, technology application and transformation, and engineering application. So, to attract relevant talent in these fields, improve employee satisfaction, and reduce the turnover rate, hospitals, medical organizations, and related institutions should offer various awards and training programs (C1) (Ryan-Fogarty et al., 2016). Through employee development plans, they can also upgrade employee knowledge levels and achieve knowledge spillover (C2) [20], [23], [24].
According to the latest clinical guidelines and standards, quality medical care is defined as ''continuously to please patients by providing effective and efficient medical services to meet patients' and providers' needs''. Providing patients with high-quality patient service (C3) can be said to be the top priority to achieve hospital service quality, which can also effectively avoid conflicts in doctor-patient relationship [25] and improve patient satisfaction [26]. Some small medical institutions may not be able to provide certain medical services due to the limited resources such as medical equipment or skills and capabilities of medical staff, which can be overcome with the help of telemedicine services (C4). Smart hospitals can also actively participate in the construction of community medical system, providing community medical consultation and health education, which may have a demonstration effect on hospitals nearby (C5) [20].
2) SMART ECONOMY
Inefficient resource allocation is a fundamental defect of public hospitals in developing countries. Inefficiency depletes the limited resources allocated to public healthcare [27].
Inefficient allocation may arise when most of the relevant resources are transported to a few tertiary healthcare hospitals [28]. Therefore, resource allocation efficiency (C6) is an important index to evaluate the healthcare system. As an important management tool, big data also brings both opportunities and challenges to the innovation and development of smart finance (C7). In order to seek the further development of the smart hospital, traditional financial management needs to make changes in many aspects. The cost of medical care reflects the demand for the efficient use of medical resources. Effective supply chain management (C8) improves operational efficiency and is a catalyst for the more effective use of medical resources [29]. Smart marketing (C9) plays an important role in organizational success. It can not only provide a sustainable competitive advantage, but also enable organizations to obtain excellent business performance from their marketing assets [30]. Therefore, adopting short-term measures to control costs (C10) and improving healthcare efficiency in the long run are the goals of smart healthcare.
3) SMART ENVIRONMENT
With the construction of sustainable medicine, more studies compare the effects of reusable and disposable medical devices [31], water saving and wastewater treatment [32], energy efficiency [33] and food selection, preparation and waste [34]. Medical waste management (C11) has gradually become a feature of sustainable health research [35], [36]. Environmental sustainability must address emerging threats to the natural environment, such as climate change and the loss of biodiversity. Based on the social environment, as well as the potentially economically constrained and challenging future (the so-called ''triple bottom line'' of the common welfare of health), combining healthcare provision with social healthcare to improve energy efficiency reduces energy and water consumption and improves the operational eco-efficiency of the medical sector (C12) [20]. In addition, the establishment of an environmental management system (C13), the formulation of a series of relevant hospital environmental policies, and the development of environmental responsibility training for employees can also effectively improve the ecological benefits of smart medicine. Hospitals, in a sense, are public places, and their capacity to handle various types of emergencies should be improved. Therefore, the construction of a hospital emergency management system (C14) is an important foundation to ensure the sustainable development of hospitals [37].
III. METHOD
The research process is designed as follows. First of all, we need to clarify our research theme, that is, what it means to construct the sustainable development system of smart healthcare. Then, we invite experts in related fields to revise the index system and fill in the questionnaire. Next, we obtain the calculation results by exploring the Fuzzy-DEMATEL and ISM models. Finally, we build a complete sustainable development system of smart healthcare. The whole process is shown in Fig. 1.
A. FUZZY-DEMATEL
Fuzzy mathematics, based on fuzzy set theory, is applicable to the analysis of fuzzy problems concerning the degree of correlation between elements; it is a method that simulates the way the human brain processes fuzzy information. Introducing fuzzy set theory and the subjective judgment of experts into triangular fuzzy quantization can reduce the subjectivity of the experts' grading [35], [63].
The Fuzzy-DEMATEL method not only retains the practical and effective advantages of the traditional DEMATEL method in factor identification, but also replaces the original crisp values with triangular fuzzy numbers, which can reflect the real situation of the problem more comprehensively, improve the credibility of the analysis results, and provide a more valuable reference for managers to make decisions [64].
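To illustrate what replacing crisp scores with triangular fuzzy numbers looks like in practice, the snippet below sketches a typical mapping from the five linguistic operators used in Step 2 of the algorithm to (l, m, u) triples. The specific numeric values are common textbook choices and are assumptions here; the paper's own semantic transformation table (Table 2) may use different values.

```python
# A minimal sketch, assuming a common five-level linguistic scale for
# Fuzzy-DEMATEL. The triangular fuzzy numbers (l, m, u) are typical values,
# not necessarily those of the paper's Table 2.
LINGUISTIC_TO_TFN = {
    "N":  (0.00, 0.00, 0.25),  # no influence
    "VL": (0.00, 0.25, 0.50),  # very weak influence
    "L":  (0.25, 0.50, 0.75),  # weak influence
    "H":  (0.50, 0.75, 1.00),  # strong influence
    "VH": (0.75, 1.00, 1.00),  # very strong influence
}

def to_tfn(rating: str) -> tuple[float, float, float]:
    """Convert one expert's linguistic rating into a triangular fuzzy number."""
    return LINGUISTIC_TO_TFN[rating]

# Example: one expert rates the influence of factor C3 on factor C8 as "H".
print(to_tfn("H"))  # (0.5, 0.75, 1.0)
```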
Our Fuzzy-DEMATEL model is described in Algorithm 1 below.
The Fuzzy-DEMATEL Algorithm
Step 1: For the problem under study, build a system of influencing factors F1, F2, . . ., Fn, and obtain a semantic matrix.
Step 2: Determine the influence relationship between each pair of factors by an expert scoring method and express the relationships in matrix form. Experts are invited to use the language operators ''no impact (N)'', ''very weak influence (VL)'', ''weak influence (L)'', ''strong influence (H)'', and ''very strong influence (VH)''. The original expert evaluations are converted into triangular fuzzy numbers by means of the semantic transformation table shown in Table 2. In converting the identified language variables into triangular fuzzy numbers, we applied equation (1).
Step 3: Convert the fuzzy data into crisp scores (CFCS) [65] to defuzzify the initial expert scores and obtain the n-th order direct influence matrix Z, which reflects the direct effect between the factors. This includes the following four sub-steps: (1) Normalize the triangular fuzzy numbers using the range Δ = max a^k_3ij − min a^k_1ij; in turn, we can calculate xa^k_1ij, xa^k_2ij and xa^k_3ij. (2) Normalize the left value (ls) and the right value (rs). (3) Calculate the crisp value after defuzzification. (4) Calculate the average crisp value.
Step 4: Normalize the direct influence matrix Z to obtain the standardized direct influence matrix G.
Step 5: Calculate the synthesis matrix T, also called the comprehensive matrix, as T = G + G^2 + · · · + G^n.
Step 6: Analyze the comprehensive matrix to reveal the internal structure of the sustainable application system. The elements of matrix T are summed by row to give the influence degree D_i, which represents the comprehensive influence of the row factor on all other factors; they are summed by column to give the affected degree R_i, which represents the comprehensive influence of all other factors on the column factor. The sum of the influence degree and the affected degree is called the centrality (D_i + R_i), which indicates the position of the factor in the system and the size of its role. The difference between the influence degree and the affected degree is called the causality (D_i − R_i), which reflects the causal relationship between the influencing factors. If the causality is greater than 0, the factor has a great effect on other factors and is called a cause factor; if the causality is less than 0, the factor is greatly affected by other factors and is called a result factor.
Step 7: Using a Cartesian coordinate system, the centrality and causality are used to mark the position of each factor in the coordinate system.
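To make the steps above concrete, the sketch below implements one common reading of the Fuzzy-DEMATEL pipeline in Python with NumPy: CFCS defuzzification of the experts' triangular fuzzy ratings, normalization of the direct influence matrix, computation of the comprehensive matrix T, and derivation of the influence degree D, affected degree R, centrality and causality. The function names, the normalization constant, and the closed form T = G(I − G)^(-1) (equivalent to the series G + G² + ··· when it converges) are standard choices, not necessarily identical to the paper's equations (1)–(8).

```python
import numpy as np

def cfcs_defuzzify(tfn_scores):
    """CFCS defuzzification of expert triangular-fuzzy ratings.

    tfn_scores: array of shape (K, n, n, 3) with, for each of K experts,
    the (l, m, u) rating of factor i's influence on factor j.
    Returns the crisp direct influence matrix Z of shape (n, n).
    """
    l, m, u = tfn_scores[..., 0], tfn_scores[..., 1], tfn_scores[..., 2]
    lmin = l.min(axis=0)                               # min left value per pair, over experts
    span = np.maximum(u.max(axis=0) - lmin, 1e-12)     # Δ = max(u) − min(l) per pair
    xl, xm, xu = (l - lmin) / span, (m - lmin) / span, (u - lmin) / span
    ls = xm / (1.0 + xm - xl)                          # normalized left score
    rs = xu / (1.0 + xu - xm)                          # normalized right score
    crisp = (ls * (1.0 - ls) + rs * rs) / (1.0 - ls + rs)
    return (lmin + crisp * span).mean(axis=0)          # average over the K experts

def dematel(Z):
    """Return influence degree D, affected degree R, centrality D+R and causality D−R."""
    n = Z.shape[0]
    G = Z / max(Z.sum(axis=1).max(), Z.sum(axis=0).max())   # normalized direct matrix
    T = G @ np.linalg.inv(np.eye(n) - G)   # comprehensive matrix: closed form of G + G^2 + ...
    D = T.sum(axis=1)                      # row sums
    R = T.sum(axis=0)                      # column sums
    return D, R, D + R, D - R
```

For instance, an array of shape (7, 14, 14, 3) holding the seven experts' converted ratings for the 14 factors could be passed through cfcs_defuzzify and then dematel to obtain the D + R and D − R values of the kind plotted in the cause-and-effect diagram of Fig. 2.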
B. ISM
Although the DEMATEL method can calculate the degree of importance of a specific factor in the influencing factor system, it cannot determine the internal correlations among the factors or divide them into hierarchical levels, so it is difficult to manage and control the factors effectively.
Conversely, as Attri suggests, ISM is a recognized method for identifying relationships between the specific elements that define a problem [66]. ISM originates from an interactive group learning process, but it can also be used by individuals. In this process, a set of directly or indirectly connected elements is constructed into a system model. ISM is used to understand the relationships between barriers and to form insights into a collective understanding of these relationships.
ISM is a mature qualitative tool which can be applied in various disciplines. For example, Luthra et al., through the application of ISM, discuss various obstacles to green supply chain management (GSCM) in the Indian automobile industry [67]. Talib et al. apply the ISM method to understand the interaction between total quality management (TQM) barriers in an organization [68]. Haleem et al. analyze the key success factors of world-class manufacturing practices using the ISM method [69]. Therefore, we choose to adopt the ISM method to classify the system structure in this study.
The basic steps to implement ISM are as follows: (1) Calculate the overall influence matrix F according to equation (9), where the matrix I is the identity matrix. (2) A threshold is introduced to eliminate redundant information and obtain the most streamlined matrix. According to trial calculations, the most suitable threshold calculation model is obtained, in which α and β are the mean and standard deviation of all elements in the comprehensive influence matrix T.
(3) According to the overall influence matrix of the system and the threshold value used to remove redundant factors, the reachable matrix M is obtained.
In the formula, 1 means there is a direct effect between the two factors, while 0 means there is no direct effect between the two factors.
(4) The accessible set Ri and the preceding item set Si of each factor are determined and used for hierarchical processing.
(5) Check the level-partition condition for each factor. If it holds, the corresponding factor is an underlying (bottom-level) factor, and the rows and columns corresponding to that factor are deleted from the matrix M.
(6) Repeat steps (4) and (5) until the factor set Nq (q = 1, 2, . . . , n) at each level is obtained and all factors in the accessibility matrix M have been deleted.
(7) According to the matrix obtained in step (6), the hierarchical structure diagram of influencing factors is drawn in the order in which the factors are crossed out.
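As an illustration of steps (1)–(7), the sketch below gives one plausible implementation of the ISM procedure on top of the DEMATEL comprehensive matrix T: thresholding with λ defaulting to α + β (a common rule consistent with the description above, though the paper's exact threshold formula is not reproduced here), building a reachability matrix, and partitioning the factors into levels via reachable and antecedent sets. The extraction convention used here pulls out, at each pass, the factors whose reachable set is contained in their antecedent set; depending on the convention adopted, these correspond to the top or bottom of the hierarchy. Function and variable names are illustrative.

```python
import numpy as np

def ism_levels(T, lam=None):
    """Partition factors into ISM levels from a DEMATEL comprehensive matrix T.

    lam: threshold for keeping an influence link; if None, mean + std of the
    elements of T is used (a common rule, assumed here rather than taken
    verbatim from the paper).
    Returns a list of levels, each a sorted list of factor indices (0-based).
    """
    n = T.shape[0]
    if lam is None:
        lam = T.mean() + T.std()
    # Overall influence matrix including self-influence, thresholded to 0/1.
    F = T + np.eye(n)
    M = ((F >= lam) | np.eye(n, dtype=bool)).astype(int)
    # Transitive closure (Warshall) so M behaves like a true reachability matrix.
    for k in range(n):
        M = M | (M[:, [k]] & M[[k], :])

    remaining = set(range(n))
    levels = []
    while remaining:
        level = [i for i in remaining
                 if {j for j in remaining if M[i, j]}      # reachable set R_i
                 <= {j for j in remaining if M[j, i]}]     # preceding item set S_i
        if not level:          # guard against malformed input
            break
        levels.append(sorted(level))
        remaining -= set(level)
    return levels
```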
IV. RESULT
To obtain the original data and design the questionnaire for the analysis, we selected 7 experts, all attending doctors with at least 15 years of working experience, from 4 different tertiary (third-level) hospitals located in northeast China.
The draft of the questionnaire was developed through literature review and analysis, and then sent to the 7 experts. If any one of the experts disagreed with the proposed measures in the questionnaire, the authors re-discussed the disputed points until the 7 experts reached unanimous agreement. This process went through several iterations. Next, individual interviews were conducted for data collection in order to improve accuracy and prevent mutual influence.
(1) Summarize the answers and corrections to obtain the fuzzy direct impact matrix, and then process the original data according to the CFCS method.
(2) Determine the direct impact matrix between the factors affecting SSP, as shown in Table 3.
(3) According to the formula T = G + G^2 + · · · + G^n, the comprehensive influence matrix T is obtained, as shown in Table 4. (4) The comprehensive impact matrix of the internal structure analysis of the indicator system is shown in Table 5.
(5) The DEMATEL cause and effect diagram is shown in Fig. 2.
As shown in Fig. 2, from the perspective of impact, patient service (C3), social impacts on communities (C5) and supply chain management (C8) have a high influence. The results show that C5 and C3 are the primary indicators of SSP development. C5 refers to the smart hospital actively participating in the construction of the community medical system, using Internet big data to support community medical care, and having a demonstration effect on the surrounding hospitals in the community. This index belongs to the combination of ''intelligence'' and society. It is a new index proposed in this paper. Traditional hospitals are, in general, very sensitive to social impact, but under the SSP development framework, traditional indicators must be updated to meet the new needs. C3 refers to the combination of ''wisdom'' and medical treatment, which establishes citizens' electronic medical and health records. It can provide personalized support for patients to improve patient satisfaction, while saving time and resources. This index is also a combination of ''intelligence'' and society and is likewise a new index proposed in this paper. In traditional hospitals, patient service is more important in the relationship between doctors and patients, and the role of intelligent technology is not obvious. However, under the development framework of SSP, new patient service or intelligent medical treatment can be provided in the form of innovative service, electronic medical records, patient care, intelligent consultation and other aspects. In addition, C5 and C3 will promote the development of another key element, C8, which refers to supply chain management strategy and sustainable procurement, drug supply, equipment sharing, and information sharing. The development of C5 and the support for C3 will enhance the support for C8. C8 also seems to be an indicator for smart hospitals, since it represents obtaining more sustainable returns. (6) According to the test results, expert suggestions and practical requirements, λ is set to 0.16 in this study. According to equations (9) to (11), as shown in Table 6, the accessibility matrix M composed of 0 and 1 is constructed; 1 indicates a strong relationship between two factors while 0 indicates no or a weak relationship between them. Then, the accessible and antecedent sets of the factors are shown in Table 7. (7) Finally, a hierarchical path describing how the indicators interact with each other is built from the ISM model, as shown in Fig. 3.
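For context, a short usage sketch of the ism_levels routine given after the ISM steps above, run with the λ = 0.16 threshold reported here; the matrix values are random placeholders, since the actual comprehensive influence matrix of Table 4 is not reproduced in this text.

```python
import numpy as np

# Hypothetical 14x14 comprehensive influence matrix standing in for Table 4;
# the real values come from the expert questionnaire, not from this sketch.
rng = np.random.default_rng(0)
T = rng.uniform(0.0, 0.3, size=(14, 14))

levels = ism_levels(T, lam=0.16)   # threshold chosen as in this section
for depth, factors in enumerate(levels, start=1):
    print(f"Level {depth}: {['C%d' % (i + 1) for i in factors]}")
```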
The influencing factors are divided into four levels: C3, C5 and C8 have the highest priority, while C2, C6 and C10 have the second priority. C4, C7 and C11 are at the general level. C1, C9, C12, C13 and C14 are at the lowest level. The red arrows indicate interactions within the same level, the green arrows indicate impacts on the upper layer, and the black arrows indicate impacts across layers. These provide complete guidance for SSP. The development of smart medicine needs to start from the first level (C3, C5, C8) and, through the intermediation of C6 and C10, achieve economic and comprehensive effects. The environmental factors C11, C12, C13 and C14 have a relatively low priority, and they will be affected by the other core elements of smart medicine.
V. DISCUSSION
In the whole model, patient service, social impacts on communities and supply chain management are at the first level, belonging to the social and economic levels, respectively. It means that solving social and economic problems is the very first step in the sustainable development of smart healthcare. Solving social problems includes how to improve patient satisfaction and how to perfect the construction of surrounding communities. Patient service is at the first level, which is in line with the core characteristics of smart healthcare. Patient satisfaction with treatment is the most important predictor of overall hospital care satisfaction [70], [71]. Based on stakeholder theory, highly satisfied patients are essential for the sustainability of any healthcare institution [72]. Electronic medical records enable doctors to understand patients quickly and deeply, provide patients with high-quality services, and effectively avoid doctor-patient conflicts [25]. In addition, through electronic means such as websites, the hospital can publish various kinds of information and communicate and interact with patients in real time [45], which can also improve patient satisfaction [26]. As a social indicator, community is at the first level and belongs to the stakeholders of the hospital, which is a core characteristic of smart healthcare. In recent years, due to the increasing demand for public healthcare in China, the increase of medical expenses, the limitation of medical resources and the subsequent changes in medical practices, the government, in allocating the limited public healthcare resources, hopes to assign them to hospitals according to the quality of medical treatment [73], [74]. As there is a growing demand for high-quality healthcare services and appropriate hospitals [44], hospital managers seek to meet the demand by improving the medical quality [42], [73]. Therefore, whether a hospital provides effective community service is an important standard in evaluating smart hospitals. In the meantime, supply chain management connects all hospital stakeholders, which is the first level of economic problems. Supply cost usually accounts for 40% of hospital operating costs [75], and the supply cost ratio is generally regarded as an important indicator of hospital supply chain performance [52]. The healthcare industry needs to pay attention to supply chain innovation, because efficient supply chain management can reduce the cost of hospital supplies considerably. It seems that the research conclusions on patient service, community construction and supply chain management reflect the focus of social and economic development under smart healthcare.
Knowledge upgrading, resource allocation efficiency and cost control are at the second level. Except for knowledge upgrading, these factors seem to support economic aspects. But knowledge upgrading, though it seemingly belongs to the social level, cannot be ignored since it also works as an economic indicator from the perspective of human capital. Knowledge upgrading of medical staff can solve various social problems of hospitals. Previous studies have shown that in hospital nursing, the allocation of high-quality medical personnel, a comfortable working environment and patient satisfaction are related to a certain extent [38], [43]. Therefore, medical talent is not only a core element in evaluating medical institutions, but also an important cornerstone for upgrading the medical industrial structure and improving innovation capabilities [24]. The framework theory of urban capacity proposes that the idea of sustainable development should be emphasized in undertaking urban planning. Therefore, the economic problems of the second level are mainly linked to resource allocation efficiency and cost control [46]. From the perspective of social medical security, there has been a chronic shortage of medical resources and the medical supplies have not sufficiently met the demand. Even the solutions put forward by countries at the legal or policy level have often resulted in irrational or uneven resource assignment. So, in order to rationalize the allocation of scarce resources and reduce the relative cost, health authorities, hospitals and other healthcare departments coordinate the allocation of medical materials among different regions [57]. Therefore, assessing whether the resource allocation is appropriate has become one of the important measures to evaluate hospital operation and management. Another economic index belonging to the second level is cost control. The measurement of medical cost usually focuses on controlling the patient's medical cost, improving labor productivity, and maintaining a high level of capacity utilization. Some empirical and theoretical studies have shown that successful healthcare institutions often have superior management ability in cost control [54]. Another study argues that although knowledge upgrading, resource allocation efficiency and cost control belong to different levels of the evaluation system, they are related to each other and that knowledge upgrading and efficient resource allocation are at the core of cost control [24]. Therefore, in the dynamic healthcare industry, the cost management of the hospital is closely related to the overall hospital performance.
Telemedicine, intelligent finance and medical waste management are at the third level. Although these three fields look totally different from each other, they are not isolated. The first two factors are associated with the social and economic aspects, which belong to the primary stage of smart healthcare. The last one is closely related to the environmental level, which belongs to the development stage of intelligent healthcare. At this level, the value chain of the smart healthcare industry is addressed, covering telemedicine, intelligent finance and medical waste management; each link complements the others and has an important connection. Telemedicine belongs to the third level of social problems. Rapid changes in population and market are forcing the healthcare community to adjust its business model. Telemedicine represents a major change in the way doctors care for patients, and is an important method to evaluate the sustainable development of smart healthcare [76], [77]. The application of telemedicine will greatly increase the quality of healthcare, improve the therapeutic effect, and reduce the cost of healthcare [55]. In addition, as the demand for convenient and personalized care is growing, the demand for telemedicine will continue to increase accordingly. In the process of realizing the whole value chain, financial accounting control also has a certain impact on the development of smart healthcare. Smart finance is helpful for data collection, analysis and information transmission, facilitating hospital decision-making. Big data provides new opportunities for accounting management to play an active role in data creation and decision support [50]. Hospitals need to use big data to build and improve their financial relations and adjust them dynamically, so as to give full play to the incentive role of financial relations in financial performance. Finally, medical waste management has become a new environmental issue at this level. In recent years, as the number of hospitals has soared dramatically, the need for medical waste treatment has increased accordingly. Improper treatment and disposal of medical waste may cause high-risk infection and injury, and may pose serious health hazards to medical workers as well as the general public [59]. The current treatment capacity of traditional hospitals is not large enough to meet the growing demand for medical waste treatment and disposal, so there is an urgent need for smart healthcare to provide new treatment and disposal methods [58]. It can be said that the effective operation of telemedicine, intelligent finance and medical waste management is inseparable from the construction of big data and the intelligent medical internet. At this level, we have verified the importance of telemedicine, smart finance and medical waste management to the sustainable development of smart healthcare from the three bottom lines of society, economy and environment.
Talent attraction, intelligent marketing, operational eco-efficiency, the environmental management system and emergency management are at the final level of construction. There is still a long way to go in realizing this last level, from the construction of the patient, community and supply chain systems, to the management of the internal knowledge system, resource efficiency and operating costs of the hospital, and finally to the leap and upgrading of telemedicine, intelligent finance and waste management. Talent attraction is the last level of social problems. Talent resources are not only the first resource for economic and social development, but also an important support for upgrading the industrial structure and improving innovation capabilities. A hospital's talent resource is its competitive advantage, so talent attraction is drawing more research attention [39], [40]. In addition, due to the aging population in some countries, changes in patient groups, changes in the labor market and other issues, it is essential to attract, cultivate and retain motivated and high-quality employees [41]. Secondly, the economic problem at the last level is smart marketing. Intelligent marketing integrates big data, the Internet and IoT, analyzes patient and market information in time, and dynamically evaluates alternatives. It connects the hospital with its environment and then helps the hospital become proactive to better adapt to environmental changes [30]. The theory of ecological strategy is consistent with the theory of sustainability, which focuses on relative efficiency gains [60]. Operational eco-efficiency is one of the potential factors of hospitals' competitive advantage. Good environmental performance can promote better production and operation in the hospital [53]. Therefore, as one of the indicators to measure the sustainable development of the hospital, operational eco-efficiency not only saves money for the hospital, but also reduces its destructiveness to natural resources. In addition, providing a safe and reliable environmental management system is also one of the key environmental issues of healthcare institutions [78]. On the contrary, inappropriate environmental management systems can make medical waste infectious and toxic. If not handled properly, medical waste can pose significant potential health and environmental risks [79]. Therefore, based on big data, IoT and other such technologies, it is meaningful to analyze and compare the medical environment, and then establish a reasonable environmental management system for the sustainable development of the hospital. A hospital, as a public place, often needs to respond in many ways, from the public social response outside the hospital to the professional medical response involving many aspects and departments inside. A series of linked reaction mechanisms needs to be established, from accident occurrence to the pre-hospital, in-hospital and post-hospital stages, to ensure the timeliness of the response [80]. Therefore, unified planning and construction, overall organization and coordination, all-round promotion, and multi-department linkage are necessary. Based on these, emergency management ability, as the last level of environmental problems, becomes one of the essential indicators to evaluate intelligent healthcare. Smart hospitals are different from general enterprises. For smart hospitals, the main goal in the early stage is to create social benefits and undertake social responsibilities.
But if we expand our vision, we should pay equal attention to social and economic benefits, which is in line with the long-term interests and planning of enterprises. However, the sustainable development of smart hospitals is different from that of traditional hospitals. In the pursuit of economic and ecological benefits, environmental benefits should also be taken into account, which is consistent with the triple bottom line theory. Therefore, the location of talent attraction, intelligent marketing, operational eco-efficiency, the environmental management system and emergency management in the model shows the characteristics of intelligent healthcare.
According to the Smart City Agent theory [16], the model can be divided into three levels: the government, company and user levels [46]. Based on IoT-related technologies, smart healthcare integrates relevant data of healthcare-related organizations such as large hospitals, community hospitals, health departments, social security departments, insurance companies and other institutions through cloud data, big data and other technical means [82]. As Figure 4 shows, the medical and health departments at the government level have provided a good intelligent healthcare coordination mechanism, which uses the existing public resources to promote internal resource sharing, build a unified healthcare information sharing platform, and break through the barriers between industries. In order to develop a sustainable smart city which includes a smart healthcare system, sustainable urban design and planning is necessary [47]. The urban planning department can reasonably plan the location of smart hospitals so as to improve the radiation area of medical resources [46]. Besides, a smart drug regulatory system can effectively regulate drug use behavior and reduce drug costs in the healthcare process [24]. Smart medical waste management and smart environmental management systems can help environmental protection departments improve ecological efficiency and promote sustainable development [60]. At the company level, the smart hospital adopts advanced information technology to improve the hospital process, share information with other healthcare institutions, and improve the patient's experience. Unlike the supply chain of traditional hospitals, that of the smart hospital adopts a cloud information system to provide flexible management ability, which can obtain data from supply chain partners anytime, anywhere through the network [9]. Accounting and auditing institutions can conduct accounting according to the transaction information of smart finance. As third-party enterprises, insurance institutions can upload electronic insurance contracts and store them in the form of smart contracts on the big data platform to improve hospital emergency management capabilities [49]. The system uses intelligent terminal equipment and customized professional medical software to provide accurate and efficient customized solutions for the community healthcare center and improve work efficiency. At the user level, the construction of intelligent healthcare is inseparable from the knowledge upgrading of medical staff and the cultivation of high-quality medical talent, which will eventually enhance patient satisfaction. With the help of big data, which can improve healthcare and service development policies based on patients' feedback, personalized intelligent medical services are sure to increase patient satisfaction [25], [26], [83].
VI. CONCLUSION
Currently, research on smart healthcare, in general, remains at a shallow stage, focusing only on single-level problems arising in the value chain of the smart healthcare industry and lacking a systematic hierarchical framework. Different from what previous studies have done, this study explores ways to systematically apply information technologies to smart healthcare. To do so, it constructs a sustainable development framework of smart healthcare and provides the interaction and hierarchical influence path between indicators under the framework. Based on the triple bottom line theory and stakeholder theory, it constructs an indicator system of 14 factors at three levels, analyzes the interaction among the smart healthcare indicators by means of the Fuzzy-DEMATEL and ISM research methods, obtains the hierarchical model of smart healthcare, and then reveals the path of smart healthcare. This study also considers the embeddedness of new technology, which can promote the system development of intelligent healthcare and the sustainability of the smart medical ecology. While providing useful guidance for model application, this study has some limitations for future studies to address. The future research avenues are as follows. Firstly, this paper constructs a framework which is a systematic and brand-new indicator system, but it is not a complete one and there is still a possibility of missing indicators, which needs further exploration and improvement. Secondly, since the degree of the impact relationship between indicators is analyzed based on information derived from a survey of experts, there may be a subjectivity bias involved in the process. Although fuzzy set theory is used to address the problem, there may still be some other errors that cannot be eliminated. Lastly, the research conclusions of this paper need to be verified and supplemented in medical practice. These problems need to be addressed carefully by constantly improving the thinking process and research methods of smart healthcare in the framework of big data.
ZHIGANG YAN received the master's degree from LNU. He is currently an Associate Research Fellow with Panjin Institute of Industrial Technology, Dalian University of Technology, China. He has been selected as ''Qian'' layer in the Liaoning BaiQianWan Talents Program. He possesses rich experience in industry, university, and research cooperation as well as science and technology policy research. He has led more than ten scientific research institutions above the provincial level. As a research author, he has published more than ten international and domestic articles, such as ''Research management and platform construction in the context of technology transfer'' and ''See the Enterprise science and technology innovation practice in the theoretical perspective.'' | 9,433.2 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Is the Interplay between Epigenetic Markers Related to the Acclimation of Cork Oak Plants to High Temperatures?
Trees necessarily experience changes in temperature, requiring efficient short-term strategies that become crucial in adaptability to environmental change. DNA methylation and histone posttranslational modifications have been shown to play a key role in both epigenetic control and plant functional status under stress by controlling the functional state of chromatin and gene expression. Cork oak (Quercus suber L.) is a keystone species of the Mediterranean region, growing at temperatures of 45°C. This species was subjected to a cumulative temperature increase from 25°C to 55°C under laboratory conditions in order to test the hypothesis that the epigenetic code is related to heat stress tolerance. Electrolyte leakage increased after 35°C, but all plants survived to 55°C. DNA methylation and acetylated histone H3 (AcH3) levels were monitored by HPCE (high performance capillary electrophoresis), MS-RAPD (methylation-sensitive random-amplified polymorphic DNA) and Protein Gel Blot analysis, and the spatial distribution of the modifications was assessed using a confocal microscope. DNA methylation analysed by HPCE revealed an increase at 55°C, while MS-RAPD results pointed to dynamic methylation-demethylation patterns over the course of the stress. Protein Gel Blot analysis showed the abundance index of AcH3 decreasing from 25°C to 45°C. The immunohistochemical detection of 5-mC (5-methyl-2′-deoxycytidine) and AcH3 corroborated the previous results. These results indicate that epigenetic mechanisms such as DNA methylation and histone H3 acetylation have opposite and particular dynamics that can be crucial for the stepwise adjustment of this species to such a high stress level (55°C), allowing its acclimation and survival. This is the first report that assesses epigenetic regulation in order to investigate heat tolerance in forest trees.
Introduction
Plants necessarily experience changes in temperature during their life cycle. A diversity of cellular targets is greatly affected by atypically high temperatures that can induce a re-setting of physiological, biochemical and molecular programs and affect plant growth and performance [1][2][3]. Epigenetic modifications in the genome can be induced by environmental signals, and thus, the single genome in a plant cell gives rise to multiple epigenomes in response to different environmental cues [4][5][6]. The control of gene expression based on chromatin organization rather than on primary DNA sequence information is referred to as epigenetics [7]. Epigenetic modifications occur without changing original nucleotide sequence and can be achieved on several interdependent levels that include covalent modifications of DNA and histones [8,9].
A number of studies have shown that DNA methylation and histone posttranslational modifications play a key role in epigenetic control and plant functional status under stress (e.g. [4,6,8,10,11]) by controlling the functional state of chromatin and gene expression [12][13][14][15]. These epigenetic marks are generated fast, can be transmitted across cell divisions (meiotically and mitotically) and can also be reversed, providing a way to confer plasticity in the plant response and temporary ''memory'' strategies [8,16]. Experiments investigating this subject in Arabidopsis showed that prolonged heat stress (37°C, 42°C) induces a transient release of gene silencing [17,18], but once a plant is removed from stress, gene expression is re-established within 48 hours.
Histone acetylation and DNA methylation can activate or repress transcription by generating "open" or "closed" chromatin configuration [19][20][21]. Thus, open chromatin increases the accessibility of the genome to transcription machinery, while closed chromatin represses gene expression by limiting the accessibility [19]. Configuration of chromatin at specific loci also controls somatic homologous recombination; heat stress affects genetic stability through chromatin remodelling, altering accessibility of DNA for repair and recombination [17].
It has long been suspected that a link exists between heat stress, chromatin remodelling, and epigenetic regulation of gene expression but further study is required to confirm the existence and nature of such a link [17]. The work of Kumar and Wigge [22], for example, identified histone H2A as a thermosensor in Arabidopsis and revealed a direct link with DNA methylation. Whilst significant progress has been made in understanding the physiological, cellular and molecular mechanisms of plant response to environmental stress factors [1] our understanding of how plants cope with climate challenges is still very limited. Such insight is required to understand heat-induced epigenetic processes. In fact, because these epigenetic traits exhibit characteristic dynamics during growth and development they may be of crucial importance in exploring and understanding adaptation related processes throughout the life cycle of trees, particularly in response to stress. There is an urgent need to determine the adaptive potential of forest trees given their importance in ecosystem functioning and the associated ecological and economic services they provide.
This topic was addressed in Quercus suber L. (cork oak) plants that were demonstrated to be extremely tolerant to elevated temperatures [23,24]. In the field, cork oak plants can be exposed to temperatures near 40-45°C (in shade) and experience daily stress [23,25]. Cork oak is widely distributed and withstands a variety of climates with contrasting temperatures and rainfall [26], making summer drought and high temperatures clear selective agents [27]. These factors are thought to be among the most significant in the increasing mortality of forests in response to global climate changes [28].
Cork oak's ability to acclimate to stress conditions may be an important factor in the tolerance of this species to high summer temperatures [23,24]. In the Mediterranean area, cork oak is of great ecological and economic importance. It is expected to be severely affected by climate change due to the increased intensity and duration of the drought and heat periods expected for this region [27].
Few studies have been conducted which test the effect of high temperatures in Quercus and the majority of those which have been conducted focused on the photosynthetic apparatus and volatile organic compounds production [23,25,[29][30][31]. At the molecular level information is still scarcer [32] particularly for such high temperatures. Epigenetic changes that may occur under such conditions therefore remain largely unexplored.
To test the hypothesis that the epigenetic code could be related to heat stress tolerance in cork oak, DNA methylation and acetylated histone H3 (AcH3) levels were monitored during cumulative high temperature stress from 25°C to 55°C (in 10°C steps). In addition, the spatial distribution of these modifications was followed by immunolocalization in order to validate the results and analyse the possible correlation between heat stress tolerance and the studied epigenetic marks. The aim of this work was to collect, for the first time, epigenetic knowledge related to heat tolerance in cork oak and contribute to the current understanding of epigenetic control of heat acclimation in forest trees.
Materials and Methods
Plant material and experimental design
Eight-month-old cork oak plants were acquired from the forest plant producer ANADIPLANTA (located in Central Portugal) and transferred from semi-controlled greenhouse conditions to a climate chamber for a 2-week acclimation period. The climate chamber environment was kept constant (air temperature = 25°C; relative humidity = 60-70%; photosynthetic photon flux density = 250 µmol m−2 s−1; watering = field capacity; photoperiod = 16 h).
During the experimental treatment, relative humidity, irradiance, watering and photoperiod were held constant, while air temperature was gradually increased by 10°C every 3 days from 25°C to 55°C; the peak temperature was maintained for 3 hours. The minimum daily temperature was 20°C during the 8 night hours.
Sampling occurred on the third day, during peak heat hours (around noon), for each temperature (25°C, 35°C, 45°C and 55°C). Fully expanded leaves were collected from each treatment, frozen in groups of 5 individuals (pools) in liquid nitrogen and stored at −80°C for subsequent analyses. Leaf sections were also fixed in paraformaldehyde for later immunohistochemical detection. Fresh leaf samples were used for determination of relative electrolyte leakage.
Percentage of survival, visual leaf damage and determination of relative electrolyte leakage
The percentage of surviving plants and visual leaf damage were recorded for each temperature. To obtain more information on the cell-membrane damage caused by heat stress, the membrane permeability of the leaves was measured by electrolyte leakage. Leaves were rinsed three times with deionized water to remove surface-adhered electrolytes, then placed in tubes containing 20 mL of deionized water and incubated at 25°C on a shaker. Twenty-four hours later, the electrical conductivity of the bathing solution (C1) was determined using a conductivity instrument (pH 340/ION, WTW, Germany). The tubes were then autoclaved at 100°C for 25 min and subsequently maintained at 25°C. Finally, total electrical conductivity (C2) was measured and electrolyte leakage was calculated using the following equation: relative electrolyte leakage (%) = (C1/C2)×100. Eight biological samples were analysed.
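For clarity, the leakage calculation can be expressed in a few lines of Python; this is a minimal sketch assuming paired conductivity readings per sample, and the numeric values are placeholders rather than measurements from the study.

```python
# Relative electrolyte leakage, as defined in the text: (C1 / C2) * 100.
# C1 = conductivity after 24 h incubation, C2 = total conductivity after autoclaving.

def relative_electrolyte_leakage(c1: float, c2: float) -> float:
    """Return leakage as a percentage of total electrolytes."""
    if c2 <= 0:
        raise ValueError("Total conductivity (C2) must be positive.")
    return (c1 / c2) * 100.0

# Hypothetical readings (µS/cm) for eight biological samples at one temperature.
c1_readings = [112.0, 98.5, 105.3, 120.1, 99.8, 110.4, 108.2, 115.6]
c2_readings = [580.0, 545.0, 560.2, 601.3, 550.7, 570.9, 565.5, 590.0]

leakage = [relative_electrolyte_leakage(c1, c2) for c1, c2 in zip(c1_readings, c2_readings)]
print(f"Mean relative electrolyte leakage: {sum(leakage) / len(leakage):.1f}%")
```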
Nuclei isolation
Nuclei were isolated from 500 mg of frozen leaves using the protocol described by Haring et al. [33] with the following modifications: samples were transferred to 12 mL tubes containing 8 mL ice-cold cell isolation buffer (10 mM Tris pH 8.0, 400 mM sucrose, 10 mM Na-butyrate, 0.1 mM phenylmethylsulfonyl fluoride (PMSF), 5 mM β-mercaptoethanol, protease inhibitors 1 mg/mL) and filtered through 3 layers of Miracloth into a new ice-cold 12 mL tube. After centrifuging the filtrate (3000×g, 15 min, 4°C), the supernatant was removed and the pellet was resuspended in 5 mL ice-cold nuclei isolation buffer A (10 mM Tris pH 8.0, 250 mM sucrose, 10 mM Na-butyrate, 10 mM MgCl2, 1% v/v Triton X-100, 0.1 mM PMSF, 5 mM β-mercaptoethanol, protease inhibitors 1 mg/mL) and incubated for 10 minutes on ice. The solution was centrifuged (3000×g, 15 min, 4°C), the resulting supernatant was removed and the pellet was resuspended and incubated in 5 mL ice-cold nuclei isolation buffer until the pellet was light green. After centrifugation (3000×g, 15 min, 4°C) the supernatant was removed and the pellet was resuspended in 8 mL ice-cold nuclei isolation buffer B (10 mM Tris pH 8.0, 1.7 M sucrose, 10 mM Na-butyrate, 2 mM MgCl2, 0.15% v/v Triton X-100, 0.1 mM PMSF, 5 mM β-mercaptoethanol, proteinase inhibitors 1 mg/mL). The solution was centrifuged (3000×g, 15 min, 4°C), the supernatant was removed and the isolated nuclei were kept at −80°C until DNA or protein extraction.
Global nuclear DNA methylation
Nuclear DNA was extracted from the pre-isolated nuclei following the procedure described by Thomas et al. [34], modified at the following points: nuclear pellets were transferred to a 2 mL tube with 1.25 mL buffer 1 (0.25 M NaCl, 0.2 M Tris-HCl pH 7.6, 0.05 M Na2EDTA pH 8.0, 2.5% v/v β-mercaptoethanol, 2.5% w/v polyvinylpyrrolidone (MW 40,000)). The mixture was briefly mixed by vortexing and centrifuged (2600×g for 5 min at 4°C). After removing the supernatant, the pellet was resuspended in 500 µL buffer 2 (0.05 M NaCl, 0.2 M Tris-HCl pH 8.0, 0.05 M Na2EDTA pH 8.0, 2.5% v/v β-mercaptoethanol, 2.5% w/v polyvinylpyrrolidone (MW 40,000), 3% sarkosyl) and incubated at 37°C for 30 min with gentle shaking. After adding an equal volume of chloroform/isoamyl alcohol, centrifuging and collecting the aqueous phase in a new tube, 350 µL of cold isopropanol was added and slowly mixed to precipitate the DNA. The DNA was collected and transferred to 500 µL of 70% ethanol. The ethanol-DNA mixture was centrifuged (19000×g for 10 min at 4°C), the supernatant was discarded and, after drying completely, the DNA pellet was resuspended in 100 µL ddH2O. DNA suspensions were purified using a phenol-chloroform solution and the completely air-dried DNA pellets were resuspended in 12 µL ddH2O. An aliquot of each extracted sample was used to evaluate DNA concentration and integrity and to detect residual RNA.
DNA hydrolysis and global DNA methylation analysis were performed by high performance capillary electrophoresis (HPCE) as previously described by Hasbún et al. [35]. Four pools of five samples and three analytical measurements were analysed for each experimental situation. The methylation content of each DNA sample was quantified as: 5-mdC peak area×100/(dC (deoxycytidine) peak area + 5-mdC peak area).
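The HPCE quantification reduces to a simple ratio of peak areas; the sketch below assumes each pooled sample yields paired 5-mdC and dC peak areas, with the values shown being purely illustrative.

```python
# Global DNA methylation from HPCE peak areas, as defined in the text:
# %5-mdC = 5-mdC peak area * 100 / (dC peak area + 5-mdC peak area).

def percent_methylation(area_5mdc: float, area_dc: float) -> float:
    total = area_5mdc + area_dc
    if total <= 0:
        raise ValueError("Peak areas must sum to a positive value.")
    return area_5mdc * 100.0 / total

# Hypothetical peak areas for three analytical replicates of one pooled sample.
replicates = [(15.2, 48.9), (14.8, 50.1), (15.5, 49.4)]  # (5-mdC, dC)
values = [percent_methylation(m, c) for m, c in replicates]
print(f"Mean global methylation: {sum(values) / len(values):.2f}%")
```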
Methylation-sensitive random-amplified polymorphic DNA (MS-RAPD)
Genomic DNA was extracted from 75 mg of frozen leaves with a plant genomic DNA extraction kit (DNeasy Plant Mini Kit, Qiagen, Germany) according to the manufacturer's instructions. DNA yield and purity were assessed by spectrophotometry (as described in Valledor et al. [36]) and by agarose gel electrophoresis with direct comparison against phage λ DNA.
To study the epigenetic changes in specific DNA sequences, MS-RAPD was used, a modification of the original RAPD technique that is closer to MSAP (methylation-sensitive amplified polymorphism) analysis. All steps in the protocol were carefully standardized and thoroughly described in order to ensure technical reproducibility.
In a first step, two methylation-sensitive isoschizomers (HpaII and MspI) were used in parallel to digest the DNA.
After digestion, a standard RAPD procedure was used to amplify the restriction fragments. In the first reaction, 250 ng of each extracted DNA was added to a restriction mixture containing 1× restriction buffer and 10/20 U of restriction endonuclease to a final volume of 30 µL, with one mixture per endonuclease (HpaII/MspI; all endonucleases and buffers were supplied by New England Biolabs, USA). Restriction mixtures were then incubated at 37°C overnight. The reaction was stopped by placing the tubes on ice, and restriction was checked by agarose electrophoresis. PCR amplification was performed according to Cocconcelli et al. [37] in 20 µL reaction mixtures containing 12-14 ng DNA, 1× PCR buffer (Invitrogen, USA), 3.5 mM MgCl2, 75 pM dNTP, 0.25 µM of each primer and 1 U Taq polymerase (Invitrogen, USA). In preliminary work, a total of 20 primers was tested (one 'arbitrary' primer set - Operon OPH). From this preliminary test, only the 10 most polymorphic primers were selected and used in the current manuscript (see Table 1).
Amplification was performed in a thermocycler in 96-well plates. The cycling conditions were as follows: initial denaturation at 94°C for 5 min, followed by 45 cycles of 94°C for 1 min, annealing at 29°C for 1 min and extension at 72°C for 2 min. A ramp of 1.5 min was used between annealing (29°C) and elongation (72°C). None of the primers used in this study revealed any change in DNA fingerprint within the samples when RAPD analysis was performed on undigested genomic DNA, so all of the quantified changes are the result of changes in DNA methylation and not of sequence differences between individuals.
The products of the RAPD assay were resolved on 1.75% agarose gels. Interpretation of MS-RAPD bands followed the representation of MSAP (methylation-sensitive amplified polymorphisms) detected by HpaII/MspI endonuclease digestion according to Valledor et al. [38], and the appearance-disappearance of bands was used to study the variation of methylation, which could be classified into two categories: de novo methylation and demethylation events. Four pools of five samples were analysed. The complete protocol was performed two independent times to ensure the reliability of the results. Only bands consistent between batches were considered for analysis.
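A hedged sketch of how band appearance and disappearance between consecutive temperatures could be tallied is given below; the presence/absence calls are invented for illustration, and the mapping of these counts onto de novo methylation versus demethylation events is deliberately left out because it follows the MSAP interpretation of Valledor et al. [38] rather than a simple one-to-one rule.

```python
# Scoring MS-RAPD band changes between two consecutive temperatures.
# Bands are encoded 1 (present) / 0 (absent); how appearances and
# disappearances translate into "de novo methylation" vs "demethylation"
# depends on the HpaII/MspI pattern and is not implemented here.
from typing import Dict, List

def score_band_changes(profile_a: List[int], profile_b: List[int]) -> Dict[str, int]:
    """Count band appearances and disappearances from temperature A to B."""
    appeared = sum(1 for a, b in zip(profile_a, profile_b) if a == 0 and b == 1)
    disappeared = sum(1 for a, b in zip(profile_a, profile_b) if a == 1 and b == 0)
    return {"appeared": appeared, "disappeared": disappeared}

# Hypothetical presence/absence calls for a handful of bands at 25 and 35 °C.
bands_25 = [1, 1, 0, 1, 0, 1, 1, 0]
bands_35 = [1, 0, 1, 1, 1, 1, 0, 0]
print(score_band_changes(bands_25, bands_35))
```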
Protein extraction
Proteins were extracted from the pre-isolated nuclei following part of the procedure described by Shechter et al. [39]. In brief, nuclear pellets were transferred to a fresh 1.5 mL tube with 400 µL of 0.8 N H2SO4. The suspensions were vortexed until clumps were dissolved and incubated on a rotator for 30 min. After this time, the solutions were sonicated and then centrifuged (15000×g for 10 min at 4°C) to remove nuclear debris. Supernatants were transferred to a fresh 1.5 mL tube and 140 µL of 100% trichloroacetic acid (TCA) was added. Suspensions were incubated on ice for 30 min and then centrifuged at 15000×g for 10 min at 4°C. Supernatants were discarded and the pellets were washed with 1 mL of cold acetone. Pellets were recovered by centrifugation (15000×g, 10 min, 4°C) and supernatants were discarded. This washing step was repeated twice. The protein pellets were then air-dried at room temperature and later dissolved in an appropriate volume of protein rehydration buffer.
Protein blot
Protein gel blot analysis was performed as described by Valledor et al. [38]. In brief, proteins and standards were separated by electrophoresis in 13.5% w/v acrylamide SDS gels (Mini-PROTEAN II Multi-Casting Chamber, BIO-RAD, USA) and then transferred by electroblotting (350 mA for 2 hours) to Immobilon membranes (Millipore Corp., USA). For the immunodetection, the membranes were blocked overnight at 4°C in 2% w/v powdered skimmed milk in phosphate-buffered saline (PBS) containing 0.5% v/v Tween 20. The membranes were then incubated with primary and secondary polyclonal antibodies diluted 1:2000 and 1:1000 in blocking solution for 2 and 1 hours, respectively. The primary antibodies used were anti-acetyl-histone H3 (anti-AcH3) (rabbit, Millipore Corp., USA) and anti-actin (rabbit, Chemicon, USA). Secondary antibodies (anti-rabbit) coupled to alkaline phosphatase were then used at a dilution of 1:5000, and signals were revealed in a nitroblue tetrazolium and bromo-chloro-indolyl-phosphate (NBT-BCIP) mixture. Anti-actin was used as a control for loading normalization.
Densitometric measurements were taken after immunodetection using the Kodak 1D v3.6 Scientific Imaging System (USA). The abundance index was calculated as follows: protein band intensity/actin band intensity. Four pools of five samples were analysed.
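The abundance index is a straightforward ratio of densitometric readings; the sketch below assumes one AcH3 and one actin intensity per pooled sample, with placeholder numbers rather than measured values.

```python
# Abundance index from densitometric readings, as defined in the text:
# index = AcH3 band intensity / actin band intensity (loading control).

def abundance_index(target_intensity: float, actin_intensity: float) -> float:
    if actin_intensity <= 0:
        raise ValueError("Actin intensity must be positive.")
    return target_intensity / actin_intensity

# Hypothetical band intensities for four pooled samples at one temperature.
ach3 = [1450.0, 1320.0, 1510.0, 1390.0]
actin = [2100.0, 2050.0, 2180.0, 2000.0]
indices = [abundance_index(t, a) for t, a in zip(ach3, actin)]
print([round(i, 2) for i in indices])
```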
Immunohistochemical detection
The immunolocalization was carried out according to the procedure described by Valledor et al. [38]. Half cross sections of leaf were fixed in 4% paraformaldehyde, 1% β-mercaptoethanol in PBS overnight at 4°C and sectioned at a 50 µm thickness using a Leica CH 1510-1 cryomicrotome (Leica Microsystems, Germany). The samples were mounted on slides coated with APTES (3-aminopropyltriethoxysilane; Sigma, USA). The leaf sections were dehydrated and then rehydrated through ascending and descending series of ethanol, respectively. The leaf sections were incubated in 2% cellulase in PBS for 45 min at room temperature and denatured in 2 N HCl for 30 min. The permeabilised sections were blocked in 5% bovine serum albumin (BSA) in PBS for 10 min and incubated for 1 h with a mouse anti-5-methylcytidine antibody (anti-5-mdC, Eurogentec, Belgium) diluted 1:50 in 1% blocking solution or with a rabbit anti-acetylated-histone H3 antibody (anti-AcH3, Millipore Corp., USA) diluted 1:25. Non-bound antibodies were washed off with 0.1% Tween 20 in PBS. An Alexa Fluor 488-labelled anti-mouse polyclonal antibody (Invitrogen, USA) diluted 1:25 was used as the secondary antibody for 5-mdC detection, and an Alexa Fluor 488-labelled anti-rabbit polyclonal antibody (Invitrogen, USA) for acetylated histone H3 detection. The slides were counterstained with DAPI (4′,6-diamidino-2-phenylindole; Fluka, USA). Fluorescence was visualized using a confocal microscope (Leica TCS-SP2-AOBS; Leica Microsystems, Germany). Three biological samples were analysed for each treatment and multiple 3D image stacks were acquired for each leaf. 3D images of the whole leaf were reconstructed and maximal projection was performed using Leica software (LCS 2.5, Leica Microsystems, Germany).
Statistical analysis
Statistical analyses were conducted with the SigmaPlot statistical software package v. 11 (Systat, Germany) for Windows. Data from electrolyte leakage, global DNA methylation and the relative abundance index of AcH3 were subjected to an analysis of variance using one-way ANOVA. When data were statistically different, the ANOVA was followed by a Holm-Sidak test (p < 0.05).
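An equivalent analysis could also be reproduced outside SigmaPlot, for example in Python with SciPy and statsmodels; the sketch below assumes values grouped by temperature and uses placeholder data, with pairwise t-tests corrected by the Holm-Sidak method standing in for the post hoc step.

```python
# One-way ANOVA followed by pairwise comparisons with Holm-Sidak correction.
# Data values are placeholders, not measurements from the study.
from itertools import combinations

from scipy import stats
from statsmodels.stats.multitest import multipletests

groups = {
    "25C": [8.1, 7.9, 8.4, 8.0],
    "35C": [9.0, 9.3, 8.8, 9.1],
    "45C": [14.2, 13.8, 14.9, 14.5],
    "55C": [10.1, 9.8, 10.4, 10.0],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

if p_anova < 0.05:
    pairs = list(combinations(groups, 2))
    raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
    reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
    for (a, b), p_adj, sig in zip(pairs, adj_p, reject):
        print(f"{a} vs {b}: adjusted p = {p_adj:.4f} {'*' if sig else ''}")
```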
Results
Percentage of survival, visual leaf damage and electrolyte leakage were measured in plants exposed to temperature increases from 25 to 55°C (in 10°C steps). All plants survived to 55°C. Nevertheless, visual leaf damage was evident at 55°C, where it was possible to detect brown spots (Fig. 1). No morphological sign of stress was detected below this temperature.
Cumulative heat stress led to higher electrolyte leakage (an indirect parameter of membrane rupture), which increased significantly between 35°C and 45°C. The maximum degradation was observed at 45°C. At 55°C, the electrolyte leakage percentage decreased significantly (Fig. 2).
DNA methylation analysed by HPCE (Fig. 3) revealed no significant alterations in global DNA methylation with increasing heat stress up to 45°C, but a significant increase at 55°C (p < 0.05).
The MS-RAPD analysis showed different methylation patterns at specific sequences. Seventy-four bands were analysed: from 25°C to 35°C (12 demethylation and 14 de novo methylation events); from 35°C to 45°C (8 demethylation and 4 de novo methylation events); and finally 9 demethylation and 5 de novo methylation events were measured from 45°C to 55°C (Fig. 4). These results showed a highly dynamic number of events between 25°C and 35°C in comparison with the other temperature increase steps.
The immunolocalization analysis showed different 5-mdC levels and spatial distributions in leaves in response to the increasing heat stress (Fig. 6). At 25°C, 5-mC was evenly distributed over the tissues of the sample (Fig. 6e). An increase in the intensity of methylated cytosine was observed at 35°C, occupying the four cellular layers (Fig. 6f). At 45°C there was a clear redistribution of the methylated cytosine; the signal was much less evident in the spongy parenchyma and more concentrated near the palisade parenchyma and vascular vessels (Fig. 6g). At 55°C the distribution followed a pattern similar to that at 35°C but with many more methylated nuclei (Fig. 6h). The histone AcH3 signal presented an opposite pattern in the immunolocalization analysis (Fig. 7): the intensity declined from 25°C to 45°C, showing a slight increase at 55°C (Fig. 7h).
Discussion
Forest trees are long-lived organisms and, as masters of adaptation, they can tolerate a very wide range of growing conditions and extreme seasonal changes. Despite the short-term heat stress experiment that was implemented, the survival of all 8-month-old cork oak plants to 55°C was remarkable and strengthens the reports of high thermotolerance within this species by other authors [23].
This tolerance highlights the well-designed adaptive mechanisms present in cork oak plants, but despite this tolerance, it should be underlined that a period of stress is necessary to trigger the physiological responses. For example, when focusing on membrane stability, an increase in electrolyte leakage was visible up to 45°C. After that, a decrease towards the values found at 35°C was observed. This unexpected profile suggests acclimation even at this high temperature. This could be explained by a physiological and biochemical re-setting in response to stress, which allowed stabilization and protected the membranes. The increasing expression of heat shock proteins and proline at 55°C is probably part of the explanation (unpublished data).
It is clear that gene control plays important roles in defining tissue adaptation to stress. To achieve a new morphological and physiological status, genes must be regulated to be expressed only in certain cells and situations. Changes in epigenetic marks accompany morphological and physiological changes in trees in a wide range of processes such as phase change, aging, flowering time and organ maturation [35,38,40,41], and also variation in natural plant populations [9].
Gene expression driven by developmental and stress cues often depends on DNA methylation and histone posttranslational modifications. These epigenetic mechanisms are crucial to adapt plant responses to stress that result in short-term acclimation [11]. The performed analysis revealed that heat stress induced covalent modifications such as DNA methylation and histone acetylation.
It has previously been shown that cytosine methylation is altered in response to environmental stimuli throughout the genome at specific loci [42]. The MS-RAPD results pointed to dynamic methylation-demethylation patterns over the course of the stress. While HPCE quantifies all cytosines present in the genome, MS-RAPD can only discriminate methylation of cytosines present in specific CCGG sequences. According to the literature, these CCGG sites are present in a small part of plant genomes and are frequently located in gene promoter regions, where cytosine methylation usually implies repressive chromatin and repression of gene transcription [6]. MS-RAPD analysis showed that the first temperature ramp (25°C-35°C) was highly dynamic, with the highest number of methylation and demethylation events and a higher rate of de novo methylation than demethylation. This suggests that, under an initial stressing condition, cork oak cells reorganize their DNA structure and degree of compaction by methylation-demethylation mechanisms to promptly regulate gene expression, activating effective defence mechanisms to overcome the stressful conditions. After that (35°C-45°C; 45°C-55°C), a new baseline in the DNA methylation profile is achieved, indicated by an increased number of constant bands between temperatures. At the genome level, quantification by HPCE showed that DNA methylation was stable up to 45°C, with a marked increment after that. Supporting these results, the DNA methylation level increased in cold-treated maize leaves [43], and in pine trees exposed to long-term radiation the level of DNA methylation also increased, interpreted as a possible mechanism of adaptation to radiation [44]. Tan [42] and Pecinka et al. [17] both stressed the importance of increasing methylation of transposable elements, which is frequently associated with transcriptional gene silencing.
The immunohistochemical detection of 5-mdC corroborated these results, highlighting an increase in DNA methylation over the stress. A redistribution was noticed at 45°C, with an accumulation of 5-mdC near both the epidermis and vascular vessels. This redistribution concurred with the membrane stability results, which showed the highest values of electrolyte leakage at 45°C. The decrease at 55°C might be related to the acclimation process. This may be explained by the activation of other molecular pathways related to membrane stability.
Despite the global increase in methylation shown by HPCE and the immunolocalization of 5-mdC, the MS-RAPD results demonstrated that specific demethylation simultaneously occurred at many loci. These findings are supported by other works, such as Boyko et al. [45], who found that although hypermethylation of genes and promoters in stressed Arabidopsis progeny is evident, many loci in the genome are hypomethylated.
The acetylation state of histone H3 decreased under stress, as evidenced by protein gel blot and immunolocalization. Tsuji et al. [46] found that submergence induced histone H3 acetylation in Oryza sativa and concluded that this histone modification was correlated with enhanced expression of the ADH1 and PDC1 genes under stress. These differences lead us to speculate that deacetylated H3 in Q. suber is responsible for repressive chromatin in gene promoters and repression of gene transcription. Modifications in histone H3 were also observed during heat acclimation in animals, suggesting a mechanism conserved in both plants and animals [47].
The opposite labelling profiles of 5-mdC and AcH3 observed here during the increasing heat stress indicate cooperation of both epigenetic mechanisms, demonstrating the well-coordinated and interdependent relations between the explored epigenetic markers, as reported by other authors [20,48]. The layout of these marks in the immunolocalization shows a differential distribution across the cross section evaluated. Accordingly, this technique has successfully been used to investigate the differentiation of specific tissues, as in flower development [49,50] or maturation [38].
The results reported here show that cork oak leaves experience interrelated and specific DNA methylation and histone H3 acetylation changes under elevated temperature conditions. Such interplay can be crucial for the stepwise establishment of this species under such high stress (55°C), allowing its acclimation and survival. To the authors' knowledge, the only report that analysed the distribution of epigenetic marks in cork oak did so in the nuclei of mature pollen cells [51]. There, the high level of DNA methylation associated with histone modifications in the vegetative nucleus indicated a high potential for transcriptional silencing, in accordance with the apparent chromatin silencing of the generative nucleus [51].
The evolutionary impact of such regulation could rely on a "memory" of stressful conditions faced by ancestors, leading to better adaptation of the progeny. The existence of this memory has still to be fully established for perennial species. Recently, a temperature-dependent epigenetic memory was reported for Norway spruce, where the temperature during embryo development had a later influence on bud phenology timing and gene expression [16,52]. Insights into epigenetic variation will contribute to the understanding of adaptive plant responses in a climate change scenario. The question of whether the perceived environmental cues are memorized by cork oak plants (in a form that is maintained even when the stimulus is removed) now provides an interesting avenue for future research.
Further studies are needed to specifically link physiological and molecular aspects and aid the understanding of heat tolerance in key forest trees under conditions such as those present in the Mediterranean region. The results published here open new perspectives in a non-model woody species and help us to understand, for the first time, epigenetic regulation in response to high temperatures.
"Biology",
"Environmental Science"
] |
Transcriptomic profiling of high- and low-spiking regions reveals novel epileptogenic mechanisms in focal cortical dysplasia type II patients
Focal cortical dysplasia (FCD) is a malformation of the cerebral cortex with poorly-defined epileptogenic zones (EZs), and poor surgical outcome in FCD is associated with inaccurate localization of the EZ. Hence, identifying novel epileptogenic markers to aid in the localization of the EZ in patients with FCD is very much needed. High-throughput gene expression studies of FCD samples have the potential to uncover molecular changes underlying the epileptogenic process and identify novel markers for delineating the EZ. For this purpose, we, for the first time, performed RNA sequencing of surgically resected paired tissue samples obtained from electrocorticographically graded high- (MAX) and low-spiking (MIN) regions of FCD type II patients and autopsy controls. We identified significant changes in the MAX samples of the FCD type II patients when compared to non-epileptic controls, but not in the case of MIN samples. We found significant enrichment for myelination, oligodendrocyte development and differentiation, neuronal and axon ensheathment, phospholipid metabolism, cell adhesion and cytoskeleton, semaphorins, and ion channels in the MAX region. Through the integration of both MAX vs non-epileptic control and MAX vs MIN RNA sequencing (RNA Seq) data, PLP1, PLLP, UGT8, KLK6, SOX10, MOG, MAG, MOBP, ANLN, ERMN, SPP1, CLDN11, TNC, GPR37, SLC12A2, ABCA2, ABCA8, ASPA, P2RX7, CERS2, MAP4K4, TF, CTGF, Semaphorins, Opalin, FGFs, CALB2, and TNC were identified as potential key regulators of multiple pathways related to FCD type II pathology. We have identified novel epileptogenic marker elements that may contribute to epileptogenicity in patients with FCD and could be possible markers for the localization of the EZ.
Introduction
Focal cortical dysplasia is the most commonly encountered developmental malformation that causes drug resistant focal epilepsy, particularly in children [1]. Its anatomopathological position and cellular appearance are highly variable and influence not only the cortical architecture and unique neuronal subpopulations, but also the junction of gray-white matter and sub-cortical white matter regions [2,3]. The most frequent subtype is FCD type II, mainly localized in the frontal and parietal lobes, which can range from small and almost invisible bottom-of-sulcus dysplasia to larger dysplastic regions covering more than a single gyrus. Focal cortical dysplasia type II is marked by gross histopathological changes, i.e., dysmorphic neurons (FCD type IIA) and additional balloon cells (FCD type IIB) [4,5]. Because dysplastic tissue contains atypical neuronal networks that are highly susceptible to abnormal excitation, FCD is thought to be intrinsically epileptogenic. Despite the introduction of new anti-epileptic drugs (AEDs) in the last two decades, over 30% of epilepsy patients have recurring seizures and many have undesirable side effects. Surgery is an effective alternative treatment as it offers seizure freedom or a significant reduction in seizures for those patients with drug-resistant epilepsy (DRE). Epilepsy surgical outcome is influenced by a number of factors, including epilepsy type, underlying pathology, and, most significantly, accurate localization of the epileptogenic zone (EZ) and precise knowledge of its association with the eloquent cortex for complete and safe removal using a variety of clinical, neuroimaging, and neurophysiological tests [6][7][8].
FCD is a diffuse lesion with poorly defined epileptogenic zones. Thus, incomplete resection has been consistently known to be a poor prognostic factor. Clinical history, comprehensive semiology analysis, long-term video-EEG recording, inter-ictal and ictal EEG analysis, neuroimaging, and neuropsychological examination are all part of the pre-surgical evaluation process and each modality gives distinct and complementary information. Because no single currently available approach can consistently diagnose EZ, and each modality has its own set of limitations, comprehensive examinations are required to analyse the EZ's various characteristics [8,9]. The functional involvement of the dysplastic cortex in the epileptogenic network cannot be identified through MRI alone. FCDs can be microscopic (or MRI negative), which means they may go undetected even with high resolution MRI. The lesions are subtle in these cases; morphological features may vary only marginally from normal tissue [10,11]. fMRI or magnetoencephalography (MEG) detects classical and aberrant distributed functional networks but may be falsely suppressed in the postictal period. The absence of a visible lesion is one of the greatest challenges in epilepsy surgery; dysplastic tissue looks similar to normal brain tissue and can be missed, unless intracranial electrode application and intraoperative electrocorticography (ECoG) recordings are performed [6,12].
Despite the use of all available invasive and non-invasive approaches, the epileptogenic zone cannot be fully identified, and patients do not benefit in more than 30% of these cases, owing to the inability to accurately locate the EZ [13]. A more precise framework for identifying EZ can be provided by molecular and cellular biomarkers combined with imaging and electrical investigations [13,14]. Aberrant gene expression and epigenetic alterations such as DNA methylation have been reported in different epilepsy pathologies, including FCD [15][16][17][18][19][20]. These studies have helped us to better understand the molecular mechanism of epileptogenesis, but the search for biomarkers to localize the EZ accurately has not ended yet.
This has kindled interest in unveiling the mechanisms of epileptogenicity in these lesions. Human tissue samples, on the other hand, restrict experimental design because age and gender-matched control samples from non-epileptic patients are rarely available for comparison. However, having a better understanding of how seizures are generated in the dysplastic human neocortex ultimately requires an examination of the available human tissue samples. Surgically resected human dysplastic tissue can be a good model to study the mechanism of epileptogenicity in these patients. Intracranial ECoG recording is usually performed in FCD cases to identify the extent of the epileptogenic zone and ensure its complete excision. Tissues with different ECoG grades were removed during surgery from the same patients. These tissues from the same patient can be an ideal model to extrapolate the mechanism of epileptogenicity in FCD type II, which in turn helps to delineate the epileptogenic zone in FCD patients. Hence, the present study was designed to study the transcriptomic profile of surgically resected paired tissue samples obtained from electrocorticographically graded high- (MAX) and low-spiking (MIN) regions of FCD type II patients and autopsy controls. The current study's findings are discussed to gain better understanding of the epileptogenicity in FCD and the localization of the EZ.
Patient and control samples
Patients diagnosed with DRE due to FCD type II who underwent electrocorticography (ECoG)-guided surgery were included in the study. Presurgical assessment was performed for each patient, and the pathology was determined by analysing convergent data from MRI, video EEG (vEEG), fluoro-2-deoxyglucose positron emission tomography (FDG-PET) and magnetoencephalography (MEG), further confirmed by histopathological examination by neuropathologists. Patients with dual pathology were excluded.
Based on ECoG recordings, the regions were graded with scores from 1 to 5 [21,22], with grade 2 and above reported as a high-spiking zone (MAX) and grade 1 as a low-spiking zone (MIN) (Additional file 1: Fig. S1). Surgical resection of ECoG-graded cortical samples was performed as per the previously reported protocol [22,23]. The MAX region was defined as the cortical region with MEG abnormality, the greatest positron emission tomography hypometabolism, the most severe magnetic resonance imaging architectural abnormalities, and the most abnormal ECoG findings. The MIN region was defined as less severely involved based on neuroimaging and ECoG, but it was part of the planned resection. Resected tissue samples from the MAX and MIN regions from the same patients were collected for transcriptomic analysis. Details of the neuroimaging techniques used and their scores are listed in Table 1. Based on ECoG grade, MRI, PET and MEG data, E018, E019, E070, E075, E077, E273, E460, and E578 were categorized under the MIN region, and E006, E028, E045, E084, SampleE1, E115, E135, E536 and E593 were categorized under the MAX region.
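A minimal sketch of this labelling rule is shown below; the sample identifiers are taken from the text, but the grades assigned to them here are illustrative assumptions.

```python
# ECoG-based labelling described above: grade 1 regions are treated as
# low-spiking (MIN) and grades 2-5 as high-spiking (MAX).

def label_region(ecog_grade: int) -> str:
    if not 1 <= ecog_grade <= 5:
        raise ValueError("ECoG grade must be between 1 and 5.")
    return "MIN" if ecog_grade == 1 else "MAX"

samples = {"E018": 1, "E019": 1, "E006": 4, "E028": 3}  # illustrative grades
print({sid: label_region(grade) for sid, grade in samples.items()})
```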
As there are no "ideal" or acceptable non-epilepsy controls for such studies involving epilepsy surgery, we used histologically normal cortex tissues obtained from the frontal lobes of post-mortem cases without any history of seizures or other neurological disorders as non-epileptic controls. All the autopsies were performed within 8 h of death. All the patients included in the study were seizure-free post-operatively (Class I Engel outcome). Part of the resected tissue was stored in 4% paraformaldehyde for histopathological examination and the remaining parts were immediately frozen and stored at −80 °C until further use.
The study was conducted in accordance with the Declaration of Helsinki and was approved by the Institute Ethics Committee, AIIMS, New Delhi. Informed and written consent was obtained from all the patients, their parents, or legal guardians if the patients were underage.
RNA sequencing (RNA seq)
RNA extraction and sequencing were performed as described previously with some modifications [16]. Briefly, frozen brain samples were homogenized in RiboZol reagent (Amresco) and RNA was extracted using the RNeasy Mini Kit (Qiagen) as per the manufacturer's instructions. An additional DNase I digestion step was performed to ensure that the samples were not contaminated with genomic DNA. RNA quality was assessed using a Bioanalyzer 2100 (Agilent). RNA libraries were prepared using the TruSeq RNA Access Library Prep Kit (Illumina) and paired-end sequencing was performed on Illumina HiSeq 2500 platforms. Sequences were quality-checked using FastQC and low-quality bases and reads were excluded from further analysis. Sequences were aligned using the HISAT aligner against all known genes and transcripts of the GRCh37/hg19 assembly.
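The QC and alignment steps could, for example, be driven from Python as sketched below; the file names, the output directory, and the use of the HISAT2 binary (rather than the original HISAT release used here) are assumptions, not details from the study.

```python
# Hypothetical sketch of the read QC and alignment steps, driven from Python.
# Tool availability, file names, and the GRCh37/hg19 index path are assumptions.
import os
import subprocess

sample = "E006"
r1, r2 = f"{sample}_R1.fastq.gz", f"{sample}_R2.fastq.gz"

os.makedirs("qc_reports", exist_ok=True)
subprocess.run(["fastqc", r1, r2, "-o", "qc_reports"], check=True)  # read quality check

# Paired-end alignment against a GRCh37/hg19 index.
subprocess.run(
    ["hisat2", "-x", "grch37_index", "-1", r1, "-2", r2, "-S", f"{sample}.sam"],
    check=True,
)
```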
In this comparative study we analyzed the RNA sequencing data with three of the most frequently used software tools: Cuffdiff, DESeq2 and EdgeR [24,25]. Significantly altered genes that were common to all three tools were further used for downstream gene enrichment and network analysis. RNA Seq analysis was performed for three comparisons: (1) between autopsy samples (A1 and A2) and samples from the MIN region (E018, E019, E075 and E273); (2) between autopsy samples (A1 and A2) and samples from the MAX region (E006, E028, E115, SampleE1 and E135); and (3) between samples from the MIN region (E018, E019, E075 and E273) and samples from the MAX region (E006, E028, E115, SampleE1, and E135).
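A sketch of the tool-intersection step is given below; which gene appears in which tool's output is invented for illustration, and only the set-intersection logic reflects the described analysis.

```python
# Intersection of significant genes reported by the three tools.
# Tool-specific membership shown here is illustrative, not from the study.

cuffdiff_hits = {"PLP1", "MOBP", "TNC", "SLC12A2", "CARTPT", "GPR37"}
deseq2_hits = {"PLP1", "MOBP", "TNC", "SLC12A2", "CARTPT", "GPR37", "KCNK10", "CTGF"}
edger_hits = {"PLP1", "MOBP", "TNC", "SLC12A2", "GPR37", "KCNK10"}

common_degs = cuffdiff_hits & deseq2_hits & edger_hits
print(sorted(common_degs))
```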
Principal component analysis (PCA), pathway enrichment analysis and gene network analysis
Genes whose expression was found to be significantly altered by all three RNA Seq analysis tools were used for calculating and plotting the principal components using ClustVis [26]. Unit variance scaling was applied to genes, and singular value decomposition with imputation was used to calculate the principal components. Samples were clustered using correlation distance and average linkage in the heatmap. Common DEGs (based on fold-change ≥ 2 and FDR-adjusted p values (padj)) in all three software packages were used for downstream gene enrichment and network analyses. Gene enrichment analysis was performed as described previously [27]. Briefly, the DEGs were used as an ordered query in g:Profiler with term size ranging from 3 to 350 and the significance cut-off (FDR q value) set to < 0.05. Custom gene sets containing GO:BP terms and KEGG pathways were used.
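A minimal PCA sketch mirroring the described ClustVis settings (unit variance scaling of genes followed by SVD-based PCA of samples) is shown below; the expression matrix is a random placeholder and scikit-learn is assumed as the implementation, not the tool actually used.

```python
# PCA of samples after unit variance scaling of genes (columns).
# The expression matrix is a random stand-in for the real DEG matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
expression = rng.normal(size=(9, 200))  # e.g. 9 samples x 200 common DEGs

scaled = StandardScaler().fit_transform(expression)  # unit variance per gene
pca = PCA(n_components=2, svd_solver="full").fit(scaled)
coords = pca.transform(scaled)

print("Variance explained (%):", np.round(pca.explained_variance_ratio_ * 100, 1))
```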
Network analysis was performed to graphically display associations between DEGs, showing both direct and indirect interactions, using natural language processing-based (NLP) network discovery algorithms in GeneSpring software version 13.0, as described previously [16].
Validation by real-time PCR (RT-PCR)
Real-time PCR (RT-PCR) was performed to validate the differentially expressed genes using specific primers (Additional file 7: Table S1) for selected genes (TNC, MOBP, SLC12A2, CTGF, GPR37, KCNK10 and CARTPT) on 9 FCD patients (F1 to F9) and 8 control samples (A1 to A8). RNA was reverse transcribed using the High-Capacity cDNA Reverse Transcription Kit (ThermoFisher; catalogue #4368814). The hypoxanthine phosphoribosyltransferase (HPRT) gene was used as an internal reference. Real-time PCR amplifications were performed in CFX96 real-time systems (Bio-Rad) with the Bio-Rad CFX software manager, with the following cycling parameters: an initial hot start of 95 °C for 3 min followed by 40 amplification cycles; the melting curve was examined to confirm a single peak appearance. All samples were run in triplicate. The 2^(−ΔΔCq) method was used to measure the mRNA expression of target genes based on the average Ct values across samples [28].
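The relative quantification reduces to the standard 2^(−ΔΔCq) calculation; the sketch below uses hypothetical Cq values and HPRT as the reference, as in the study.

```python
# Relative expression by the 2^(-ΔΔCq) method with HPRT as internal reference.
# All Cq values here are hypothetical.

def ddcq_fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    d_case = ct_target_case - ct_ref_case   # ΔCq in the FCD (MAX) sample
    d_ctrl = ct_target_ctrl - ct_ref_ctrl   # ΔCq in the control sample
    return 2.0 ** (-(d_case - d_ctrl))      # 2^(-ΔΔCq)

# Hypothetical mean Cq values (triplicates already averaged) for TNC.
fold = ddcq_fold_change(ct_target_case=24.1, ct_ref_case=22.0,
                        ct_target_ctrl=27.3, ct_ref_ctrl=22.2)
print(f"TNC fold-change (MAX vs control): {fold:.2f}")
```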
Histopathology
Tissues were fixed in 4% paraformaldehyde and embedded in paraffin wax for preparing 5-µm thick tissue sections. Haematoxylin and eosin (H&E) staining was performed as described previously [16]. One series of sections was stained with crystal violet (Sigma) according to the previously established protocol [29]. The slides were independently reviewed by two neuropathologists to confirm the pathology and evaluate any damage in control tissue.
Clinical characteristics of patients and controls
A total of nine FCD type II patients (three males and six females) were included in this study. For RNA Seq analysis, graded samples of five FCD type II patients (F1 to F5) and two controls (A1 and A2) were included. Subsequently, we used surgically resected graded tissues from 9 FCD type II patients (F1-F9) and eight controls (A1-A8) (including the samples used for RNA Seq analysis) for real-time PCR analysis. The detailed clinical characteristics of the individuals are listed in Table 1.
The mean age of the FCD type II patients was 14.77 ± 6.23 years (range 5 to 22 years). The autopsy patients' ages ranged from 16 to 65 years (mean age 29.37 ± 15.01 years). Detailed histopathological investigations were performed on all the samples obtained for the experiments (as mentioned in Table 1) to confirm the pathology (Fig. 1). Haematoxylin and eosin (H&E) and crystal violet (CV) staining were performed to evaluate the histopathological features. Characteristic features of FCD type II were observed in all the patients. Cortical sections from FCD type II patients showed dysmorphic neurons
Differentially expressed genes (DEGs)
The RNA Seq read summary is provided in the supplementary material; the list of genes with significantly altered expression analysed by EdgeR is provided in Additional file 5. Most of the DEGs identified by each of the three tools overlapped, although DESeq2 detected more DEGs than the other tools. To avoid false-positive results, the intersection of DEGs from two or more tools is generally used for analysis [24,25]. To obtain more accurate and precise findings, the intersection of DEGs from Cuffdiff, DESeq2 and EdgeR was used for further analysis; details of the DEGs common to the three tools are presented in Fig. 3 and Additional file 6. Only 6 genes (2 up-regulated and 4 down-regulated) were found to be differentially expressed in MIN vs autopsy, 109 DEGs (85 up-regulated and 24 down-regulated) were observed in MAX vs autopsy, and 158 DEGs (152 up-regulated and 6 down-regulated) were found to be significantly altered in MAX vs MIN. No gene was found to be common to all three groups. Forty-nine genes were common to MAX vs autopsy and MAX vs MIN, and 4 genes were common to MIN vs autopsy and MAX vs autopsy (Fig. 3).
The PCA result indicated that MIN and MAX region of FCD type II patients could be separated by their transcriptome profile by unsupervised clustering. Dimensionality reduction using principal component analysis segregated FCD type II samples and autopsy samples into distinct clusters with PC1 (85.5% for autopsy and MAX, and 89.2% for MIN and MAX) accounting for most of the variance (Fig. 4).
Pathway enrichment analysis and network analysis
Detailed g:Profiler pathway enrichment results are provided in the
Validation of data by real-time PCR
The mRNA levels of TNC, SLC12A2, CTGF, KCNK10, MOBP, and GPR37 were significantly up-regulated in MAX compared to autopsy controls (fold-change ≥ 2; p value < 0.05), whereas CARTPT was down-regulated (fold-change ≥ 2; p value < 0.05) (Fig. 6). The mRNA levels of TNC, KCNK10, MOBP, SLC12A2 and GPR37 were significantly up-regulated in MAX compared to MIN (fold-change ≥ 2; p value < 0.05), whereas CARTPT was significantly down-regulated. CTGF expression was relatively higher in MAX as compared to MIN, but the difference was not statistically significant (Fig. 6). Only TNC expression was significantly higher in MIN as compared to autopsy (fold-change ≥ 2; p value < 0.05) (Fig. 6).
Myelination, axon and neuronal ensheathment, oligodendrocyte development and differentiation
Oligodendrocyte-specific and myelination-associated genes formed one of the dominant functional groups found to be up-regulated in the MAX region of dysplastic tissues in FCD type II patients compared to MIN and autopsy controls. These include PLP1 [30][31][32][33][34][35][36][37]. KLK6, a serine protease, may rapidly hydrolyze major myelin and blood-brain-barrier proteins and promote oligodendrogliopathy, neuronal injury and astrogliosis [38,39]. ANLN from oligodendrocytes disrupts myelin septin assembly, causing the appearance of abnormal myelin outfoldings. ERMN plays a significant role in cytoskeletal rearrangements during the late wrapping and/or compaction phases of myelin assembly [40,41]. Contrary to this study, many of these genes, GLDN, MOBP, UGT8, ASPA, TMEM10 (OPALIN), MOG, ERMN, and CLDN11, were found to be down-regulated in dysplastic human temporal neocortex [17]. Similar to this study, increased expression of MOG, PLP1, ABCA2, FA2H, TF and ASPA was demonstrated in high-spiking regions of cortical areas of temporal lobe epilepsy patients [19].
CTGF expression was found to be up-regulated in the MAX region of surgically resected samples of patients compared to autopsy controls. CTGF/CCN2 negatively regulates myelination through the mTOR pathway [50]. Mutations in mTOR pathway genes have been reported in FCD [51,52]. Our previous study also demonstrated differential epigenetic regulation of the mTOR pathway in FCD [15]. Overly active mTOR signaling may lead to insufficient myelination associated with FCD type II. CTGF has also been linked to astrogenesis, astrocyte activation, and neuro-inflammation [53].
In the present study, we have demonstrated the increased expression of genes related to myelination, remyelination or demyelination, suggesting that both phenomena are prevalent in patients. Demyelination is compensated for by remyelinating factors, and the delicate balance between them must be disrupted, resulting in myelin pathology, which may contribute to the epileptogenicity of this cortical malformation. The inability of OLs to synthesize functional myelin could also be a factor. Up-regulated expression of several OL differentiation-related genes could be due to an increased number of OPCs and differentiating OLs. It could also be due to a compensatory mechanism to suppress epileptiform activity. Reductions in the number of oligodendroglial cells and myelin content have been reported in FCD, but the results remain controversial. An increased number of oligodendroglia was also reported in patients with temporal lobe epilepsy and malformations of cortical development [54][55][56][57]. Scholl et al. (2017) suggested that impaired oligodendroglial turnover is associated with myelin pathology in focal cortical dysplasia and tuberous sclerosis complex. Proliferative oligodendroglia were identified in FCD IIA, IIB, and TSC, suggestive of a reactive phenomenon due to insufficient or delayed maturation that prevents adequate myelination [55].
Recent studies show that neuronal activity can influence the generation of new oligodendrocytes (oligodendrogenesis) and myelination. Changes in myelination in cortical white matter are mostly reported, but alterations in myelination of grey matter have also been demonstrated [58]. During epileptogenesis, various kinds of synchronous sub-threshold excitatory stimuli allow their temporal summation in the post synaptic neurons [59]. This summation could be a direct result of axons with poorly distributed conduction velocities, resulting in synchronous action potential firing. The conduction velocity of an axon is mainly related to its diameter and the myelin sheath. Therefore, a direct relation might exist between epileptic seizure susceptibility and abnormal myelin content. Conversely, previous studies have indicated that neurological disorders associated with abnormal myelin content are accompanied by a higher susceptibility to epileptic seizures [60][61][62][63]. Several studies have indicated that epilepsy is also associated with myelin abnormalities [64][65][66][67][68][69]. Oligodendrocytes also control potassium accumulation in white matter and seizure susceptibility [70]. A subset of CNS oligodendrocytes expresses glutamine synthetase and directly modulates glutamatergic excitatory neurotransmission [71]. The findings presented here highlight avenues for potential therapeutic interventions targeting aberrant oligodendrogenesis and myelination.
Phospholipid biosynthesis
The RNA Seq data highlight the perturbation of key metabolic processes in lipid metabolism, especially phospholipid biosynthesis, in the MAX region of FCD type II patients. Altered lipid levels and/or distribution have been reported in a variety of neurodegenerative diseases [43,72,73]. PLLP, UGT8, ABCA2, PLD1, ELOVL1, CERS2, S1PR5, PLPPR1, SPTLC1, ENPP2, NPC1, FA2H, LRP2, S1PR5 and GAL3ST1 gene expression was found to be up-regulated in this study. PLLP, CERS2, UGT8, ASPA and GAL3ST1 contribute to various processes related to myelin synthesis [74]. PLD1, ELOVL1, NPC1, SPTLC1, FA2H, LRP2, and S1PR5 contribute to the synthesis of fatty acids and sphingolipids and the intracellular trafficking of lipid molecules [75][76][77]. Our data demonstrated dysregulation of lipid metabolism, i.e. phospholipid biosynthesis and trafficking, which in turn alters the signalling pathways related to lipid molecules and can affect diverse cellular functions. Apart from this, phospholipids are known to be important regulators of many channels, mitochondrial function, excitotoxicity, impaired neuronal transport, cytoskeletal defects, inflammation, and reduced neurotransmitter release [72]. Future studies on these altered genes could provide us with promising targets with the potential to delineate the epileptogenic zone in FCD type II.
Ion channels
Ion channel dysfunction, either caused by mutations or acquired, has been associated with epilepsy. Many AEDs tend to manipulate the ion permeability of these channels to modify neuronal excitability [78]. In the present study, we have demonstrated the up-regulation of AQP1, KCNK10, KCNH8, P2RX7, SGK1, SGK2, SLC12A2, SLC6A2, SLC44A1, SLC45A3, SLC5A11, SLC26A9, CLCA4 and SEPT4 in the MAX region of FCD type II patients.
AQP1 functions as a water channel protein, whereas KCNK10 and KCNH8 are potassium channels involved in neurotransmitter release, neuronal excitability, and electrolyte transport [79,80]. SGK1 and SGK2 are reported to be involved in the regulation of a wide variety of ion channels, i.e., potassium, sodium, and chloride channels, membrane transporters, and cell growth, survival and proliferation [81]. Activation of the P2X7 receptor has been associated with neuronal excitability, microglia activation and neuro-inflammatory responses. Increased expression of the P2X7 receptor has been demonstrated in animal models of epilepsy. P2X7 receptor ligands may be considered as a therapeutic target for DRE [82]. High SLC12A2 expression results in an elevated Cl− concentration inside the cell, leading to a net Cl− outflow and subsequent depolarization when GABA activates GABAA receptors (GABAARs) [83,84]. Increased expression of SLC12A2 has been reported in surgically resected tissue specimens from FCD patients [85]. SLC26A9, a highly selective chloride ion channel, CLCA4, a calcium-sensitive chloride channel, and SLC45A3 and SLC5A11 may be involved in ion transport and neurotransmitter release in FCD [78]. SLC44A1, a choline transporter, may contribute to membrane synthesis and myelin production. Alterations in ion channel gene expression might affect the homeostasis of ions involved in epileptic activity within dysplastic tissues. It could therefore serve as a potential biomarker to identify the EZ in FCD patients, but confirmatory studies on a larger cohort are needed.
Cell signaling molecules of various functions
Aside from these, several genes related to diverse cellular functions were found to be altered in this study, including semaphorins, fibroblast growth factors (FGFs), MAP4K4, ATF3, TNC, CALB2 and GRP. Here, we demonstrated the up-regulation of SEMA3B, SEMA4D and SEMA6A in MAX compared to MIN. SEMA3B-NRP1 mediated immune response and apoptosis have been reported, and their involvement in neuro-inflammation and cell death in epileptic conditions cannot be ruled out [86,87]. SEMA4D/CD100 may regulate oligodendrocyte differentiation by promoting apoptosis [88]. SEMA4D also promotes inhibitory synapse formation and alleviates seizures in an animal model of epilepsy [89]. SEMA6A is considered to be a positive regulator of oligodendrocyte differentiation and myelination [90].
Other than the semaphorins, the expression of MAP4K4, TNC, FGF1, FGF17, TGFA, ATF3 and MCAM was found to be up-regulated, and GRP and CALB2 were down-regulated in this study. MAP4K4 plays a specific role in activating the MAPK8/JNK pathway, which has also been found to be up-regulated in high-spiking cortical areas of TLE patients [19,91]. Increased expression of TNC is highly associated with glial reactivity and reduced myelination, and it also participates in Notch signalling [92]. As the FGF system is involved in the development of specific brain circuits in the hippocampus and cortex associated with epileptogenesis, increased expression of FGF1 and FGF17 was very much expected. FGF17 can activate numerous transcription factors involved in intra-cortical wiring. FGF1 has also been linked to a role in an animal model of epilepsy. Contrary to this, FGF1 has been shown to have anti-convulsant properties in kainate-induced epilepsy [93]. ATF-3 expression has been correlated with seizure frequency in epilepsy patients [94]. Loss of CALB2 (calretinin) expression in hippocampal interneurons was shown in the dentate gyrus of patients with epilepsy [95]. Contrary to this, an increase in the number of calretinin-positive cells was observed by Blumcke et al. (1999) in patients with temporal lobe epilepsy [96]. Further studies on a greater number of samples are required for definitive findings.
There is evidence that GRP mediated signalling might play a role in regulating cognitive functions such as emotional responses, social interaction, memory, and feeding behaviour. Alterations in GRP or GRPR expression or function have been reported in patients with neurodegenerative, neurodevelopmental, and psychiatric disorders [97].
The small sample size of this study, which does not include age- and gender-matched cases and controls, is one of its limitations. Only two autopsy samples were included in the present study. The age range of the FCD patients is from 5 to 22 years, whereas the autopsy patients range from 16 to 65 years. It is very difficult to obtain age- and gender-matched autopsy samples as per the inclusion criteria. The surgically resected tissue samples obtained for this study were from patients who had suffered from seizures for many years. Therefore, it is difficult to delineate and relate the transcriptional changes to underlying epileptogenic changes and to seizure activity. Further in vitro and in vivo studies are needed to determine whether the identified transcriptional changes are epileptogenic or a symptom of seizure activity.
The patients with FCD were on a combination of antiepileptic drugs, which may affect the expression of certain genes. AEDs selectively reduce the excitability of neurons and provide appropriate seizure control in epileptic patients by acting on a variety of biological targets. AEDs have a variety of modes of action, which can be classified based on their regulatory roles in voltage-gated ion channels and synaptic excitability control. However, recent research has revealed that AEDs can act as epigenetic modifiers to regulate gene expression [98]. Changes in gene expression caused by valproate were seen in the peripheral blood of patients with newly diagnosed epilepsy [99]. The antiepileptic drug levetiracetam selectively modifies kindling-induced alterations in gene expression in the temporal lobe of rats [100]. These studies suggest that AEDs may have modulatory effects on the expression of certain genes. Hence, the contribution of AEDs to changes in gene expression cannot be ruled out. The findings of this study suggest that myelin and/or oligodendrocyte cells are involved in the epileptogenic process. Further exploration of the altered pathways may provide potential markers to aid in specifying the EZ in FCD patients. To date, there have been several preclinical and human studies presenting clear evidence that myelin content could be associated with epilepsy, epileptic seizures and epileptogenesis. Attempts to restore the process of myelination through pharmacological intervention could represent another promising therapeutic strategy for FCD, although there is currently no evidence that administering these drugs to human patients can prevent seizures [58,62]. Even with potential limitations, our study shows a tight association between ECoG grading of samples and the expression pattern of PLP1, PLLP, UGT8, KLK6, SOX10, MOG, MAG, MOBP, ANLN, ERMN, SPP1, TF, FA2H, CLDN11, TNC, GPR37, GRP, ABCA2, ABCA8, ASPA, P2RX7 (P2X7), CERS2, MAP4K4, OPALIN, semaphorins, FGF1, CALB2, and TNC in patients with FCD. These genes could be further studied as potential biomarkers for the identification of epileptogenic margins in these patients. The primary reason for poor surgical outcomes in patients with FCD is the inaccurate localization of the epileptogenic margins. These results further support that ECoG-guided resection is likely to have a better outcome in terms of achieving seizure freedom postoperatively [22,101].
"Medicine",
"Biology"
] |
Wearable feedback systems for rehabilitation
In this paper we describe LiveNet, a flexible wearable platform intended for long-term ambulatory health monitoring with real-time data streaming and context classification. Based on the MIT Wearable Computing Group's distributed mobile system architecture, LiveNet is a stable, accessible system that combines inexpensive, commodity hardware; a flexible sensor/peripheral interconnection bus; and a powerful, light-weight distributed sensing, classification, and inter-process communications software architecture to facilitate the development of distributed real-time multi-modal and context-aware applications. LiveNet is able to continuously monitor a wide range of physiological signals together with the user's activity and context, to develop a personalized, data-rich health profile of a user over time. We demonstrate the power and functionality of this platform by describing a number of health monitoring applications using the LiveNet system in a variety of clinical studies that are underway. Initial evaluations of these pilot experiments demonstrate the potential of using the LiveNet system for real-world applications in rehabilitation medicine.
Background and Introduction
Over the next decade, dramatic changes in healthcare systems are needed worldwide. In the United States alone, 76 million baby boomers are reaching retirement age within the next decade [1]. Current healthcare systems are not structured to adequately service the rising needs of the aging population, and a major crisis is imminent. The current system is dominated by infrequent and expensive patient visits to physician offices and emergency rooms for prevention and treatment of illness. The failure to do more frequent and regular health monitoring is particularly problematic for the elderly with multiple co-morbidities and often tenuous and rapidly changing health states. Even more troubling is the fact that current medical specialists cannot explain how most problems develop because they usually only see patients when something has already gone wrong.
Given this impending healthcare crisis, it is imperative to extend healthcare services from hospitals into home environments. Although there has been little success in extending health care into the home, there clearly is a huge demand. In 1997, Americans spent $27 billion on health care outside of the health care establishment, and that amount has been increasing [2]. Moreover, our dramatically aging population makes it absolutely necessary to develop systems that keep people out of hospitals. By 2030, nearly 1 out of 2 households will include someone who needs help performing basic activities of daily living and labor-intensive interventions will become impractical because of personnel shortage and cost [2].
The best solution to these problems lies in more proactive healthcare technologies that put more control into the hands of patients. The vision is a healthcare system that will help an individual to maintain their normal health profile by providing better monitoring and feedback, so that the earliest signs of health problems can be detected and corrected. This can be accomplished affordably by continuously monitoring a wide range of vital signals, providing early warning systems for people with high-risk medical problems, and "elder care" monitoring systems that will help keep seniors out of nursing homes and in their independent living arrangements.
Most available commercial mobile healthcare platforms have focused on data acquisition applications, with little attention paid to enabling real-time, context-aware applications. Companies such as VivoMetrics [3], Bodymedia [4], and Mini-Mitter [5] have extended the basic concept of the ambulatory Holter monitor (enabling a physician to record a patient's ECG continuously for 24-48 hours), which for three decades has been the only home health monitor with widespread use [6]. Additions to this individual monitoring paradigm have been extended along two fronts: medical telemetry and real-time critical health monitoring. Regarding the former, various inpatient medical telemetry systems have been developed in recent years, focusing on providing an infrastructure for transporting and storing data from the patient to caregivers for later analysis [7]. In terms of the latter, a few systems have extended the health monitoring concept by augmenting a physiological monitor (usually based on a single physiological sensor) with specialized algorithms for real-time monitoring within specific application domains, such as heart arrhythmia, epileptic seizures, and sleep apnea, which can potentially trigger alerts when certain critical conditions or events occur [8,9]. However, the development of proactive healthcare technologies beyond these basic telemedicine and individual event monitoring applications has been rather slow. The main limitations have been the large costs, the inflexibility of the limited monitoring modalities associated with these technologies, and their impracticality for long-term use in general settings.

This paper presents LiveNet, a flexible distributed mobile platform that can be deployed for a variety of proactive healthcare applications that can sense one's immediate context and provide feedback. Based on cost-effective commodity PDA hardware with customized sensors and a data acquisition hub plus a lightweight software infrastructure, LiveNet is capable of local sensing, real-time processing, and distributed data streaming. This integrated monitoring system can also leverage off-body resources for wireless infrastructure, long-term data logging and storage, visualization/display, complex sensing, and computation-intensive processing. The LiveNet system allows people to receive real-time feedback from their continuously monitored and analyzed health state. In addition, LiveNet can communicate health information to caregivers and other members of an individual's social network for support and interaction. Thus, by combining general-purpose commodity hardware with specialized health/context sensing within a networked environment, it is possible to build a multi-functional mobile healthcare device that is at the same time a personal real-time health monitor, multimodal feedback interface, context-aware agent, and social network support enabler and communicator.
With the development of increasingly powerful diagnostic sensing technology, doctors can obtain more context specific information directly, instead of relying on a patient's recollection of past events and symptoms, which tend to be vague, incomplete, and error prone. While many of these specialized sensing technologies have improved with time, most medical equipment is still a long way off from the vision of cheap, small, mobile, and non-invasive monitors. Modern imaging technology costs thousands of dollars per scan, requires room-sized equipment chambers, and necessitates uncomfortable and time-consuming procedures. Personal health systems, on the other hand, must be lightweight, easy-to-use, unobtrusive, flexible, and non-invasive to make headway as viable devices that people will use.
As such, there is tremendous potential for basic non-invasive monitoring as a complement to more invasive diagnostic sensing devices. The LiveNet system focuses on using combinations of non-invasive sensing and contextual features (for example, heart rate, motion, voice features, skin conductance, temperature/heat flux, location) that can be correlated with more involved clinical physiology sensing such as pulse oximetry, blood pressure, and multi-lead ECG. Sensors in the LiveNet system can continuously monitor autonomic physiology, motor activity, sleep patterns, and other indicators of health. The data from these sensors can then be used to build a personalized profile of performance and long-term health over time tailored to the needs of the patient and their healthcare providers. This unique combination of features also allows for quantification of personal contextual data such as amount and quality of social interactions and activities of daily living. This type of information is potentially very useful for increasing the predictive power of diagnostic systems.
Continuous monitoring ensures the capture of relevant events and the associated physiology wherever the patient is, expanding the view of healthcare beyond the traditional outpatient and inpatient settings. Long-term monitoring has the potential to help create new models of health behavior. For example, long-term monitoring may provide important insights into the efficacy and effectiveness of medication regimes on the physiology and behavior of a patient over time at resolutions currently unobtainable. In addition, progress in terms of understanding human physiology and behavior will result from the fact that long-term trends can be explored in detail. Such advances include tracking the development and evolution of diseases, development of predictors of response to treatment and relapse prevention, monitoring changes in physiology as people grow older, comparing physiology across different populations (gender, ethnicity, etc.), and even knowing the characteristic physiology patterns of people who are healthy (this last example is particularly important when a diagnostic methodology must quantitatively define what constitutes abnormal behavior). The goal is to be able to detect repeating patterns in complex human behavior by analyzing the patterns in data collected from the LiveNet system. From continuous monitoring, a very fine granularity of quantitative data can be obtained, in contrast to the surveys and history-taking that have been the mainstay of long-term studies and health interventions to date.
The LiveNet System
There are three major components to the LiveNet system: a personal data assistant (PDA) based mobile wearable platform, the software network and resource discovery application program interface (API), and a real-time machine learning inference infrastructure. The LiveNet system demonstrates the ability to use standardized PDA hardware tied together with a flexible software architecture and modularized sensing infrastructure to create a system platform where sophisticated distributed healthcare applications can be developed. While the current system implementations are based on PDAs, the software infrastructure is designed to be portable to a variety of mobile devices, including cell phones, tablet computers, and other convergence devices. As such, the system leverages commercial off-the-shelf components with standardized base-layer communication protocols (e.g., TCP/IP); this allows for the rapid adoption and deployment of these systems into real-world settings.
The LiveNet system is based on the MIThril wearable architecture developed at the Massachusetts Institute of Technology (MIT) Media Laboratory [10]. This proven architecture combines inexpensive commodity hardware, a flexible sensor/peripheral interconnection bus, and a powerful light-weight distributed sensing, classification, and inter-process communications software layer to facilitate the development of distributed real-time multimodal and context-aware applications.
The LiveNet hardware and software infrastructure provides a flexible and easy way to gather heterogeneous streams of information, perform real-time processing and data mining on this information, and return classification results and statistics. This information can result in more effective, context-aware and interactive applications within healthcare settings.
A number of key attributes of the LiveNet system that make it an enabling distributed healthcare system include:
• Hierarchical, distributed modular architecture
• Based on standard commodity/embedded hardware that can be improved with time
• Wireless capability with resource posting/discovery and data streaming to distributed endpoints
• Leverages existing sensor designs and commercial sensors for context-aware applications that can facilitate interaction in a meaningful manner and provide relevant and timely feedback/information
• Unobtrusive, minimally invasive, and minimally distracting
• Abstracted network communications with secure sockets layer (SSL) encryption with real-time data streaming and resource allocation/discovery
• Continuous long-term monitoring capable of storing a wide range of physiology as well as contextual information
• Real-time classification/analysis and feedback of data that can promote and enforce compliance with healthy behavior
• Trending/analysis to characterize long-term behavioral trends of repeating patterns of behavior and subtle physiological cues, as well as to flag deviations from normal behavior
• Enables new forms of social interaction and communication for community-based support by peers and establishing stronger social ties within family groups
LiveNet Mobile Technology
The LiveNet system is currently based on the Sharp Zaurus (Sharp Electronics Corporation, U.S.A.), a Linux-based PDA mobile device that leverages commercial development and an active code developer community. Although LiveNet can utilize a variety of Linux-based devices, the Zaurus PDA provides a very convenient platform. The device supports applications requiring real-time data analysis, peer-to-peer wireless networking, full-duplex audio, local data storage, graphical interaction, and keyboard/touch screen input.
In order to effectively observe contextual data, a flexible wearable platform must have a means to gather, process, and interpret this real-time contextual data [34]. To facilitate this, the LiveNet system includes a modular sensor hub called the Swiss-Army-Knife 2 (SAK2) board that can be used to instrument the mobile device for contextual data gathering.
The SAK2 is a very flexible data acquisition board that serves as the central sensor hub for the LiveNet system architecture. The SAK2 incorporates a powerful 40 MHz PIC microcontroller, high-efficiency regulated power (both 5 V and 3.3 V to power the board and sensor network) from a flexible range of battery sources, a 2.4 GHz wireless transceiver capable of megabit data rates, compact flash based memory storage, and various interface ports (I2C, RS-232 serial, daughter board connector).
The SAK2 board was designed primarily to interface a variety of sensing technologies with mobile device-based wearable platforms to enable real-time context-aware, streaming data applications. The SAK2 is an extremely flexible data acquisition hub, allowing for a wide variety of custom as well as third-party sensors to interface to it. In addition to being a sensor hub, the SAK2 can also operate in stand-alone mode (i.e., without a Zaurus or mobile PDA host) for a variety of long-term data acquisition (using the CF card connector) and real-time interactive applications.
Physiologic and Contextual Sensing Technology
In order to support long-term health monitoring and activities of daily living applications, a specialized, extensible, fully integrated physiological sensing board called the BioSense was developed as a special add-on board to the SAK2. The board incorporates a three-dimensional (3D) accelerometer, ECG, EMG, galvanic skin conductance, a serial-to-I2C converter (which allows the simultaneous attachment of multiple third-party serial-based sensing devices to the sensor network), and independent amplifiers for temperature/respiration/other sensors that can be daisy-chained to provide a flexible range of amplification for arbitrary analog input signals. Toward developing more non-invasive sensing technologies, we have started a collaboration with the Fraunhofer Institute to shrink the BioSense hardware to create a microminiaturized embedded system that can be incorporated in wearable fabrics. A prototype of a working lead-less lightweight ECG shirt based on conductive textiles has already been created.
Along with the core physiological sensing capabilities of the LiveNet system with the BioSense daughter board, a whole host of other custom and third-party sensors can be seamlessly integrated with the system, including: • Wearable Multiple Sensor Acquisition (WMSAD) Board, providing a 3D accelerometer, infrared (IR) tag, IR tag readers (vertical, for indoor location in place of GPS, and horizontal for peer or object identification), and microphone for telephony-grade 8-kHz audio. This is interfaced to the SAK2 via the I2C port [11].
• Squirt IR Tags: IR beacons that can broadcast unique identifiers (up to 4 independent signals from separately mounted and direction-adjustable IR-LEDs) [12]. These can be used to tag individuals, objects, and locations (such as used in arrays on the ceiling to identify location to within meter resolution in indoor settings where GPS is not effective), or even as environmental sensors to identify the actuation of certain events such as the opening/closing of drawers, cabinets, or doors.
• IR Tag Reader: to be used in conjunction with the Squirt tags to be able to identify tagged objects, people, or even locations [12].
• Accelerometer Board: a 3D accelerometer board useful for a variety of activity classification tasks. It has been demonstrated that a single accelerometer board can be used to accurately classify activity state (standing, walking, running, lying down, biking, walking up stairs, etc.). It is interfaced to the SAK2 using the sensor port.
• BodyMedia SenseWear: an integrated health sensor package which provides heart rate (via a Polar heart strap), galvanic skin response, 2D accelerometer, temperature (ambient and skin), and heat flux in a small form-factor package worn on the back of the arm [4]. The SAK2 can interface to the SenseWear wirelessly via a 900-MHz transceiver attached to the serial-to-I2C bridge (the transceiver interface and heart rate monitor were discontinued in the SenseWear Pro 2).
• MITes Environmental Sensor: a wireless 3D accelerometer using the nRF 2.4 GHz protocol, developed by the house_n group at the Media Lab as a wireless environmental sensor for monitoring human activities in natural settings [13].
• Socio-Badges: multifunctional boards with an on-board DSP processor capable of processing audio features, an RF transceiver, an IR transceiver, a brightness-controllable LED output display, vibratory feedback, a navigator switch, flash memory, audio input/microphone, and an optional LCD display [14]. This badge is meant for social-networking experiments and other interactive distributed applications.
The sensor hub also allows us to interface with a wide range of commercially available sensors, including pulse oximetry, respiration, blood pressure, EEG, blood sugar, humidity, core temperature, heat flux, and CO2 sensors. Any number of these sensors can be combined through junctions to create a diversified on-body sensor network.
The LiveNet system can also be outfitted with Bluetooth, Secure Digital (SD), or Compact Flash (CF) based sensors and peripherals, and other I/O and communication devices including GSM/GPRS/CDMA/1×RTT modems, GPS units, image and video cameras, memory storage, and even full-VGA head-mounted displays.
With the combined physiological sensing board and third-party sensors, a fully outfitted LiveNet system can simultaneously and continuously monitor and record 3D accelerometer, audio, ECG, EMG, galvanic skin response, temperature, respiration, blood oxygen, blood pressure, heat flux, heart rate, IR beacon, and up to 128 independently channeled environmental activity sensors. The sensor data and real-time classification results from a LiveNet system can also be streamed to off-body servers for subsequent processing, trigger alarms or notify family members and caregivers, or displayed/processed by other LiveNet systems or computers connected to the data streams for complex real-time interactions.
Software
The software architecture allows designers to quickly design distributed, group-based applications that use contextual information about the members of a group. Layered on top of standard libraries, this middleware comprises three important parts: the Enchantment Whiteboard, the Enchantment Signal system, and the MIThril Real-Time Context Engine [10]. Respectively, these three layers provide the ability to easily coordinate between distributed applications, transmit high bandwidth signals between applications, and create classification modules that make a group's changing contextual information available to applications. The Enchantment Whiteboard system is a distributed, client/server, inter-process communication system that provides a lightweight way for applications to communicate. This system processes, publishes, and receives updates, decoupling information from specific processes. This is particularly useful in mobile, group based applications where group members may not be known a priori and may come and go over time.
For higher bandwidth signals, especially those related to the sharing and processing of sensor data for context-aware applications, we developed the Enchantment Signal system. The Signal system is intended to facilitate the efficient distribution and processing of digital signals in a network-transparent manner. The Signal system is based on point-to-point communications between clients, with signal "handles" being posted on Whiteboards to facilitate discovery and connection. In the spirit of Whiteboard interactions, the Signal API abstracts away any need for signal producers to know who, how many, or even whether there are any connected signal consumers.
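The Enchantment API itself is not documented here, so the following minimal Python sketch uses hypothetical names to illustrate the whiteboard pattern described above: producers post updates under a path, and decoupled consumers subscribe to those paths without knowing who, or how many, producers exist.

```python
from collections import defaultdict

class Whiteboard:
    """Toy whiteboard-style publish/subscribe store (hypothetical names; not the
    actual Enchantment API). Producers post values under a path; consumers
    subscribe to paths and are notified on every update, so neither side needs
    to know about the other."""

    def __init__(self):
        self._values = {}                      # path -> latest value
        self._subscribers = defaultdict(list)  # path -> list of callbacks

    def post(self, path, value):
        self._values[path] = value
        for callback in self._subscribers[path]:
            callback(path, value)

    def subscribe(self, path, callback):
        self._subscribers[path].append(callback)

# A classifier posts its result; a feedback display consumes it without direct coupling.
wb = Whiteboard()
wb.subscribe("/livenet/context/activity", lambda path, value: print(path, "->", value))
wb.post("/livenet/context/activity", "walking")
```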
The MIThril Real-Time Context Engine is an open-source, lightweight, and modular architecture for the development and implementation of real-time context classifiers for wearable applications. Using the context engine, we can implement lightweight machine learning algorithms (capable of running on an embedded system like the Zaurus PDA) to process streaming sensor data, allowing the systems to classify and identify various user-state contexts in real time.
Sample Applications
Health and Clinical Classification
The LiveNet system has proven to be a convenient, adaptable platform for developing real-time monitoring and classification systems using a variety of sensor data, including accelerometer-based activity-state classification that can differentiate between a variety of activities (for example, running, walking, standing, biking, climbing stairs) [15], accelerometer-based head-nodding/shaking agreement classifiers, GSR-based stress and emotional arousal detectors, and audio-based speech feature classifiers that can help characterize conversation dynamics (for example, talking time, prosody, stress) [16].
Work on these real-time classifiers has also been extended to include a variety of health conditions. Examples of current collaborations between the MIT Wearable Computing Group [17] and healthcare providers have led to a variety of pilot studies, including a hypothermia study with the United States Army Natick Laboratories; a study on the effects of medication on the dyskinesia state of Parkinson's patients with neurologists at Harvard Medical School; a pilot epilepsy classifier study with the University of Rochester Center for Future Health; and a study of the course of depression treatment with psychiatrists at Harvard Medical School.
Critical Soldier Monitoring
Army Rangers and other soldiers must perform physically and mentally demanding tasks under challenging environmental conditions ranging from extreme heat to extreme cold. Thermoregulation, or the maintenance of core body temperature within a functional range, is critical to sustained performance. A research collaboration with the Army Research Institute for Environmental Medicine (ARIEM) at the Army Natick Labs was initiated to study the effects of harsh environments on soldier physiology through the use of non-invasive sensing. Specifically, non-invasive accelerometer sensing was used to determine hypothermia and cold exposure state, as part of a broader initiative to develop a physiologic monitoring device for soldiers under the U.S. Army's Objective Force Warrior Program.
In the study, a real-time wearable monitor was developed using the LiveNet system that is capable of accurately classifying shivering motion through accelerometer sensing and analysis using statistical machine learning techniques [18]. Real-time working classifier systems were developed from Gaussian mixture models using frequency features derived by calculating a fast Fourier transform (FFT) on the raw accelerometer data. Preliminary data demonstrate that shivering can be distinguished from general body movements in various activities with up to 95% accuracy using continuous accelerometer sensing.
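As a rough illustration of this approach (FFT-derived frequency features fed to per-class Gaussian mixture models), the sketch below uses scikit-learn with hypothetical window parameters; it is not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fft_features(window, n_bands=8):
    """Band-limited spectral energies of one accelerometer window (shape: samples x 3).
    The window length and number of bands are assumptions, not values from the paper."""
    mag = np.linalg.norm(window, axis=1)                   # combine the three axes
    spectrum = np.abs(np.fft.rfft(mag - mag.mean())) ** 2  # power spectrum
    bands = np.array_split(spectrum, n_bands)
    return np.log1p([band.sum() for band in bands])

def fit_class_models(windows_by_class, n_components=3, seed=0):
    """Fit one Gaussian mixture model per class (e.g., 'shivering' vs. 'other')."""
    return {label: GaussianMixture(n_components, random_state=seed)
                   .fit(np.vstack([fft_features(w) for w in windows]))
            for label, windows in windows_by_class.items()}

def classify(window, models):
    """Assign the class whose mixture model gives the highest log-likelihood."""
    x = fft_features(window).reshape(1, -1)
    return max(models, key=lambda label: models[label].score(x))
```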
Results also indicate that specific modes of shivering may correlate with core body temperature regimes as a person is exposed to cold over time: subjects in the study all exhibited a light shiver at a characteristic frequency at the start of the protocol, which progressed into a noisier and more energetic shivering response spread across more frequency bands, and ended in dampened shivering toward the end of the protocol. In fact, preliminary results from six subjects show that we can triage a soldier into three core body temperature regimes (Baseline/Cold/Very Cold) with accuracies in the 92-98% range using hidden Markov model (HMM) techniques. HMM modeling has the advantage of being able to accurately model the time-dependent changes in shivering as an individual is exposed to cold. This exploratory research shows promise of eventually being able to develop robust real-time health monitoring systems capable of classifying cold exposure of soldiers in harsh cold environments with non-invasive sensing and minimal embedded computational resources.
Parkinson's Disease Monitoring
LiveNet promises to be especially effective for monitoring medical treatments. Currently, doctors prescribe medications based on population averages rather than individual characteristics, and they check the appropriateness of the medication levels only occasionally. With such a data-poor system, it is not surprising that medication doses are frequently over- or underestimated and that unforeseen drug interactions can occur. Stratifying the population into phenotypes using genetic typing will improve the problem, but only to a degree and only in limited ways currently.
Continuous monitoring of physiologic and behavioral parameters may be extremely effective in tailoring medications to the individual Parkinson's patient. In Parkinson's patients, there are a variety of symptoms and motor complications that can occur, including tremors (rhythmic involuntary motions), akinesia (absence or difficulty in producing motion), hypokinesia (decreased motor activity), bradykinesia (slowing of normal movement), and dyskinesia (abnormal or disruptive movements). For these patients to function at their best, medications must be optimally adjusted to the diurnal variation of these symptoms. In order for this to occur, the managing clinician must have an accurate picture of how a patient's symptoms fluctuate throughout a typical day's activities and cycles. In these situations, a patient's subjective self-reports are not typically very accurate, so objective clinical assessments are necessary.
An automated Parkinson symptom detection system is needed to improve clinical assessment of Parkinson's patients. To achieve this, Dr. Klapper, from the Harvard Medical School, combined the LiveNet system's wearable accelerometers with neural network algorithms to classify the movement states of Parkinson's patients and provide a timeline of how the severity of the symptoms and motor complications fluctuate throughout the day [19,20]. Two pilot studies were performed, consisting of seven patients, with the goal of assessing the ability to classify hypokinesia, dyskinesia, and bradykinesia based on accelerometer data, clinical observation (using standard clinical rating scales), and videotaping. Using the clinical ratings of a patient as the gold standard, the result was highly accurate identification of bradykinesia and hypokinesia. In addition, the studies classified the two most important clinical problems -predicting when the patient "feels off" or is about to experience troublesome dyskinesia -with nearly 100% accuracy. Future collaborations will focus on integrating the physiologic responses in an effort to identify predictors of relapse in addition to the motion data in Parkinson's patients.
Epilepsy Seizure Detection
A pilot study has also been initiated with the University of Rochester's Strong Hospital [21] to characterize and identify epileptic seizures through accelerometry and to begin to develop an ambulatory monitor with a real-time seizure classifier using the LiveNet system. Typically, epilepsy studies focus on EEG- and EMG-based physiology monitoring. However, as demonstrated by the Parkinson's and activity classification studies, accelerometry is a very powerful context sensor that can be applied to the domain of epilepsy. The study protocol is currently being designed, and we hope to have subjects run in the fall of 2005.
Of particular note is the fact that epilepsy can manifest itself in an extremely wide range of idiosyncratic motions, in contrast to Parkinson's patients, whose movements typically follow distinct, characteristic patterns. However, the motions from the epileptic seizures of a particular individual are normally fairly consistent. As such, a motion classification system specifically tailored to a particular individual could be highly effective at identifying an epileptic seizure at onset and at subthreshold levels of awareness. In addition, the epileptic individual often has no recollection of a seizure, so a system that could determine whether a seizure has occurred could be very useful for doctors to properly diagnose the type and pattern of epilepsy in patients, or to develop applications that alert caregivers to changes that could lead to medication adjustments earlier in the course of the illness. Again, future directions will involve continuous physiologic and voice feature analysis in combination with the motion sensors to increase the accuracy and understanding of patients with epilepsy.
General Activity Classification
Being able to predict an individual's immediate activity state is one of the most useful sources of contextual information. For example, knowing whether a person is driving, sleeping, or exercising could be useful for a health wearable to calculate general energy expenditure or to initiate an action. Many studies on activity classification have been conducted because of its importance to context-aware systems. Most previous studies on accelerometer-based classification of multiple activity states focus on using multiple accelerometers and require the specific placement of sensors on different parts of the body. In one study, it was shown that it is possible to obtain an average activity classification accuracy of 84% for 20 daily activities (such as vacuuming, eating, folding laundry, etc.), with the additional finding that classification accuracy dropped only slightly when decreasing the number of sensors to two (wrist and waist) [22].
In contrast, we have conducted a pilot study on the use of the minimum set of sensors required to accomplish accurate activity classification. The ultimate goal is to use only a single sensor in random orientation placed close to a person's center of mass (i.e., near waist level), as this replicates the minimum setup requirements of a sensor-enabled mobile phone in the pocket of an individual. The goal is to demonstrate that accurate activity classification can be performed without the need for an extreme level of instrumentation (for example, some systems use up to 30 sensors [23]) or particular delicacy in the setup in order to achieve good classification results. This way, we are able to potentially reduce the cost of a recognition system as well as reduce the overall burden of using the technology. Using a LiveNet system, we have been able to discriminate between a set of major activities (for example, lying down, walking, running, sitting in the office, watching TV, and walking up/down stairs) with classification results in the 80-95% accuracy range using only a single accelerometer located on the torso of an individual [15]. This research is important as it indicates that it is feasible to do activity classification on embedded hardware without any specialized setup, wires, or other unwieldy parts. By integrating the accelerometer into an existing device that people are comfortable carrying around (for example, a cell phone), we can significantly lower the bar for bringing a practical activity classification system to the mass market that is completely transparent to the user. When combined with physiological measurements such as heart rate and breathing rate, these measurements can be collected to build a personalized profile of your body's performance and your nervous system's activation throughout your entire day, and assembled over a period of months or years to show long-term changes in overall cardiac fitness. In the future, computer software 'agents' (automatic computer programs) could even give you gentle reminders to keep up your routine if your activity level started to decline and make suggestions to optimize your performance.
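A minimal sketch of the kind of orientation-tolerant, single-sensor pipeline this implies is shown below; the window size, feature set, and classifier are hypothetical choices, not the published implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(acc, fs=50.0):
    """Features from one window of raw 3-axis accelerometer data (shape: samples x 3).
    Working on the signal magnitude keeps the features largely insensitive to how the
    sensor happens to be oriented in a pocket or on the torso."""
    mag = np.linalg.norm(acc, axis=1)
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    return np.array([
        mag.mean(),                           # overall motion intensity
        mag.std(),                            # variability of motion
        freqs[spectrum.argmax()],             # dominant movement frequency (Hz)
        (spectrum ** 2).sum() / len(mag),     # total spectral energy
    ])

def train_activity_classifier(windows, labels):
    """Train a lightweight classifier suitable for an embedded device."""
    X = np.vstack([window_features(w) for w in windows])
    return DecisionTreeClassifier(max_depth=5).fit(X, labels)
```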
Depression Therapy Trending
Mental diseases rank among the top health problems worldwide in terms of cost to society. Major depression, for instance, is the leading cause of disability worldwide and in the U.S. Depressive disorders affect approximately 19 million American adults and have been identified by both the World Health Organization and the World Bank as the second leading cause of disability in the United States and worldwide [28,29].
Toward understanding the long-term biology associated with severe depression, we have recently initiated a pilot study to assess the physiological and behavioral responses to treatment of major depression in subjects in an inpatient psychiatric unit prior to, during, and following electroconvulsive therapy (ECT). This study, the first of its kind, intends to correlate basic physiology and behavioral changes with depression and mood state through 24-hour, long-term, continuous monitoring of clinically depressed patients undergoing ECT. We are using non-invasive mobile physiologic sensing technology in combination with sensing devices on the unit to develop physiological and behavioral measures to classify emotional states and track the effects of treatment over time. This project is a joint collaboration with the Massachusetts General Hospital (MGH) Department of Psychiatry.
The goal of this study is to test the LiveNet system based on the known models of depression and prior clinical research in a setting with combined physiologic and behavioral measures and continuous ambulatory monitoring. It is anticipated that changes in these measures (namely, GSR response, heart rate/heart rate variability, motor activity, vocal features, and movement patterns) will correlate with improvements in standard clinical rating scales and subjective assessment following treatment for depression throughout the course of hospitalization. In the future, these correlates may be used as predictors of those patients most likely to respond to ECT, as early indicators of clinical response, or for relapse prevention.
The collaboration with MGH will serve to establish the LiveNet system's capabilities for engaging in significant long-term ambulatory clinical studies. The implications and clinical significance of the proposed research are broad. The development and refinement of a methodology that objectively and accurately monitors treatment response in major depression has implications for the diagnosis, treatment, and relapse prevention. Once a reliable index of physiologic and behavioral metrics for depression has been established, other environments outside of the inpatient setting become potential targets for assessment. The methodology developed also has the potential to help in the assessment, early diagnosis, and treatment prediction of other severe psychopathologies that have likely physiologic correlates and involve difficulty with social interactions. These include communication disorders and pervasive developmental disorders in children as well as the pre-clinical assessment of severe psychotic, mood, anxiety, and personality disorders. As our understanding of the central nervous system control of autonomic arousal improves and the neurobiology of depression continues to be discovered, future questions about the subtypes of depression and endophenotyping for genetic studies of depression can be studied in the ambulatory setting. This will lead to more sophisticated neurobiologic models of the mechanism of healing and ultimately to increased efficiency and efficacy of treatment.
Quantifying Social Engagement
Social interaction is a complex and ubiquitous human behavior involving attitudes, emotions, nonverbal and verbal cues, and cognitive function. Importantly, impairment in social function is a hallmark for nearly every diagnostic category of mental illness including mood and anxiety disorders as well as dementia, schizophrenia, and substance abuse [24]. In addition, social isolation can be a significant stress for patients undergoing rehabilitation from surgical and medical procedures and illnesses. Thus, an important challenge for our behavior modeling technology is to build computational models that can be used to predict the dynamics of individuals and their social interactions.
Using LiveNet, we can collect data about daily interactions with family, friends, and strangers and quantify information such as how frequent the interactions are, the dynamics of the interactions, and the characteristics of such interactions, using simple infrared (IR) sensors and IR tags to identify individuals. Using simple voice features (such as talking/non-talking, voice patterns, and interactive speech dynamics measures) derived from microphones, we can obtain a variety of useful social interaction statistics. We can even model an individual's social network and how that network changes over time by analyzing statistical patterns of these networks as they evolve [25]. Data on social function can be used both as a marker of improvement or rehabilitation progress and as an indicator of relapse and for use in relapse prevention.
Long-Term Behavior Modeling and Trending
The LiveNet platform also lends itself naturally to be able to do a wide variety of long-term healthcare monitoring applications for physiological and behavioral trends that vary slowly with time by using the currently available physiological sensors. This has important implications for rehabilitation medicine. The ambulatory physiological and contextual sensing and the health classifiers discussed in Section 3.1 can be combined together in a hierarchical manner to develop time-dependent models of human behavior at longer timescales. Current systems are purely reactive (e.g. sounding an alarm after a person has a heart attack or falls down), and are dependent on classifying and determining in real-time when certain events have occurred.
While this type of application is very useful and potentially life-saving, these systems typically do not have any sense of the history of an individual and can only react to instantaneous events. By combining long-term trending with multimodal analysis, it is possible to develop more proactive systems and personalized data that can be used to catch problems before they manifest themselves (e.g., instead of reacting to a heart attack, one can predict beforehand that a heart attack is imminent). However, a proactive system requires more resources, as it must have context-aware and inference capabilities to be able to determine what the right information is to be directed at the right people, to the right places, at the right times, and for the right reasons. While challenging, small advances have been made in this regard.
In order to fully accomplish the goal of preventive monitoring, large databases in living situations are needed. The LiveNet system provides a convenient infrastructure to implement and rapidly prototype new proactive healthcare applications in this domain. It is very important from the proactive healthcare point of view that these individual classification systems also be able to determine trends in physiological/contextual state over time to provide not only immediate diagnostic power but also prognostic insight. We are collaborating on the MIT/TIAX PlaceLab, a cross-institutional research smart living environment [26], to provide a very robust infrastructure to be able to collect and study long-term health information in conjunction with data collected by LiveNet systems. We also have a collaboration with British Telecom to use LiveNet technology in similar long-term naturalistic home monitoring applications for eldercare.
The information collected from the multimodal sensors can then be used to reconstruct activities of daily living, important information for profiling a person's healthy living style. Furthermore, these activities of daily living can initiate action on the part of the wearable PDA. Examples include experience sampling, a technique to gather information on daily activity by querying the user at the point of activity (which can be set to trigger based on movement or other context sensed by the PDA). The system can also proactively suggest alternative healthy actions at the moment of decision, an approach that has been demonstrated to be more effective at eliciting healthy behavior [27].
Real-Time Multimodal Feedback Systems in Rehabilitation
An obvious domain for LiveNet is in physiology monitoring with real-time feedback and classification. The dominant healthcare paradigm has been to stream physiology data from an individual to a centralized server, where higher-power processing and data visualization could be performed to post-process the data. The wearable system served mainly as a data acquisition vehicle, with little feedback or interaction capability. Now, significant localized processing as well as display of the results is possible, which opens the door to real-time interactive health applications. In fact, commercial systems are just beginning to incorporate these types of increased functionality.
It is possible to use a system such as LiveNet to go a step further and demonstrate that mobile systems are capable of significant local processing for real-time feature extraction and context classification, as well as providing the distributed wireless infrastructure for streaming information between systems, all on commodity hardware that is commonplace and available today. This enables the real-time classification of medical conditions without the need for other infrastructure, available wherever the individual goes. The distributed nature of LiveNet also allows systems to stream raw physiology, or its combination with derived metadata/context, very easily to any specified destination, whether other mobile systems, data servers, or output displays such as projections.
Figure 1. LiveNet wearable performing real-time FFT analysis and activity classification on accelerometer data, visualizing the results, and wirelessly streaming real-time ECG/GSR/temperature and classification results to a remote computer with a projection display as well as to peer LiveNet systems.

By providing local processing capabilities, the time required to receive feedback for relevant health events is dramatically reduced. Historically, the time delay required to receive feedback could be weeks, which is problematic because of the iterative nature of determining the optimal treatment path. For people who are on medication or embarking on a prolonged rehabilitation schedule, for example, this delay in the feedback loop is particularly onerous. A doctor will recommend a dosage and medication regimen to try out, and the person goes home and tries the medication schedule
for a while. If the person does not respond favorably to this drug schedule, they have to reschedule an appointment with the doctor, go in, and potentially take more tests, before getting a recommendation on a new schedule (such is the case for people with thyroid conditions, for example). This same scenario is also true with lengthy rehabilitation programs such as cardiac rehabilitation. This results in a very time consuming process as well as a significant drain on healthcare resources. In addition to the fact that the feedback loop can be very long in duration, the doctor is literally in the dark about the efficacy of the treatment, and so an iterative trial-and-error process is required. Using the LiveNet system, it is possible to effectively reduce the time delay to process and receive health feedback. This is particularly true when the doctor can be either removed from the equation or visit length and frequency can be reduced, such as with real-time diagnostic systems that can provide effectively instantaneous classification on health state and context. Given that medication compliance is a major healthcare issue, especially among the elderly, with estimated costs of upwards of $100 billion annually [30], systems that can help remind and support compliance with appropriate feedback will help to promote healthy, preventive behavior.
Figure 2. LiveNet system, composed of the Zaurus PDA (top left), with SAK2 data acquisition/sensor hub and BioSense physiological sensing board (middle), battery source (top right), sensor bus hub (lower right), 3D accelerometer board (middle left), and WMSAD multisensor board (lower left).

Also, more personalized medication and rehabilitation scheduling can be developed based on measured, quantitative physiological symptoms and behavioral responses, rather than only on the time-scheduled approximations of current practice. The effects of medication and rehabilitation treatment can be logged and recorded quantitatively and compared to changes in physiology, eventually with the goal of developing a real-time monitoring and drug delivery system. Future research will extend this work by developing real-time, closed-loop systems that can track the effects of individualized treatment over time.
Time-stamping of relevant events (either simple events or elaborated notes) in order to correlate these events with accurate continuous physiology and behavior data in an ambulatory setting also offers great potential. From this health information, real-time correlations to specific medical conditions as well as predictions of adverse outcomes can be made. This also has a potential impact on physiology research in the domain of clinical medication and intervention studies, providing a streamlined path for Electronic Data Capture (EDC), where accurate reporting is an issue and human transcription errors and recall bias from surveys can be reduced. Potential benefits provided by continuous monitoring include automated and real-time data capture from patients for accurate reporting, feedback and notification for enforcing medication compliance in patients, assessment of the degree of medication compliance, removal of the human error inherent in manual transcription/data entry, high-resolution time-stamping for accurate temporal characterization of events, and the ability to accurately correlate quantitative physiologic data to events for diagnosis and characterization.
Conclusion
The ability to provide new wearable technology for medical and surgical rehabilitation services is emerging as an important option for clinicians and patients. Wearable technology provides a convenient platform to be able to quantify the long-term context and physiological response of individuals. This, in turn, will support the development of individualized treatment systems with real-time feedback to help promote proper behavior. The ultimate goal of the research is to eventually be able to use LiveNet for developing practical monitoring systems and therapeutic interventions in ambulatory, long-term use environments.
Figure 3. LiveNet wearable configured for non-invasive real-time soldier physiology monitoring.
Figure 4. LiveNet wearable streaming real-time ECG/motion/stress information to a remote display over a wireless network link.
"Computer Science"
] |
TECHNICAL AND ECONOMIC FEASIBILITY OF USING A VARIABLE-FREQUENCY DRIVE IN MICRO-IRRIGATION SYSTEMS
Irrigation is essential for the development of crops in regions with scarce or irregularly distributed rainfall, enabling high productivity. However, the use of water resources and electrical energy raises concerns about irrigation efficiency. Pressure demand varies during the operation of irrigation systems, and the appropriate pressure can be regulated by variable-frequency drives in the power supply of the motor-pump set. This study aimed to analyze the technical and economic feasibility of using a variable-frequency drive to adjust the pressure in subunits of micro-irrigation systems. Laboratory tests were carried out to determine the electrical power consumed in each irrigated subunit for different slopes, with and without the variable-frequency drive. An economic analysis was then carried out considering the electricity tariff for group B and the rural consumer class, as well as different annual irrigation times. The results showed the potential for energy saving with the use of the variable-frequency drive, and the economic analysis showed that the variable-frequency drive was a better alternative than the dissipative method.
INTRODUCTION
Pumping systems for irrigation are designed to work in the condition of maximum flow demand and total manometric head to meet all irrigation subunits (Lamaddalena & Khila, 2012;Khadra et al., 2016;Brar et al., 2017). However, some subunits require less power even at the time of maximum demand, and dissipative methods are commonly used to adjust the motor-pump operating point (Carvalho et al., 2000;Araújo et al., 2006).
Significant differences in power demand may occur between micro-irrigation subunits due to factors such as the slope and/or length of the different lateral and manifold lines, resulting in variable manometric heads required to meet each subunit.
Pressure regulators and, in some cases, self-compensating emitters are required to adjust the pressure and flow in the subunits, allowing the pumping system to cover the entire area to be irrigated (Barreto Filho et al., 2000; Oliveira & Figueiredo, 2007). However, the use of these devices causes the dissipation of hydraulic energy, which is reflected in an increase in the consumption of electrical energy by the pumping system (Lamaddalena & Khila, 2013; Khadra et al., 2016).
Variable-frequency drives can provide better energy use efficiency in pumping systems, as the operation point of the motor-pump set is adjusted to the design point of each subunit by varying the motor supply frequency, acting on the control of its rotation (Araújo et al., 2006;Viholainen et al., 2013;Sungur et al., 2016;Valer et al., 2016). Thus, its use can provide an improvement in the pressure and flow control of irrigation systems, besides saving energy by avoiding the dissipation of hydraulic energy in pressure regulator devices (Burt et al., 2008).
Variable-frequency drives have been used in center pivot irrigation systems, and the economic feasibility of using the equipment in these systems has been demonstrated in several studies (Moraes et al., 2014; Lima et al., 2015; Brar et al., 2017). Micro-irrigation systems also present desirable characteristics for the use of this technology, and studies that focus on increasing energy efficiency are required, even if these systems are considered to have lower power demand than other pressurized systems.
Thus, this study aimed to evaluate the technical and economic feasibility of using a variable-frequency drive to trigger the pump set in micro-irrigation systems.
MATERIAL AND METHODS
This study was developed at the Laboratory of Hydraulics and Irrigation of the Department of Engineering, belonging to the Institute of Technology of the Federal Rural University of Rio de Janeiro (UFRRJ).
Manometric heads required for each micro-irrigation subunit were evaluated considering three slope conditions (0, 5, and 10%) in the direction of the longest terrain length and 0% in the direction of the shortest length. Two methods were considered to regulate pressure at the beginning of the manifold: the use of valves (dissipative method) and variable-frequency drives (non-dissipative method).
The pressures at the outlet of the motor-pump set and the point corresponding to the start of the manifold were measured using the pressure transducer MSI 300-250-P-3-N. A digital multimeter (Minipa ET-3110) was used to acquire electrical variables (potential difference and current).
The following factors were considered to obtain the maximum area to be irrigated: highest yield flow of the motor-pump set (8 m³ h⁻¹), net irrigation depth of 5 mm, and time available for irrigation of 16 h d⁻¹. An irrigation frequency of 1 day was adopted, thus allowing each subunit to be irrigated individually, according to the irrigation time available. Under these conditions, the area to be irrigated was 2.3 ha. Thus, a rectangular area (210 × 110 m) was divided into 14 subunits of 55 × 30 m (Figure 1), providing manifold lengths of 28.5 m and a main line of 195 m. Each irrigation subunit was composed of 10 lateral lines spaced 3 m from each other, with a length of 54 m. Subunits located at the same terrain elevation (Figure 1) had the same pressure demand on the motor-pump set (1 and 14, 2 and 13, 3 and 12, 4 and 11, 5 and 10, 6 and 9, and 7 and 8).

A dripper tube with integrated drippers with a flow of 4.3 L h⁻¹ under a pressure load of 10 m, nominal diameter of 16 mm, spacing between emitters of 0.4 m, and a flow regulation mechanism as a function of pressure (regulated emitter), whose exponent of the emitter flow-pressure equation tends to zero, was considered for the hydraulic dimensioning of the lateral line. These emitter characteristics made it possible to determine the maximum pressure variation allowed on the lateral line, which was assumed to be the operating pressure load range of the emitter (10 to 30 m), i.e., the maximum allowed variation was 20 m. This procedure was adopted because the theoretical variation allowed for a regulated emitter tends to infinity [eq. (1)]. Where: ΔP is the pressure load variation in the subunit (m); qvar is the allowable flow variation (dimensionless); x is the exponent of the flow equation as a function of the emitter pressure (dimensionless), and Ps is the emitter working pressure load (m).
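Equation (1) is not reproduced in the text above; the hedged derivation below, assuming the usual power-law emitter equation q = k h^x (an assumption on our part), illustrates why the admissible pressure variation grows without bound as the emitter exponent x tends to zero.

```latex
% Assuming the power-law emitter equation q = k h^x, a pressure-load variation
% \Delta P around the working pressure P_s produces a flow variation
\[
  \frac{q_{\max}}{q_{\min}}
    = \left(\frac{P_s + \Delta P}{P_s}\right)^{x}
    \;\longrightarrow\; 1
    \quad \text{as } x \to 0 ,
\]
% so for a regulated emitter (x -> 0) any finite \Delta P keeps the flow variation
% q_{var} within tolerance; the theoretically admissible \Delta P therefore tends to
% infinity, and the practical limit becomes the emitter's operating pressure range
% (10--30 m in this study).
```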
The dimensioning of the lateral line and manifold used eqs (2) to (5). Where: hf is the pressure drop on the lateral line or manifold (m); Q is the flow in the pipes (m³ s⁻¹); L is the length of the pipes (m); F is Christiansen's head loss reduction factor (dimensionless); D is the pipe diameter (m); N is the number of outlets on the lateral line or manifold (dimensionless); λ is the local head loss factor (dimensionless); ΔZ is the topographic gap (m); Ps is the emitter working pressure load (m); PinLL is the pressure load at the start of the lateral line (m), and PinBL is the pressure load at the start of the manifold (m).
The local head loss factor was determined according to Gomes et al. (2010) (Equations 6 to 9). Where: OI is the obstruction index (dimensionless); K is the local head loss coefficient (dimensionless); At is the tube cross-section (m²); Ag is the internal dripper cross-section (m²); f is the friction factor (dimensionless); Se is the spacing between emitters (m), and Leq is the equivalent length (m).
With a lateral line length of 54 m, a flow of 0.5805 m³ h⁻¹, an internal diameter of 13.8 mm, and a local head loss factor of 1.99, the resulting head loss was 4.86 m, which is within the limit of half the pressure variation allowed in the subunit (10 m). The pressure required at the start of the lateral line was 13.6 m. A nominal diameter of 32 mm (internal diameter of 28.4 mm) was adopted for the manifold, with a flow rate of 5.805 m³ h⁻¹, a length of 28.5 m, and a head loss factor of 1.32, resulting in a head loss of 3.26 m and meeting the criteria established for the dimensioning.
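Eqs (2) to (5) are not reproduced in this excerpt, so the sketch below is only an illustrative reconstruction of the lateral-line head loss calculation: it assumes the Darcy-Weisbach equation with the Blasius friction factor (a common choice for smooth polyethylene dripper tubing), Christiansen's reduction factor F, and the local head loss factor applied as a simple multiplier. With the figures quoted above (54 m, 0.5805 m³ h⁻¹, 13.8 mm internal diameter, factor 1.99, and 135 emitters at 0.4 m spacing) it returns about 4.85 m, close to the reported 4.86 m, although the exact formulation used in the study may differ.

```python
import math

def christiansen_F(N, m=1.75):
    """Christiansen head-loss reduction factor for N equally spaced outlets."""
    return 1 / (m + 1) + 1 / (2 * N) + math.sqrt(m - 1) / (6 * N**2)

def lateral_head_loss(Q_m3h, L, D, N, local_factor=1.0, nu=1.004e-6, m=1.75):
    """Head loss (m) of a multi-outlet lateral: Darcy-Weisbach with the
    Blasius friction factor, reduced by Christiansen's F and scaled by the
    local head loss factor (assumed formulation, not the paper's eqs 2-5)."""
    Q = Q_m3h / 3600.0                  # m3/s
    v = Q / (math.pi * D**2 / 4.0)      # mean velocity at the lateral inlet, m/s
    Re = v * D / nu                     # Reynolds number (water at ~20 C)
    f = 0.316 * Re**-0.25               # Blasius friction factor, smooth pipe
    hf_full = f * (L / D) * v**2 / (2 * 9.81)
    return hf_full * christiansen_F(N, m) * local_factor

# Figures quoted in the text: 54 m lateral, 0.5805 m3/h, 13.8 mm ID,
# 135 emitters (54 m / 0.4 m spacing), local head loss factor 1.99.
print(round(lateral_head_loss(0.5805, 54, 0.0138, 135, local_factor=1.99), 2))
```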
The velocity criterion (Equation 10) was used for the dimensioning of the main line piping, with lower and upper limits of 1.0 and 2.5 m s⁻¹, adopting a nominal diameter of 40 mm (internal diameter of 36.2 mm). The flow considered was 5.805 m³ h⁻¹, corresponding to the flow of the manifold (irrigation of one subunit at a time), which gives a velocity of 1.57 m s⁻¹. The head loss was determined using eq. (11), calculated for each subunit considering its distance to the control station. Where: v is the flow velocity (m s⁻¹), and hfML is the head loss on the main line (m).
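As a minimal check of the velocity criterion, assuming the usual continuity relation v = Q/A (eq. (10) itself is not reproduced in this excerpt), the snippet below recovers the 1.57 m s⁻¹ quoted for the main line:

```python
import math

def flow_velocity(Q_m3h, D):
    """Mean flow velocity (m/s) for flow Q (m3/h) in a pipe of internal diameter D (m)."""
    return (Q_m3h / 3600.0) / (math.pi * D**2 / 4.0)

v = flow_velocity(5.805, 0.0362)        # manifold flow routed through the main line
print(round(v, 2), 1.0 <= v <= 2.5)     # 1.57 m/s, inside the 1.0-2.5 m/s window
```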
The manometric head required in each subunit was determined from the combined head loss at the control station and pump suction (2.9 m), a topographic gap that varies with the subunit in operation and the evaluated slope, and a head loss on the main line that varies with the distance from the subunit to the pump (Equation 12):

Hm = PinBL + hfML + hfCC + hfS + ΔZ

Where: hfCC is the head loss at the control station (m); hfS is the pressure drop at the pump suction (m); ΔZ is the topographic gap (m), and Hm is the manometric head (m).
The pressure in each subunit was adjusted by partially closing the gate valve (energy dissipative method) in the tests without the variable-frequency drive. In the tests with the variable-frequency drive, the adjustment was performed by controlling the motor supply frequency for each subunit. The head loss on the main line and the topographic gap were simulated by partially closing a gate valve installed in the discharge pipe, following the procedure adopted by Moraes et al. (2014). Table 1 shows the manometric head required in each subunit for slopes of 0, 5, and 10%. The electrical power demanded by the motor in the different scenarios was determined by measuring the current and potential difference with a digital multimeter (Equation 13). The power factor was obtained from the motor manufacturer's manual, considering the motor load in each evaluated situation.
Po = (√3 × V × I × cos α) / 1000

Where: Po is the consumed power (kW); V is the potential difference of the electrical grid (V); I is the electric current (A), and cos α is the power factor (dimensionless).
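A minimal sketch of Equation 13 follows; the voltage, current, and power factor used in the example call are hypothetical values chosen only to illustrate the calculation:

```python
import math

def consumed_power_kw(V, I, power_factor):
    """Three-phase electrical power (kW) from line voltage, line current, and power factor."""
    return math.sqrt(3) * V * I * power_factor / 1000.0

# Hypothetical reading, for illustration only: 380 V, 6.0 A, power factor 0.82.
print(round(consumed_power_kw(380, 6.0, 0.82), 2))
```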
The evaluated alternatives were compared based on the total annual cost (Equation 14), considering the sum of the energy cost and the investment cost of the variable-frequency drive (R$ 1,500.00) or the valves (R$ 3,806.18 for the 14 valves) in current monetary values.
TAC = I × [i(1 + i)^n / ((1 + i)^n − 1)] + C × Po × time

Where: TAC is the total annual cost (R$); i is the annual interest rate (dimensionless); n is the equipment useful life (years); I is the initial investment (R$); C is the electricity consumption tariff (R$ kWh⁻¹), and time is the annual irrigation time (h).
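Equation 14 annualizes the initial investment with the capital recovery factor and adds the annual energy cost. The sketch below follows that reading of the equation; the 2.5 kW average demanded power in the example call is a hypothetical value used only to show the arithmetic:

```python
def total_annual_cost(investment, interest, years, tariff, power_kw, hours):
    """Eq. (14): annualized investment (capital recovery factor) plus annual energy cost."""
    crf = interest * (1 + interest)**years / ((1 + interest)**years - 1)
    return investment * crf + tariff * power_kw * hours

# Study inputs: R$ 1,500.00 drive, 12.53% per year, 10-year life, R$ 0.51214/kWh;
# the 2.5 kW demanded power and 1000 h of irrigation are hypothetical.
print(round(total_annual_cost(1500.00, 0.1253, 10, 0.51214, 2.5, 1000), 2))
```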
The useful life of the equipment was considered to be 10 years, with annual irrigation times of 500, 1000, 1500, and 2000 h and an annual interest rate (Selic rate) of 12.53% per year, corresponding to the average of the rates set at the 229th and 230th meetings of the Central Bank of Brazil's Monetary Policy Committee (COPOM) (2017). The electricity tariff for group B (low voltage) and the rural consumer class was adopted, with monthly consumption above 300 kWh (R$ 0.51214 kWh⁻¹), according to LIGHT (2017).
The rural class was chosen because of the objectives of the study. Consumers in this category may benefit from tariff discounts when irrigating at night (Sá Júnior & Carvalho, 2016), although price differentiation by time of use is not evaluated in this study.
RESULTS AND DISCUSSION
The electrical power demanded in each subunit for the 0, 5, and 10% slope conditions, with and without variable-frequency drive, is shown in Figure 2. In general, the power demanded in each subunit is directly proportional to the pressure required in the motor-pump set, with the lowest and highest power observed in subunits 1 and 7, respectively.
The power required under the different slope conditions and across subunits was similar with the dissipative method (Figure 2), since the manometric head was kept practically constant so that the pump operating point also remained at the project point. The manometric head was adjusted by controlling the partial closing of the gate valve, as the pressure demand of each subunit is different.
The comparison of the scenarios with and without the variable-frequency drive showed a decrease in the demanded power when the drive was used. For the 0% slope, the demanded power showed no significant increase along the subunits, owing to the small difference in the manometric head required by the pumping system (Figure 2). However, the power required in the more distant subunits increased as the slope increased. This shows that the higher the slope, the higher the potential for energy savings when using the variable-frequency drive, as found by Moraes et al. (2014) in a center pivot prototype controlled through a variable-frequency drive.
The difference between the powers demanded with and without the variable-frequency drive decreased as the subunits became more distant from the motor-pump set under the 5 and 10% slope conditions (Figures 2B and 2C, respectively); that is, the greatest potential for reducing energy consumption occurs in the subunits closest to the motor-pump set.
The potential savings under the 0% slope condition ranged from 46 to 58%, with the lowest value in the most distant subunit. The 5% slope condition presented savings of 60% in subunit 1 and 22% in subunit 7. The 10% slope showed values between 54 and 8% of electrical power reduction.
The analysis of the area of the hydraulic project showed that the variable-frequency drive promoted a reduction in the demanded power of 51, 39, and 38% for the 0, 5, and 10% slope, respectively. Moraes et al. (2014) evaluated slopes of 0, 10, 20, and 30% for a center pivot and observed energy savings of 48, 37, 26, and 16%, respectively, demonstrating the importance of this factor in energy savings. Córcoles et al. (2019) evaluated the feasibility of using variable-frequency drives to capture water from wells for irrigation and found energy savings ranging from 6.8 to 23%, depending on the characteristics of the irrigated areas and the dynamic variations of the water depth in the well. Araújo et al. (2006) obtained a power reduction of around 30% when using a variable-frequency drive in a conventional sprinkler system, considering the number of lateral lines in simultaneous operation.
The fact that the 0% slope scenario promoted the highest reductions in consumption when the variable-frequency drive is used must be analyzed with caution because this result may vary depending on the power of the motor-pump set. In cases in which the motor-pump set is oversized, as in the analyzed case, the slope does not affect the pumping costs when the drive is not used because the excess pressure is dissipated as head loss in the control valve, increasing the power that can be saved using the variable-frequency drive. Ferreira et al. (2008) analyzed energy efficiency in pumping systems and found that the potential for electricity savings when using a variable-frequency drive instead of the dissipative method depends on the proximity of the project point to the motor-pump operating point. Figure 3 shows the total annual costs with and without the variable-frequency drive, for terrain slopes of 0, 5, and 10% and annual irrigation times of 500, 1000, 1500, and 2000 h. The total annual cost is higher when no variable-frequency drive is used, regardless of the annual irrigation time. This occurs because the initial investment for the acquisition of valves is higher than that for the variable-frequency drive, and the drive also reduces the cost related to energy consumption. The annual irrigation time was decisive for the feasibility of variable-frequency drives in other irrigation systems (Carvalho et al., 2000; Araújo et al., 2006; Córcoles et al., 2019). Araújo et al. (2006) determined the break-even point between the number of hours worked and the economic feasibility of variable-frequency drives in center pivot motor-pump sets, considering various drive powers associated with different working pressures. Córcoles et al. (2019) pointed out that the economic feasibility of this equipment depends on several factors, such as energy savings, drive cost, and applied water depths. Carvalho et al. (2000) evaluated the feasibility of using variable-frequency drives to control the flow in irrigation systems and concluded that the decisive variables were the annual irrigation time and the reduction in demanded power; however, the authors did not consider the reduction in costs related to the components of the irrigation systems because their analysis was generic. In the present economic analysis, pressure-regulating valves represented the largest portion of the initial cost of the micro-irrigation system.
The use of the variable-frequency drive promoted cost reduction under all the analyzed situations compared to the energy dissipative method, being higher for the 0% slope. An annual cost reduction of R$ 7,532.23 and R$ 2,088.68 was obtained for annual irrigation times of 2000 and 500 h, respectively, under this condition. In percentage terms, savings reached 71.43, 40.33, and 26.28% for the 0, 5, and 10% slope scenarios.
CONCLUSIONS

1. The use of the variable-frequency drive was technically feasible for adjusting the hydraulic power in the subunits of the micro-irrigation system, presenting the same technical performance as the energy dissipative method.
2. The use of the variable-frequency drive was the most economical alternative for all the evaluated conditions of relief and annual irrigation time.
| 4,176.2 | 2021-02-01T00:00:00.000 | ["Engineering"] |
Grouting Mechanism in Water-Bearing Fractured Rock Based on Two-Phase Flow
School of Emergency Management and Safety Engineering, China University of Mining and Technology (Beijing), Beijing 100083, China State Key Laboratory of Deep Coal Mining & Environment Protection, Coal Mining National Engineering Technology Research Institute, Huainan 232000, China School of Resources and Civil Engineering, Northeastern University, Shenyang 110819, China School of Energy and Safety, Anhui University of Science and Technology, Huainan 232001, China Coal Industry Branch of Huaihe Energy Group, Huainan 232000, China
Introduction
As one of the main energy resources, coal is widely used in power generation, heating, steel making, and other industrial production. Because of its geological endowment of abundant coal and scarce oil, China is highly dependent on coal. Driven by efficiency requirements and the exhaustion of shallow resources, coal mining is moving to greater depths, where it is constrained by complex geological conditions and mining technologies; accidents such as rock burst, gas explosion, water inrush, and roof fall occur frequently [1][2][3][4][5] and seriously threaten production safety. Therefore, corresponding prevention and control measures must be taken to keep mining safe, such as forced caving, gas drainage, and bolt-grouting [6,7]. To treat geological fractures, grouting has been widely used in mining engineering, tunnel engineering, and other major construction projects [8][9][10]. The grouting process injects cement or other slurries into the structure through injection bolts, which block the fractures and cement broken rock to achieve sealing and reinforcement [11]. Especially in mines threatened by floor confined water, the combined method of grouting and drainage has become the key technology to ensure the safety of mining production. Therefore, grouting equipment, materials, and their flow mechanisms have been studied in depth [12][13][14][15].
Grouting engineering is a continuous and complex procedure. In the fractures, the slurry is driven by the external pressure and diffuses. The diffusion range is the key to the grouting quality and is affected by the properties of the slurry, the flow channels, and the geological matrix [16]. Grouting slurries behave as different types of fluid with different hydraulic properties and are suited to different construction conditions [17][18][19]. To meet the needs of high-quality engineering, many chemical slurries, generally characterized as Newtonian fluids, have been developed to prevent leakage. However, cement is most often used for water plugging in coal mining. The flow characteristics of cement slurry are related to temperature and water-cement ratio (w/c), and the slurry can be classified as a Newtonian fluid (w/c > 1.0), a Bingham fluid (1.0 > w/c > 0.7), or a power-law fluid (w/c < 0.7) [20,21]. The flow pattern of the slurry plays a key role in its diffusion and must be considered in grouting calculations. Newtonian-type cement slurry is commonly used in coal mines to seal fractures completely for water plugging, and it can be injected into the fractured zone with grouting equipment. Therefore, the diffusion process of Newtonian grout in cracks has been widely studied, but further research is needed to meet engineering needs [15,22].
Conventional research calculates the parameters of a single slurry fluid by the Navier-Stokes (N-S) equation and Darcy's law, which can be used to estimate the consumption of actual projects [23][24][25]. However, parameters calculated by such simple formulas are often insufficient for the actual needs of the grouting process, so the grouting mechanism and methods still need in-depth study. As the basis of grouting construction and research, slurry diffusion in a single plane fracture has been studied extensively. Wittke and Wallner derived the pressure distribution and final range of slurry diffusion in a fracture idealized as two-dimensional, smooth, and nondeformable [26]. Funehag and Gustafson developed a penetration model of one-dimensional grouting flow in a channel with simplified parameters to calculate the penetration length of silica sol [27]. El Tani and Stille treated silica sol and cement grouting as channel flow in a plane fracture, analyzed the flow of silica solution and Bingham-type cement under the GIN method, and established a systematic analytical relationship between grouting diffusion and time [12]. Based on the common rheological Bingham model, Funehag and Thörn studied grouting diffusion in a single fracture [28]. To better reflect and simulate field conditions, the complexity, crisscrossing, and undulation of geological fractures have been considered in grouting calculations. Mu et al. studied the influence of roughness, dip angle, and coupling degree on slurry diffusion based on a single rough fracture model [29]. Yang et al. simulated grouting of a single rough rock fracture in a tunnel with a self-designed simulation device [30]. Numerical simulation of the slurry diffusion process in a single-slab fracture without external interference can thus be carried out, and this work has promoted the application and development of grouting technology in engineering. However, the accurate characterization of the slurry diffusion range, especially when an external environment such as flowing water is considered, still needs further study.
The external environment of grouting is complex: grouting is often carried out to block water, so there is obvious dynamic water interference during the grouting process, and many studies have addressed this. Based on experiments, Sui et al. established a slurry diffusion model in a single plane fracture under the effect of dynamic water and studied the distribution characteristics of the sealing effect under flowing-water conditions [31]. Yang et al. analyzed the influence of multiple factors on slurry diffusion and sealing performance for fracture grouting in a water environment [32,33]. Wang et al. pointed out that slurry penetration under flowing water is spatially heterogeneous and studied the mechanical behavior of slurry under water washing by constructing a rough rock fracture model [34]. Guo et al. established a theoretical model of grouting into a flowing fracture, derived the streamline equation of the grouting diffusion trajectory with flowing water while considering the boundary conditions, and analyzed the influence of the boundary effect on the grouting diffusion law [35]. Liu et al. simulated the flowing-water grouting process in rough rock fractures through a series of physical simulation tests and studied the influence of slurry composition on grouting fluid pressure and plugging effect [36]. Water flow has an important influence on hydraulic characteristics such as the pressure distribution and permeability of the slurry in the fracture [37]. When the cement slurry blocks the water flow, the water flow in turn acts on the cement slurry in several ways, such as scouring, dilution, and promoting diffusion. Accurate characterization of this process is of great significance for optimizing the grouting process and design.
There are significant differences in properties and flow patterns between cement slurry and water. Grouting is therefore not single-fluid diffusion of slurry but a process of slurry displacing the water flowing in the fracture, so slurry diffusion can be regarded as a two-phase flow of slurry and water. In two-phase flow theory, there are two main ways to track the interface: the level set method and the phase field method [38]. With these methods, the proportion of the two fluids in the calculation domain and the corresponding hydraulic parameters can be obtained, and the displacement-driven diffusion of slurry and water can then be solved with the level set method. In this way, the effective diffusion characteristics of cement under the action of water flow, especially flowing water, can be obtained [37]. The results obtained by tracing the diffusion boundary of the two-phase flow can be used to analyze the influence of fracture boundaries, water velocity, and other factors on grouting, and corresponding solutions can be put forward for different situations to meet the needs of actual projects. In this paper, the research is carried out based on the N-S flow equation and the level set method for two-phase flow. COMSOL is used to build a model of slurry diffusion in a fracture, and the diffusion characteristics of slurry in dynamic water are studied with the CFD module. Combined with the diffusion characteristics at engineering scale, corresponding solutions are proposed to better meet the needs of mining engineering.

Engineering Case

An overview of the mine is shown in Figure 1. The mine is divided into two levels and 20 mining areas. Production is concentrated at the first level, at -492 m, where three groups of coal seams (A, B, and C) are mined. The strata in the minefield are, from top to bottom, Quaternary, Tertiary, Permian, Carboniferous, Ordovician, and Cambrian, and the mined coal is hosted in the Permian. Most of the faults in the Zhangji mining area are normal faults with a mainly NE-trending strike; the more developed faults will affect the continuous advance of the mining face. The mine aquifers consist of the Cenozoic loose pore aquifer, the Permian sandstone fissure aquifer, and the limestone karst fissure aquifer.

The main water hazards in mining are limestone water, goaf water, coal-measure sandstone fissure water, and Cenozoic loose bed water, among which limestone water is the main threat to 1# coal mining. Coal mining is directly threatened by the Carboniferous limestone aquifer in the floor. The Ordovician limestone aquifer generally does not pose a direct threat to coal mining, but it can threaten it indirectly through water-conducting faults and hidden collapse columns.
Potential Danger of Water Inrush in the Stope Floor

The average thickness of the aquiclude in the floor is about 17 m. The limestone confined aquifer is an important water source for the mine. Given the multiple faults exposed in the driving roadway, dripping may occur in the mining face, and targeted water control measures are necessary under the influence of mining disturbance. In April 2018, water gushing appeared at 330 m depth in the 11# directional long borehole of the -600 m drainage roadway in the coal mining area. The initial water inflow was about 3 m³/h; after all the drill pipes were pulled out, the stable inflow was 220 m³/h and the water temperature was 40.5 °C. To further ensure mining safety, surface high-power time-frequency electromagnetic exploration and three-dimensional seismic exploration were used to interpret and refine the strata in the mining area, as shown in Figure 2.
Sensitive seismic attributes of the prestack depth migration profiles were examined. The area below the water outlet shows attribute anomalies, reflecting poor continuity of the strata and the possible presence of geological anomaly bodies. Seismic multi-attribute detection results were extracted by slicing along the bottom boundary of the Taihu limestone. There is an elliptical anomaly of maximum negative curvature near the outlet point, and the low-value anomaly of the minimum coherence attribute presents an "X"-shaped banded intersection, suggesting conjugate shear faults. Combined with the results of borehole exploration, it is concluded that the anomaly body is a fault fracture zone, and there may be local stratum subsidence near the fault intersection point. The elliptical unfavorable geological body is about 53 m long and 35 m wide. Therefore, it is necessary to carry out grouting in this region to reinforce the strata, shut off the water, and drain and depressurize, so as to reduce the risk of water inrush in the fractured fault zone.
Grouting Process.
The grouting process mainly relies on a ground grouting station to inject cement slurry into the stratum, as shown in Figure 3. Because the water-cement ratio changes the characteristics of the cement [21], a slurry with an appropriate water-cement ratio should be selected on site. The maximum diameter of the grouting holes is 350 mm, with an average of 250 mm. The grouting material is ordinary Portland cement with a water-cement ratio of 1.2~1.7 and no early-setting agent. Following previous research conclusions, this grouting project adopted the principles of injecting a high water-cement ratio (thin) slurry first and a lower ratio afterwards, and of combining continuous and intermittent grouting. The grouting methods adopted were stopping the slurry at the injection orifice and section-by-section downward grouting of horizontal branch holes or forward grouting.
Methodology
When coal and other mineral resources are exploited, water-bearing working faces can easily suffer inrush disasters through permeable passages in the rock stope that result from geological faults and structural planes. When grouting is used to block water in a fault fracture zone, the slurry usually spreads in two forms: (1) porous medium: in areas where the rock is seriously broken, the slurry flows in an approximately spherical diffusion mode under the external pressure, as shown in Figure 4(b); and (2) plane fracture: constrained by the rock walls, approximately radial diffusion centered on the grouting hole occurs in the main structural planes or permeable passages [39], as shown in Figure 4(a). To simplify the research and improve its generality for engineering, complex engineering problems are generally transformed into simplified 2D models, which are sufficiently representative [40]. The grouting engineering can therefore be simplified into a 2D model, as shown in Figure 4(c).
Grouting for water plugging is a process of displacing the groundwater that usually fills the fissures during plugging and reinforcement works. The slurry is hindered by groundwater as it flows. If other factors are ignored, the fully mixed slurry can be treated as a single-phase flow; grouting affected by water gushing, however, is essentially a slurry-water two-phase flow.
3.1. Radial Flow of Slurry. The slurry flow in fractures is governed by the mass and momentum balance equations. Based on the N-S equation, the governing equations of single-phase flow in a fracture can be given as follows [29,41,42]: where ρs is the slurry density, t is the time, us is the flow velocity, Ps is the pressure, τs is the shear stress, and Fg is the gravitational stress. Chemical materials such as polyurethane can be modeled as Newtonian fluids. The shear stress can take several forms: where τij is the shear stress tensor, m is the power-law coefficient, μs is the slurry viscosity, n is the power-law index, τ0 is the yield shear stress, and γ̇ij is the shear rate. Based on the w/c = 1~1.5 commonly used in engineering, and to simplify the calculation, the following assumptions are made: (1) the slurry and groundwater are both Newtonian and incompressible fluids; (2) gravitational forces and inertial effects are negligible; and (3) the fracture aperture is much smaller than its lateral dimensions. With these assumptions, the mass equations for single-phase flow of a Newtonian fluid in a homogeneous fracture can be simplified accordingly.
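The governing equations themselves are not reproduced in this excerpt. As a hedged illustration of a single-phase estimate consistent with the assumptions above, the snippet below uses the classical cubic-law solution for steady radial flow of a Newtonian fluid between smooth parallel plates, ΔP = 6μQ ln(R/r0)/(πb³); the viscosity (0.04 Pa·s) and hole radius (0.175 m) follow the case values quoted later, while the aperture and injection rate are hypothetical:

```python
import math

def radial_injection_pressure(Q, mu, aperture, R, r0):
    """Pressure drop (Pa) driving steady Newtonian radial flow between smooth
    parallel plates (cubic law) from a hole of radius r0 out to radius R."""
    return 6.0 * mu * Q * math.log(R / r0) / (math.pi * aperture**3)

# Hypothetical aperture (1 mm) and injection rate (2 L/s); viscosity and hole
# radius follow the engineering case (0.04 Pa*s, 0.35 m diameter hole).
dP = radial_injection_pressure(Q=2e-3, mu=0.04, aperture=1e-3, R=15.0, r0=0.175)
print(round(dP / 1e6, 2), "MPa")
```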
3.2. Two-Phase Flow. The grouting process involves two immiscible fluids, slurry and groundwater, in the fracture; in the radial direction the grout is enclosed by water. There is an interface between them in the displacement flow, which can be tracked by a phase transport equation. The level set method can be used to track the interface [38], and an interface equation can be given as follows: where Φ is the level set function, γ is the reinitialization parameter, and ε is the interface thickness controlling parameter.
The governing equations of the two-phase flow can be given by Eq. (6).
where Fv is the volume force, ρ is the density, μ is the dynamic viscosity, and Fst is the surface tension force, which can be given by Eq. (7).
where σ is the surface tension coefficient and n is the unit normal to the interface.
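The interface transport equation (Eq. (5)) and COMSOL's conservative level set formulation are not reproduced here, so the following is only a toy one-dimensional illustration of the idea: a smoothed slurry/water indicator is advected by the flow with a first-order upwind scheme, and the reinitialization terms controlled by γ and ε are deliberately omitted.

```python
import numpy as np

def advect_interface(phi, u, dx, dt, steps):
    """First-order upwind advection of a level-set-style indicator phi(x) by a
    constant velocity u; a 1D stand-in for the phase transport equation that
    omits the reinitialization terms."""
    for _ in range(steps):
        if u >= 0:
            dphi = phi - np.roll(phi, 1)    # backward difference
        else:
            dphi = np.roll(phi, -1) - phi   # forward difference
        phi = phi - u * dt / dx * dphi
        phi[0], phi[-1] = phi[1], phi[-2]   # crude non-periodic boundaries
    return phi

x = np.linspace(0.0, 1.0, 201)
phi0 = 1.0 / (1.0 + np.exp((x - 0.2) / 0.01))    # slurry (phi ~ 1) left of x = 0.2
phi = advect_interface(phi0, u=0.05, dx=x[1] - x[0], dt=0.05, steps=100)
print(round(x[np.argmin(np.abs(phi - 0.5))], 3))  # interface has moved to about x = 0.45
```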
3.3. Groundwater in the Fracture. The water sources in underground coal mining can generally be divided into goaf water, surface water, confined water, etc., and floor water inrush disasters are most easily caused by confined water. In China, floor limestone water is a key issue in mining water control, especially for stopes with permeable passages such as faults.
Under high stratum pressure, water flows into the working face through fissures or potential geological bodies; that is, the crack is filled with water that has both velocity and pressure. From the initial stage, the grout displaces the groundwater flow and bears its reaction. The pressure gradient and flow rate are given by Eq. (8).
where Pw is the groundwater pressure, Pi is the grouting pressure, and Li is the spread radius. The grouting pressure should be larger than the groundwater pressure for the slurry to spread. When the grouting pressure or velocity is less than that of the groundwater, the slurry cannot spread against the flow (zero diffusion distance in the upstream direction), resulting in grouting failure or even a water inrush accident. Therefore, the magnitude and direction of the groundwater hydraulic parameters must be considered during grouting for water plugging, as they are important parameters affecting diffusion. Following previous studies, the model can be simplified into a two-dimensional model, as shown in Figure 5.

Results

The calculated diffusion patterns with and without water flow are compared in Figure 6. Without the influence of water flow, the diffusion radius of the slurry is Rt in every direction at a given time, which corresponds to a standard disk diffusion model. As time goes on from T1 and T2 to T3, the diffusion radius grows uniformly, which is consistent with the experimental results, as shown in the first panel of Figure 6(c). When grout diffusion is affected by hydrodynamic conditions, the diffusion state in each direction is obviously different. First, the symmetry about the grouting hole in the X direction is destroyed, and two diffusion radii appear, with Rt3 > Rt1; the difference between them increases with time. Meanwhile, the diffusion radius Rt1 in the upstream direction does not change with time; taking the grouting hole as the boundary, there is an obvious blank zone of slurry diffusion on the upstream side, whereas in the downstream direction diffusion has been completed by T3. In the Y direction, the maximum diffusion radius Rt2max of the slurry is initially basically symmetrical about the grouting hole. By T2, the diffusion radius around the grouting hole has not increased significantly, but the maximum diffusion radius Rt2max moves along the flow direction, making the slurry diffusion pattern elliptical. In the Y direction near the boundary region there is a symmetric un-grouted band. At T3, the maximum diffusion radius in the Y direction is no longer obvious. When the slurry first reaches the boundary in the X direction, it expands from the X boundary toward the Y direction, and the blank band in the Y direction begins to fill gradually from the boundary.

To verify the rationality of the simulation, the results were also checked against previous experimental results. The comparison shows that the results calculated by the two-phase flow theory are consistent with the experiments, as shown in Figure 6(c). This further shows that there are significant differences in the diffusion process and diffusion form of the slurry between still-water and flowing-water conditions, and that the time-varying two-phase flow model can be used to simulate slurry diffusion and its evolution over time.
Diffusion Trace from Volume Fraction in the Case.
According to the design of the grouting engineering, the water-cement ratio of the cement slurry is from 0.8 : 1 to 1.5 : 1, and the hydraulic characteristics of this kind of slurry are of Newtonian type. It is assumed that the slurry diffuses within a single-slab fracture and displaces the water flow. The prescribed water flow velocity is 0.004 m/s, and the grouting hole diameter is 0.35 m. When the average velocity of the grouting slurry is 0.03 m/s and the water-cement ratio is 1 : 1, the density of the grouting slurry is 1400 kg/m³, and the average viscosity measured by test is about 0.04 Pa·s. To better represent the time-varying behavior of cement slurry in the computational domain, the real-time range of 50%-100% proportion of fluid 2 (cement) is selected to characterize its diffusion, and the results are shown in Figure 7. Within 0.5 h of the diffusion calculation, there is little difference between the diffusion trace along the flow direction and on the two sides; at this time the downstream diffusion distance is 9.5 m, and the maximum diffusion distance on both sides is about 7 m. At 1 h, the slurry has diffused about 20 m in the downstream direction, and the maximum diffusion distance on both sides is about 10 m. At 1.5 h, the downstream diffusion exceeds 25 m, and the maximum diffusion distance on both sides is about 12 m. At 2.0 h, the downstream diffusion exceeds 30 m, and the maximum diffusion distance on both sides is about 13 m. At 3.5 h, the slurry has reached the left outlet boundary, and the diffusion shifts to both sides; at the same time, backflow spreads to most areas. In this process, the slurry front advances rapidly along the flow direction, but there is no slurry diffusion in the reverse flow direction, and in the lateral diffusion zones downstream the range of slurry increases slowly.
The numerical calculation shows that, under the influence of groundwater flow, the diffusion radius of the slurry reaches about 30 m in roughly 3 hours along the flow direction, but only about 15 m to the sides. The results show a large difference in the diffusion distance of the slurry within the fracture: the diffusion takes a leaf-shaped form whose maximum distance lies along the centerline downstream of the grouting hole. As time goes on, once the slurry has diffused to the downstream boundary, it begins to diffuse sideways. In the grouting region, affected by the water velocity, the slurry diffusion path is irregular, and there is essentially no slurry on the upstream side of the grouting hole. For a considerable period of time the diffusion of slurry from a single hole is mainly along the flow direction, so the lateral and upstream areas cannot be filled by slurry. According to the interpretation of the geological case, the width of the structure is about 52 m. Therefore, diffusion can be achieved in the width direction, and three grouting holes can be arranged in the length direction to achieve the purpose of grouting for water plugging and reinforcement.
Grout Pressure and Streamlines in the Flow Zone.
In order to clearly understand the interaction between water flow and slurry in the diffusion process, the absolute pressure and streamline distributions are selected for calculation, and the results are shown in Figure 8. Affected by the constant velocity of the slurry in the grouting hole, the streamlines of the fluid can be divided into three distribution characteristics.
In the O1 region, the streamlines are divergent, with typical arc corners and breakpoints. In the O2 region, the streamlines extend from the grouting hole toward the left, which is generally the diffusion area of the slurry. In the O3 region, the streamlines extend from the left inlet toward the right and deflect in the region swept by O1. These three regions reflect the evolving relationship between the slurry and the water during the process. The reason for the absence of slurry in the upstream area is that the streamlines are separated by the interface, as shown in Figure 8(a). When the slurry is injected into the flow field from the grouting hole, it disturbs the original water pressure field and streamline field, as shown in Figures 8(b) and 8(c). As the displacement flow continues to diffuse after slurry injection, the curvature of the streamlines on the upstream side reverses, as shown in Figure 8(c), and the streamlines near the grout front are compacted. Meanwhile, the pressure distribution on the downstream side of the grouting hole changes significantly with the diffusion of the slurry, while the pressure distribution on the upstream side does not change. A large fluid pressure gradient develops at the interface between slurry and water, and the envelope of the higher pressure corresponds to the diffusion area of the slurry. Therefore, the slurry-water interface is the boundary of the maximum pressure distribution, and the slurry diffusion range can be characterized by the grout pressure in the two-phase flow model.
Grouting Process Control
4.2.1. Optimization of Grouting Hole Design. Based on the slurry diffusion mechanism in a fracture with dynamic water at engineering scale, if a single-hole grouting process is implemented in the broken grouting area, the slurry diffusion distance is difficult to meet the grouting demand, so a multi-hole, multi-section grouting construction method is necessary. According to the numerical simulation results, when grouting operates for a long time, diffusion is clearly dominant in the direction of water flow, whereas obvious grouting voids remain upstream and on both sides; this feature should be considered when arranging grouting holes, as shown in Figure 9. When more than two grouting holes diffuse under water flow, the complexity of the pre-grouted area and the arc-shaped O1-O3 diffusion traces shown in Figure 8(a) must be considered. There is a slurry-deficient area at the arc-corner intersection of the diffusion areas, and the tip of the fracture area has the possibility of expanding upward and penetrating further.
Aiming at the regional diffusion problem caused by multiple grouting holes, the following optimization scheme is proposed: (a) all grouting holes should be arranged close to the inflow side of the water to ensure that the target fracture zone lies in the dominant diffusion area and that the slurry diffusion covers all areas in the vertical direction, as shown in Figure 9(c); (b) in the area beyond the diffusion range of single-hole slurry, the second grouting hole is arranged taking its diffusion radius in that direction into account, to ensure that the diffusion ranges of the two grouting holes overlap; similarly, the third grouting hole is arranged to realize full coverage of slurry in that direction; (c) because of the influence of water flow on slurry diffusion, the grouting holes should not be placed at a uniform height along the flow direction but staggered in height, so that one or two grouting holes produce larger grouting diffusion traces whose grouted area covers the arc-shaped slurry-deficient zones; (d) for the last grouting hole, since the previous holes have already sealed part of the cracks, the slurry shows wide expansion under the resulting low water flow.
Grouting Engineering.
According to the simulation results and considering the geological distribution in the dangerous area, whose length and width in plan are about 52 m and 30 m, grouting was carried out to plug the water-conducting structure (Y1~Y2 in Figure 10(b)). Three grouting holes are arranged on the long side for grouting construction, as shown in Figure 10(a), and the range of slurry diffusion should exceed 52 m in length and 30 m in width. Among them, the main hole IH-1 is located in the middle of the concealed water channel, and its control range is C3-3-1~C3-11 (-694 m). Grouting hole IH-2 is located north of the main hole, with a depth range of C3-9 (-655 m). Grouting hole IH-3 is located south of the main hole, with a depth range of C3-6~C3-9, as shown in Figure 10(b). The boreholes are staggered in spatial position.
According to the flow mechanism of cement slurry in cracks described above, the disadvantages of narrow cracks under low-pressure grouting, and the distribution characteristics of the diffusion range under dynamic water conditions, the grout is injected in stages in space. Firstly, the main hole (IH-2) is used to continuously inject cement into the formation; the expected grouting effect is shown in Figure 9. Then, grouting hole IH-3 is used to inject slurry into the deeper strata and, at the same time, to seal the cracks not reached from IH-2. Finally, grouting hole IH-1 is used to inject slurry into limestone 11# to expand the diffusion range of cement near the working face and enhance the strength and tightness of the rock stratum. According to the numerical simulation results, each borehole achieves the predetermined diffusion range in its control area through continuous injection for more than 2 hours. At the same time, the main hole is used to seal the crushed zone at the top in sections, adding to the water-plugging effect.
Verification of Grouting Effect and Water Control.
In the -600 m limestone drainage roadway of the West No. 3 mining area of Zhangji mine, the water horizon of the 11# directional long borehole is the lower C3-3 limestone, and the water source of the effluent is Ordovician limestone water. Therefore, the treatment target layer is determined as the C3-3~C3-11 limestones, to block the water outlet channel. If the water output of a verification hole is less than 3 m³/h, the grouting effect is considered satisfactory. The distributions of the primary and secondary faults and cracks were obtained from a geological survey based on the connectivity of the grouting holes, and the drill holes were proposed and implemented from 10.08. Effective sealing of the floor water was achieved after regional sectional closed grouting, and a segmental sealing grouting method (seven segments) was adopted for drainage and depressurization of the floor. Real-time observations of hole H-11# revealed that the water inflow decreased to 7 m³/h after the third grouting and then continued to decrease to 0, as shown in Figure 11. To further verify the effect of the grouting scheme, several checking holes, such as TH-1#, were arranged in the upper section of the grouted fractured zone. As shown in Figure 11, with repeated grouting the water flow in the test hole stabilized at a small value (from 12.15).
After coring in the field tunnel, grouting veins were found to be widely distributed. The main cracks, consisting of wide and narrow cracks, were filled with cement and consolidated with crushed rock following multistage grouting, as shown in Figure 12. Based on the findings of this research, the principle can be explained as follows: the diffusion of the cement grout is redirected from the superior (wide) cracks into the inferior (narrow) cracks, which are then filled and plugged, achieving the plugging of crack water under low water pressure. The directional long borehole in the roadway is 35 m away from the adjacent grouting branch hole, and many cement cuttings were found during its construction, which shows that the diffusion range of the grouting is not less than 35 m.
Discussion
Based on this study, the effect of water flow plays an important role in the diffusion of slurry. Therefore, five groups of numerical experiments were designed to discuss the influence mechanism of water flow on slurry. In the numerical calculation, the default slurry inlet velocity is 0.03 m/s, and other properties are consistent with the research basis of this paper. The size of the calculation model is length × width = 50 m × 36 m. The design scheme and the data obtained are illustrated in Table 1.
After processing the data, it can be found that, over a certain period, the water flow promotes slurry diffusion along the flow direction as time increases, that is, the diffusion distance increases essentially linearly, as shown in Figure 13(a). Moreover, in the model with high water velocity, the diffusion boundary is reached first. It is also known [34] that water flow has a strong scouring and dilution effect on the expanding slurry, which is unfavorable for sealing the cracks; in some cases it can even lead to leakage, misplaced grouting, and other accidents. At the same time, on the sides of the grouting area, the diffusion range of the slurry increases only slowly with time, as shown in Figure 13(b). The growth slopes of the slurry diffusion range under the different conditions were obtained. Comparison shows that the slope increases linearly with the water flow on the downstream side, whereas the slurry diffusion rate on the sides decreases linearly with the flow, as shown in Figures 13(c) and 13(d). The slurry therefore exhibits three different behaviors under the influence of water flow: the increase of water velocity linearly promotes slurry diffusion along the flow, linearly inhibits lateral slurry diffusion, and has no effect on diffusion against the flow. Therefore, it is very important to coordinate the relationship between water flow velocity and slurry diffusion; in particular, the advantage of the water flow can be exploited by adjusting the grouting hole layout.
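The growth slopes referred to above can be extracted from the front-position series by simple least squares. The sketch below uses the front positions quoted for the engineering case (Figure 7) only to illustrate the procedure; the series actually analyzed in Table 1 and Figure 13 are not tabulated in this excerpt.

```python
import numpy as np

def diffusion_slope(times_h, distances_m):
    """Least-squares slope (m/h) of diffusion distance versus time,
    i.e. the growth rate of the slurry front in one direction."""
    return np.polyfit(times_h, distances_m, 1)[0]

t = np.array([0.5, 1.0, 1.5, 2.0])               # h, case-study observation times
downstream = np.array([9.5, 20.0, 25.0, 30.0])   # m, downstream front positions
lateral = np.array([7.0, 10.0, 12.0, 13.0])      # m, lateral front positions
print(round(diffusion_slope(t, downstream), 1), round(diffusion_slope(t, lateral), 1))
```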
In actual grouting operations, the influence of dynamic water is great. In addition to developing new grouting materials, the spatial relationship between grouting holes and rock fractures should be continuously optimized. Through the rational use of water flow, its negative impact on slurry diffusion and sealing should be reduced and its positive role in slurry diffusion exploited. Especially in coal mine energy production, the slurry generally used is cement-based, so the reasonable layout of grouting holes becomes a crucial factor under dynamic water conditions, which is what this paper addresses. The research was carried out based on open boundary conditions of the model and two-phase flow theory. In practical engineering applications, however, there are still significant fluid-structure interaction effects. Regarding fluid-solid coupling, future research will focus on the diffusion mechanism of slurry under two-phase flow with fluid-solid coupling. The content of this study may serve as an important basis for understanding the diffusion mechanism of grouting and is expected to promote the development of grouting technology and the development and application of two-phase flow and fluid-solid coupling theory.
Conclusions
In this study, aiming at the unclear diffusion mechanism and the need to optimize the grouting process for grouting in broken areas in the engineering case, the grouting mechanism with the slurry-water interface representing the diffusion trace was studied based on two-phase flow theory, and the diffusion characteristics of slurry under dynamic water conditions were revealed. The following can be concluded from the current study.
| 8,181.6 | 2021-02-24T00:00:00.000 | ["Engineering", "Environmental Science"] |
How Interaction of Perfringolysin O with Membranes Is Controlled by Sterol Structure, Lipid Structure, and Physiological Low pH
Perfringolysin O (PFO) is a sterol-dependent, pore-forming cytolysin. To understand the molecular basis of PFO membrane interaction, we studied its dependence upon sterol and lipid structure and aqueous environment. PFO interacted with diverse sterols, although binding was affected by double bond location in the sterol rings, sterol side chain structure, and sterol polar group structure. Importantly, a sterol structure promoting formation of ordered membrane domains (lipid rafts) was not critical for interaction. PFO membrane interaction was also affected by phospholipid acyl chain structure, being inversely related to tight acyl chain packing with cholesterol. Experiments using the pre-pore Y181A mutant demonstrated that sterol binding strength and specificity was not affected by whether PFO forms a transmembrane β-barrel. Combined, these observations are consistent with a model in which the strength and specificity of sterol interaction arises from both sterol interactions with domain 4 and sterol chemical activity within membranes. The lipid raft-binding portions of sterol bound to PFO may remain largely exposed to the lipid bilayer. These results place important constraints upon the origin of PFO raft affinity. Additional experiments demonstrated that the structure of membrane-inserted PFO at low and neutral pH was similar as judged by the effect of phospholipid and sterol structure upon PFO properties and membrane interaction. However, low pH enhanced PFO membrane binding, oligomerization, and pore formation. In lipid vesicles mimicking the exofacial (outer) membrane leaflet, PFO-membrane binding was maximal at pH 5.5–6. This is consistent with the hypothesis that PFO function involves acidic vacuoles.
The cholesterol-dependent cytolysins (CDCs) are a family of bacterially secreted pore-forming proteins that require cholesterol to function (1). Perfringolysin O (PFO) is a CDC that contributes to the pathogenesis of the anaerobic Gram-positive bacterium Clostridium perfringens (2). While PFO has been presumed to act extracellularly on immune cells (2), it has more recently been shown to be necessary for both phagocytic escape and survival of C. perfringens within host macrophages (3).
PFO contains four domains. Secreted as aqueous monomers, PFO recognizes membrane cholesterol through a tryptophan-rich motif within Domain 4 (4,5). Once associated with the membrane, PFO oligomerizes into complexes of 20-50 subunits (forming a pre-pore structure) (6). In the pre-pore state, the insertion domain (Domain 3) is held ~60 Å above the surface of the membrane by Domain 2 (7). To induce pore formation, Domain 2 undergoes a vertical collapse, which brings Domain 3 within range to insert into the bilayer (8). Additionally, a major structural rearrangement takes place within Domain 3 whereby six α-helices rearrange into two amphipathic β-hairpins that insert into the membrane to form a transmembrane (TM) structure (8). The resulting pore ranges in size from 250-300 Å in diameter (6).
While cholesterol has been presumed to be the cellular receptor for PFO and some other CDCs (5), how PFO binds cholesterol has yet to be fully explained. Studies using model membrane systems typically require considerably high concentrations of cholesterol (up to 50 mol%) for efficient pore formation by CDCs (9). Recent studies also show that only the tip of Domain 4 is exposed to the nonpolar core of the bilayer (10,11). A model in which PFO binds to a membrane surface involving several sterol molecules has recently been proposed (12). Additionally, cholesterol is required for the PFO pre-pore to pore conversion, and has also been shown to be necessary for pore formation by intermedilysin, a related CDC, which does not use cholesterol as a receptor but requires it for pore formation (6).
CDC proteins are also of interest because they are believed to bind to lipid rafts via their affinity for sterol. Lipid rafts are tightly packed sphingolipid and sterol-rich liquid-ordered (Lo) membrane domains which are believed to co-exist in eukaryotic cellular membranes with loosely packed disordered (Ld) domains composed mostly of unsaturated lipids (for recent reviews see Refs. 13,14). Rafts are believed to serve many functions in cellular processes at the plasma membrane and have been proposed to serve as platforms that regulate protein-protein interactions (15). While these lipid domains have been highly studied in model membranes, where their existence is widely accepted, their formation and functional role in cells remains controversial. Both intact PFO and isolated Domain 4 have been used as markers of cholesterol-rich regions of cell membranes (4,16,17).
The details of PFO-raft affinity are of particular interest because PFO is a TM protein, and the origin of TM protein-raft affinity is not clear. Although biochemical studies detect TM proteins within detergent-resistant membranes that may be derived from ordered domains in cells, TM proteins should not be able to pack well with lipids in an ordered state (18). Because the TM insertion of PFO can be controlled, it is an ideal protein to study this issue. Furthermore, like other CDCs, PFO interaction with membranes is affected by sterol structure (19-24), and the relationship between the raft-forming abilities of sterols (25-27) and sterol interaction with PFO should yield useful information on PFO affinity for rafts.
In this study we found that the interaction of PFO with membranes does not require that the sterol to which it binds has the ability to promote raft formation. Furthermore, tightly packing phospholipids, which interact strongly with sterols, tended to weaken the PFO-membrane interaction. These results do not mean that PFO does not interact with rafts, but, together with the observation that a pre-pore mutant has a similar sterol specificity as wild-type protein, it does place important constraints on the origin of PFO affinity for rafts. In the course of these experiments we also found that a low pH strongly promoted the interaction of PFO with membranes. Combined with recent cellular studies (3), this supports the hypothesis that at least one physiological function of PFO involves low pH.
Sterol purity was analyzed on HP-TLC plates (Merck & Co, Whitehouse Station, NJ). Approximately 2 μg of sterol dissolved in ethanol was applied to the plate, dried, and then chromatographed using a sequential solvent system. The first solvent (50:38:3:2 (v/v) chloroform/methanol/acetic acid/water) was allowed to migrate halfway up the plate. The plate was then dried, introduced into a second chamber containing the solvent system 1:1 hexane/ethyl acetate (v/v), and chromatographed until the solvent migrated to near the top of the plate. For each step, solvent chambers were equilibrated with solvents for at least 2 h before chromatography. The plate was then dried and sprayed with 5% (w/v) cupric acetate, 8% (v/v) phosphoric acid in water. To detect sterol, plates were charred at 180°C for 5 min. Sterols deemed impure (zymosterol and desmosterol) were purified by TLC (28), and purity was confirmed by HP-TLC.
A functional cysteine-less derivative of wild-type PFO (PFO C459A) and a pre-pore mutant (PFO C459A Y181A, gift of A. Heuck, University of Massachusetts, Amherst) were expressed in Escherichia coli as described previously (29). Both WT and pre-pore PFO were then purified by a modification of the previously reported protocol (29). Three hours after induction of expression with 1 mM isopropyl-1-thio-β-D-galactopyranoside (IPTG), two liters of cultured E. coli expressing PFO were pelleted at 4°C. The bacteria were resuspended in NiA buffer (10 mM MES, 150 mM NaCl, pH 6.5) containing 150 μg/ml phenylmethylsulfonyl fluoride and 100 μg/ml chicken egg white lysozyme (Sigma-Aldrich), incubated for 30 min at room temperature, subjected to tip sonication with a cell disruptor (Heat Systems-Ultrasonics, Inc, Plainview, NY) for 15 s while cooled on ice, and then cooled a further 15 s. The sonication and cooling steps were repeated two times. Next, the mixture was spun down at 15,000 rpm in a SS-34 rotor at 4°C using a Dupont RC-5 centrifuge. The supernatant from this step was incubated for 20 min at room temperature while mixing with TALON metal affinity resin (3 ml). Resin was pelleted with a tabletop centrifuge, added to a 0.8 × 4 cm poly-prep plastic column (BioRad, Hercules, CA), washed with about 5 ml of NiA buffer followed by a 1-ml aliquot of NiA buffer containing 50 mM imidazole, and then washed with a 1-ml aliquot of NiA buffer containing 100 mM imidazole. The PFO was then eluted with several 1-ml aliquots containing NiA buffer with 400 mM imidazole. Fractions containing PFO were pooled and dialyzed overnight against 4 liters of Buffer B (10 mM MES, 1 mM EDTA, pH 6.5) with one buffer change. The pooled PFO-containing fractions were then subjected to gravity ion-exchange chromatography using SP-Sephadex resin (GE Healthcare, Piscataway, NJ) in a 0.8 × 4 cm poly-prep plastic column and stepwise eluted using 1-ml aliquots of Buffer B containing increasing concentrations of NaCl in 100 mM steps, with duplicate aliquots at 300 mM and 400 mM NaCl. The majority of purified PFO eluted in Buffer B containing 300-400 mM NaCl. It was then dialyzed against PBS, pH 7.4 (10 mM sodium phosphate, 1 mM potassium phosphate, 137 mM sodium chloride, 13 mM potassium chloride), and stored at −20°C.
Preparation of Liposomes-Multilamellar vesicles (MLV) were prepared at a concentration of 500 μM lipid in PBS, pH 7.4 or 5.1 (PBS at pH < 7.4 being prepared by titrating PBS pH 7.4 with acetic acid) similarly as described previously (30). Dried lipid mixtures (redissolved in CHCl3 and redried under N2 and then high vacuum for at least 1 h) were dispersed in buffer at 70°C and agitated at 70°C for 15 min using a VWR multitube vortexer (Westchester, PA) placed within a convection oven (GCA Corp, Precision Scientific, Chicago, IL). The samples were then cooled to room temperature. Large unilamellar vesicles (LUV) were prepared from MLV (prepared at a lipid concentration of 10 mM) by subjecting the MLV to 7 cycles of freezing in a mixture of dry ice and acetone for 30 s and thawing at room temperature. Small unilamellar vesicles (SUV) were prepared at a concentration of 100 μM lipid in PBS (pH 7.4, 6.8, or 5.1) by ethanol dilution in a manner similar to that described previously (30). (SUV samples for quenching experiments contained 5 μM pyrene-PE in addition to 100 μM unlabeled lipids.) Lipids mixed in ethanol were diluted slightly more than 50-fold in PBS buffer heated to 70°C, briefly vortexed, incubated at 70°C for about 5 min, re-vortexed, and then cooled to room temperature.
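A small bookkeeping sketch for the lipid mixing step is shown below; the 60:30:10 composition and the 10 mM chloroform stock concentrations are hypothetical, chosen only to illustrate the arithmetic of converting mol fractions into stock volumes.

```python
def stock_volumes_ul(total_lipid_nmol, mol_fractions, stock_mM):
    """Volume (ul) of each chloroform lipid stock needed for a vesicle prep of
    total_lipid_nmol with the given mol fractions; stock_mM maps lipid name to
    stock concentration in mM (1 mM = 1 nmol/ul)."""
    return {name: total_lipid_nmol * frac / stock_mM[name]
            for name, frac in mol_fractions.items()}

# Hypothetical example: 500 nmol total lipid (e.g., 1 ml of a 500 uM MLV sample)
# at 60:30:10 DOPC/cholesterol/BrPC with assumed 10 mM stocks.
print(stock_volumes_ul(500,
                       {"DOPC": 0.6, "cholesterol": 0.3, "BrPC": 0.1},
                       {"DOPC": 10.0, "cholesterol": 10.0, "BrPC": 10.0}))
```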
Fluorescence Intensity Measurements-Fluorescence emission intensity was measured at room temperature on a SPEX Fluorolog 3 spectrofluorimeter. For fixed wavelength measurements, the excitation and emission wavelength pairs used (in nm) were (295, 340) for tryptophan and (485, 518) for BODIPY-FL labeled streptavidin. Duplicate samples were prepared for fluorescence measurements. Fluorescence intensity in single background samples lacking fluorophore was subtracted. For protein emission spectra, samples and backgrounds were excited at 280 nm and emission was acquired from 300-400 nm.
Vesicle Binding Experiments-The ability of PFO to associate with vesicles was assessed by measuring the increase in intrinsic Trp emission intensity, which occurs when the Trp residues located within Domain 4 come into contact with sterol-containing membranes (31). Unless otherwise noted, PFO (5 μg from a stock solution containing ~1-2 mg/ml) was added to 1-ml SUV preparations of the desired pH (5.1, 6.8, or 7.4) and lipid composition and allowed to incubate for 1.5-2 h at room temperature, and then fluorescence emission intensity was measured as described above.
Binding and Oligomerization Assays-To assess PFO oligomerization, MLV (500 μM lipid) composed of DOPC, BrPC and sterol (with BrPC being 10 mol% of total lipid) were prepared in 200 μl of PBS at pH 7.4 or 5.1 (depending on desired experimental conditions) and incubated with 10 μg PFO for 1 h at room temperature. Samples were spun down at 14,000 rpm in an Eppendorf centrifuge model 5415C (Westbury, NY) for 20 min at room temperature. Pellets containing the MLV and bound PFO were resuspended in 20 μl of PBS (pH 7.4), solubilized with 5 μl of SDS loading buffer (40% glycerol (v/v), 25% SDS (w/v), and 0.1% bromphenol blue (w/v)), and then analyzed using denaturing SDS-agarose gel electrophoresis (SDS-AGE) as described previously (29). Briefly, 2% (w/v) agarose gels were run for 1-1.25 h at 103 volts in SDS gel reservoir buffer (192 mM glycine, 25 mM Tris-base in 0.1% (w/v) SDS) and fixed overnight in 30% (v/v) methanol, 10% (v/v) acetic acid. The gels were then dried for 3 h at 70°C in a Savant slab gel dryer (Holbrook, NY), stained with 0.2% (w/v) Coomassie Blue dissolved in 30% (v/v) methanol, 10% (v/v) acetic acid for 2 h, and then destained for 30 min to 1 h in the fixing solution.
Assay for Pore Formation-PFO-induced pore formation was measured by assaying the efflux of vesicle-entrapped biocytin via the increase in the BODIPY fluorescence emission intensity upon binding of biocytin to BODIPY-labeled streptavidin (BOD-SA) in the external solution. LUVs with trapped biocytin were prepared by freezing and thawing MLVs (10 mM lipid), as described above, in the presence of 537 μM biocytin. The mixture was dialyzed against 4 liters of PBS overnight with one change of dialysis buffer to remove external biocytin. 10 μl of LUVs with entrapped biocytin were diluted to a lipid concentration of 100 μM and a volume of 990 μl with PBS, and then BOD-SA (10 μl from the stock solution) was added externally to the vesicles to give a BOD-SA concentration of 10 nM. BODIPY emission intensity was then measured. PFO was added to a concentration of 5 μg/ml, samples were briefly mixed, and then BODIPY intensity was monitored as a function of time for up to 45 min.
PFO Interacts with Membranes at Both Low and Neutral pH-Because low pH-induced unfolding often aids protein toxin insertion into membranes, we compared the behavior of PFO at low and neutral pH. First, the interaction between (Cys-less) PFO and model membrane vesicles was measured. (The removal of the Cys eliminates the sensitivity of PFO to spontaneous inactivation by oxidation (32).) Previous studies have shown that the interaction of PFO with membranes can be detected by the large increase in Domain 4 Trp emission intensity that accompanies association with membranes (31). A similar (4-fold) increase of Trp emission intensity relative to that in aqueous solution is observed when PFO is incubated with vesicles at pH 7.4 and pH 5.1, suggesting PFO interacts with membranes in a similar fashion at low and neutral pH (Fig. 1). Notice that there is a small red shift in the emission spectrum at low pH in aqueous solution relative to that at neutral pH. This is consistent with an increased Trp exposure to a polar environment, e.g. aqueous solution, at low pH, and suggests that there is a small unfolding event at low pH. In the presence of lipid vesicles, this red shift is not observed, and spectra at low and neutral pH are nearly identical.
The kinetics of PFO interaction with vesicles at neutral and low pH were also compared. Measurements of the time dependence of the increase in emission intensity upon incubation of PFO with vesicles demonstrate that PFO-membrane binding occurs faster at pH 5.1 (t1/2 = 1.5 min) than at pH 7.4 (t1/2 = 6.5 min) (data not shown). A similar difference in the rate of interaction at pH 5.1 and 7.4 is observed at 37°C, although the half-times for membrane interaction decrease by a factor of about two relative to those at room temperature and the increase in fluorescence is about 3-fold (data not shown).
PFO-Vesicle Interactions at Low pH Occur at Lower Cholesterol Concentrations Than at Neutral pH-The above results suggest that low pH might enhance the interaction of PFO with membranes. To examine this, the interaction of PFO with vesicles containing various amounts of cholesterol was compared at low and neutral pH. Fig. 2A shows that the cholesterol concentration that induces PFO interaction with DOPC/cholesterol vesicles is less at low pH than at neutral pH, with the increase in fluorescence emission intensity being half-maximal at 15-20 mol% cholesterol at pH 5.1 and 25-30 mol% cholesterol at pH 7.4.
It is possible that binding to vesicles might occur without an increase in Trp fluorescence emission intensity. To confirm that the increase in Trp fluorescence emission intensity accurately reports when binding to vesicles occurs, more direct methods were used. First, a pyrene-labeled lipid was used as a fluorescence resonance energy transfer (FRET) acceptor for Trp, and binding as a function of cholesterol concentration was assayed via the amount of FRET, as detected by quenching of Trp fluorescence emission intensity. Fig. 2B shows that the cholesterol concentration dependence of Trp fluorescence quenching is very similar to the cholesterol dependence of the Trp intensity increase in the absence of acceptor, with lipid interaction occurring at a lower cholesterol concentration at low pH than at neutral pH. As commonly observed (33), FRET-induced quenching is incomplete because not all of the donors are close enough to the pyrene-labeled lipid to take part in energy transfer. Thus, the maximal level of FRET-induced quenching, 80%, presumably represents complete binding of PFO to the vesicles. It should also be noted that the small amount of apparent FRET at low cholesterol concentration is largely an inner filter artifact arising from a small amount of pyrene absorbance.
These results were further confirmed by measuring the association of PFO with vesicles via centrifugation of PFO mixed with MLV. The amount of bound PFO in the MLV-containing pellet was detected by agarose gel electrophoresis in SDS (SDS-AGE). The cholesterol concentration dependence of PFO binding detected by sedimentation (Fig. 2C) is similar to that obtained from fluorescence intensity measurements in terms of the threshold sterol concentration for binding PFO and its pH dependence. The position of the main PFO band on the gels indicates that, when membrane-bound, PFO efficiently forms characteristic SDS-resistant oligomers (29) at both pH values (although a variable amount of monomer can often be observed).
PFO Forms Pores Efficiently at Low pH-While the experiments above show that PFO exhibits similar binding and oligomerization behavior at low and neutral pH, under some circumstances PFO can form pre-pore oligomers that do not deeply membrane-insert (29). We therefore investigated whether PFO pore-forming behavior is retained at low pH. To assay pore formation, we measured the efflux of biocytin encapsulated inside LUVs. In this method, efflux is detected by the increase in the BODIPY emission intensity that occurs when biocytin binds to BODIPY-tagged streptavidin added externally to the vesicles (34-36). Fig. 3 shows that PFO forms pores efficiently in DOPC vesicles containing 50 mol% cholesterol at both pH 5.1 and 7.4, with the rate of biocytin efflux being slightly faster at low pH. The difference between neutral and low pH is even larger at lower cholesterol concentrations (data not shown), presumably because PFO binds to a greater extent at low pH than at neutral pH. Fig. 3 also shows that no pore formation occurred in the absence of cholesterol.
A control experiment using a previously identified mutant (PFO C459A/Y181A) that remains in the pre-pore state (37), shows a lack of pore formation at both low and neutral pH (Fig. 3, A and B). This confirms the validity of the pore-formation assay, and shows that the difference in pre-pore mutant and wild-type PFO behavior is retained at low pH. We conclude that the structure and membrane interactions of PFO at low and neutral pH must be very similar, although low pH enhances PFO membrane interaction and function.
Effect of Sterol Structure upon PFO-Membrane Interaction: Fluorescence Studies-It has been proposed that PFO binds to cholesterol-enriched ordered domains (lipid rafts) (16). Prior studies of sterol specificity have shown that sterol structure is important for interaction with PFO, but have not established whether or not PFO interacts most strongly with sterols promoting lipid raft formation (21). To investigate this, the interactions of PFO with sterols and sterol derivatives that strongly stabilize ordered domain formation (cholesterol, dihydrocholesterol, epicholesterol, lathosterol, sitosterol (25-28)), weakly stabilize or have little effect on the stability of ordered domain formation (zymostenol, lanosterol, cholesteryl acetate, cholesterol methyl ether, allocholesterol (25-28)), or destabilize ordered domain formation (coprostanol (27)) were compared. The binding of PFO to vesicles as a function of the concentration of sterol or sterol derivative within the vesicles was detected by sterol-induced increases in Trp fluorescence (Fig. 4). At low pH (Fig. 4A), PFO interacts well or moderately well with sterols that strongly promote lipid-ordered domain formation (cholesterol, dihydrocholesterol, sitosterol, lathosterol), weakly stabilize ordered domain formation (desmosterol, zymostenol, allocholesterol), or do not promote raft formation (coprostanol). The interaction with coprostanol and zymostenol requires somewhat higher sterol concentrations than is required for the other sterols. PFO does not interact or interacts very poorly with epicholesterol, which stabilizes ordered domains to a significant degree (27), or with lanosterol and sterol derivatives with a blocked 3β-OH (cholesteryl methyl ether, cholesteryl acetate) that have little effect on ordered domain stability. This shows that PFO binding is not tightly correlated with the relative ability of sterols or sterol derivatives to form ordered domains.
The relative sterol specificity of PFO is similar at neutral and low pH. However, the dependence upon sterol concentration is shifted, such that much higher sterol concentrations are required to induce an increase in Trp emission intensity at neutral pH than at low pH (Fig. 4B).
Effect of Sterol Structure upon PFO-Membrane Interactions: Centrifugation Experiments-It is possible that the apparent dependence of PFO binding to membranes upon sterol structure is not due to a lack of PFO interaction with membranes, but rather to an inability of a particular sterol to induce a conformational change that alters Trp fluorescence emission intensity. To examine this possibility, the binding of PFO to membranes and the oligomeric state of the membrane-bound PFO were determined using centrifugation and SDS-AGE. Fig. 5 shows that at low pH there is near maximal PFO binding to vesicles containing cholesterol, dihydrocholesterol, sitosterol, or lathosterol at 20 mol% sterol, and some binding to vesicles with allocholesterol at 20 mol%. However, binding to vesicles containing coprostanol or zymostenol requires 30 mol% sterol, and no binding to vesicles occurs even with 40 mol% lanosterol, cholesteryl acetate or cholesterol methyl ether. This order of sterol recognition by PFO mirrors that derived from Trp fluorescence emission intensity (Fig. 4).
In every case, the bound PFO is predominantly oligomeric (Fig. 5). It therefore appears that sterol structure does not greatly affect the ability of membrane-associated PFO to oligomerize. However, it should be noted that in several cases, there is some smearing of the oligomers on the gel at the highest sterol concentrations. The origin of this behavior is not understood.
We have also tested two additional sterols, ergosterol and 7-dehydrocholesterol, and found that they promote PFO binding to liposomes. However, this interaction was difficult to quantify because we found these sterols quench Trp fluorescence emission intensity (28), thereby masking the emission intensity increase usually observed when PFO binds to membranes. SDS-AGE showed that PFO binding and oligomer formation with liposomes containing 20 mol% of these sterols was as complete as for liposomes containing 20 mol% cholesterol (data not shown).
The sterol specificity of the pre-pore PFO Y181A mutant, which cannot form a TM β-barrel (37), was also examined. It shows a sterol specificity profile at low pH (Fig. 6) that is almost identical to that of the Cys-less wild type PFO (Fig. 4A), as judged by the dependence of Trp emission intensity upon sterol or sterol derivative concentration within vesicles. Therefore, the step that is sensitive to sterol structure appears to be the initial recognition and binding of the membrane surface by PFO. Once bound, PFO can spontaneously form pre-pore complexes.
Effect of Sterol Structure upon Pore Formation by PFO-To determine if pore formation by PFO is also sensitive to sterol identity, DOPC vesicles encapsulating biocytin and prepared with different sterols were exposed to PFO. Pore formation is observed at low pH with cholesterol, dihydrocholesterol (which is strongly raft promoting), desmosterol (which is weakly raft stabilizing) and coprostanol (which destabilizes rafts) (Fig. 7). Therefore, the raft-stabilizing abilities of a sterol are not tightly correlated with its ability to support PFO-induced pore formation. Vesicles containing DOPC mixed with 40 mol% of allocholesterol or lathosterol also show a significant degree of pore formation, but no pore formation was seen in vesicles containing DOPC and 40 mol% lanosterol (data not shown). The rate of pore formation is greater at 40 mol% (Fig. 7A) than at 25 mol% for each sterol (Fig. 7B). In agreement with the binding experiments, samples with 25 mol% coprostanol, which is an insufficient concentration to promote maximal PFO binding, show a significantly reduced rate and extent of pore formation, as judged by the rate of biocytin release, when compared with samples containing other sterols, which promote near-maximal PFO-membrane interactions at 25 mol% (Fig. 4A). Overall, the sterol dependence of pore formation by Cys-less PFO correlates with the level of its association with vesicles.
Effect of Phospholipid Structure on PFO-Membrane Interaction-To assess whether PFO-membrane interactions would be affected by the relative ability of phospholipids to form ordered domains, four phosphatidylcholines with differing abilities to form ordered domains by themselves and with cholesterol were examined. These four, listed in decreasing order of ability to form ordered domains and pack tightly with cholesterol, were (38): DPPC, which has two saturated palmitoyl acyl chains; POPC, which has a 1-position palmitoyl acyl chain and a 2-position unsaturated oleoyl acyl chain; DOPC, which has two oleoyl acyl chains; and diphytanoyl PC (DPhPC), which has two multibranched acyl chains. As judged by the cholesterol-induced increase in Trp emission intensity at low pH (Fig. 8), the cholesterol concentration needed to induce PFO binding to vesicles increases with PC type in the order: DPhPC < DOPC < POPC < DPPC. This pattern indicates that PFO binds better to membranes that are loosely packed and have the least tendency to form ordered domains. This does not imply that PFO does not associate with lipid rafts, but does indicate that loose packing, which should increase cholesterol reactivity, promotes sterol binding to PFO (see "Discussion").
PFO interactions with vesicles in which 50 mol% dioleoyl phosphatidylethanolamine (DOPE), 10-30 mol% diphytanoyl PE, or 5-20 mol% palmitoyl (C16:0) ceramide were substituted for an equal mol% of DOPC were also examined. In all of these cases, there is a decrease in the % cholesterol needed to induce PFO association with membranes (data not shown). These results are also consistent with a model in which cholesterol reactivity in membranes is an important parameter controlling association with PFO (see "Discussion").
Dependence of PFO Interactions with Vesicles on pH: Physiological Implications-To ascertain how membrane composition affects the pH dependence of PFO-membrane interactions, vesicles were prepared with various phospholipids and cholesterol concentrations. The pH dependence of Trp fluorescence was then measured to identify the pH at which membrane interaction was maximal. Fig. 9A shows that PFO binding to vesicles containing POPC/cholesterol (7:3, mol/mol), DOPC/cholesterol (4:1), or DPhPC/cholesterol (17:3) is maximal over a broad low pH plateau. To better define the likely pH maximum under physiological conditions, the pH dependence of PFO-membrane interaction was then measured in vesicles containing a 1:1:1 molar ratio of sphingomyelin (SM):POPC:cholesterol, a mixture which mimics the outer (exofacial) leaflet of mammalian plasma membranes. Fig. 9B shows the binding of PFO to these vesicles has a somewhat sharper pH maximum near pH 5.5-6. Fig. 9B also shows that in SM/POPC/cholesterol vesicles in which the cholesterol concentration is decreased to 25 mol%, there is an even sharper pH maximum of membrane interaction at just below pH 6. These results are consistent with the hypothesis that PFO functions in macrophage phagosomes, as phagosomes have a luminal pH between 5 and 6 (39) (see "Discussion").
Negatively Charged Lipid Enhances Binding of PFO to Vesicles-Rossjohn et al. (40) very recently observed conformational changes in the PFO crystal structure at low pH, and suggested that these changes might aid PFO insertion into membranes at neutral pH when PFO encounters a membrane rich in anionic lipids, because the surface of anionic lipid vesicles has a lower local pH than that of the bulk aqueous solution. To determine if anionic lipid promotes PFO-membrane interactions, the binding of PFO to vesicles containing POPC and cholesterol with and without 20 mol% of the anionic lipid 1-palmitoyl-2-oleoyl phosphatidyl-L-serine (POPS) was compared (Fig. 10). At neutral pH, PFO interacts with vesicles at a slightly lower cholesterol concentration in the presence of POPS than in its absence. However, at low pH (5.1) the presence of POPS results in an even larger decrease in the mol% of cholesterol required for PFO binding to vesicles. Very similar results were obtained by incorporating 20 mol% of the anionic lipid 1,2-dioleoyl phosphatidyl-rac-1-glycerol into vesicles with DOPC (data not shown). Thus, anionic phospholipids facilitate PFO binding, although the ability to do so at low pH suggests factors in addition to local surface pH effects may be involved.
DISCUSSION
Low pH and PFO Function: Physiological Significance and Structural Origin of Low pH-enhanced Activity-Although one early experiment hinted that PFO retains the ability to induce hemolysis at low pH (41), it has been generally assumed that PFO acts by punching holes in the plasma membrane (2). However, it has recently been found that PFO is necessary for escape from the phagocytic vesicles of macrophages, suggesting an internal site of action instead of, or in addition to, plasma membranes (3). Our studies are consistent with this model. We find that PFO is significantly more active at low pH than neutral pH, suggesting that its primary site of action is in mildly acidic vacuoles. Because phagosomes are mildly acidic (pH 5.4 ± 0.4, Ref. 39), this is consistent with the model that phagosomes are a primary site of action. However, unlike listeriolysin O, a CDC that functions much better at low pH than at neutral pH (42), PFO is highly active at neutral pH. Thus, it seems very possible that PFO acts both in acidic vacuoles and at the plasma membrane.
How does low pH promote PFO interactions with the membrane? For acid-triggered toxins such as diphtheria toxin, low pH triggers a partial unfolding event that reorganizes the protein and thereby primes the membrane-penetrating sequences for insertion (43,44). This may also be the case for PFO. Recent crystallographic studies have proposed that low pH-induced conformational changes in Domains 2 and 3 prime PFO for membrane insertion by loosening a critical hinge region (40). In addition, the low pH-triggered changes in Trp fluorescence emission intensity in the absence of lipid indicate that the PFO Trps are more exposed to the aqueous environment at low pH, a result consistent with some degree of unfolding (45). It should be noted that the unfolding event that occurs at low pH is likely to be local. We were unable to induce PFO insertion into model membrane vesicles using conditions that induce more global unfolding, i.e. high temperature or urea (data not shown).
Effect of Phospholipid Structure upon PFO-Membrane Interactions: Implications for Sterol Binding-Another striking result was that PFO interactions with sterol are inversely related to the packing properties of the phospholipids. Specifically, the looser the packing of the phospholipids (38), the lower the concentration of cholesterol needed to induce insertion of PFO into the lipid bilayer. This behavior can be explained in terms of the effect of loose packing upon cholesterol chemical reactivity. The reactivity of membrane-associated cholesterol (as judged by its activity coefficient) should be increased by exposure to aqueous solution. The umbrella model postulates that the headgroups of phospholipids and sphingolipids act like umbrellas, limiting the exposure of the hydrophobic portions of cholesterol to water (cholesterol having too small a polar headgroup to fully shield itself from aqueous solution) and thus reducing its reactivity (46). Acyl chain and headgroup structures that limit the ability of cholesterol to pack closely with phospholipids should limit this shielding of cholesterol from water, thereby increasing cholesterol reactivity and thus its tendency to bind to other molecules. This effect can be very marked, and has been successfully invoked to explain how lipid structure can modulate cholesterol interaction with lipid rafts and with other toxins (30,47).
It is also significant that we have identified conditions in which only relatively low concentrations of sterols (as low as 10-15 mol%) are required for PFO binding to membranes. Studies involving PFO and model membrane systems have typically used liposomal formulations requiring very high cholesterol concentrations (about 50 mol%) to achieve efficient binding, oligomerization, and pore formation (9). Our study shows that there is no absolute requirement for a very high concentration of cholesterol. This result has practical importance because it will allow study of PFO-membrane interactions over a much wider range of in vitro lipid compositions.
Lipid polar headgroup structure also affected PFO-membrane interactions. Our results showed that anionic lipids can promote PFO binding to membranes. The anionic charges near the surface may redistribute the lipid components in the bilayer to alter cholesterol exposure, alter the local pH at the membrane surface, and/or interact with PFO directly or indirectly to stabilize its binding to the membrane surface. We also found that PE and ceramide decreased the % cholesterol needed to induce PFO binding to vesicles. This agrees with previous studies of PFO (48) and of another cytolysin (47) and can also be rationalized in terms of the umbrella model. The headgroup of PE is smaller than that of PC and so should be less able to shield cholesterol from water, thereby increasing cholesterol reactivity. Similarly, as pointed out by Zitzer et al. (47), ceramide has such a small headgroup that it can even compete with cholesterol for association with umbrella-forming lipids, as has been observed in lipid rafts (30,47).
Effect of Sterol Structure upon PFO-Membrane Interactions: Implications for the Nature of the Sterol Binding Site-Another conclusion from this study is that PFO-sterol interactions show a distinct specificity in terms of sterol structure. The structure of the polar headgroup, sterol rings and aliphatic side chains all affect how much sterol is needed to induce PFO membrane binding, oligomerization, and pore formation. The most critical feature is the OH group. In agreement with previous studies (21), our study confirms that PFO requires a free OH group in the β-OH configuration to recognize and interact with the sterol. The sterol ring structure is also important. A 5-6 double bond (cholesterol) favors PFO binding more than a double bond in the 4-5 (allocholesterol), 7-8 (lathosterol), or 8-9 (zymostenol) positions. Ring system planarity also has a significant effect, with the relatively flat dihydrocholesterol interacting with PFO better than coprostanol, an isomer of dihydrocholesterol that is highly bent between the steroid A and B rings. The methyl groups on the sterol rings, found in lanosterol, strongly interfere with PFO interactions, although this may also be partially due to the 8-9 double bond that it has in common with zymostenol. (It should be noted that the weak interaction of PFO with lanosterol is consistent with previous studies on ostreolysin (19).) Even aliphatic side chain structure had some effect, as shown by the slightly weaker interactions of PFO with sitosterol (which has a C24 ethyl group) than with cholesterol.
It would appear from these results that PFO recognizes groups all along the sterol molecule. This would be consistent with the presence of a sterol-binding pocket almost totally surrounded by residues in Domain 4 (49). On the other hand, only the tip of Domain 4 at one end of the elongated PFO molecule is embedded in the bilayer, and this is sufficient for cholesterol recognition and binding (7,10). Because different sterols will occupy different steric spaces and hence will pack differently within the bilayer, bilayer surfaces will be created that differ in the exposure of the sterols, including the portions most likely to directly interact with PFO, to the aqueous solution. A bilayer-inserted sterol that is more exposed to aqueous solution will be more exposed to PFO in solution and thus interact more readily with PFO than one that is less exposed to solution. In this fashion the steric configurations of the hydrophobic portions of the various sterols would be expected to indirectly dictate the extent to which PFO recognizes the sterol molecule, even when the sterol is not totally buried within the protein. Defining the exact molecular origin of the sterol specificity of interactions with PFO will require further studies.
Effect of Lipid Structure upon PFO-Membrane Interaction: Implications for PFO Interaction with Lipid Rafts-It may seem puzzling that the sterol specificity of PFO binding to membranes does not support a model in which there is a close correlation between the raft (ordered domain) stabilizing abilities of a sterol (25-28, 50) and PFO binding. Several sterols that stabilize ordered domain formation (cholesterol, dihydrocholesterol, sitosterol, lathosterol (25,26,28)) interact well with PFO, but epicholesterol, which also stabilizes ordered domains (27), does not, and coprostanol, which destabilizes ordered domains (27), does.
However, if sterol binding enhances PFO interaction with rafts, an obvious mechanism would be that the raft-associating surfaces of the sterol remain exposed to the lipid bilayer upon binding PFO, and thus not interact with PFO. This would be analogous to the familiar mechanism by which binding to the headgroup of ganglioside GM1 anchors cholera toxin in rafts (51). A sterol bound in a deep cleft within the protein could not directly aid raft association in this way. Thus, one might not expect PFO to have any strong preference for sterols that form lipid rafts.
Furthermore, we have shown that PFO insertion is triggered more readily in a loosely packed lipid environment. This behavior also does not imply that PFO would have a higher affinity for disordered lipid domains than for lipid raft domains. It is possible that PFO could insert into disordered domains and then move into ordered domains subsequent to insertion. Furthermore, in membranes with co-existing disordered and ordered domains it is likely that the cholesterol concentration would be higher in the ordered domains (52,53), and this would tend to cancel out the preference of PFO for cholesterol in a loosely packed environment. Also, it should be kept in mind that the ordered domains in cells would be more complex than in our binary lipid mixtures, and contain some unsaturated lipids that might increase PFO affinity for ordered domains. Indeed, our preliminary studies indicate that in membranes with co-existing ordered and disordered domains, PFO does have a tendency to partition into ordered domains to a significant degree.
Nonlinear Blind Identification with Three-Dimensional Tensor Analysis
This paper deals with the analysis of a third-order tensor composed of fourth-order output cumulants used for blind identification of a second-order Volterra-Hammerstein series. It is demonstrated that this nonlinear identification problem can be converted into a multivariable system of equations of the form Ax + By = c. The system may be solved using several methods. Simulation results with the Iterative Alternating Least Squares (IALS) algorithm provide good performances for different signal-to-noise ratio (SNR) levels. Convergence issues are addressed using an invertibility (full-rank) analysis of the matrices A and B. Comparison results with other existing algorithms are carried out to show the efficiency of the proposed algorithm.
Introduction
Nonlinear system modeling based on real-world input/output measurements is widely used in many applications. Choosing an appropriate model and determining its parameters from the input/output data require a suitable and efficient identification method [1-7].
Hammerstein models are special classes of second-order Volterra systems in which the second-order homogeneous Volterra kernel is diagonal [8]. These systems have been successfully used to model nonlinear systems in a number of practical applications in several areas such as chemical processes, biological processes, signal processing, and communications [9-12]. For example, in digital communication systems, the communication channels are usually impaired by nonlinear intersymbol interference (ISI); channel identification allows the ISI effects to be compensated at the receivers.
In [13], a penalty transformation method is developed. A penalty function is formed by equations relating the unknown parameters of the model to the autocorrelations of the signal. This function is then included in the cost function, yielding an augmented Lagrangian function. It has been demonstrated that this approach gives good identification results for nonlinear systems. However, the approach remains sensitive to additive Gaussian noise because the 2nd-order moment is used as a constraint. The authors of [7] overcame this sensitivity by using 4th-order cumulants as a constraint instead of 2nd-order moments in order to smooth out the additive Gaussian noise, but their approach, which is based on a simplex-genetic algorithm, is slow and computationally complex.
The main drawback of identification with Volterra series lies in the parametric complexity and the need to estimate a very large number of parameters. In many cases, the Volterra series identification problem may be greatly simplified using a tensor formulation [10-12, 14]. The authors of [10] used a parallel factor (PARAFAC) decomposition of the kernels to derive Volterra-PARAFAC models, yielding an important reduction in parametric complexity for Volterra kernels of order higher than two. They proved that these models are equivalent to a set of parallel Wiener models. Consequently, they proposed three adaptive algorithms for identifying these Volterra-PARAFAC models for complex-valued input/output signals, namely, the extended complex Kalman filter, the complex least mean square (CLMS) algorithm, and the normalized CLMS algorithm.
In this paper, the algorithm derived in [14] is extended and applied to the blind identification of a general second-order Volterra-Hammerstein system. The main idea is to develop a general expression for the slices of a cubic tensor in each direction and then to express the tensor slices in an unfolded representation. The elements of the three-dimensional tensor are formed by the fourth-order output cumulants. This yields an Iterative Alternating Least Squares (IALS) algorithm, which offers benefits over the original Volterra filters in terms of implementation and complexity reduction. A convergence analysis based on a study of the invertibility of the matrices involved is given, showing that the proposed IALS algorithm converges to optimal solutions in the least mean squares sense. Furthermore, some simulation results and comparisons with different existing algorithms are provided.
The present work is organized as follows. In Section 2, a brief study of the three-dimensional tensor is presented. In Section 3, the model under study and the related output cumulants are proposed, whereas in Section 4 the decomposition analysis of the cumulant tensor is developed. In Sections 5 to 8, we give, respectively, the proposed blind identification algorithm, the convergence study, and some simulation results, and finally the main conclusions are drawn.
Three-Dimensional Tensor and Different Slice Expressions
A three-dimensional tensor $\mathcal{C} \in \mathbb{C}^{M \times M \times M}$ can be expressed by $\mathcal{C} = \sum_{i,j,k=1}^{M} C_{ijk}\, e^{M}_{i} \circ e^{M}_{j} \circ e^{M}_{k}$, where $C_{ijk}$ is the tensor value at position $(i, j, k)$ of the cube with dimension M, $e^{M}_{p}$ denotes the pth canonical basis vector of dimension M, and the symbol $\circ$ stands for the outer product (Figure 1). A cubic tensor can always be sliced along three possible directions (horizontal, vertical, and frontal), as depicted in Figure 2. This yields, in each case, M matrices of dimensions M × M.
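As a small numerical illustration of this outer-product representation and of the three slicing directions (a hedged sketch with arbitrary data; the tensor values and names are illustrative, not taken from the paper):

```python
import numpy as np

M = 3
rng = np.random.default_rng(0)
C = rng.standard_normal((M, M, M))      # a generic cubic tensor

# Rebuild C from the canonical-basis outer products sum_{ijk} C_ijk e_i o e_j o e_k.
E = np.eye(M)
C_rebuilt = sum(C[i, j, k] * np.einsum('a,b,c->abc', E[i], E[j], E[k])
                for i in range(M) for j in range(M) for k in range(M))
assert np.allclose(C, C_rebuilt)

# Three families of slices: horizontal (fix the 1st index), vertical (fix the 2nd),
# frontal (fix the 3rd); each family contains M matrices of size M x M.
horizontal = [C[i, :, :] for i in range(M)]
vertical = [C[:, j, :] for j in range(M)]
frontal = [C[:, :, k] for k in range(M)]
```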
The ith slice in the horizontal direction is given by $C_{i::} = \sum_{j,k=1}^{M} C_{ijk}\, e^{M}_{j} (e^{M}_{k})^{T}$ (2.2). In the same manner, the slices along the vertical and frontal directions are expressed, respectively, by $C_{:j:} = \sum_{i,k=1}^{M} C_{ijk}\, e^{M}_{i} (e^{M}_{k})^{T}$ and $C_{::k} = \sum_{i,j=1}^{M} C_{ijk}\, e^{M}_{i} (e^{M}_{j})^{T}$ (2.3).
It is important to express the tensor slices in an unfolded representation, obtained by stacking up the 2D matrices. Hence, three unfolded representations of C are obtained: for the horizontal, the vertical, and the frontal directions, the M slices of the corresponding direction are stacked on top of one another.
We note that each unfolded matrix $C_{p}$, $p = 1, 2, 3$, has dimensions $M^{2} \times M$.
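A minimal sketch of the three unfolded matrices, assuming the slices are stacked vertically (the exact stacking order used in the paper may differ; all names here are illustrative):

```python
import numpy as np

def unfoldings(C):
    """Return the three unfolded representations of a cubic tensor C (M x M x M),
    each built by stacking the M slices of one direction into an (M*M) x M matrix."""
    M = C.shape[0]
    C1 = np.vstack([C[i, :, :] for i in range(M)])   # horizontal slices stacked
    C2 = np.vstack([C[:, j, :] for j in range(M)])   # vertical slices stacked
    C3 = np.vstack([C[:, :, k] for k in range(M)])   # frontal slices stacked
    return C1, C2, C3

C1, C2, C3 = unfoldings(np.arange(27, dtype=float).reshape(3, 3, 3))
print(C1.shape)  # (9, 3)
```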
Nonlinear System Model and Output Cumulants Analysis
We focus on the identification of a second-order Volterra-Hammerstein model with finite memory as given in [14]: $y(n) = \sum_{i=0}^{M} h_{1}(i)\, u(n-i) + \sum_{i=0}^{M} h_{2}(i)\, u^{2}(n-i) + e(n)$ (3.1), where $u(n)$ is the input of the system, assumed to be a stationary zero-mean Gaussian white random process with $E[u^{2}(n)] = \gamma_{2}$, and M stands for the model order. The Hammerstein coefficient vectors $h_{1}$ and $h_{2}$ are defined by $h_{p} = [h_{p}(0), h_{p}(1), \ldots, h_{p}(M)]^{T}$, $p = 1, 2$ (3.2).
As we mentioned in the Introduction, identification algorithms based on the computation of 2nd-order output cumulants are sensitive to additive Gaussian noise because the 2nd-order cumulants of such noise are in general different from zero. Since the 4th-order cumulants of additive Gaussian noise are null, it is of interest to use the 4th-order output cumulants to derive identification algorithms. But this introduces another problem, namely computational complexity. In this paper, we overcome this shortcoming by using a tensor analysis.
To determine the kernels of this model, we will generate the fourth-order output cumulants. For this purpose, we need to use the standard properties of cumulants and the Leonov-Shiryaev formula for manipulating products of random variables.
The fourth-order output cumulant $c_{4y}(i_{1}, i_{2}, i_{3})$ is given in [15]. It is easy to verify that $c_{4y}(i_{1}, i_{2}, i_{3}) = 0$ for all $|i_{1}|, |i_{2}|, |i_{3}| > M$. All the nonzero terms of $c_{4y}(i_{1}, i_{2}, i_{3})$ are obtained for $(i_{1}, i_{2}, i_{3}) \in [-M, M]^{3}$. Such a choice allows us to construct maximally redundant information, in which the fourth-order cumulants are taken for time lags $i_{1}$, $i_{2}$, and $i_{3}$ within the range $[-M, M]$.
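The closed-form cumulant expression for this model is the one given in [15] and is not reproduced here; the sketch below only illustrates how fourth-order cumulants can be estimated from an output record, assuming a zero-mean stationary sequence and the standard moment expansion of the fourth-order cumulant (function and variable names are illustrative):

```python
import numpy as np

def cum4_est(y, t1, t2, t3):
    """Sample estimate of c4y(t1, t2, t3) for a zero-mean stationary sequence y:
    c4 = E[y0*y1*y2*y3] - E[y0*y1]E[y2*y3] - E[y0*y2]E[y1*y3] - E[y0*y3]E[y1*y2],
    where yl denotes y shifted by the corresponding lag."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                          # enforce the zero-mean assumption
    lags = [0, t1, t2, t3]
    lo, hi = min(lags), max(lags)
    n = np.arange(-lo, len(y) - hi)           # indices keeping every shift in range
    y0, y1, y2, y3 = (y[n + l] for l in lags)
    m2 = lambda a, b: np.mean(a * b)          # biased 2nd-order moments on the window
    return (np.mean(y0 * y1 * y2 * y3)
            - m2(y0, y1) * m2(y2, y3)
            - m2(y0, y2) * m2(y1, y3)
            - m2(y0, y3) * m2(y1, y2))
```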
In the sequel we shall present an analysis of a 3rd-order tensor composed of the 4th-order output cumulants.
Formulation and Analysis of a Cumulant Cubic Tensor
Let us define the three-dimensional tensor $\mathcal{C}_{4,y} \in \mathbb{C}^{(2M+1) \times (2M+1) \times (2M+1)}$, in which the element at position $(i, j, k)$ is the fourth-order output cumulant given by (3.4). It follows that the tensor admits the outer-product representation (4.2), so that it can be written in the form (4.3). The mathematical development of expression (4.3) yields an expression in terms of the quantities defined in (4.5) for $p = 1, 2$.
This notation leads us to define two channel matrices $H_{1}, H_{2} \in \mathbb{C}^{(2M+1) \times (M+1)}$ as $H_{p} = \mathcal{H}(h_{p})$, $p = 1, 2$, where $\mathcal{H}(\cdot)$ is the operator that builds a special Hankel matrix from its vector argument.
Let us now compute the different slices of the proposed tensor.
Horizontal Slices Expressions
From (2.2) and (4.3), we obtain the horizontal slices, which can be written in the form (4.8), where $\mathrm{diag}_{n}(\cdot)$ is the diagonal matrix formed from the nth row of its argument. It can easily be demonstrated that these slices simplify further, and it follows that the unfolded tensor representation is given by (4.11).
To develop this expression, we need the following property.
Property 1.
Let A be a matrix of dimensions (M, N) and B a matrix of dimensions (M', N'). Then the corresponding stacked-matrix identity holds, where $\odot$ stands for the Khatri-Rao (column-wise Kronecker) product.
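For reference, a minimal sketch of the Khatri-Rao (column-wise Kronecker) product itself (an illustrative implementation, not code from the paper):

```python
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker (Khatri-Rao) product of a (I x K) and b (J x K):
    column k of the result is kron(a[:, k], b[:, k]), so the output is (I*J) x K."""
    if a.shape[1] != b.shape[1]:
        raise ValueError("Khatri-Rao product requires the same number of columns")
    return np.einsum('ik,jk->ijk', a, b).reshape(a.shape[0] * b.shape[0], a.shape[1])

A = np.arange(6.0).reshape(3, 2)
B = np.arange(8.0).reshape(4, 2)
print(khatri_rao(A, B).shape)  # (12, 2)
```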
Applying Property 1 to the unfolded representation yields (4.13). In terms of the Khatri-Rao products of the channel matrices introduced in (5.1), let the matrices A and B be defined as in (5.2). The problem of blind nonlinear identification can then be expressed as a linear system of the form $A h_{1} + B h_{2} = c$, where the vector c collects the fourth-order cumulant data. This system can be solved using several methods; we propose to solve it using the Iterative Alternating Least Squares (IALS) algorithm.
Cost Functions and Iterative Alternating Least Square Algorithm
To apply the IALS algorithm, we alternately suppose that $A h_{1}$ or $B h_{2}$ is a constant vector. Then we get two cost functions to be minimized. Assuming that the vector $B h_{2}$ is constant, the first cost function is $J_{1}(h_{1}) = \lVert c - A h_{1} - B h_{2} \rVert^{2}$ (6.1); for the second cost function, we assume that $A h_{1}$ is constant, so that $J_{2}(h_{2}) = \lVert c - A h_{1} - B h_{2} \rVert^{2}$ (6.2). Applying the least squares solution to these two functions leads to $\hat{h}_{1} = A^{\#}(c - B h_{2})$ and $\hat{h}_{2} = B^{\#}(c - A h_{1})$ (6.3), where the superscript # denotes the Moore-Penrose pseudoinverse of the corresponding matrix.
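A hedged numerical sketch of this alternating update (it assumes the matrices A and B and the cumulant vector c have already been built; all names are illustrative):

```python
import numpy as np

def ials(A, B, c, n_iter=200, tol=1e-10, seed=0):
    """Iterative Alternating Least Squares for A h1 + B h2 = c:
    alternately hold one unknown fixed and solve for the other with the
    Moore-Penrose pseudoinverse, as in the update equations above."""
    rng = np.random.default_rng(seed)
    h1 = rng.standard_normal(A.shape[1])       # random initialization
    h2 = rng.standard_normal(B.shape[1])
    A_pinv, B_pinv = np.linalg.pinv(A), np.linalg.pinv(B)
    for _ in range(n_iter):
        h1_new = A_pinv @ (c - B @ h2)         # update h1 with B h2 held constant
        h2_new = B_pinv @ (c - A @ h1_new)     # update h2 with A h1 held constant
        done = max(np.linalg.norm(h1_new - h1), np.linalg.norm(h2_new - h2)) < tol
        h1, h2 = h1_new, h2_new
        if done:
            break
    return h1, h2
```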
Finally, the different steps of the proposed IALS algorithm are summarized in Algorithm 1.
The notation $\hat{x}$ stands for the estimate of the parameter x.
Convergence Analysis
Equation (6.3) shows that the ALS algorithm converges to optimal solutions if and only if the Moore-Penrose pseudoinverse matrices $A^{\#}$ and $B^{\#}$ exist, which implies that matrices A and B must be full rank [14, 16]. To show this, we start by noting that, due to the Hankel structure and the assumption that $h_{i}(M) \neq 0$ in (3.1), each of the matrices $H_{1}$ and $H_{2}$ is full rank, i.e., of rank M + 1. Let us now find the rank of the matrices $H_{i} \odot H_{j} \odot H_{k}$, $i, j, k \in \{1, 2\}$, obtained from the Khatri-Rao products (5.1). We will make use of the following definition and property, which define the k-rank of a matrix and bound the rank of the Khatri-Rao product of two matrices [17]. The k-rank of a matrix A is the largest integer k for which every set of k columns of A is linearly independent.
Property 3. Consider the Khatri-Rao product $A \odot B$ of two matrices having the same number F of columns; then $k_{A \odot B} \geq \min(k_{A} + k_{B} - 1, F)$ [17].
Algorithm 1 (the different steps of the new blind identification algorithm based on cumulant tensor analysis): (i) initialize the estimates $\hat{h}_{1}^{(0)}$ and $\hat{h}_{2}^{(0)}$ at random; (ii) compute the matrix estimates $\hat{A}$ and $\hat{B}$; (iii) minimize the cost functions (6.1) and (6.2) to obtain $\hat{h}_{1}^{(n+1)}$ and $\hat{h}_{2}^{(n+1)}$ from $\hat{h}_{1}^{(n)}$ and $\hat{h}_{2}^{(n)}$, and iterate until convergence.
It follows from Property 3, together with the definition of the Khatri-Rao product and the Hankel structure of the matrices $H_{i}$, $i \in \{1, 2\}$, that the k-rank of each such product equals its number of columns. Consequently, each matrix $H_{i} \odot H_{j} \odot H_{k}$ is full rank, whatever the values taken by i, j, and k in the set {1, 2}.
Let us now find the rank of the matrices A and B. For this purpose, we will study the structure of the matrix $H_{i} \odot H_{j} \odot H_{k}$. Recall that $H_{i}$ is a $(2M+1) \times (M+1)$ matrix. Let $\Theta_{i} = (0\ 0\ \cdots\ 0)^{T}$ be the zero column vector of dimension 2M+1, $i = 1, \ldots, M$.
Then the matrix $H_{i} \odot H_{j} \odot H_{k}$ has a structured form whose nonzero columns $X_{i}$ are constituted by products of the model kernels arising from the computation of the Khatri-Rao product. We have seen that $H_{i} \odot H_{j} \odot H_{k}$ is full rank. The sum of different matrices $H_{i} \odot H_{j} \odot H_{k}$ has the same form as $H_{i} \odot H_{j} \odot H_{k}$, whatever the system order and the values taken by i, j, and k. Consequently, the matrices A and B are full rank, and their pseudoinverses exist. We conclude that the IALS algorithm converges to an optimal solution in the least mean squares sense.
Simulation Results
In this section, simulation results are given to illustrate the performance of the proposed algorithm. Two Volterra-Hammerstein systems to be identified are considered, System 1 and System 2, whose kernels are given in (8.1).
The input sequence $u(n)$ is assumed to be stationary, zero-mean, white Gaussian noise with variance $\gamma_{2} = 1$. The noise signal $e(n)$ is also assumed to be a white Gaussian sequence, independent of the input. The parameter estimation was performed for two different signal-to-noise ratio (SNR) levels: 20 dB and 3 dB.
The SNR is computed as the ratio of the output signal power to the noise power, expressed in dB. Fourth-order cumulants were estimated from output sequences of different lengths, N = 4096 and N = 16384, assuming perfect knowledge of the system model. To reduce the realization dependency, the parameters were averaged over 500 Monte-Carlo runs.
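A small sketch of this setup, assuming the usual power-ratio definition of SNR in dB (the exact expression used in the paper is not reproduced; the toy output model, function names, and variable names are illustrative):

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in dB as the ratio of signal power to noise power."""
    return 10.0 * np.log10(np.mean(np.square(signal)) / np.mean(np.square(noise)))

def scale_noise_to_snr(signal, noise, target_db):
    """Rescale `noise` so that snr_db(signal, noise) equals `target_db`."""
    return noise * 10.0 ** ((snr_db(signal, noise) - target_db) / 20.0)

rng = np.random.default_rng(0)
u = rng.standard_normal(4096)                       # unit-variance Gaussian input
y_clean = 0.8 * u + 0.3 * u**2                      # toy memoryless nonlinearity
e = scale_noise_to_snr(y_clean, rng.standard_normal(4096), target_db=20.0)
print(round(snr_db(y_clean, e), 2))                 # ~20.0
```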
System 1
Figures 3 and 4 show the estimates of the different kernels of the proposed model, obtained with the IALS algorithm, for N = 4096 and for SNR levels of 3 dB and 20 dB.
The mean and the standard deviation of the estimated kernels against the true ones are shown in Table 1.
Likewise, Figures 5 and 6 show the estimates of the different kernels of System 1 for N = 16384 and for SNR levels equal to 3 dB and 20 dB, while Table 2 shows the mean and the standard deviation of the estimated kernels against the true ones.
From these results, we observe that the proposed IALS algorithm performs well, generating good estimates over a large variation of the SNR (from 20 dB down to 3 dB). We also note that the standard deviation is relatively large and decreases with the number of output observations.
System 2
Figures 7 and 8 show the estimates of the different kernels of the second proposed model for N = 4096 and for SNR levels of 20 dB and 3 dB.
The mean and the standard deviation of the estimated kernels against the true ones are shown in Table 3.
Figures 9 and 10 show the estimates of the different kernels of System 2 for N = 16384 and for SNR levels of 20 dB and 3 dB, while Table 4 shows the mean and the standard deviation of the estimated kernels against the true ones.
From these results, we note also that the proposed algorithm provides good estimates for the proposed system. The number of observations N affects the range of variation of the standard deviation values; indeed, for large values of N, this range becomes small. The method provides good estimates even for low SNR levels. Furthermore, we note that the larger the number of Monte-Carlo runs, the smaller the standard deviations.
Comparison with Existing Methods
The performance of the proposed algorithm was compared with two existing methods: the algorithm proposed in [9], denoted BIL (blind identification with linearization), and the Lagrange Programming Neural Network (LPNN) proposed in [13].
(i) In [14], the problem of blind identification was converted into a linear multivariable form using Kronecker products of the output cumulants, where $C^{k}_{y}(\tau_{1}, \tau_{2}, \ldots, \tau_{k-1})$ denotes the output cumulant sequence of order k and $\Gamma_{kw}$ is the intensity (zero-lag cumulant) of order k of the vector W. Different important scenarios were discussed and successfully resolved; here, we are interested in the case of a Gaussian input when the input statistics are known. Despite the efficiency of the proposed method, the resulting algorithms are in general cumbersome, especially for high series orders (as confirmed by the authors). For more details, see [14]. (ii) In their work [13], the authors determine the different Volterra kernels and the variance of the input from the autocorrelation estimates $\rho(k)$ and the third-order moment estimates $\mu(k, l)$ of the system output, using the Lagrange Programming Neural Network (LPNN). As the LPNN is essentially designed for general nonlinear programming, they expressed the identification problem as a constrained problem in which $R(i, j)$, the autocorrelation function of the real process $y(n)$, and $M(i, j)$, its third-order moment sequence, appear as constraints, and f is the vector formed by the unknown parameters of the Volterra model and the unknown variance of the driving noise. The corresponding Lagrangian function is then formed and, to improve the convergence and the precision of the algorithm, the authors extended it by defining an Augmented Lagrangian Function in which $\{\beta_{k}\}$ is a penalty parameter sequence satisfying $0 < \beta_{k} < \beta_{k+1}$ for all k, with $\beta_{k} \to \infty$. The back-propagation algorithm can then be established using the Lagrange multipliers. The performance of the new proposed algorithm was compared with these two algorithms. Each of these algorithms was used to identify the two models of (8.1), for the case of Gaussian excitation, N = 16384 samples, and the two proposed SNR levels, 3 dB and 20 dB.
Figures 11 and 12 show a comparison of the standard deviations given by each algorithm. We note that these results may vary considerably depending on the number of output observations. These results show that the new proposed algorithm performs well. For a small number of unknown parameters, all algorithms give in general the same STD values, and these values decrease as the SNR increases. We note furthermore that the BIL algorithm is much more complex to program than the LPNN and the new one. For a large number of unknown parameters, the BIL algorithm, and even the LPNN, become very computationally complex, while the new algorithm keeps its simplicity and provides good parameter estimates with very competitive STD values.
Conclusion
In this paper, a new approach to the blind nonlinear identification problem of a second-order Hammerstein-Volterra system is developed. Thanks to a matrix analysis of a cubic tensor composed of the fourth-order output cumulants, the nonlinear identification problem is reduced to a system of the general form Ax + By = c. This system is solved using Iterative Alternating Least Squares. A convergence analysis shows that matrices A and B are full rank, which means that the IALS algorithm converges to optimal solutions in the least mean squares sense. Simulation results on two different systems show good performance of the proposed algorithm. It is noted also that the estimates improve with the number of system observations, even for small values of SNR. Comparison results with two algorithms show that the new proposed algorithm performs well, especially in the case of a large number of unknown parameters. Extending the proposed algorithm to more input classes and to more general Volterra-Hammerstein systems remains an open problem and is the subject of current work.
Figure 2: Different direction slices of a cubic tensor.
Definition 7.1. The k-rank of a matrix $A \in \mathbb{C}^{E \times F}$, denoted by $k_{A}$, is equal to k if and only if every k columns of A are linearly independent. Note that $k_{A} \leq \min(E, F)$ for all A.
Figure 3: Estimates of the parameters of System 1 with the IALS algorithm for N = 4096 and SNR = 3 dB.
Figure 4: Estimates of the parameters of System 1 with the IALS algorithm for N = 4096 and SNR = 20 dB.
Figure 5: Estimates of the parameters of System 1 with the IALS algorithm for N = 16384 and SNR = 3 dB.
Figure 6: Estimates of the parameters of System 1 with the IALS algorithm for N = 16384 and SNR = 20 dB.
Figure 10: Estimates of the parameters of System 2 with the IALS algorithm for N = 16384 and SNR = 20 dB.
Figure 11: Comparison of the standard deviations (STDs) of the new algorithm against those of the LPNN and the BIL algorithms: System 1.
To estimate the Volterra-Hammerstein kernels and to avoid the explicit computation of $H_{p}$, $p = 1, 2$, we will use the following Khatri-Rao property to propose an Iterative Alternating Least Squares (IALS) procedure. Property 2. If matrices $A \in \mathbb{C}^{m \times n}$ and $B \in \mathbb{C}^{n \times m}$ and vector $d \in \mathbb{C}^{n}$ are such that $X = A\, \mathrm{diag}(d)\, B$, then it holds that $\mathrm{vec}(X) = (B^{T} \odot A)\, d$, where $\mathrm{vec}(\cdot)$ stands for the vectorizing operator.
Table 1: True and estimated values of the kernels of System 1 for N = 4096 (500 Monte-Carlo runs).
Table 2: True and estimated values of the kernels of System 1 for N = 16384 (500 Monte-Carlo runs).
Table 3: True and estimated values of the kernels of System 2 for N = 4096 (500 Monte-Carlo runs).
Table 4: True and estimated values of the kernels of System 2 for N = 16384 (500 Monte-Carlo runs).
SEMIDEFINITE PROGRAMMING VIA IMAGE SPACE ANALYSIS
In this paper, we investigate semidefinite programming by using the image space analysis and present some equivalence between the (regular) linear separation and the saddle points of the Lagrangian functions related to semidefinite programming. Some necessary and sufficient optimality conditions for semidefinite programming are also given under suitable assumptions. As an application, we obtain some equivalent characterizations of necessary and sufficient optimality conditions for linear semidefinite programming under the Slater assumption.
1. Introduction. As pointed out by Vandenberghe and Boyd [12], there are good reasons for studying semidefinite programming. First, positive semidefinite (or definite) constraints directly arise in a number of important applications. Second, many convex optimization problems, e.g., linear programming and (convex) quadratically constrained quadratic programming, can be cast as semidefinite programs. So, semidefinite programming offers a unified way to study the properties of, and derive algorithms for, a wide variety of convex optimization problems. Most importantly, semidefinite programs can be solved very efficiently, both in theory and in practice. Extensive lists of applications from various areas can be found in [9,12]. In recent years, many authors have investigated semidefinite programming (see, for example, [5,11,13,15]).
The image of a constrained extremum problem was developed by Giannessi [2], by exploiting previous results on theorems of the alternative. Recently, there has been an increasing interest in the Image Space Analysis (for short, ISA) of constrained variational inequalities and constrained optimization problems (see, for example, [3,1,4,6,7,14]).
The ISA is a powerful tool and a unifying scheme for studying both variational inequalities and optimization problems. This approach can be applied to any kind of problem that can be expressed under the form of the impossibility of a parametric system. The impossibility of such a system is reduced to the disjunction of two suitable subsets of the image space. The disjunction between the two suitable subsets is proved by showing that they lie in two disjoint level sets of a separating functional.
The purpose of this paper is to carry on the ISA of semidefinite programming. We present some equivalence between the (regular) linear separation and the saddle points of the Lagrangian functions related to semidefinite programming. We give necessary and sufficient optimality conditions for semidefinite programming under suitable assumptions. As an application, under the Slater assumption we obtain some equivalent characterizations of necessary and sufficient optimality conditions for linear semidefinite programming.
The paper is organized as follows. In Section 2, we recall some notations that will be used in the sequel. In Section 3, we define the image of semidefinite programming and the conical extension of the image, and give the equivalence between the solvability of semidefinite programming and an empty intersection of H and the conical extension of the image. In Section 4, we characterize the (regular) linear separation for semidefinite programming and we give the equivalence between (regular) linear separation and saddle points of Lagrangian functions for semidefinite programming. We present sufficient and necessary conditions for the solvability of semidefinite programming in Section 5. Section 6 investigates some equivalent results of necessary and sufficient optimality conditions for linear semidefinite programming.
2. Notations and problem formulation. Let $\mathbb{R}^{k}$ be the k-dimensional Euclidean space, where k is a given positive integer. We denote $\mathbb{R}^{k}_{+} := \{x = (x_{1}, \ldots, x_{k})^{\top} \in \mathbb{R}^{k} : x_{i} \geq 0,\ i = 1, \ldots, k\}$ and $\mathbb{R}^{k}_{++} := \{x \in \mathbb{R}^{k} : x_{i} > 0,\ i = 1, \ldots, k\}$, where $\top$ denotes the transpose. Let $\mathbb{R}_{+} := \mathbb{R}^{1}_{+}$ and $\mathbb{R}_{++} := \mathbb{R}^{1}_{++}$. A nonempty subset P of $\mathbb{R}^{k}$ is said to be a cone with apex at the origin if $\lambda P \subseteq P$ for all $\lambda \geq 0$. P is said to be a convex cone if P is a cone and $P + P = P$. The dual cone (or positive polar cone) of a convex cone P is given by $P^{*} := \{y \in \mathbb{R}^{k} : \langle y, x \rangle \geq 0,\ \forall x \in P\}$, where $\langle \cdot, \cdot \rangle$ denotes the inner product.
Denote by $\mathrm{dom}\, h := \{x \in \mathbb{R}^{k} : h(x) < +\infty\}$ the effective domain of $h : \mathbb{R}^{k} \to \mathbb{R} \cup \{+\infty\}$; h is said to be convex on a convex set K if $h(tx + (1-t)y) \leq t h(x) + (1-t) h(y)$ for any $x, y \in K$, $t \in [0, 1]$. The subdifferential of a proper convex function $h : \mathbb{R}^{k} \to \mathbb{R} \cup \{+\infty\}$ at $x \in \mathrm{dom}\, h$ is given by $\partial h(x) := \{\xi \in \mathbb{R}^{k} : h(y) \geq h(x) + \langle \xi, y - x \rangle,\ \forall y \in \mathbb{R}^{k}\}$. It is well known that if h is differentiable at x, then $\partial h(x) = \{\nabla h(x)\}$, where $\nabla$ denotes the gradient. Let K be a nonempty subset of $\mathbb{R}^{k}$. The indicator function of K is defined by $\delta_{K}(x) := 0$ if $x \in K$ and $\delta_{K}(x) := +\infty$ otherwise. The normal cone to K at $x \in K$, denoted by $N_{K}(x)$, is defined by $N_{K}(x) := \{\xi \in \mathbb{R}^{k} : \langle \xi, y - x \rangle \leq 0,\ \forall y \in K\}$. Denote by $S^{l}$ the linear subspace of the symmetric $l \times l$ matrices with real entries, i.e., $S^{l} := \{A \in \mathbb{R}^{l \times l} : A^{\top} = A\}$. Denote by $S^{l}_{+}$ the cone of the symmetric positive semidefinite $l \times l$ matrices with real entries, i.e., $S^{l}_{+} := \{A \in S^{l} : x^{\top} A x \geq 0,\ \forall x \in \mathbb{R}^{l}\}$, and by $S^{l}_{++}$ the set of the symmetric positive definite $l \times l$ matrices with real entries (which actually coincides with $\mathrm{int}\, S^{l}_{+}$, see [11]), i.e., $S^{l}_{++} := \{A \in S^{l} : x^{\top} A x > 0,\ \forall x \in \mathbb{R}^{l}$ with $x \neq 0\}$. Then the so-called Löwner partial order can be introduced as follows: $A \succeq B$ (resp., $A \succ B$) if and only if $A - B \in S^{l}_{+}$ (resp., $A - B \in S^{l}_{++}$). The scalar product and the Frobenius norm in $S^{l}$ are given by $\langle A, B \rangle := \mathrm{tr}(AB) = \sum_{i,j=1}^{l} A_{ij} B_{ij}$ and $\lVert A \rVert_{F} := \langle A, A \rangle^{1/2}$, respectively, where $A_{ij}$ and $B_{ij}$ are the (i, j) elements of A and B, respectively. It is well known that the space $S^{l}$ is self-dual, i.e., $(S^{l})^{*} = S^{l}$. It is proved in [15] that the convex cone $S^{l}_{+}$ is also self-dual, i.e., $(S^{l}_{+})^{*} = S^{l}_{+}$. Also, one can easily check that $S^{l}_{+}$ is a closed subset of $S^{l}$, i.e., $\mathrm{cl}\, S^{l}_{+} = S^{l}_{+}$, where cl denotes the closed hull. Notice that $S^{l}$ can be considered as the $l(l+1)/2$ dimensional Euclidean space. We say that $g : \mathbb{R}^{k} \to S^{l}$ is $S^{l}_{+}$-convex (resp., $S^{l}_{+}$-convexlike) on a convex set K if $g(tx + (1-t)y) \preceq t\, g(x) + (1-t)\, g(y)$ for any $x, y \in K$, $t \in [0, 1]$ (resp., $g(K) + S^{l}_{+}$ is convex). Clearly, if g is $S^{l}_{+}$-convex on K, then $g(K) + S^{l}_{+}$ is convex, but the converse is not true in general.
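As a small numerical illustration of the Löwner order and of the Frobenius scalar product (a sketch with arbitrary matrices; not part of the original development):

```python
import numpy as np

def loewner_geq(A, B, tol=1e-10):
    """Check A >= B in the Loewner order: A - B must be symmetric positive semidefinite."""
    D = A - B
    if not np.allclose(D, D.T):
        return False
    return np.linalg.eigvalsh(D).min() >= -tol   # eigvalsh assumes a symmetric argument

def frobenius_inner(A, B):
    """<A, B> = trace(A B) = sum_ij A_ij * B_ij for symmetric A and B."""
    return float(np.sum(A * B))

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = 0.5 * np.eye(2)
print(loewner_geq(A, B), frobenius_inner(A, B))   # True 1.5
```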
In this paper, without other specifications, let K be a nonempty convex subset of R^k and let f : R^k → R and g : R^k → S^l. We consider the following semidefinite programming problem (for short, SDP): minimize f(x) subject to x ∈ K and g(x) ⪰ 0, whose feasible region is denoted by F_p := {x ∈ K : g(x) ⪰ 0}. When f is linear and g is affine, SDP collapses to the following linear semidefinite programming problem (for short, LSDP).
3. Preliminary results on ISA for SDP. In this section, we shall carry out the ISA for SDP. Observe that x̄ ∈ F_p solves SDP if and only if the system (in the unknown x) f(x̄) − f(x) > 0, g(x) ⪰ 0, x ∈ K is impossible. We can associate SDP with the following sets: H := R_{++} × S^l_+ and K(x̄) := {(u, A) ∈ R × S^l : u = f(x̄) − f(x), A = g(x), x ∈ K}. We call the set K(x̄) the image of SDP at x̄ ∈ F_p and R × S^l the image space. Define the mapping G_x̄ : K → R × S^l by G_x̄(x) := (f(x̄) − f(x), g(x)), so that K(x̄) = G_x̄(K). Clearly, the impossibility of the above system is equivalent to H ∩ K(x̄) = ∅ (3.2). Consequently, x̄ ∈ F_p is a solution of SDP if and only if (3.2) is true.
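The explicit displays for LSDP and for the conical extension of the image used below did not survive extraction. The following LaTeX is a reconstruction under two stated assumptions, not a verbatim quotation: the affine data c_0, A_0, …, A_k of LSDP are inferred from the subdifferentials computed in Section 6, and E is taken to be the usual conical extension of the image in the ISA literature.

```latex
% Reconstruction (assumed forms, not verbatim):
\text{(LSDP)}\qquad \min_{x\in K}\ \langle c_0,x\rangle
\quad\text{s.t.}\quad g(x)=A_0+\sum_{i=1}^{k}x_iA_i\succeq 0,
\qquad c_0\in\mathbb{R}^{k},\; A_0,\dots,A_k\in S^{l};
\\[6pt]
E \;:=\; K(\bar x)-\operatorname{cl}H
  \;=\;\bigl\{(u,A)\in\mathbb{R}\times S^{l}:\ u\le f(\bar x)-f(x),\ A\preceq g(x),\ x\in K\bigr\}.
```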
As pointed out by Giannessi, to prove directly whether (3.2) holds or not is generally too difficult. The reason is that, in general, the image of SDP is not convex even when the functions involved enjoy some convexity properties. To overcome this difficulty, similarly to [3,4] (see also [6,7,14]), we introduce a regularization of the image K(x̄), namely its extension E with respect to the cone cl H. The following statements are true: (1) the mapping −G_x̄ is R_+ × S^l_+-convexlike on K, i.e., −G_x̄(K) + R_+ × S^l_+ is convex, if and only if the set E is convex; (2) condition (3.2) holds if and only if H ∩ E = ∅ (condition (3.3)), or, equivalently, (3.4) holds. Indeed, suppose that H ∩ E ≠ ∅; then there is (u_0, A_0) ∈ H ∩ E, i.e., there exists x_0 ∈ K such that u_0 ≤ f(x̄) − f(x_0) and A_0 ⪯ g(x_0), with u_0 > 0 and A_0 ⪰ 0. As a consequence, x_0 ∈ F_p and f(x_0) < f(x̄), which contradicts (3.2). To see that (3.3) and (3.4) are equivalent, we only need to prove that (3.4) implies (3.3).
4. Linear separation and saddle points of Lagrangian functions for SDP.
In this section, we shall characterize the (regular) linear separation of SDP and investigate the saddle points of the Lagrangian function associated to SDP. The sets K(x̄) and H are said to (1) admit a linear separation if there is (λ̄, Ā) ∈ R_+ × S^l_+, with (λ̄, Ā) ≠ (0, 0), such that λ̄(f(x̄) − f(x)) + ⟨Ā, g(x)⟩ ≤ 0 for all x ∈ K (4.1); (2) admit a regular linear separation if there is (λ̄, Ā) ∈ R_{++} × S^l_+ such that (4.1) holds.
Similarly, we can define the (regular) linear separation of the sets K(x̄) and E. The following lemma is well known (see, for example, [6]). Lemma 4.3. Let x̄ ∈ F_p and consider the statements: (1) the sets K(x̄) and H are linearly separable; (2) the sets E and H are linearly separable; (3) the sets K(x̄) and H admit a regular linear separation; (4) the sets E and H admit a regular linear separation. Then (1) ⇔ (2) ⇐ (3) ⇔ (4). If the following Slater condition (4.2) holds: there exists x̂ ∈ K such that g(x̂) ≻ 0, then the four statements are equivalent.
Let x̄ ∈ F_p. Consider the generalized Lagrangian function associated to SDP, defined by L : K × R_+ × S^l_+ → R, L(x; λ, A) := λ(f(x) − f(x̄)) − ⟨A, g(x)⟩. We also consider the following Lagrangian function associated to SDP, defined by L_0 : K × S^l_+ → R, L_0(x; A) := f(x) − ⟨A, g(x)⟩, ∀(A, x) ∈ S^l_+ × K. Definition 4.4. The point (λ̄, Ā, x̄) ∈ R_+ × S^l_+ × K is said to be a saddle point of the generalized Lagrangian function L on R_+ × S^l_+ × K if the following inequalities hold: L(x̄; λ, A) ≤ L(x̄; λ̄, Ā) ≤ L(x; λ̄, Ā), ∀(λ, A, x) ∈ R_+ × S^l_+ × K. The point (Ā, x̄) ∈ S^l_+ × K is said to be a saddle point of the Lagrangian function L_0 on S^l_+ × K if the following inequalities hold: L_0(x̄; A) ≤ L_0(x̄; Ā) ≤ L_0(x; Ā), ∀(A, x) ∈ S^l_+ × K. It is easy to verify the following proposition: Proposition 4.5. Let x̄ ∈ F_p and (λ̄, Ā) ∈ R_{++} × S^l_+. Then the point (λ̄, Ā, x̄) ∈ R_+ × S^l_+ × K is a saddle point of the generalized Lagrangian function L on R_+ × S^l_+ × K if and only if (Ā/λ̄, x̄) ∈ S^l_+ × K is a saddle point of the Lagrangian function L_0 on S^l_+ × K. We now characterize the linear separation for SDP by using the saddle points of the generalized Lagrangian function related to SDP. Theorem 4.6. Let x̄ ∈ F_p. Then the sets K(x̄) and H are linearly separable if and only if there exists (λ̄, Ā) ∈ R_+ × S^l_+, with (λ̄, Ā) ≠ 0, such that (λ̄, Ā, x̄) is a saddle point of L on R_+ × S^l_+ × K. Proof. Necessity. Suppose that K(x̄) and H are linearly separable. Then there exists (λ̄, Ā) ∈ R_+ × S^l_+, with (λ̄, Ā) ≠ 0, such that (4.1) holds. Letting x := x̄ in (4.1) gives ⟨Ā, g(x̄)⟩ ≤ 0. Since x̄ ∈ F_p, we have g(x̄) ⪰ 0 and therefore ⟨Ā, g(x̄)⟩ ≥ 0. As a consequence, ⟨Ā, g(x̄)⟩ = 0, and again from (4.1) one obtains the right-hand saddle inequality; from g(x̄) ⪰ 0 the left-hand saddle inequality follows as well. Sufficiency. Letting A := 0 in the first (left-hand) saddle inequality leads to ⟨Ā, g(x̄)⟩ ≤ 0. Since x̄ ∈ F_p, one has g(x̄) ⪰ 0 and so ⟨Ā, g(x̄)⟩ ≥ 0 since Ā ∈ S^l_+. Thus ⟨Ā, g(x̄)⟩ = 0, and it follows from the second (right-hand) saddle inequality that (4.1) holds, which yields that the sets K(x̄) and H are linearly separable.
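Since the displays defining L and L_0 were lost in extraction, the identity behind Proposition 4.5 is worth recording explicitly. Assuming the forms of L and L_0 reconstructed above, for λ̄ > 0 one has

```latex
L(x;\bar\lambda,\bar A)
 = \bar\lambda\bigl(f(x)-f(\bar x)\bigr)-\langle\bar A,g(x)\rangle
 = \bar\lambda\Bigl(f(x)-\bigl\langle\tfrac{\bar A}{\bar\lambda},g(x)\bigr\rangle\Bigr)
   -\bar\lambda f(\bar x)
 = \bar\lambda\,L_0\!\Bigl(x;\tfrac{\bar A}{\bar\lambda}\Bigr)-\bar\lambda f(\bar x),
```

so L(·; λ̄, Ā) and L_0(·; Ā/λ̄) differ only by a positive factor and an additive constant, which is why their saddle points correspond.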
Similarly, we can prove the following result: Theorem 4.7. Let x̄ ∈ F_p. Then the sets K(x̄) and H admit a regular linear separation if and only if there exists (λ̄, Ā) ∈ R_{++} × S^l_+ such that the point (λ̄, Ā, x̄) is a saddle point for L on R_+ × S^l_+ × K. From Proposition 4.5 and Theorem 4.7 we have: Theorem 4.8. Let x̄ ∈ F_p. Then the sets K(x̄) and H admit a regular linear separation if and only if there exists (λ̄, Ā) ∈ R_{++} × S^l_+ such that (Ā/λ̄, x̄) ∈ S^l_+ × K is a saddle point of the Lagrangian function L_0 on S^l_+ × K. Under some convexity assumptions on f and −g we also have the following proposition: Proposition 4.9. Let x̄ ∈ F_p. Suppose that f is convex on K and −g is S^l_+-convex on K. Then the point (λ̄, Ā, x̄) ∈ R_+ × S^l_+ × K is a saddle point of the generalized Lagrangian function L on R_+ × S^l_+ × K if and only if it is a solution of the system (4.4). Proof. Necessity. Suppose that the point (λ̄, Ā, x̄) ∈ R_+ × S^l_+ × K is a saddle point of the generalized Lagrangian function L on R_+ × S^l_+ × K. Then, from the proof of sufficiency in Theorem 4.6, we have ⟨Ā, g(x̄)⟩ = 0. It then follows from (4.3) that x̄ is a minimum point on R^k of the function f̄(·) := λ̄(f(·) − f(x̄)) − ⟨Ā, g(·)⟩ + i_K(·). Since K is convex, f is convex on K and −g is S^l_+-convex on K, the function −⟨Ā, g(·)⟩ is convex on K (see, for example, [8]) and, as a consequence, f̄ is convex on R^k. Since dom(λ̄(f(·) − f(x̄))) ∩ dom(−⟨Ā, g(·)⟩) ∩ dom i_K = K and ri K ≠ ∅, it follows from [10] that the subdifferential of f̄ at x̄ splits into the sum of the subdifferentials of its terms, which implies that (λ̄, Ā, x̄) solves system (4.4).
Sufficiency. Let (λ̄, Ā, x̄) solve system (4.4). Again, since dom(λ̄(f(·) − f(x̄))) ∩ dom(−⟨Ā, g(·)⟩) ∩ dom i_K = K and ri K ≠ ∅, it follows from [10] that the subdifferential sum rule applies. Thus (4.5) holds and, since ⟨Ā, g(x̄)⟩ = 0, the second (right-hand) saddle inequality follows. Since x̄ ∈ F_p, i.e., g(x̄) ⪰ 0, we have ⟨A, g(x̄)⟩ ≥ 0 for any A ∈ S^l_+. As a consequence, the first (left-hand) saddle inequality holds as well. Similarly, we have: Proposition 4.10. Let x̄ ∈ F_p. Suppose that f is convex on K and −g is S^l_+-convex on K. Then the point (Ā, x̄) ∈ S^l_+ × K is a saddle point of the Lagrangian function L_0 on S^l_+ × K if and only if it is a solution of the corresponding system, which includes the conditions x̄ ∈ F_p and Ā ∈ S^l_+. Corollary 4.11. Let x̄ ∈ F_p. Suppose that K is open, f is convex and differentiable on K, −g is S^l_+-convex on K and ⟨A, g(·)⟩ is differentiable on K for each A ∈ S^l_+. Then the point (Ā, x̄) ∈ S^l_+ × K is a saddle point of the Lagrangian function L_0 on S^l_+ × K if and only if it is a solution of the corresponding system of stationarity and complementarity conditions, holding for any x ∈ K. The conclusion follows immediately from Proposition 4.10.
5. Optimality conditions for SDP. In this section, we shall present the necessary and sufficient optimality conditions for SDP. First we present the following necessary optimality condition for SDP.
From Theorems 5.1 and 4.6 we have the following corollary. Corollary 5.2. If x̄ is a solution of SDP, then there exists (λ̄, Ā) ∈ R_+ × S^l_+, with (λ̄, Ā) ≠ 0, such that the point (λ̄, Ā, x̄) is a saddle point for L on R_+ × S^l_+ × K. Now we present the following sufficient optimality condition for SDP. Theorem 5.3. Let x̄ ∈ F_p. If the sets K(x̄) and H admit a regular linear separation, then x̄ is a solution of SDP.
From Theorems 5.3 and 4.3 we have the following corollary:
Corollary 5.4. Let x̄ ∈ F_p. If the sets K(x̄) and H are linearly separable and the Slater condition (4.2) holds, then x̄ is a solution of SDP.
6. Applications to LSDP. In this section, we shall apply the obtained results to characterize the necessary and sufficient optimality conditions of LSDP.
Proof. The conclusion follows immediately from the facts that ∂f(x) = {c_0} and ∂(−⟨A, g(·)⟩)(x) = {−(⟨A, A_1⟩, · · · , ⟨A, A_k⟩)^T}.
7. Conclusion. Recently, there has been an increasing interest in the ISA. Separation plays a vital role in the ISA. In this paper, the ISA was employed to investigate semidefinite programming. Some equivalences between the (regular) linear separation and the saddle points of the Lagrangian functions related to the problem were characterized. Necessary and sufficient optimality conditions for semidefinite programming were given under suitable assumptions. An application to linear semidefinite programming was also given to illustrate the obtained results.
| 3,939.6 | 2016-01-01T00:00:00.000 | ["Mathematics"] |
Water Area Extraction Using RADARSAT SAR Imagery Combined with Landsat Imagery and Terrain Information
This paper exploits an effective water extraction method using SAR imagery in preparation for flood mapping in unpredictable flood situations. The proposed method is based on the thresholding method using SAR amplitude, terrain information, and object-based classification techniques for noise removal. Since the water areas in SAR images have the lowest amplitude value, the thresholding method using SAR amplitude could effectively extract water bodies. However, the reflective properties of water areas in SAR imagery cannot distinguish the occluded areas caused by steep relief and they can be eliminated with terrain information. In spite of the thresholding method using SAR amplitude and terrain information, noises which interfered with users’ interpretation of water maps still remained and the object-based classification using an object size criterion was applied for the noise removal and the criterion was determined by a histogram-based technique. When only using SAR amplitude information, the overall accuracy was 83.67%. However, using SAR amplitude, terrain information and the noise removal technique, the overall classification accuracy over the study area turned out to be 96.42%. In particular, user accuracy was improved by 46.00%.
Introduction
SAR is an active sensor using a microwave signal, which can penetrate clouds and generate ground information regardless of the atmospheric conditions, unlike optics-based systems. Due to this characteristic, SAR can collect data from large areas under any weather conditions and the data is suitable for emergent disaster situations. In particular, when transmitted radar signals are reflected on flat water surfaces, a significantly weak return signal reaches the sensor and this characteristic makes it easy to identify flooding areas from SAR imagery [1,2]. Much research has been done on large scale flood mapping and flood dynamics [3][4][5][6][7][8][9][10][11][12]. In particular, the low return signal behavior of open water bodies supports the thresholding method [3,6,7,9,11,12]. For example, Schumann et al. [11] computed a threshold value using Otsu's algorithm [13] to estimate the between-class variance from a normalized histogram. They used the Digital Elevation Model (DEM) of the Shuttle Radar Topography Mission (SRTM) and the ENVISAT imagery to estimate flood profiles on large rivers. Matgen et al. [12] set a radiometric threshold based on a gamma distribution assumption and combined it with a region growing approach to extract water bodies.
Even though those histogram-based thresholding methods automatically obtain flood areas from SAR images with low complexity and computational efficiency, it is difficult to determine the optimized threshold. For example, the thresholding method only using SAR amplitude information cannot distinguish water from occluded regions caused by high terrain relief and flat land which has similar reflective characteristics with open water body [10,14]. Song et al. [10] used Gray Level Co-occurrence Matrix (GLCM), DEM and the Digital Slope Model (DSM) to remove the distortion caused by high relief and found that DSM displays the best performance in mountainous areas. Pierdicca et al. [15] integrated SAR imagery, land cover map and DEM into a fuzzy scheme for the flood mapping. Mason et al. [16] eliminated shadow and layover effects with airborne LiDAR data and detected urban flooding areas.
While the above methods can extract water bodies from SAR imagery, they have several limitations that impede their widespread use. Each flood mapping method with SAR imagery has just been studied in specific areas such as flat terrain, mountainous areas or urban areas. Because SAR reflectance properties are affected by the topography, vegetation and geometry of artificial objects, the signal return tendency cannot be fully estimated and the signal return tendency uncertainty decreases the performance of the water extraction using SAR imagery. For this reason, additional processes considering the ground conditions must be applied to the misclassified areas for large-scale flood mapping using SAR imagery. Moreover, since the pre-defined land cover information is required to determine the threshold value statistically, the periodic updating of land cover maps which have sufficient resolution and accuracy is a heavy and costly burden. Therefore, appropriate land cover maps which can be obtained instantly and a probabilistic approach to determine the threshold are required for practical water area mapping in unpredictable situations.
To overcome the abovementioned shortcomings of the existing methods, we proposed thresholding methods for water body extraction where threshold values concerning SAR amplitude and terrain information are determined based on the maximum-likelihood classifier and a land cover map created using Landsat TM imagery. We also applied the object-based algorithm to eliminate the misclassified pixels due to the unpredictable properties of land surfaces. For verification, we applied the proposed approach to RADARSAT-1 SAR images which captured the Jeollabuk-do area in Korea in 2005 when a flood event happened. A quantitative analysis was performed with a reference water map created using high-resolution orthorectified aerial images.
Overview
The water extraction technique proposed in this study is divided into four steps as shown in Figure 1: (1) SAR image geocoding to ground coordinates and eliminating the geometric and radiometric topographic effect using DEM; (2) generation of a land cover map from Landsat TM imagery using the Interactive Self-Organizing Data Analysis (ISODATA) algorithm; (3) using the land cover map, determining threshold values for SAR amplitude and terrain information for water area classification; (4) eliminating the remaining noises using object-based classification with a histogram-based algorithm.
Accurate geocoding is an important process for pixel-based image fusion [17]. The geocoding process includes three sub-procedures: obtaining Ground Control Points (GCPs), geometric correction and topographic correction. GCPs were obtained from 1:5000 scale digital topographic maps, and the geometric correction was conducted using the two fundamental equations of SAR geometry, i.e., the Range and Doppler equations [18]. Then, the topographic correction was performed using a DEM created from the 1:5000 scale digital topographic maps provided by the Korean National Geographic Information Institute. A land cover map is required to estimate the threshold values for the SAR amplitude and terrain information. For this purpose, we used Landsat 5 TM imagery with the ISODATA algorithm to extract a land cover map consisting of water and four other classes, into which urban land, agricultural land, forest, rangeland, wetland and barren land are grouped; each class has specific SAR amplitude and terrain properties. From the five classes, threshold values for the SAR amplitude and terrain information were estimated using a maximum-likelihood classifier. The determined threshold value for the SAR amplitude information could extract water bodies, and that for the terrain information could remove terrain areas distorted due to high relief. Even though most water bodies could be extracted by the thresholding method, a number of misclassified objects decreased the accuracy. For this reason, an object-based classification method based on an object size criterion was applied to eliminate the misclassified objects.
To verify the performance of the proposed method, we divided it into six cases and checked the accuracy of the water area classification, focusing in particular on misclassified objects in high-relief areas. The performance was assessed by the producer accuracy, which corresponds to errors of omission, and the user accuracy, which corresponds to errors of commission; a small computational sketch of these measures follows the case list below. According to the input data and the application of the noise removal method, the six cases were defined as follows:
Case #1: Amplitude
Case #2: Amplitude + Height
Case #3: Amplitude + Slope
Case #4: Amplitude + Removal of the misclassified areas
Case #5: Amplitude + Height + Removal of the misclassified areas
Case #6: Amplitude + Slope + Removal of the misclassified areas
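A small sketch of how the overall, producer and user accuracies reported for the six cases can be computed from a classified water map and the reference water map; the function and variable names and the synthetic data are ours, not from the paper.

```python
import numpy as np

def accuracy_report(water_map: np.ndarray, reference: np.ndarray) -> dict:
    """Binary water-map accuracies; inputs are boolean arrays of the same shape."""
    tp = np.sum(water_map & reference)      # water correctly classified as water
    fp = np.sum(water_map & ~reference)     # commission error: land mapped as water
    fn = np.sum(~water_map & reference)     # omission error: water mapped as land
    tn = np.sum(~water_map & ~reference)
    return {
        "overall":  (tp + tn) / (tp + tn + fp + fn),
        "producer": tp / (tp + fn),         # 1 - omission error
        "user":     tp / (tp + fp),         # 1 - commission error
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((100, 100)) < 0.2
    predicted = reference | (rng.random((100, 100)) < 0.05)   # add commission noise
    print(accuracy_report(predicted, reference))
```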
Geometric Correction and Topographic Correction of SAR Imagery
Since SAR imagery has geometric and radiometric distortion due to platform and sensor information errors, topographic effects and atmospheric delay, geometric correction is indispensable for the pixel-based image fusion technique [10,19]. In particular, since the RADARSAT-1 satellite provided inaccurate platform position and velocity information, an accurate geometric correction using sufficient GCPs is required. Substantial research to define rigorous 3-D physical models has been carried out, and the mathematical functions for SAR imagery are the Doppler and Range equations, shown in Equations (1) and (2) [18,20], where f is the Doppler value, i.e., the difference between the Doppler centroid and the Doppler shift, t is the pixel sampling time, and a_0, a_1, a_2, b_0, …, f_0, f_1 and f_2 are the orbit parameters refined during the geometric correction. Even after the geometric correction, topographic distortion still remains, because terrain relief causes geometric distortions such as layover, foreshortening and shadow effects, which produce positional errors and make interpretation of SAR imagery difficult. In addition, due to the local incidence angle between the SAR sensor and the actual terrain, the actual backscattering differs from that estimated under a flat-Earth assumption [10,18]. In SAR imagery, the radiometric effect of non-flat scattering varies with the ground slope, the incidence angle and the azimuth direction, and the power of the waves received from a ground target point is given by Equation (4), where P_t is the transmitted power, λ is the radar wavelength, R is the distance to the scattering area, γ is the radar look angle, G_t(γ) and G_r(γ) are the transmitted and received antenna gains at look angle γ, respectively, and σ^0 is the normalized radar cross section for area A. K_A is the correction coefficient for the topographic effect, defined in Equation (5), where A is the flat ground area without terrain relief (or a spherical Earth), A′ is the actual scattering area of the non-flat terrain, η is the incidence angle, δ_r and δ_a are the slant-range and azimuth pixel spacings, respectively, and φ_r and φ_a are the tilts of the surface in the range and azimuth directions. Accurate information on the SAR signal scattering surface is required to correct the topographic effect caused by local terrain relief, and it can be calculated from the geometry between the DEM and the satellite orbit model.
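Equations (1)–(5) themselves did not survive extraction. For orientation, the standard textbook forms of the Range and Doppler equations and of the received-power (radar) equation are reproduced below; this is a stand-in using common conventions, not a verbatim reconstruction of the paper's equations (in particular, the exact polynomial orbit parameterization and the precise definition of K_A are assumptions).

```latex
% Standard forms (assumed), with S, V_S the sensor position/velocity and
% P, V_P the ground-point position/velocity; other symbols as in the text above.
R \;=\; \lVert \mathbf S(t)-\mathbf P\rVert,
\qquad
f \;=\; -\frac{2}{\lambda R}\,\bigl(\mathbf S(t)-\mathbf P\bigr)\cdot
        \bigl(\mathbf V_S(t)-\mathbf V_P\bigr),
\\[4pt]
P_r \;=\; \frac{P_t\,G_t(\gamma)\,G_r(\gamma)\,\lambda^{2}\,\sigma^{0}\,A}
               {(4\pi)^{3}R^{4}},
\qquad
K_A \;=\; \frac{A'}{A}.
```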
Threshold Determination
The maximum-likelihood classifier is one of the most widely used supervised classification methods [22]. The supervised classification method needs appropriate training data to estimate threshold values, however, it is difficult to achieve suitable reference data like a land cover map when the water map is needed in unpredictable situations like floods. For this reason, cloud-free Landsat TM imagery captured on the closest date to the SAR imagery was utilized to create a land cover map.
Officially, the land cover maps made by the Korean Ministry of Environment divide the territory of Korea into seven classes: urban land, agricultural land, forest, rangeland, wetland, barren land and water. Because our purpose is the extraction of water areas and flood detection using SAR imagery and terrain information, a land cover map consisting of the water class and four other classes, into which urban land, agricultural land, forest, rangeland, wetland and barren land are grouped, is sufficient to create a water map. This is because agricultural land, rangeland and wetland have very similar SAR amplitude and terrain properties.
In maximum-likelihood classification, each pixel is assigned to the class i with the highest posterior probability p(C_i | x) ∝ p(x | C_i) p(C_i). Since the prior probability p(C_i) typically cannot be estimated, it is assumed that the prior probabilities of all classes are equal. The threshold T between two classes is then defined by Equation (8) as the value at which the two class likelihoods are equal, p(T | C_1) = p(T | C_2); this condition alone does not guarantee that the threshold value is uniquely defined [23], so in this study we used the threshold value lying between the two class means. Since the reflectance characteristics of homogeneous areas in SAR images, including speckle noise, typically follow a gamma distribution [24], the gamma probability density function was used to estimate the threshold values for the SAR amplitude. The gamma distribution applies to positive values only; to shift all amplitude values into the positive range, Matgen et al. [12] added the minimum amplitude value to all values, and the gamma probability density function of class i is given by Equation (9): f_i(x) = (x − x_min)^{k_i − 1} exp(−(x − x_min)/θ_i) / (Γ(k_i) θ_i^{k_i}), where x is the pixel value of the SAR imagery, x_min is the minimum pixel value of the SAR imagery, k_i and θ_i are the shape and scale parameters of class i, and Γ(k_i) is the gamma function evaluated at k_i.
For the threshold determination on the terrain information, the normal distribution was used; the normal probability density function of class i is given by Equation (10): f_i(x) = (1 / (√(2π) σ_i)) exp(−(x − μ_i)² / (2σ_i²)), where μ_i and σ_i are the mean and standard deviation of class i. Because the land cover map contains the water class and four other classes, four threshold values were obtained for the SAR amplitude and four for the terrain information (one between water and each of the other classes). For each of the SAR amplitude and the terrain information, the highest of the estimated values was selected as the threshold. In water extraction using an amplitude threshold, too low a threshold could misclassify actual water bodies as land. For the terrain-information threshold, the occluded areas caused by steep relief have low values like water areas, and most of the steep relief occurs in mountainous areas at high altitude and with high slope. For this reason, the selected threshold values for SAR amplitude and terrain information can separate water bodies from the occluded areas with minimal misclassification.
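A minimal sketch of this maximum-likelihood threshold selection is given below. The class statistics are placeholder values (in the paper they come from the Landsat-derived land cover map), equal priors are assumed, and the densities are assumed to cross exactly once between the two class means.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import gamma, norm

def amplitude_threshold(k_w, theta_w, k_o, theta_o, x_min, hi):
    """Threshold between 'water' and one other class, both gamma-distributed after
    shifting the amplitudes by the scene minimum (cf. Equation (9))."""
    diff = lambda x: (gamma.pdf(x, a=k_w, loc=x_min, scale=theta_w)
                      - gamma.pdf(x, a=k_o, loc=x_min, scale=theta_o))
    lo = x_min + k_w * theta_w           # start the bracket near the water-class mean
    return brentq(diff, lo, hi)

def terrain_threshold(mu_w, sd_w, mu_o, sd_o):
    """Threshold between two normally distributed classes (cf. Equation (10))."""
    diff = lambda x: norm.pdf(x, mu_w, sd_w) - norm.pdf(x, mu_o, sd_o)
    return brentq(diff, mu_w, mu_o)      # assumes mu_w < mu_o

if __name__ == "__main__":
    # Placeholder statistics only; as described in the text, the highest of the
    # water-vs-class thresholds would be selected for each information type.
    t_amp = amplitude_threshold(k_w=2.0, theta_w=5.0, k_o=4.0, theta_o=20.0,
                                x_min=0.0, hi=300.0)
    t_slope = terrain_threshold(mu_w=2.0, sd_w=1.5, mu_o=25.0, sd_o=10.0)
    print(f"amplitude threshold: {t_amp:.1f}, slope threshold: {t_slope:.1f} deg")
```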
Noise Removal
Thresholding of SAR imagery for water area classification effectively enhances the classification accuracy in mountainous regions, but removal of the misclassified objects, which appear as noise in the image, is still required. Since the misclassified segments appear as small points, this type of error can be removed based on the size of the extracted objects. The object size criterion used to distinguish the misclassified objects is determined by the histogram-based technique proposed by Zack et al. [25]. The algorithm searches for the valley point of the histogram, defined as the point farthest from the line connecting the highest peak of the histogram with its maximum (last occupied) value; an advantage of this method is that the valley detection is insensitive to histogram irregularities. The algorithm is described by Equations (11)–(15): the first equations define this line, and the distance D_{P_i} from each histogram point P_i = (x_i, y_i) to the line is calculated by Equation (15), i.e., for a line ax + by + c = 0, D_{P_i} = |a x_i + b y_i + c| / √(a² + b²). The histogram valley corresponds to the point whose distance D_{P_i} is the largest.
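A sketch of the object-based noise removal described above: connected water objects are labelled, their sizes are histogrammed, and the size criterion is the histogram valley found with the triangle (Zack) method, i.e. the bin farthest from the line joining the highest peak to the last occupied bin. Function and variable names are ours, not from the paper.

```python
import numpy as np
from scipy import ndimage

def triangle_valley(hist: np.ndarray) -> int:
    """Index of the valley between the highest peak and the last non-empty bin."""
    peak = int(np.argmax(hist))
    last = int(np.nonzero(hist)[0][-1])          # last occupied bin
    if last <= peak:                             # degenerate histogram
        return last
    xs = np.arange(peak, last + 1)
    ys = hist[peak:last + 1]
    x1, y1, x2, y2 = peak, hist[peak], last, hist[last]
    # distance from each histogram point to the line through (x1, y1) and (x2, y2)
    num = np.abs((y2 - y1) * xs - (x2 - x1) * ys + x2 * y1 - y2 * x1)
    dist = num / np.hypot(x2 - x1, y2 - y1)
    return peak + int(np.argmax(dist))

def remove_small_objects(water_mask: np.ndarray) -> np.ndarray:
    """Drop labelled water objects smaller than the histogram-valley size criterion."""
    labels, n_objects = ndimage.label(water_mask)
    if n_objects == 0:
        return water_mask
    sizes = np.bincount(labels.ravel())[1:]       # pixel count per object (no background)
    hist = np.bincount(sizes)                     # histogram of object sizes (sketch only)
    min_size = triangle_valley(hist)
    keep = np.flatnonzero(sizes >= min_size) + 1  # labels to keep (labels are 1-based)
    return np.isin(labels, keep)
```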
Study Sites and Data Preparation
Test image scenes were obtained from RADARSAT-1 SAR imagery, as shown in Figure 2, to investigate the suitability of the proposed flood mapping procedure. These two images clearly show the pre- and post-event differences in the flooded areas caused by the disaster in 2005. Images captured in fine mode with about 40° incidence angle and 6.25 m spatial resolution were used. The study area covers the Jeollabuk-do region of Korea, in which flooding damage occurred during July to August of 2005 due to heavy rainfall. This flood not only caused recovery costs five times larger than average but also killed 12 people and injured 26. The maximum elevation of the test site is 605 m, and the maximum terrain slope is up to 87.24°. Figure 3 shows the DEM and DSM, which are the terrain information used for the geometric correction. The slope values in the DSM were defined as the maximum change of elevation over the pixel distance within a 3 × 3 window. The DEM, whose spatial resolution is 5 m, was produced using digital aerial images, and its accuracy was verified by ground survey; the DSM was generated from the DEM. Figure 4 shows the Landsat TM imagery, the land cover map created using the ISODATA algorithm and an orthorectified aerial image over the study area. The cloud-free Landsat imagery, acquired on 21 September 2003, was the closest to the flood epoch. The ISODATA algorithm was used to classify water and the other four class types. All of the data were resampled to 6.25 m pixel spacing using nearest-neighbour interpolation to be co-registered with the fine-mode SAR images. The reference water map for the accuracy assessment was manually extracted from orthorectified digital color aerial images, which have 50 cm spatial resolution and 25 cm geolocation accuracy.
Geometric Correction of SAR Images
There is geometric and radiometric terrain distortion due to topographic properties as well as the inaccurate platform positional data provided by the RADARSAT-1 satellite. To eliminate the positional error of the RADARSAT-1 imagery and alleviate the terrain distortion, 10 GCPs and 15 check points in each image were selected from 1:5000 Korean digital topographic maps and orthorectified digital aerial images. The 10 GCPs were used to refine the orbit parameters, and the precision of the correction was checked with the 15 check points. As a result of the geometric correction, the precision of the two SAR images was ensured to be within 0.5 pixels RMSE. Figure 5 shows the selected GCPs and check points in the 50 cm resolution orthorectified digital aerial images and the topographic correction results. Before the topographic correction, there was significant displacement between the actual locations of mountain ridges and the ridges represented in the SAR imagery, as shown in Figure 5b. After topographic correction using the DEM generated from the digital topographic map, as shown in Figure 5c, topographic effects such as layover and foreshortening were significantly alleviated. Figure 5. (b) Before correction: inconsistency of ridge location due to terrain distortion; (c) orthorectified aerial image and SAR imagery after terrain effect correction (red: an actual ridge corresponds with a ridge represented in SAR imagery).
Statistics and Threshold of SAR Amplitude and Terrain Information
Tables 1 and 2 are the statistics and the thresholds for SAR amplitude and terrain information, which were estimated for water and the other four land cover classes. As shown in Tables 1 and 2, Class 1 had the properties of relatively high amplitude value, altitude and steep slope. Class 1 represented forested land, which occupies 70% of the Korean territory. Class 2 was urban land and Class 3 contained agricultural land, rangeland and wetland. Classes 2 and 3 had higher SAR amplitude, altitude and slope values than the water class, but the altitude and slope values were lower than for Class 1. Class 4 consisted of barren land and tideland. Since the tideland contained a large amount of moisture, its SAR amplitude value was relatively lower than all the other classes, except for the water class.
Using statistical properties and a maximum-likelihood classifier, we calculated the threshold values between the water class and the other classes. Since it is ambiguous to determine an optimized threshold, the determination of a suitable value is one of the most important issues for any thresholding technique. If too low a threshold value is selected, water pixels in SAR imagery can be classified as land. Therefore, the highest value of the estimated SAR amplitude thresholds was selected. Class 2 and Class 3 had the highest decision boundary value which can extract water bodies from SAR imagery.
In the case of topographic properties, the threshold of terrain information between the water class and Class 1 was selected to eliminate the terrain distortion. This is because Class 1 which contains mountainous areas has high altitude and steep slope properties which cause occluded areas. The threshold value relating to topographic properties could eliminate occluded areas due to high relief. Table 3 and Figure 6 are the classification result and the A, B and C regions in Figure 6 are the areas having high relief. As shown in the results, the misclassified region caused by the steep relief could be eliminated effectively using the slope threshold. Although elevation information was also able to get rid of the misclassified pixels, most water bodies in high lands were removed by the elevation threshold. In the case where only the amplitude value was used, the overall accuracy was 83.67%. When removing the occluded regions by using DEM and DSM, 88.93% and 88.81% overall accuracies were confirmed, respectively. The accuracies of Case #2 and Case #3 were higher than that of Case #1. As shown in region B, the elevation information removes more misclassified objects in high land than slope information, therefore, the accuracy of Case #2 seemed a little better than Case #3. However, the DEM threshold got rid of reservoirs and streams in the high land. Focusing on the A, B and C regions in Figure 6, while the elevation information of Case #2 removed not only occluded regions but also water objects at high altitude, the slope information of Case #3 only eliminated occluded areas except for water bodies in mountainous areas.
Object-Based Noise Removal
After thresholding with respect to SAR amplitude and terrain information, the water map could be represented by millions of labelled objects. Figure 7 shows an example of the labelled objects and the application of the algorithm to the actual SAR imagery captured on 3 August. The objects were sorted by object size and drawn on a histogram in order to estimate the threshold (T) used to remove the misclassified objects. As shown in Figure 7b, the valley of this histogram was clearly visible and was easily calculated by the algorithm described in Section 2.4. Table 4 and Figure 8 present the results of the noise removal. After removing the misclassified segments, the accuracies of Cases #4, #5 and #6 were greatly improved, reaching 93.93%, 96.22% and 96.42%, respectively. From the result of Case #4, we found that the object segmentation method without terrain information was not able to remove the misclassified objects due to steep relief. In particular, the user accuracy for water extraction was remarkably improved to 94.54%, compared to 48.54% in Case #1. Figure 9 illustrates the flood mapping results. The red parts in the figures are the detected flood areas during the flooding season, and the white regions are permanent water bodies in the reference map. Comparing the flood mapping results, we found that it was impossible to create accurate flood maps in mountainous regions without slope information, and that flood maps using the elevation information were not substantially helpful in eliminating the topographic distortion.
Conclusions
Estimating optimized threshold values for water body extraction and the removal of misclassified land objects are key issues for water mapping using SAR imagery. This paper proposed water extraction methods using SAR amplitude imagery and terrain information such as DEM, DSM for the thresholding method and object-based noise removal method.
Prior to applying the proposed method, we corrected the satellite parameters and removed the topographic distortion for locational consistency of the input data. Then, a land cover map that classified water and four other classes of land was created from Landsat TM imagery using the ISODATA algorithm and used to determine the threshold values. The estimated threshold value for the SAR amplitude could identify water areas owing to the reflective properties of water in SAR imagery; however, occluded areas due to steep relief remained, and the slope information could remove these misclassified areas. Because eliminating occluded areas using the elevation information tended to erase water areas at high altitude, the slope information showed better performance for this problem. Even when the thresholding method using SAR amplitude and terrain information was applied to extract water bodies, noise, i.e., non-water areas classified as water, remained and reduced the user accuracy of the water map. The object-based classification method with an object size criterion was applied to remove this noise, with the criterion estimated by a histogram-based technique. With the object-based method, the noise objects were eliminated and the classification accuracy was significantly improved; in particular, the user accuracy was remarkably improved.
In this study, the proposed water classification procedure was applied to medium-resolution SAR images (C band, 6.25 m spatial resolution) and it solved the problems effectively. The limitation of the proposed method lies in the resolution and quality of the input data. Even though low-resolution multi-spectral imagery (Landsat TM, 30 m spatial resolution) and a digital map with 2.5 m height accuracy were used, the proposed method could classify the water areas from SAR imagery with high accuracy. However, to apply the proposed method to high-resolution SAR data, more accurate and detailed elevation and land cover information is necessary; since the construction of such high-quality data is costly and time-consuming, how to overcome the limitations of ancillary data quality requires further work.
| 5,807.2 | 2015-03-01T00:00:00.000 | ["Environmental Science", "Mathematics"] |
Ag/CeO2 Composites for Catalytic Abatement of CO, Soot and VOCs
Nowadays catalytic technologies are widely used to purify indoor and outdoor air from harmful compounds. Recently, Ag–CeO2 composites have found various applications in catalysis due to distinctive physical-chemical properties and relatively low costs as compared to those based on other noble metals. Currently, metal–support interaction is considered the key factor that determines high catalytic performance of silver–ceria composites. Despite thorough investigations, several questions remain debating. Among such issues, there are (1) morphology and size effects of both Ag and CeO2 particles, including their defective structure, (2) chemical and charge state of silver, (3) charge transfer between silver and ceria, (4) role of oxygen vacancies, (5) reducibility of support and the catalyst on the basis thereof. In this review, we consider recent advances and trends on the role of silver–ceria interactions in catalytic performance of Ag/CeO2 composites in low-temperature CO oxidation, soot oxidation, and volatile organic compounds (VOCs) abatement. Promising photo- and electrocatalytic applications of Ag/CeO2 composites are also discussed.
Introduction
Air pollution is a major environmental problem. According to the World Health Organization, ambient air pollution contributes to 6.7 percent of all deaths worldwide [1], and the emissions of harmful compounds from industrial plants and motor vehicles in crowded urban areas are getting more attention. By reducing the level of air pollution, countries can reduce the morbidity rates of heart disease, lung cancer, chronic and acute respiratory diseases, etc. Many substances cause air pollution, including carbon monoxide (CO), particulate matter, ozone, nitrogen dioxide, soot, sulfur dioxide, organic dyes, etc., with CO being the most common among these pollutants. Volatile organic compounds (VOCs), comprising organic compounds with an initial boiling point less than or equal to 250 °C (measured at a standard pressure of 101.3 kPa), also impact pollution of indoor and outdoor air [2]. In a recent review [3], the authors consider several main classes of VOCs, including halogenated VOCs, aldehydes, aromatic compounds, alcohols, ketones, polycyclic aromatic hydrocarbons, etc.
Therefore, air cleaning is a pivotal challenge, and new solutions are required. Catalytic total oxidation of organic pollutants into CO 2 and water is the most effective way to address this challenge. Metal/ceria-based catalysts were found promising heterogeneous catalysts for CO, soot and VOCs oxidation, and the highly dispersed noble metals (Me = Au, Pt, Pd, Ru, etc.) were used as the active components of these catalysts. The ceria-supported catalysts containing Pd [4][5][6][7][8][9][10][11], Pt [12][13][14][15][16], activity in CO and propylene oxidation. This finding was associated with formation of Ag 2+ species in these catalysts, confirmed by Electron Paramagnetic Resonance (EPR). Such species improve the redox properties due to creation of three different redox couples: Ag 2+ /Ag + , Ag 2+ /Ag 0 , and Ag + /Ag 0 .
The effect of shape of ceria nanoparticles on the catalytic properties of ceria-based catalysts is also discussed in the review [69]. In Ref. [70] synthesis of ceria nanopolyhedra, nanorods, and nanocubes by a hydrothermal method is described (Figure 1). The oxygen storage capacity of CeO2 nanorods and nanocubes was attributed to both surface and bulk oxygen species. The lowest oxygen storage capacity for ceria nanopolyhedra was attributed to a predominance of (111) boundaries on the surface of particles with low reaction ability toward CO. Thus, the shape-selective synthetic strategy may be used for designing the catalysts with desired oxidative activity. In Ref. [71] the catalytic activity of ceria rods, cubes and octahedra was studied in CO oxidation. The highest activity of ceria nanorods was attributed to a predominance of (110) and (100) surfaces, while the lowest activity of ceria octahedra was caused by a predominance of (111) surface. The activity of different surfaces also depends on the energy of oxygen vacancy formation, which is predicted to follow the reverse order of lattice oxygen reactivity: (110) < (100) < (111). Supporting of silver on the surface of ceria nanoparticles with different shapes by conventional incipient wetness impregnation followed by calcination at 500 °C led to creation of additional oxygen vacancies in ceria surface [43]. Ag nanoparticles were suggested to facilitate the formation of oxygen vacancies in ceria surface in a larger extent than in case of positively charged Agn+ clusters. Thus, Ag loading (1 and 3 wt.
%) in Ag/CeO2 affects the amount of Ag0 and Agn+ clusters that yields different concentrations of surface oxygen vacancies and, hence, different activity in CO oxidation. Ag0 nanoparticles (NPs) promote the reducibility of surface lattice oxygen and catalytic activity of CeO2 in CO oxidation. The control of the shape of CeO2 may be used as a strategy to design the metal/CeO2 catalysts with reduced amounts of noble metals. An increase of the Ag content from 1 to up to 3 wt. % mitigates the difference in turnover frequency (TOF) CO for the composites based on nanocubes and nanorods that allows concluding on the need of coexistence of charged Agn+ species and reduced Ag0 NPs on the CeO2 surface to create an active catalyst. The role of oxygen vacancies of Ag/CeO2 catalysts in CO oxidation is also discussed in Ref. [72]. Using Raman spectroscopy, it was shown that Ag promoted the formation of oxygen vacancies in ceria. This effect is pronounced, when CeO2 and Ag/CeO2 were reduced in CO/N2 atmosphere up to 300 °C (Figure 2a,b). Treatment in oxygen atmosphere leads to the decreased amount of oxygen vacancies (Figure 2c,d). Thus, the introduction of Ag into CeO2 promotes the activation of lattice oxygen of ceria and formation of oxygen vacancies that is the main reason for enhanced catalytic activity of Ag/CeO2 in CO oxidation. The role of the shape of ceria nanoparticles in CO oxidation over Ag/CeO2 was also discussed in terms of the complex or hierarchical structure of ceria. The Ag-based catalysts supported on mesoporous CeO2 prepared by hard-template method and surfactant-template method was studied in CO oxidation in Ref. [42]. Mesoporous ceria was prepared by hard-template method using the SBA-15 material as a template, which was etched by NaOH. Hexadecyl trimethyl ammonium bromide (CTAB) was used as a classical soft template to synthesize ceria by surfactant-template method. Mesoporous ceria prepared by hard-template method was the preferable support for Ag catalysts, and total conversion of CO (200 mg catalyst, 1% CO, a gas flow of 30 mL/min) for this catalyst was achieved at 65 °C.
High activity of this catalyst was attributed to oxygen vacancies in mesoporous CeO2 support, which stabilizes dispersed silver and facilitates the transfer of electrons from Ag to CeO2 via the Ag–CeO2 interface. However, one cannot exclude the participation of SiO2 used as a template to produce mesoporous CeO2 in formation of Ag-containing species highly reactive toward low-temperature CO oxidation.
In Ref. [73] Ag/CeO2 catalysts with the Ag loading from 5 to 20 wt. % were prepared by the HCl etching of CuO/CeO2/Ag2O mixed oxides followed by CuO removal. The formation of Ag nanoparticles inside the ultrafine nanoporous CeO2 with sizes of pore channels below 20 nm was observed after reduction by glucose in solution. The obtained composites also showed enhanced catalytic activity in CO oxidation in comparison with the CeO2–Ag composite prepared by co-precipitation method, and the highest catalytic activity was observed for catalysts with 10 wt. % loading of Ag (T50% ≈ 130 °C, 1% CO and 10% O2, WHSV of 60,000 mL g−1 h−1).
The CeO2 mesoporous spheres with a diameter of ~100 nm and Ag catalysts on the basis thereof were synthesized in Ref. [74] (Figure 3). CeO2 mesoporous spheres were synthesized using glycol as a solvent with addition of C2H5COOH in an autoclave at 180 °C for 200 min. Ag NPs were prepared separately, and their dispersion in cyclohexane was stirred together with CeO2 mesoporous spheres. The catalysts were characterized by high surface area (216 m2/g) and regular morphology. Ag molar content was 10%. CO conversion achieved 96.5% at 70 °C (100 mL/min) and the enhanced catalytic performance in CO oxidation was attributed to the unique structure of ceria support.
The catalysts with core-shell and yolk-shell structures also attract attention [75,76]. The Ag@CeO2 catalysts with a core-shell structure were prepared by a surfactant-free method with a subsequent annealing redox reaction between silver and the ceria precursor during co-deposition [77]. The particles, with metallic Ag cores with a diameter of 50–100 nm and a CeO2 shell with a thickness of 30–50 nm, were tested in CO oxidation (catalyst mass was 100 mg, 1% CO, a gas flow of 20 mL/min). The calcination of Ag@CeO2 at 500 °C in air flow led to the growth of catalytic activity (100% CO conversion at ~120 °C) in comparison with the freshly deposited precipitate and the catalyst after hydrothermal treatment and drying at 80 °C. This growth of activity was attributed to the strengthened interfacial interactions between the Ag core and the CeO2 shell during the calcination process (confirmed by TPR-H2) and to the fast desorption of CO2 from the surface of the catalyst, as shown by Fourier Transform Infrared (FTIR) spectroscopy of adsorbed CO2. The charge transfer from Ag to CeO2 due to enhanced metal–support interaction was shown by XPS [39]. It is noteworthy that
Thus, according to the literature, Ag/CeO 2 composites are promising catalysts for CO oxidation. The method of catalyst preparation, shape of ceria nanoparticles, and morphology of ceria are the factors determining the catalytic properties of the composites. Special attention is given to oxygen vacancies, and their concentration depends on the shape of ceria particles, amount of silver and charge states of its clusters/nanoparticles as well as pre-treatment conditions. Certainly, the presence of silver on the surface of ceria promotes the formation of oxygen vacancies and facilitates the growth of catalytic activity in CO oxidation. The features of interfacial interaction also should be considered since the transfer of electronic density from silver NPs to ceria accompanies metal-support interaction in Ag/CeO 2 catalysts. These phenomena may play a key role in oxidative catalysis [79,80], reduction of nitroarenes [81], photocatalysis [82]. Different synthetic strategies may be developed to synthesize Ag/CeO 2 with high activity in CO oxidation and find real application in industrial or indoor air purification from CO and VOCs.
Soot Oxidation
Soot is an amorphous impure carbon formed during incomplete combustion of fuels and hydrocarbons in internal combustion engines, coal burning, power-plant boilers, etc. It is formed as a by-product impairing the normal operation of combustion engines by fouling exhaust systems, generating exhaust plumes, blocking pipes, etc. [83]. Soot particles are harmful to the human respiratory system since they cannot be filtered by the upper airways. Thus, the development of materials that prevent the harmful impact of soot on the environment and human health is an important research and technology challenge. The combustion of diesel exhaust particulate soot occurs at temperatures above 600 °C, while typical diesel engine exhaust temperatures are in the range of 200–500 °C [84,85]. Therefore, decreasing the temperature of soot combustion is the main requirement for catalysts in this reaction.
The contact between soot and catalyst plays a key role in solid-solid reactions, and the observed catalytic activity depends on the gas-solid-solid interaction [86]. The contact conditions between soot and catalyst determine the combustion performance. In the literature two types of catalyst-soot contact studies under laboratory conditions are proposed: tight contact (TC) and loose contact (LC) [85][86][87]. The LC mode comprises a mixing or shaking of the catalyst-soot mixture with a spatula providing conditions for contact between soot particle and catalyst similar to those over diesel filter. TC mode is achieved by milling (ball or mortar milling) of the mixture during several minutes. Compared to the LC mode, the TC mode is less representative of the real contact conditions but is required to better understand and discriminate the morphologies [86,88].
Many effective catalytic systems have been proposed for soot combustion and other oxidation reactions [83,89]. Due to their unique physical-chemical properties, especially high redox properties and the lability of lattice oxygen, ceria and ceria-based materials also show high catalytic activity in total oxidation reactions, and soot oxidation to carbon dioxide is not an exception. Ceria also possesses a high oxygen storage capacity (OSC), which allows the oxide to be used not only as a support or modifying additive, but also as a catalyst for soot oxidation. A selection of CeO2-based catalysts for soot oxidation is presented in Table 1. In Ref. [90] the catalytic activity of pure ceria prepared by a co-precipitation method was described. Precipitation from an aqueous solution of HNO3 and Ce(NO3)3 was carried out using 0.4 M NaOH and 0.4 M Na2CO3 solutions. The combustion temperature for the pure oxide samples fell in the region of 445–560 °C. Acidification of the cerium precursor at the stage of catalyst preparation improved the catalytic performance of the obtained materials; the sample prepared by the precipitation method using HNO3/Ce(NO3)3 = 2 had the highest catalytic activity, with T_m = 465 °C. It is noteworthy that the use of large amounts of alkali metals at the synthesis stage may significantly influence the morphology and defective structure of cerium oxide, which will impact the observed catalytic activity [91].
Morphology is known to play an important role in solid-solid reactions, where the number of contact points is a crucial criterion of activity. In Ref. [92] three different morphologies of pure cerium oxide were studied in soot oxidation reaction. The materials comprised (1) ceria nanofibers that capture the soot particles in several contact points, while having low specific surface area (~4 m2/g), (2) solution combustion synthesis ceria having an uncontrolled morphology, but higher specific surface area (31 m2/g), and (3) three-dimensional self-assembled (SA) ceria stars having high specific surface area (105 m2/g) and highly available contact points. The latter showed the highest catalytic activity, and the temperature of soot oxidation reduced from 614 to up to 403 °C for TC and to up to 552 °C in case of LC (Figure 4). Comparing to the morphologies in groups 1 and 2, the three-dimensional shape of SA stars may involve more of the soot cake layer that can be a reason for enhancement of the total number of contact points and higher catalytic activity (Figure 5). SA stars also keep their high intrinsic activity after aging. A comparison of the catalytic performance of pure ceria with different morphology under LC conditions was carried in [60], and the results were compared to those reported in Refs. [46,93-95]. The activity was shown to decrease in the following order: nanorod > nanocube > fiber > flake, and the lowest temperature of complete combustion of 485 °C is observed for nanorod samples.
In Ref. [96] hydrothermal and solvothermal methods were used to prepare nanostructured ceria with different morphology (nanorod, nanoparticle, and flake). The nanorod sample showed the best catalytic activity (soot combustion temperatures for TC and LC modes were 368 and 500 °C, Comparing to the morphologies in groups 1 and 2, the three-dimensional shape of SA stars may involve more of the soot cake layer that can be a reason for enhancement of the total number of contact points and higher catalytic activity ( Figure 5). SA stars also keep their high intrinsic activity after aging. oxide were studied in soot oxidation reaction. The materials comprised (1) ceria nanofibers that capture the soot particles in several contact points, while having low specific surface area (~4 m 2 /g), (2) solution combustion synthesis ceria having an uncontrolled morphology, but higher specific surface area (31 m 2 /g), and (3) three-dimensional self-assembled (SA) ceria stars having high specific surface area (105 m 2 /g) and highly available contact points. The latter showed the highest catalytic activity, and the temperature of soot oxidation reduced from 614 to up to 403 °C for TC and to up to 552 °C in case of LC ( Figure 4). Comparing to the morphologies in groups 1 and 2, the three-dimensional shape of SA stars may involve more of the soot cake layer that can be a reason for enhancement of the total number of contact points and higher catalytic activity ( Figure 5). SA stars also keep their high intrinsic activity after aging. A comparison of the catalytic performance of pure ceria with different morphology under LC conditions was carried in [60], and the results were compared to those reported in Refs. [46,[93][94][95]. The activity was shown to decrease in the following order: nanorod > nanocube > fiber > flake, and the lowest temperature of complete combustion of 485 °C is observed for nanorod samples.
In Ref. [96] hydrothermal and solvothermal methods were used to prepare nanostructured ceria with different morphology (nanorod, nanoparticle, and flake). The nanorod sample showed the best catalytic activity (soot combustion temperatures for TC and LC modes were 368 and 500 °C, A comparison of the catalytic performance of pure ceria with different morphology under LC conditions was carried in [60], and the results were compared to those reported in Refs. [46,[93][94][95]. The activity was shown to decrease in the following order: nanorod > nanocube > fiber > flake, and the lowest temperature of complete combustion of 485 • C is observed for nanorod samples. In Ref. [96] hydrothermal and solvothermal methods were used to prepare nanostructured ceria with different morphology (nanorod, nanoparticle, and flake). The nanorod sample showed the best catalytic activity (soot combustion temperatures for TC and LC modes were 368 and 500 • C, respectively) that was attributed to the maximal amount of adsorbed oxygen species on its surface. Moreover, the high specific surface area, determined by BET (Brunauer Hemmet Teller) method, was pointed out to have a positive effect in improving the activity under the LC mode. In Ref. [97] hydrothermal method was used to prepare conventional polycrystalline ceria and single-crystalline ceria nanorods and nanocubes. The obtained samples differ by the surface formed ((100) surfaces were typical for nanocubes, a mixture of (100), (110) and (111) surfaces for nanorods, while (111) surface was obtained for conventional polycrystalline ceria). More reactive exposed surfaces demonstrated higher catalytic activity and soot oxidation becomes a surface-dependent reaction. Soot, while located at the soot-ceria interface, can reduce ceria, and such surface becomes the source of active superoxide ions. The formation energy of a surface oxygen vacancy is considered important for activity enhancement.
According to Ref. [48], the redox properties of ceria are an important, but not the major, factor for catalytic soot oxidation. A comparison of the fluorite-type oxides CeO2, Pr6O11, CeO2-ZrO2, and ZrO2, characterized by high oxygen capacity, revealed that the reactivity, rather than the quantity, of the oxygen species involved in the oxygen release/storage processes is the favorable factor for low-temperature soot oxidation. CeO2 was shown to be much more active in soot oxidation than Pr6O11 and CeO2-ZrO2, which had higher OSC values than pure CeO2. Using the electron spin resonance (ESR) method, it was demonstrated that this behavior is connected with the ability of the CeO2 surface to generate superoxide ions (O2−) that can rapidly react with neighboring carbon or recombine to yield O2.
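For orientation, the superoxide-generation step invoked in Ref. [48] is commonly pictured as an electron transfer from a reduced cerium site to adsorbed dioxygen; a schematic form (not quoted verbatim from Ref. [48]) is
\[
\mathrm{Ce^{3+} + O_2(ads) \rightarrow Ce^{4+} + O_2^{-}(ads)},
\]
after which the O2− species either reacts with neighboring carbon or recombines to O2, as described above.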
Despite its unique physical-chemical properties, it is often not feasible to use pure ceria, since a significant loss of specific surface area may occur due to thermal sintering, deactivation of the redox pair, reduction of the OSC leading to deterioration of catalytic activity [98], etc. Even mild sintering strongly affects the crystallite sizes and the presence of oxygen vacancies, which significantly reduces the catalytic activity. The presence of metal ions in the ceria lattice allows the effects of sintering and the loss of catalytic activity to be reduced, along with a significant increase of the OSC [99,100].
Special attention should be paid to the effect of introduction of Ag into the CeO2 structure. Loading of Ag NPs on CeO2 improves the reactivity of CeO2 lattice oxygen toward soot oxidation. Kinetic studies showed [45] that lattice oxygen of ceria interacting with Ag NPs had a reactivity similar to that of lattice oxygen in Ag2O. Ag NPs enhance the reducibility of ceria (which was also shown in [101] and was attributed to reverse spillover of oxygen atoms from the Ag-CeO2 boundary to the Ag NPs, along with other possible interpretations), but not the reoxidation ability of the reduced ceria surface by dioxygen. Silver can become an agent that allows rapid formation of Ox−. In Ref. [102], using cyclic H2-TPR and Raman studies, it was shown that both dissociative adsorption of gaseous oxygen and migration of bulk oxygen of ceria can be facilitated by silver. This results in a rapid generation of atomic oxygen over silver, which under the TC mode can transfer onto the soot particle and lead to the catalytic oxidation reaction [103]; otherwise, it spills over onto the ceria surface. In Figure 6 the effect of silver loading on the catalytic activity of ceria in soot oxidation is represented. The temperature of soot combustion shifted from 668 °C in the case of combustion of pure soot to 393 °C for CeO2 and further to 345 °C for Ag/CeO2 [48].
Figure 6. Effect of Ag loading on soot combustion profiles of CeO2. Soot/CeO2 tight-contact mixtures with a weight ratio of 1/20 were heated in 10% O2/N2 at a rate of 10 °C·min−1. Reproduced from Ref. [48] with permission from the American Chemical Society.
By comparing the onset temperature, Ti, of soot oxidation over various metal-loaded CeO2 catalysts with different loadings (Figure 7), it follows that Ti can be lowered from 357 to 324 °C with an increase of the Ag loading (up to 20 wt %). On the contrary, loading other metals, such as Pd, Pt, and Rh, could not improve the activity. This result supports the view that superoxides activated over silver are the active species responsible for low-temperature soot oxidation.
Figure 7. Tight-contact soot/catalyst mixtures with a weight ratio of 1/20 were heated at a rate of 10 °C·min−1. Reproduced from Ref. [48] with permission from the American Chemical Society.
The catalysts for soot combustion have two main drawbacks, i.e., poor soot/catalyst contact and a restricted number of active sites. Promising composites should possess a relatively low specific surface area and have no micropores or small mesopores, which ensures that a maximal number of active sites lies on the external surface of the grain and facilitates effective catalyst performance. Various preparation techniques can be used to create such active surfaces. While the impregnation method can still be used [48], the relatively simple and economically feasible co-precipitation technique is considered the major way to prepare Ag/CeO2 catalysts for soot oxidation [44,49,104,108]. As a result, an opportunity exists to design a favorable structure that transfers/diffuses the activated oxygen species to the reaction zones of the catalyst and promotes better catalyst-soot contact.
Among the catalysts prepared by the co-precipitation technique, special interest is devoted to those with the "rice-ball" core-shell structure [49,104] comprising metallic Ag particles in the core surrounded by CeO2 particles. These catalysts possess a unique agglomerated structure with a diameter of about 100 nm, where the large Ag particles (30-40 nm) and the large interface between the Ag and CeO2 particles are responsible for their excellent catalytic performance in soot oxidation owing to this morphological compatibility (the oxidation proceeds below 300 °C).
A less common way to prepare Ag/CeO2 catalysts for soot oxidation is the electrospinning method [47]. CeO2 nanofibers with diameters of 241-253 nm were produced using this method (Figure 8).
Figure 8. Schematic illustration of the Ag/CeO2 nanofiber synthesis sequence. CeO2 nanofibers were fabricated through the electrospinning of spinnable Ce/PVP in a DMF/EtOH precursor solution followed by thermal treatment. Ag was then loaded on the surfaces of the CeO2 nanofibers. Reproduced from Ref. [47] with permission from Elsevier.
The Ag/CeO2 and CeO2 fibrous catalysts calcined at 500 °C exhibited an improved catalytic performance in soot oxidation caused by their large pore sizes, related to the macroporous character of the porous structure in CeO2. The large surface areas of CeO2 and of the metallic Ag species can contribute to the high soot oxidation activity (Figure 9).
In Ref. [106] it is pointed out that under oxygen-rich conditions the activity of Ag/CeO2 catalysts is governed by oxygen vacancies near the Ag particles, while under oxygen-poor conditions it is controlled by bulk oxygen vacancies. The generation and transfer of active oxygen are affected by combinations of both types of oxygen vacancies.
The mechanism of soot oxidation over Ag/CeO2 composites is still debated. Soot oxidation is a solid-solid-gas reaction, and there are two points of view on the predominant reaction mechanism in the literature [48,109-111]. On the one hand, soot oxidation is initiated by surface-active oxygen (peroxide and superoxide (O− and O2−) species), which may be activated by oxygen vacancies. On the other hand, surface-active oxygen comes from the bulk by migration of lattice oxygen. Refs. [99,108] describe a mechanism of metal oxide catalyst participation in a redox cycle, where the metal is subjected to repeated oxidation and reduction according to the following reaction set (a schematic version is given after this paragraph), where Mred and Moxd-Oads represent the reduced and oxidized states of the catalyst, respectively; Ogas and Oads are gaseous O2 and surface-adsorbed oxygen species, respectively; Cf denotes a carbon active site or free site on the carbon surface, and SOC represents a surface carbon-oxygen complex. According to this mechanism, atomic Oads species are formed through dissociative adsorption of gas-phase oxygen on the metal oxide surface and then attack the reactive free carbon site Cf, yielding an oxygen-containing active intermediate. CO/CO2 are formed through the reaction between the intermediate and either Oads or gas-phase O2. The authors [45,99,108] suggest that in this mechanism the surface-adsorbed oxygen species play the key role in soot oxidation, in contrast to CO oxidation, which occurs through the Mars-van Krevelen mechanism. However, some researchers consider that the second reaction mechanism is prevalent in soot oxidation over ceria-based catalysts under real conditions [48] (Figure 10).
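A schematic form of this redox cycle, reconstructed here only from the symbol definitions given above (the stoichiometric coefficients of the original equations are not reproduced), is
\[
\mathrm{M_{red} + O_{gas} \rightarrow M_{oxd}\text{-}O_{ads}}
\]
\[
\mathrm{M_{oxd}\text{-}O_{ads} + C_f \rightarrow SOC + M_{red}}
\]
\[
\mathrm{SOC + O_{ads}\ (or\ O_{gas}) \rightarrow CO/CO_2}
\]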
Figure 10. A schematic mechanism of soot oxidation over the Ag/CeO2 catalyst. Reproduced from Ref. [45] with permission from Elsevier.
In the case of the reverse CeO2-Ag catalyst [49], a synergistic effect of the Ag and CeO2 particles causes adsorption of gas-phase O2 followed by formation of atomic oxygen species, and the process is facilitated by the large Ag-CeO2 interface. The O species on the silver surface migrate to the surface of the ceria particles through the interface and transform into Onx− species (Figure 11). These atomic oxygen species exist in equilibrium during soot oxidation. The mobile active Onx− species then migrate onto the soot particle through the soot-ceria contact and completely oxidize the soot into CO2.
Figure 11. A schematic mechanism for soot oxidation over the CeO2-Ag catalyst. Reproduced from Ref. [49] with permission from Elsevier.
Another important problem that occurs in particulate filters under real conditions is connected with the loss of contact between the catalyst and the solid reactant (e.g., unreactive ash). In Ref. [104] catalytic soot oxidation was shown to occur even when a physical barrier of ash deposit exists between the catalyst and the solid soot, the reaction proceeding without a direct catalyst-soot contact or any external energy applied (Figure 12). A CeO2-Ag catalyst prepared by co-precipitation and an Ag/CeO2 catalyst prepared by impregnation showed catalytic activity for remote oxidation of soot separated by a deposit of alumina or calcium sulfate, while the CeO2 catalyst did not. The remote oxidation effect extends to more than 50 µm for both the CeO2-Ag and Ag/CeO2 catalysts, with the strongest effect over the former. Based on the results of the ESR experiments, a mechanism for the observed phenomenon was proposed, in which a superoxide ion (O2−) generated on the catalyst surface first migrates to the ash surface and then to the soot particles, which it subsequently oxidizes.
Figure 12. A schematic mechanism for remote catalytic soot oxidation over a catalyst composed of Ag and CeO2. Reproduced from Ref. [104] with permission from Elsevier.
In [112] several model Ag/CeO2 catalysts with uniform structures and diverse surface oxygen vacancy (VO-s) contents were prepared by the solution combustion method, and the processes of their activation and deactivation were considered (Figure 13). The VO-s content, the conditions of catalyst-soot contact and an extra oxygen supplier were pointed out as the most important structural factors in the activity of soot oxidation catalysts. The dioxygen concentration in the reaction atmosphere was assumed to influence the VO-s content, while ceria reduction was mentioned to occur around the catalyst-soot contact points and did not take place in the presence of O2. Moderate amounts of VO-s were shown to boost the catalytic activity by generating more Oxn− species, while their excess yields O^2− instead of O2^−, which hinders the process. The interfacial reduction of ceria and insufficient O2− delivery and regeneration were suggested to determine the catalyst performance. The deactivation can be postponed by noble metal addition, resulting in accelerated soot combustion over noble metal-containing catalysts.
Thus, the development of catalysts with a special state of the deposited phase, characterized by a strong metal-support interaction, makes it possible to suppress the migration of the deposited particles of the active phase and thereby prevent thermal aging (sintering) of the catalyst, which is one of the main problems in the operation of catalytic systems for cleaning the emissions of internal combustion engines, both gasoline and diesel. The synergistic effect in Ag/CeO2 catalysts provides high activity and stability while decreasing the costs associated with expensive metals, e.g., platinum [113-115], without loss of efficiency in the catalytic cleaning of internal combustion engine emissions. In this way, Ag/CeO2 composites are considered promising catalysts for soot oxidation.
VOCs Abatement
VOCs are a large group of organic chemicals having high vapor pressure and low boiling point at atmospheric pressure (these include, but are not limited to, aldehydes, alcohols, aromatic compounds, etc.). These properties cause evaporation or sublimation of the compounds from the liquid or solid state and their entry into indoor and outdoor air. VOCs are known to possess high toxicity, poison the atmosphere and have a negative impact on human health and the environment [34]. To date, numerous ways to address the challenge of air pollution, such as combustion of wastes, biodegradation [116], adsorption [117], plasmochemical decomposition [118], photocatalytic oxidation [119], ozonation [120], etc., have been proposed. The main drawback of these methods is high energy consumption, which may be accompanied by the formation of formaldehyde and CO, as well as the complexity of regeneration of the active phase (bacteria, adsorbents, photocatalysts). Catalytic oxidation of VOCs to carbon dioxide and water is considered the most promising way to control the emissions [121-124]. The use of catalysts allows VOCs oxidation to be carried out at relatively low temperatures with complete conversion. As a rule, two main types of effective catalysts for total oxidation of VOCs are developed, including supported metals (e.g., Au, Pt, Pd, Ag) [125-130] and transition metal oxides (CeO2, MnO2, Co3O4) [130-133]. The combination of a noble metal and a transition metal oxide used as a support or modifier is promising to increase the effectiveness of catalytic composites [134-136].
Currently, Ag-CeO2 composites represent both scientific and practical interest as catalysts for VOCs abatement, in particular the oxidation of formaldehyde, methanol, toluene, acetone, etc. A selection of literature data on Ag/CeO2 composites used in VOCs abatement is presented in Table 2. Several articles were published on formaldehyde oxidation over Ag/CeO2 catalysts [40,137-139]. One of the pioneering works in this field was carried out by S. Imamura et al. [140], who suggested using Ag/CeO2 as catalysts for formaldehyde oxidation. The high activity of the Ag/CeO2 composite was suggested to be governed by the high dispersion of active silver on CeO2 and easier removal of surface oxygen as compared to the individual Ag or CeO2 components. The authors pointed out that, compared to group VIII metals, silver is less expensive and more abundant and shows high activity and durability when high temperatures are not required. Table 2 abbreviations: WI - wetness impregnation, DP - deposition-precipitation, DPU - deposition-precipitation with urea, IRC - impregnation-reduction with citrate, HT - hydrothermal synthesis, nr - nanorod, np - nanoparticle, nc - nanocube; * - the value was calculated as the ratio of moles of converted formaldehyde per mole of Ag loaded in the catalyst.
Thus, in Refs. [137,138] Ag/CeO2 catalysts were compared with Ag-containing catalysts on various supports and with catalysts in which different active components were supported on CeO2. The catalytic activity toward formaldehyde oxidation was shown to depend strongly on the Ag particle size and dispersion and on the amount of active oxygen species [137]; 100% formaldehyde conversion was achieved above 125 °C. In Ref. [138] the defective sites of the mesostructured CeO2 support prepared by pyrolysis of an oxalate precursor were suggested to increase the number of oxygen vacancies able to adsorb and activate dioxygen, and highly dispersed silver particles promote this process. This allowed complete formaldehyde conversion to be achieved at 100 °C and was accompanied by a strong synergistic interaction between the active component and the CeO2 support, enhancing the redox capability of the catalyst.
L. Ma et al. [40] also pointed out the synergistic interaction between Ag and CeO2, which caused an activity enhancement of Ag/CeO2 nanosphere catalysts with average sizes of around 80-100 nm, composed of small particles with a crystallite size of 2-5 nm, as compared to conventional Ag/CeO2 particle catalysts prepared by the impregnation method. Complete formaldehyde conversion was achieved above 110 °C, which was also explained by the fact that surface chemisorbed oxygen is easily formed on the Ag/CeO2 nanosphere catalysts. Silver facilitated oxygen activation, which was considered an important aspect of formaldehyde oxidation.
A similar idea was reported in Ref. [139], where the catalytic properties of Ag/CeO2 catalysts with different CeO2 morphologies (nanorods, nanoparticles, and nanocubes) prepared by hydrothermal and impregnation methods were compared. The authors pointed out a shape dependence of the chemical state of the ceria-supported Ag NPs, with the catalysts supported on CeO2 nanorods showing the highest activity, caused by the highest surface oxygen vacancy concentration, high low-temperature reducibility, and the existence of lattice oxygen species and lattice defects formed with the participation of both silver and ceria. The electronic silver-ceria interaction yielded Ag0 in the Ag/CeO2 composites, and the Ag0/(Ag0+Ag+) ratio was found to be the highest for the catalysts supported on ceria nanorods. These results show that the catalytic activity of Ag/CeO2 composites toward formaldehyde abatement can be regulated by engineering the proper shapes of the CeO2 supports.
One of the main parameters that allows comparing the catalytic activity of different materials is the TOF. Table 2 presents the TOF values calculated by the authors. Unfortunately, the differences in calculation methods and the absence of the required experimental information in the original papers did not allow the activity of the Ag/CeO2 materials to be compared rigorously.
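Under the convention stated in the footnote of Table 2, the reported values correspond, in essence, to
\[
\mathrm{TOF} = \frac{n_{\mathrm{HCHO,\,converted}}}{n_{\mathrm{Ag}}\cdot t},
\]
with the time basis t being whatever each original paper used, which is part of the ambiguity noted above.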
Besides formaldehyde, Ag/CeO2 catalysts were also used to oxidize other VOCs, e.g., methanol, toluene, acetone, and naphthalene [41,54,141-143]. In these articles, catalysts prepared by different methods were compared, and the authors attempted to determine the influence of the preparation method and the structure of the catalyst on its catalytic activity. Thus, in Ref. [54] the properties of catalysts prepared by deposition-precipitation and co-precipitation methods were compared in the total oxidation of methanol, acetone, and toluene. The catalysts prepared by the co-precipitation method were revealed to be more active in the oxidation reactions. Small crystallites of silver and ceria enhanced the mobility and reactivity of oxygen species over the ceria surface, which participated in the said reactions through the Mars-van Krevelen mechanism. The reactivity of the VOCs decreased in the order: methanol > acetone > toluene.
In Refs. [41,142] the catalytic activity of M/CeO2 composites (M = Au, Cu, Ag) prepared by conventional wet impregnation and deposition-precipitation methods was compared in propylene oxidation. It was shown that the Ag-containing catalyst prepared by the conventional wet impregnation method possessed the higher catalytic activity. In Ref. [142] the presence of silver in a high oxidation state was considered responsible for the high catalytic activity of the Ag/CeO2 composites. Using the EPR technique, it was shown that this is connected with the presence of Ag2+ ions (the isotopes 107Ag2+ and 109Ag2+ were detected) along with Ag+ and Ag0 in the Ag/CeO2-Imp sample, while this was not observed in the case of Ag/CeO2-DP (Figure 14A) [41]. In the presence of Ag2+ ions, the mobility of some oxygen species increases, which sets the conditions for the formation of three redox couples (Ag2+/Ag+, Ag2+/Ag0, and Ag+/Ag0). Decomposition of the nitrate precursor with the participation of O2− of the ceria lattice was considered the source of Ag2+ ions, while the regeneration of the oxygen vacancy may occur either from nitrate or from gaseous oxygen. In Figure 14B the catalytic conversion of propylene over CeO2, Ag/CeO2-Imp and Ag/CeO2-DP is shown. Adding Ag to CeO2 enhanced the catalytic activity; moreover, the performance of the Imp catalyst was better than that of the DP one. In order to evaluate the stability of the catalyst over time, the authors also presented both static (isothermal conditions at 175 °C) and dynamic (7 consecutive cycles vs. temperature in the range from 50 to 300 °C) aging tests for the activity of the 10% Ag/CeO2 (Imp) sample in propene oxidation. Moreover, EPR studies were carried out for the samples before and after catalysis; the Ag2+ ions were retained on the ceria surface after catalysis. This allows formulating the key role of the Ag2+/Ag+ and Ag2+/Ag0 redox couples as active species in propene oxidation over 10% Ag/CeO2 prepared by the impregnation method.
S. Benaissa et al. [141] prepared mesoporous CeO2 by a nanocasting pathway with SBA-15 as a structural template and cerium nitrate as the CeO2 precursor, and compared the properties of catalysts prepared on this basis by wetness impregnation (WI), deposition-precipitation with urea (DPU) and impregnation-reduction with citrate (IRC) methods, with the latter being the most active and stable (the catalytic activity and selectivity did not significantly change after 50 h). The authors connected this with a higher surface lattice oxygen mobility over this catalyst and with a strong silver-mesoporous ceria interaction.
The authors of [143] carried out isothermal naphthalene oxidation, comparing the activity of catalysts with different Ag contents (0.5-5 wt. %), with the sample containing 1 wt. % Ag being the most active one. This was explained by the balance between two factors: oxygen availability and oxygen regeneration capacity. Introduction of Ag to CeO2 was shown to increase both factors. The regeneration capacity was related to the number of oxygen vacancies in bulk ceria, and Ag facilitated the process by a reverse spillover effect. Cex+ ions were suggested to be the main active sites. Impregnated silver was claimed to serve as a "pump" that increases the number of bulk oxygen vacancies while reducing the surface ones, which resulted in oxygen availability and determined the oxygen regeneration. The spillover effect was proposed to reduce the regeneration ability of active oxygen when the Ag loading is high, which was connected with a lower concentration of surface oxygen vacancies.
Of particular interest is the approach of locating the Ag/CeO2 composition on an inert support, which is usually alumina or silica [144,145]. Thus, H. Yang et al. [144] used 3DOM CeO2-Al2O3 as a support for Ag catalysts for toluene oxidation. This support was prepared using Pluronic F127 (EO106PO70EO106) and PMMA as soft and hard templates, respectively. The obtained support showed a high-quality 3DOM architecture with a macropore diameter of 180-200 nm, where ordered mesopores with a diameter of 4-6 nm were formed on the skeletons of the macropores. Such a structure allowed producing particles of the active component with sizes of 3-4 nm that were evenly distributed on the catalyst surface. The 50% and 90% toluene conversions (1000 ppm) over the 0.81Ag/3DOM 26.9CeO2-Al2O3 sample were achieved at 308 and 338 °C, respectively.
In Ref. [145] silica gel prepared by a sol-gel method and subjected to hydrothermal treatment was used as the primary support. Ceria and then silver were supported onto the silica gel by a consecutive impregnation method. The activity of the obtained catalysts was studied in the formaldehyde oxidation reaction. The authors pointed out that the activity of the Ag/CeO2/SiO2 catalysts was significantly higher than that of the Ag/SiO2 sample, which was attributed to a synergetic action between silver and ceria. The results obtained for the silver catalyst with small amounts of ceria were not significantly inferior to those for silver supported over bulk ceria (Figure 15). Thus, the silica-supported ceria-modified silver catalyst can be used for formaldehyde oxidation.
To conclude, Ag/CeO2 catalysts are promising materials for VOCs abatement. Even though their activity is inferior to that of catalysts based on noble metals, their use is still of wide interest due to lower cost. Moreover, the opportunity to increase their activity through the application of various preparation methods as well as changes of the Ag/Ce ratio forms the ground for future research in this field. It is noteworthy that in the literature there is no consensus on the effect of the preparation method of Ag/CeO2 composites on their catalytic activity in VOCs abatement.
Ag/CeO2 Composites: Insights from Theory
Due to the low amounts of silver that are usually used in the preparation of highly effective Ag/CeO2 composites for the total oxidation of VOCs, soot, and CO, not all experimental techniques can provide a representation of the silver-ceria interface and of the ways it works in the said catalytic transformations. Thus, Ag/CeO2 composites have attracted the attention of theoretical chemists. Two main directions are considered: (1) adequate representation and modeling of regular and defective ceria surfaces [132,146-151], and (2) systematic studies of the adsorption behavior of Ag clusters on ceria surfaces [152-160]. In the latter case, the structure of the Ag-ceria interface is widely discussed, while the adsorption behavior of adsorbates over such composites and their roles in tuning the interfacial properties are modeled to a lesser extent [152].
Researchers point out several difficulties in the theoretical modeling of CeO2-based composites. These difficulties are as follows: (1) density functional theory (DFT) does not correctly predict the localized nature of the Ce 4f states, (2) a change of the Ce oxidation state leads to incorrect lattice parameters, and (3) the calculation results strongly depend on the methods and functionals used, and the obtained energy values oscillate.
These issues were partially addressed by the application of hybrid functionals [132,161,162] or the DFT+U approach [152,157,163]. The latter involves the inclusion of a U term for the highly correlated Ce 4f electrons in reduced ceria, providing partial occupancy of the corresponding atomic level and increasing the accuracy of modeling of the on-site Coulomb interactions in CeO2-based materials. The values for U are usually selected semiempirically, and the formalism by Dudarev et al. [164] is usually used. Under this approach, a combination of the local density approximation (LDA) and the generalized gradient approximation (GGA) in periodic calculations is shown to adequately describe geometry and energy parameters [165]. However, it is noteworthy that the results of DFT+U calculations depend on many parameters (e.g., lattice constants), which requires special attention to their interpretation.
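In the rotationally invariant formulation of Dudarev et al. [164], the correction added to the standard functional for the Ce 4f shell can be written as
\[
E_{\mathrm{DFT}+U} = E_{\mathrm{DFT}} + \frac{U_{\mathrm{eff}}}{2}\sum_{\sigma}\left[\mathrm{Tr}\,\rho^{\sigma} - \mathrm{Tr}\!\left(\rho^{\sigma}\rho^{\sigma}\right)\right],\qquad U_{\mathrm{eff}} = U - J,
\]
where ρσ is the on-site occupation matrix of the Ce 4f states with spin σ and U_eff is the single effective parameter that is selected semiempirically, as noted above.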
In Ref. [160], using LDA+U and GGA+U DFT approaches with different U values and periodic slab surface models, charge transfer was shown to occur from Ag to ceria with a concomitant reduction of one Ce surface atom of the top layer, the transferred electron being localized on Ce atoms. For the Ag-based systems, the most favorable adsorption site comprised three surface oxygen atoms. In Ref. [159] studies of the surface structures and electronic states of Ag adsorbed on CeO2(111) revealed that charge redistribution can be caused by local structural distortion effects. The charge distribution was not uniform over the top O layer because the Ag clusters sit on the underlying O ions, which increased the ionic charge of the remaining O ions and decreased the effective cationic charge of the Ce atoms bonded with uncovered O atoms. This, in turn, influenced the structure of the Ag cluster. Silver clusters were shown to induce changes in the oxidation state of several Ce atoms located in the top layer (Ce4+ to Ce3+), which are accompanied by a charge flow from the metal cluster to the surface caused by the electronegativity difference between Ag and O atoms [154].
In Ref. [158] charge redistribution during Ag adsorption was confirmed by the construction of spin density isosurfaces and site-projected densities of states. Distortions of selected Ce-O distances were imposed to study the energetics of the Ce4+ to Ce3+ reduction. Oxidation of Ag0 to Ag+ was assumed, while the probable formation of partially oxidized AgxOy species was not considered. Two nearest-neighbor Ce3+ sites relative to Ag gave the highest Ag adsorption energy at O bridge sites, while three nearest-neighbor Ce3+ sites gave the highest Ag adsorption energy at Ce bridge sites.
DFT calculations were carried out for ceria-supported 4-atom transition metal (including Ag) clusters in Ref. [155] and showed that the strength of the metal-metal and metal-oxygen interactions depended on the hybridization of the d-states of the metal with the p-states of oxygen as well as on the occupation of the antibonding Ag d-states. The interactions changed the itinerant f-states of cerium to localized ones, which created a lateral tensile strain in the top Ce layer of the surface. It was also suggested that the structure of the Ag cluster determined the number of cerium atoms in the localized Ce3+ oxidation state.
Combined experimental (XPS, STM) and theoretical (DFT+U) approaches were used to study the nucleation and growth of Ag nanoparticles deposited on stoichiometric and reduced thin CeO2 films grown on Pt(111) [157]. A direct electron transfer from Ag clusters and nanoparticles to ceria was reported, and its extent, as well as the spin localization, depended on the level of theory used. Ag atoms or nanoparticles supported on stoichiometric CeO2 acted as electron donors and were subject to spontaneous direct oxidation at the expense of ceria, followed by reduction of the Ce ions of the support. The energy cost to move a single O atom from ceria toward an adsorbed Ag nanoparticle was high, and reverse spillover of oxygen cannot be considered a favorable mechanism of ceria reduction.
Silver-ceria interaction is often compared with that in Au/CeO2 and Cu/CeO2 systems. Due to their relatively lower ionization potentials, Ag and Cu show higher adsorption energies. Moreover, silver nanoparticles act as a platform for oxygen diffusion, leading to partially oxidized Ag nanoparticles located on the surface of the partially reduced ceria [157]. To quantitatively explore the interactions between silver and ceria, a method was proposed that converts the total adsorption energy into an interaction energy per Ag-O bond and measures the deviation of the Ag-O-Ce bond angle from the angle of sp3 orbital hybridization of the O atom [153]. It is noteworthy that the coordination number of the O atom, although generally considered, is not included in the correlation, while in Ref. [156] multiple adsorption configurations are shown to exist over single adsorption sites for Ag/CeO2(100), and electron charge transfer occurs between the neutral silver atom and a neighboring Ce4+ cation.
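A minimal sketch of the descriptor proposed in [153], under the assumption that it amounts to dividing the total adsorption energy by the number of Ag-O bonds and measuring the angular deviation from the ideal sp3 angle; all numerical values below are invented placeholders rather than results from that study.

```python
# Hypothetical illustration of the [153] descriptor: interaction energy per Ag-O bond
# and deviation of the Ag-O-Ce angle from the ideal sp3 angle (~109.47 deg).
SP3_ANGLE_DEG = 109.47

def per_bond_energy(total_adsorption_energy_ev, n_ag_o_bonds):
    """Total adsorption energy split evenly over the Ag-O bonds (eV per bond)."""
    return total_adsorption_energy_ev / n_ag_o_bonds

def sp3_angle_deviation(ag_o_ce_angle_deg):
    """Absolute deviation of the Ag-O-Ce angle from the sp3 hybridization angle."""
    return abs(ag_o_ce_angle_deg - SP3_ANGLE_DEG)

# Example: an Ag atom adsorbed over a three-oxygen hollow site (see the text above).
e_ads = -2.4   # eV, hypothetical total adsorption energy
bonds = 3      # Ag coordinated by three surface O atoms
angle = 123.0  # deg, hypothetical Ag-O-Ce angle

print(per_bond_energy(e_ads, bonds))   # -0.8  eV per Ag-O bond
print(sp3_angle_deviation(angle))      # 13.53 deg deviation from sp3
```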
In Ref. [152] the reactivity of the Ag-modified CeO2(111) surface used in soot combustion was considered. The interactions of stoichiometric and reduced CeO2(111) surfaces with dioxygen, carbon clusters, isolated Ag atoms, and silver clusters were studied using the DFT+U approach. Carbonaceous species yielded oxygenated carbon moieties on the reduced ceria. Peroxo and superoxo species were shown to form when O2 is adsorbed over the Ag cluster. The role of Ag atoms is to act as donors, which, when oxidized, donate their valence electron to ceria, yielding reduced Ce3+ ions. The presence of small Ag clusters mediates the formation of oxygen vacancies (Figure 16). The vacancies possess a stronger affinity for oxygen than silver does, which leads to refilling of the cavities with dioxygen. The co-presence of Ag clusters and reduced ceria facilitates electron transfer and the activation of the dioxygen molecule. Silver atoms act like alkali metal promoters, facilitating the O2 to O2− transition that leads to the formation of reduced Ce3+ ions. However, partial oxidation of silver can take place in this case.
Despite thorough investigations, several issues in the theoretical description of Ag/CeO2 composites remain under debate. Among them are the mechanism of oxygen replenishment in the support, the different behavior of the various CeO2 surfaces, the adsorption of silver atoms over long and short O-O bridge sites, the quantitative description of Ag-CeO2 interactions, etc.
Photocatalysis
The wide application of CeO2-based catalysts in oxidative catalysis is mainly attributed to their intrinsic redox properties [166]. Conversely, the interest in using ceria in photocatalysis is much lower. This is connected with the fast recombination of photoinduced electron-hole pairs and a limited visible light absorption capacity [167]. CeO2 is an n-type semiconductor with a relatively wide bandgap (Eg = 3.15-3.2 eV) [167,168]. On the other hand, CeO2 has emerged as a promising material for photocatalysis owing to its chemical stability and photocorrosion resistance [169]. The redox Ce4+ ↔ Ce3+ transition is accompanied by oxygen vacancy formation, which is of high importance both for oxidative catalysis and for electron-hole separation/recombination in a photocatalyst [170]. Thus, in Ref. [171] a mesoporous nanorod-like ceria prepared by microwave-assisted hydrolysis of Ce(NO3)3·6H2O in the presence of urea was characterized by a significant shift of absorption to the visible region (a band gap of 2.75 eV), which was associated with the presence of Ce3+. An increase in temperature was also shown to result in a significant reduction of the recombination of photogenerated electron-hole pairs. Owing to these two phenomena, an increased photocatalytic activity in the gas-phase oxidation of benzene, hexane, and acetone was found for the prepared mesoporous nanorod-like ceria. Thus, the shape of the ceria nanoparticles and the presence of Ce3+ in the structure provided a growth of photocatalytic activity, including under visible light.
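A quick back-of-the-envelope check (illustrative only, not from the cited works) shows why this band-gap narrowing matters for visible-light operation: converting the gap energy to an absorption-edge wavelength via λ ≈ hc/E_g places 3.15-3.2 eV at the near-UV edge but 2.75 eV well inside the visible range.

```python
# Absorption-edge wavelength from the band gap: lambda = h*c / E_g,
# i.e. roughly 1239.84 / E_g[eV] nanometers.
HC_EV_NM = 1239.84  # h*c in eV*nm

def absorption_edge_nm(band_gap_ev):
    return HC_EV_NM / band_gap_ev

for e_g in (3.2, 3.15, 2.75):
    print(f"E_g = {e_g:.2f} eV -> edge at {absorption_edge_nm(e_g):.0f} nm")
# E_g = 3.20 eV -> edge at 387 nm  (near-UV)
# E_g = 3.15 eV -> edge at 394 nm
# E_g = 2.75 eV -> edge at 451 nm  (blue, i.e. visible light can be absorbed)
```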
Various strategies are being developed to improve the photocatalytic properties of ceria-based materials: morphology control [172,173], doping by europium or yttrium [174,175], fabrication of heterojunctions [176], etc. For example, the degradation of the azo dye acid orange 7 (AO7) under ultraviolet irradiation over hierarchical rose-flower-like CeO2 nanostructures (Figure 17) was studied in Ref. [172], and the synthesis of CeO2 sheets active under visible light is described in Ref. [173]. Moreover, the fabrication of CeO2-based heterostructures is a more promising way to reduce the band gap and provide improved electron-hole separation due to charge transfer through the interfacial boundaries. Silver salts may be used in photocatalysis due to their semiconductor properties. Thus, Ag3PO4 is characterized by a relatively small band gap (2.36-2.43 eV) [177], absorbs visible light (it is yellow), and possesses good photocatalytic stability. In Ref. [178] the photocatalytic activity of the new Ag3PO4/CeO2 composite in the degradation of methylene blue and phenol under visible light and UV light irradiation was studied. The photocatalytic activity of the Ag3PO4/CeO2 composite was shown to be associated with the fast transfer and efficient separation of electron-hole pairs at the interfaces of the two semiconductors (CeO2 and Ag3PO4). The stability of the photocatalyst was demonstrated over five catalytic cycles.
The photocatalytic remediation of water polluted by some chemically stable azo dyes using an Ag2CO3/CeO2 microcomposite under visible light irradiation was studied in Ref. [179]. The enhanced photocatalytic activity for the photodegradation of enrofloxacin in aqueous solutions over Ag2O/CeO2 composites under visible light irradiation was demonstrated in Ref. [167]. The composite was synthesized by in situ loading of Ag2CO3 on CeO2 followed by thermal decomposition. The p-n heterojunction between the two semiconductors provided efficient separation of photoinduced charges through their contact, as shown by photoluminescence spectra (Figure 18a). The formation of Ag nanoparticles was associated with photoreduction of Ag2O. The surface plasmon resonance (SPR) on Ag NPs may lead to the formation of electrons and holes in such a way that the electrons could migrate from the Ag NPs to the conduction band (CB) of Ag2O (Figure 18b). Thus, Ag NPs may play a specific role in the photocatalytic degradation of organic pollutants.
Figure 18. The proposed mechanism for the enhancement of photocatalytic activity of the Ag2O/CeO2 catalyst in the degradation of enrofloxacin. Reproduced from Ref. [167] with permission from Elsevier.
The same effect of photoreduction of silver compounds with the formation of Ag NPs was observed for Ag/AgCl-CeO2 catalysts [180]. The energy of hot electrons generated on Ag NPs due to SPR is between 1.0 and 4.0 eV [181], and these electrons could migrate to the CB of AgCl in such a way that the electrons and holes generated on CeO2 and Ag NPs would be efficiently separated. Thus, in composite photocatalysts the role of Ag NPs in visible light absorption and charge separation is high.
The decoration of ceria by metals (Au, Pt, Pd, Ag) provides a growth of photocatalytic activity due to increased electron-hole separation and an extended time of light response of the semiconductors [170]. Three main charge-transfer phenomena are involved at the metal-semiconductor interface: the Schottky barrier (transfer of electrons from semiconductor to metal) (Figure 19a), metal SPR with transfer of charge from metal to semiconductor (Figure 19b), and the metal SPR local electric field (accompanied by recombination of electrons from the metal and holes of the semiconductor) (Figure 19c). The SPR for Ag NPs is generally observed near a wavelength of 400 nm, while the absorption of Au NPs is observed at 550 nm [181], which makes gold more attractive for photocatalysis [182,183]. However, the position of the absorption band of nanoparticles depends on many factors, including the size and shape of the particles and their interaction with the surroundings. Thus, a significant shift of the SPR of Ag NPs from 400 nm to 480-500 nm is observed for Ag/CeO2 catalysts [184], which may be attributed to a strong electronic metal-support interaction between Ag and CeO2. This provides an enhanced photocatalytic activity of Ag/CeO2 composites in the degradation of methylene blue under simulated sunlight [50] or visible light [185]. According to [50], Ag acts as an acceptor of photoelectrons, and the electron then rapidly reacts with O2, yielding O2−, which reduces the probability of recombination of electron-hole pairs. A correlation between the rate of degradation and the amount of Ag NPs (active sites) was found. High stability and high recyclability of the Ag/CeO2 heterostructure catalysts were shown. In Ref. [186], the photocatalytic degradation of Congo Red under UV light and visible light over a three-dimensionally ordered macroporous (3DOM) Ag/CeO2-ZrO2 material was studied. It was shown that the SPR effect of Ag particles provides the absorption of visible light and promotes the separation of electrons and holes, reducing their recombination and improving the photocatalytic activity. The superior photocatalytic activity of an Ag/CeO2/ZnO nanostructure in the degradation of azo dyes (methylene orange and methylene blue) and a phenol solution under visible light irradiation was demonstrated in Ref. [187]. It was found that the formation of oxygen vacancies led to a narrow band gap (2.66 eV), which helps to produce sufficient electrons and holes under visible light in the ternary Ag/CeO2/ZnO nanostructure. The defect structure of the composite inhibited electron-hole recombination and provided a synergistic effect with the narrow band gap. The SPR of Ag NPs and the defects (Ce3+ and oxygen vacancies) in CeO2 and ZnO resulted in superior photocatalytic activity. In Ref. [188], a correlation between the Ce3+ loading, the amount of oxygen vacancies, and the activity of Ag/CeO2 and Au/CeO2 catalysts in the photodegradation of rhodamine blue dye in an aqueous medium under UV-vis irradiation was found. The conditions of synthesis (pH of precipitation) and the Ag/Au loading provided different Ce3+ loadings, distortion of the CeO2 lattice, and concentrations of vacancies. All these parameters affected the light absorbance, the separation of photogenerated charges, and the photocatalytic properties.
Thus, silver and its compounds supported on ceria have high importance in the photocatalytic degradation of organic pollutants. The semiconductor properties of silver compounds and the SPR of Ag NPs provide both absorption of visible light and separation of electrons and holes, resulting in increased photocatalytic activity. Several common aspects were found between classical oxidation catalysis and photocatalysis over Ag/CeO2 composites. The interaction of silver with ceria (including electronic metal-support interaction) influences the catalytic activity of Ag/CeO2 due to the cooperation of the active sites of Ag and ceria. The presence of the Ag-CeO2 contact also leads to a growth in the amount of oxygen vacancies in the structure of CeO2, which also promotes enhanced catalytic/photocatalytic activity. Generally, Ag/CeO2 composites are relatively new for photocatalysis and remain poorly described. The study of Ag/CeO2 systems in photocatalysis is of high importance both for fundamental research and for the real application of catalysts in the purification of aqueous wastes from dyes and other organic pollutants.
Electrocatalysis
Silver was also shown to be a promising material for electrocatalytic applications [189][190][191]. Recently, ceria has attracted a growing interest as a component of materials for electrocatalytic applications [192,193]. The main reasons for this are its high oxygen storage and transfer abilities. Application of proper amounts of noble metal improves the conductive properties of CeO 2 -based materials, thus making them promising composites for electrocatalytic applications in fuel cells, metal-air batteries, and other alternative energy transfer devices [194].
A combination of silver and ceria in Ag/CeO2 composites was used in several publications [51,52,195,196]. In Ref. [196] Ag/CeO2 composites comprising 30-50 nm silver nanoparticles uniformly anchored on the surface of nanosheet-constructed porous CeO2 microspheres were used as oxygen reduction reaction catalysts. CeO2 is known to show a high oxygen storage capability and oxygen transfer ability, and silver was added to improve its conductivity. As a result, an enhanced activity was observed, and aluminum-air batteries based on the Ag/CeO2 composites exhibited an output power density of 345 mW/cm² and a low degradation rate of 2.6% per 100 h.
In Ref. [51] a method was developed to prepare nanoporous Ag-CeO2 ribbons with a homogeneous pore/grain structure by dealloying a melt-spun Al-Ag-Ce alloy in a 5 wt. % NaOH aqueous solution. The resulting structure comprised uniform CeO2 particles dispersed on the fine Ag grains, with the amount of oxygen vacancies growing as the calcination temperature increased. An enhanced Ag-CeO2 interfacial interaction was assumed to cause the high performance of the composites in the electrocatalytic oxidation of sodium borohydride. In Ref. [195] Au was shown to impart a promoting effect on these composites and to decrease the reaction resistance. The activity improvement was assumed to be caused by a strengthening of the interfacial interaction between the Ag-Au solid solution and the CeO2 particles due to the Au effect, while the thermal stability and electron transport properties also improved. An increase of the Au content in the precursor alloy, however, results in a reduction of the catalytic activity and thermal stability.
In Ref. [52] 3D Ag/CeO2 nanorods with high electrocatalytic activity for NaBH4 electrooxidation were discussed. Calcination in air resulted in the dispersion of small Ag nanoparticles on the CeO2 surface, and well-defined Ag-CeO2 interfaces were created, with the nanorods connected by large conductive Ag nanoparticles. The resulting mass-specific current of the composite exceeded that of pure Ag in the borohydride oxidation reaction by a factor of 2.5. The enhanced catalytic activity was attributed to a high concentration of surface oxygen species, together with the 3D nanorod architecture and a strong metal-support interaction.
Thus, a variation of the chemical composition of Ag/CeO 2 by using various promoters and modifiers allows tuning the electrocatalytic activity of the composite.
Conclusions and Outlook
In the present review we have summarized the recent advances and trends on the role of metal-support interaction in Ag/CeO2 composites in their catalytic performance in total oxidation of CO, soot, and VOCs. Promising photo- and electrocatalytic applications of Ag/CeO2 composites have also been discussed. The key function of the silver-ceria interaction is connected with the following major aspects:
1. the catalytic performance of Ag/CeO2 composites strongly depends on the preparation method, which determines the morphology of both Ag and ceria nanoparticles, the interfacial configuration, and the strength of metal-support interaction;
2. active surface sites are formed at the Ag-CeO2 interface, with the interfacial O atoms exhibiting different reactivity as compared to other surface O atoms, while oxygen species over Ag particles are still of importance and participate in catalysis;
3. positively charged Ag clusters facilitate the formation of surface oxygen vacancies over the ceria support, while metallic Ag nanoparticles promote the reduction of CeO2 nanocrystals and enhance their catalytic activity;
4. an enhanced activity of Ag/CeO2 materials is caused by the highest surface oxygen vacancy concentration and high low-temperature reducibility, as well as by the existence of lattice oxygen species and lattice defects formed with the participation of both silver and ceria;
5. the role of impurities (such as alkali ions, carbon-containing species, etc., which appear on the surface and/or in the bulk of ceria during the preparation procedure and participate in transferring electron density to surface O species) should be considered;
6. redox properties are caused by the coexistence and interplay between the Ag+/Ag0 and Ce3+/Ce4+ pairs;
7. high photocatalytic activity of Ag/CeO2 composites is caused by the ability of Ag nanoparticles to prolong the lifetime of photogenerated electron-hole pairs due to the effect of localized SPR and the reduction of the recombination of free charges;
8. enhanced electrocatalytic activity and good electrochemical stability of Ag/CeO2 composites are connected with strong interfacial interactions between Ag and CeO2 moieties that are caused by their specific morphology and architecture, which hinder particulate agglomeration during long-term electrocatalytic reaction.
Thus, the configuration of the silver-ceria interface provides the enhanced catalyst performance caused by synergistic effects of silver and cerium oxide. A proper selection of preparation method allows achieving the desired features of the composites and fine-tuning the strength of electronic metal-support interactions that can be additionally improved by application of ordered supports (e.g., SBA, MCM, MOFs, etc.) and promoters. This will allow rational designing of a new generation of highly effective Ag/CeO 2 composites for environmental, energy, photo-and electrocatalytic applications. | 24,162.6 | 2018-07-16T00:00:00.000 | [
"Chemistry",
"Environmental Science"
] |
On p²-Ranks in the Class Field Tower Problem
Much recent progress in the 2-class field tower problem revolves around demonstrating infinite such towers for fields – in particular, quadratic fields – whose class groups have large 4-ranks. Generalizing to all primes, we use Golod-Shafarevich-type inequalities to analyse the source of the p²-rank of the class group as a quantity of relevance in the p-class field tower problem. We also make significant partial progress toward demonstrating that all real quadratic number fields whose class groups have a 2-rank of 5 must have an infinite 2-class field tower. Résumé (p²-ranks and p-Hilbert class field towers): Recent progress on the 2-Hilbert class field tower problem for number fields concerns its infinitude – in particular for quadratic fields – when the class group has a large 4-rank. Generalizing to all primes p, we use Golod-Shafarevich-type inequalities to analyse the contribution of the p²-rank of the class group to the study of the p-Hilbert class field tower. We also give partial results toward the infinitude of the 2-Hilbert class field tower of real quadratic fields when the 2-rank of the class group equals 5.
Introduction
The p-class field tower problem for a number field K is the question of whether the maximal unramified p-extension K̃/K is an infinite extension, or equivalently, whether G = Gal(K̃/K) is infinite. The first positive answer to the class field tower problem, demonstrating number fields for which K̃/K is infinite, came from the landmark paper of Golod and Shafarevich [2]. This is done via what is now known as (one of many forms of) the Golod-Shafarevich inequality, dictating that if G is finite, then for all t ∈ (0, 1) we have the polynomial inequality 1 − dt + Σ_{k≥2} r_k t^k > 0. Here, d = d_p Cl(K) denotes the p-rank of the class group of K, and the various r_k are invariants of G defined from the Magnus embedding of G into the ring of formal power series over F_p in d non-commuting variables. We will not need the precise definitions of these r_k, but note that while difficult to compute, they were computable enough to permit the first demonstrations of infinite class field towers. For example, from this it can be deduced that a quadratic field whose class group satisfies d_2 Cl(K) ≥ 6 has an infinite class field tower. A variant of this result due to Schoof [4] gives the same inequality but with the r_k replaced by certain cohomologically-defined invariants r̃_k. These new invariants, while still difficult to compute, greatly expanded the collection of number fields for which we could affirmatively answer the class field tower problem.
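The following sketch (Python, illustrative only and not part of the paper) shows the numerical shape of such arguments. Bounding the sum crudely by Σ_{k≥2} r_k t^k ≤ r t² for t ∈ (0,1), finiteness of the tower would force 1 − dt + rt² > 0 on all of (0,1); the relation-rank bound r ≤ d + 2 for quadratic fields used below is an assumption, consistent with the bound r ≤ 5 + e, e ≤ 2, quoted later in the text for real quadratic fields.

```python
# For a given 2-rank d, check whether any admissible relation rank r <= r_max keeps
# p(t) = 1 - d*t + r*t^2 positive on all of (0, 1). If no such r exists, the crude
# Golod-Shafarevich bound alone already forces the 2-class field tower to be infinite.
def finite_tower_possible(d, r_max, steps=10_000):
    """True if some r <= r_max keeps 1 - d*t + r*t^2 > 0 on (0, 1)."""
    ts = [k / steps for k in range(1, steps)]
    return any(all(1 - d * t + r * t * t > 0 for t in ts) for r in range(r_max + 1))

for d in (4, 5, 6, 7):
    print(d, finite_tower_possible(d, r_max=d + 2))
# d = 4, 5: True  (this crude bound alone gives no contradiction; d = 5 is the hard case)
# d = 6, 7: False (every admissible r violates the inequality, so the tower is infinite)
```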
The central tool of the current paper is a third version of the inequality, due to Maire [3]. Here, the invariants r_k or r̃_k are replaced by yet another set, m_k, which thanks to their concrete arithmetic interpretation can be explicitly computed for many examples. Our main result (Theorem 2.2) is an explicit formula for the second such invariant, m_2, in terms of the arithmetic of K and that of its maximal elementary abelian unramified p-extension L. This calculation affords us several significant corollaries, two of which we mention here. The first is a correction of a claim in [3] about the effect of the p²-rank of the class group on the class field tower problem. The second is an application to 2-class field towers over real quadratic fields: we recall that the original argument of Golod and Shafarevich proves that such towers are infinite when d_2 Cl(K) ≥ 6. The article [1] asks if this requirement can be relaxed to d_2 Cl(K) ≥ 5. Corollary 4.2 gives significant partial progress toward an affirmative answer to this question, showing that it is true under any of a wide range of additional arithmetic hypotheses. Finally, while explicit relationships between the sequences m_k, r_k, and r̃_k are in general hard to come by, Section 4.3 demonstrates examples where they are provably not all equal.
Notation and Setup
For a group A, we denote its p-rank by d_p A = dim_{F_p}(A/A^p) and define its p²-rank by d_{p²} A = d_p(A^p). Let K be a number field and p a prime. We denote by K^(1) the Hilbert p-class field of K, the maximal unramified abelian p-extension of K, and define the p-class field tower over K recursively by K^(n) = (K^(n−1))^(1) for n ≥ 2. We let K̃ denote the top of the tower: K̃ = ∪_n K^(n). It is easily verified that K̃/K is Galois, and we put G = Gal(K̃/K) for the remainder of the paper. Let L_i = K̃^{G_i} denote the fixed field of K̃ corresponding to the i-th lower central subgroup G_i of G; in particular, L denotes the maximal unramified p-extension of K whose Galois group is elementary abelian. Let E_K and A_K = Cl_p(K) respectively denote the unit group and p-class group of K, and define the generator and relation ranks of G respectively by d = dim_{F_p} H^1(G, Z/pZ) and r = dim_{F_p} H^2(G, Z/pZ).
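As a small illustrative aid (not from the paper, and assuming the convention d_p A = dim A/A^p reconstructed above), these ranks can be read off from a cyclic decomposition of a finite abelian group: d_p counts the cyclic factors of order divisible by p, and d_{p²} = d_p(A^p) counts those of order divisible by p².

```python
# p-rank and p^2-rank of a finite abelian group given as Z/n_1 x ... x Z/n_k.
def d_p(orders, p):
    """Number of cyclic factors of order divisible by p (the p-rank)."""
    return sum(1 for n in orders if n % p == 0)

def d_p2(orders, p):
    """Number of cyclic factors of order divisible by p^2 (the p^2-rank, d_p of A^p)."""
    return sum(1 for n in orders if n % (p * p) == 0)

# Hypothetical example: a class group of 2-rank 5 and 4-rank 2.
A = [8, 4, 2, 2, 2]
print(d_p(A, 2))   # 5  (the 2-rank)
print(d_p2(A, 2))  # 2  (the 4-rank)
```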
Statement of the Main Theorem
We now turn to the construction of the invariants m_i, which form the foundation for the third variant of the Golod-Shafarevich inequality mentioned in the introduction. For each i ≥ 1, define the group ∆_i. Then when G is finite, i.e., K has a finite p-class field tower, there is an isomorphism ([3], Proposition 4.2) giving a concrete arithmetic interpretation to the relation rank. We then filter ∆_1 by the images of the higher ∆_i under the obvious norm maps, and for i ≥ 2 we let M_i denote the corresponding graded pieces. Finally, we define the invariants m_i by m_i = dim_{F_p} M_i and note that, by the filtration, Σ_{i≥2} m_i = r. We can now state the main result of [3] (Theorem 2.1). Theorem 2.1 can be used to prove class field towers infinite using the same type of argument as used in the original Golod-Shafarevich examples: for example, we can re-derive the famous inequality r > d²/4 for finite towers from the standard argument applied to the inequality in Theorem 2.1. The goal of the current article is to use the invariants m_i to prove certain class field towers infinite, analogously to how refinements of the Golod-Shafarevich theorem (e.g., the refinements of Koch-Venkov [6] and Schoof [4]) provided new examples of infinite class field towers. In particular, whereas Koch-Venkov and Schoof used symmetry arguments to prove that the even invariants vanish (r_{2k} = r̃_{2k} = 0 for k ≥ 1), we will compute the early m-invariants (in this article, specifically m_2) in terms of the arithmetic of K and L.
The algebra behind the subsequent numerical results is then rather straightforward, as in (2.1): since t^k ≤ t² for all k ≥ 2 and all t ∈ (0, 1), stronger results come from Theorem 2.1 if we can bound m_2 from above. The article's main result follows this path, culminating in an explicit computation of m_2 in terms of the arithmetic of K and L.
Theorem 2.2. With L/K as above, we have
Here and elsewhere, we abuse notation in using the symbol N = N_{L/K} for all of the obvious norm maps from L to K (e.g., norms of units, ideals, ideal classes, ...). This calculation corrects an earlier attempt to bound m_2 from above, namely, the claim (2.2) given in [3]. As a consequence of the theorem, we can see that this inequality does not in general hold without further assumptions, and we demonstrate other bounds to replace it. For example, we can deduce the following. We will give an explicit counter-example to part 2), in the case that the restricted norm map in question is not surjective, in Example 4.1.
Proof of the main theorem
We begin with the following easily-verified diagram of fields and their Galois groups. In particular, we note that since L/K is the maximal elementary abelian sub-extension of K^(1)/K, the Galois group Gal(L/K) is the maximal elementary abelian quotient of A_K, isomorphic to (Z/pZ)^d.
[Diagram: the fields K, L, K^(1), L^(1), with the corresponding Galois groups.]
We begin by identifying the kernel and image of the norm map on ideal classes.
Proof. This follows from the fact that under Artin reciprocity, the norm map on ideal class groups corresponds to the restriction map on Galois groups. That is, we have the following commutative diagram. Now, via the Artin map, we have ker(N) ≅ ker(res) ≅ Gal(L^(1)/K^(1)), and the image of N corresponds to im(res) = Gal(K^(1)/L), so that N(A_L) = A_K^p.
Lemma 3.2. Given a commutative diagram of vector spaces with exact rows as below,
where ∂ : ker(f_3) → cok(f_1) denotes the connecting homomorphism from the snake lemma.
Proof. From the snake lemma we have an exact sequence, where ∂ is the standard connecting homomorphism. From this we obtain the following short exact sequence. Taking an alternating sum of dimensions of this sequence gives the result.
Next, for any subfield F ⊂ K̃, recall/define the groups ∆_F and Λ_F, and consider the following commutative diagram. Here the two rows are the well-known exact sequences stemming from the morphism φ_L defined by φ_L(x) = [Q], where Q is an ideal of L chosen so that Q^p = (x), and the two rows are connected by the appropriate norm maps N = N_{L/K}. In anticipation of applying Lemma 3.2, we note that the connecting map ∂ can be made explicit in our context as follows. Given an element c of the relevant kernel with representative x, by commutativity of the rightmost square, N([x]) = [N(x)] ∈ ∆_K is in the kernel of φ_K, so by exactness ∂(c) := [N(x)] ∈ E_K. It is now easy to characterize the subgroup of elements of cok(N) arising in this way. This is the last ingredient needed to apply Lemma 3.2 to the commutative diagram above, which yields a formula for the key dimension of interest. This calculation reduces the proof of the main theorem to combining a collection of established identities. Finally, the theorem and Lemma 3.1 combine to explain the appearance of the p²-rank in the p-class field tower problem. Namely, since N(A_L[p]) ⊂ A_K^p[p], the dimension of this norm group is bounded above by the p²-rank of A_K. In cases where this norm map on the p-torsion ideal classes is surjective, we can thus get a lot of mileage from Golod-Shafarevich type arguments by using a large p²-rank to demonstrate a small value of m_2. Unfortunately, this norm map is not always surjective, as we shall see in the next section.
Consequences
Before turning to explicit corollaries of the main theorem, we pause for the brief general remark that the theorem implies that the ultimate goal of bounding m_2 from above can be achieved via two principal routes:
• Finding elements of E_K which are norms from Λ_L (including, in particular, norms from E_L).
• Finding p-torsion ideal classes of K which are norms of ideal classes from L, i.e., enlarging the image N(A_L[p]).
This illuminates the perspective that the likelihood of a number field K having an infinite p-class field tower increases directly with the non-triviality of the norm maps on ideal classes and units from L.
A Counter-Example to Inequality (2.2)
All computations were done in SAGE [5].
Then the claim of inequality (2.2) predicts an upper bound on m_2. We will show that, to the contrary, m_2 = 2, but first note that verifying that m_2 ≥ 2 is easier. We simply compute in SAGE that both of the relevant norm maps on 2-torsion are the trivial map. Then the Main Theorem gives the lower bound. We include some auxiliary calculations to show that in fact m_2 = 2 by showing that −1 ∈ N(∆_L). The genus field L of K is given by L = K(√13, √−3), and A_L ≅ (8, 8, 4). We have SAGE choose a basis {c_i}, i = 1, 2, 3, of A_L[2], choose representative ideals I_i ∈ c_i, and let x_i be a generator for I_i^2. We check the resulting identities for SAGE's particular choice of x_1, x_2, and x_3, and from this we see the image of the norm map. Incidentally, this computation makes clear the oversight which led to the inequality. The claim in the proof of [3, Corollary 3.2] that we can find the described elements σ_1, ..., σ_4 is tantamount to the claim that the relevant surjection remains surjective on 2-torsion. The example above shows how this can be false from a purely group-theoretic perspective: taking abstract groups G = (8, 8, 4) and H = (4, 4) as in the class groups of the example above, there is no surjection from G to H² whose restriction to the 2-torsion subgroup G[2] remains surjective. Next, we return to the application from the introduction concerning real quadratic fields. Suppose K is real quadratic with fundamental unit ε, and d = d_2 A_K = 5. Then, as in the first sentence of the proof of the main theorem, we write e for the corresponding unit-group quantity, so that r ≤ 5 + e. Now by Theorem 2.1, if K has a finite 2-class field tower, then for all t ∈ (0, 1) the corresponding inequality must hold. It is trivial to verify that for each possible value of e (note e ≤ 2 for real quadratic fields), this inequality is violated for some t ∈ (0, 1) if m_2 ≤ 7 − e. Using Theorem 2.2 to compute m_2, this is equivalent to an explicit condition (∗), which thus provides a sufficient condition for K to have an infinite 2-class field tower. We continue to develop this expression. In particular, note that since E_L ⊂ Λ_L, we obtain a further condition (∗∗). It is easy to enumerate the list of possible dimensions of the three F_2-vector spaces appearing in (∗) for which the inequalities in (∗) and (∗∗) are all satisfied, providing the next corollary.
It is desirable to have a version of this result which does not depend on knowing anything about N_{L/K}(E_L). To that end, we extract from the previous corollary a slightly weaker sufficient condition that is in practice vastly simpler to evaluate. Even more directly, we note that this condition is satisfied, and hence K has an infinite 2-class field tower, if any of the following hold:
• d_2(N(A_L[2])) ≥ 2;
• d_2(N(A_L[2])) = 1 and at least one of −1 or ε is a norm from Λ_L.
Finally, to return to the topic of p²-ranks, recall that d_2 N(A_L[2]) is bounded above by the 4-rank of A_K. This tells us, for example, that if d_4 A_K = 0, only the third of the three conditions in the above list will be viable as an argument to show that K has an infinite 2-class field tower via this method.
A Comparison of Invariants
While the computation of m_2 does not improve upon the result of Koch-Venkov for imaginary quadratic number fields and p odd, the simplicity of Theorem 2.2 in this case permits us to demonstrate a distinction between the m-invariants and the two types of r-invariants. Namely, by Koch-Venkov [6] and Schoof [4] respectively, we have r_2 = 0 and r̃_2 = 0. Applying Theorem 3, since E_K = E_K^p = {±1}, we conclude simply that m_2 = d − d_p N(A_L[p]). But since N(A_L[p]) ⊂ A_K^p[p], we conclude m_2 = d if, for example, K has a cyclic class group.
Corollary 4.2. Suppose K is a real quadratic field with d_2 A_K = 5. Then K has an infinite 2-class field tower if either
Corollary 4.3. Suppose K is a real quadratic field with d_2 A_K = 5. Then K has an infinite 2-class field tower if | 3,733.8 | 2014-01-01T00:00:00.000 | [
"Mathematics"
] |
The Ribosome Biogenesis Factor Ltv1 Is Essential for Digestive Organ Development and Definitive Hematopoiesis in Zebrafish
Ribosome biogenesis is a fundamental activity in cells. Ribosomal dysfunction underlies a category of diseases called ribosomopathies in humans. The symptomatic characteristics of ribosomopathies often include abnormalities in craniofacial skeletons, digestive organs, and hematopoiesis. Consistently, disruptions of ribosome biogenesis in animals are deleterious to embryonic development with hypoplasia of digestive organs and/or impaired hematopoiesis. In this study, ltv1, a gene involved in the small ribosomal subunit assembly, was knocked out in zebrafish by clustered regularly interspaced short palindromic repeats (CRISPRs)/CRISPR associated protein 9 (Cas9) technology. The recessive lethal mutation resulted in disrupted ribosome biogenesis, and ltv1Δ14/Δ14 embryos displayed hypoplastic craniofacial cartilage, digestive organs, and hematopoiesis. In addition, we showed that the impaired cell proliferation, instead of apoptosis, led to the defects in exocrine pancreas and hematopoietic stem and progenitor cells (HSPCs) in ltv1Δ14/Δ14 embryos. It was reported that loss of function of genes associated with ribosome biogenesis often caused phenotypes in a P53-dependent manner. In ltv1Δ14/Δ14 embryos, both P53 protein level and the expression of p53 target genes, Δ113p53 and p21, were upregulated. However, knockdown of p53 failed to rescue the phenotypes in ltv1Δ14/Δ14 larvae. Taken together, our data demonstrate that LTV1 ribosome biogenesis factor (Ltv1) plays an essential role in digestive organs and hematopoiesis development in zebrafish in a P53-independent manner.
INTRODUCTION
The ribosome is a fundamental macromolecular machine, found within all living cells, that synthesizes proteins according to mRNA sequences. Ribosome biogenesis is a very intricate process in cells (Warner, 2001). Eukaryotic ribosome consists of the large 60S and small 40S subunits, which are assembled to form the functional 80S ribosome. In addition to the four ribosomal RNAs (rRNAs) and 82 core ribosomal proteins, which are the components of the 80S ribosome, over 200 non-ribosomal proteins are involved in ribosome biogenesis. This precisely controlled process is inextricably associated with many fundamental cellular activities, such as growth and division (Panse and Johnson, 2010). Disruption of ribosome biogenesis leads to a class of human genetic diseases, collectively termed as ribosomopathies (Narla and Ebert, 2010). Although these diseases are all related to the ribosome dysfunction, ribosomopathies display different clinical manifestations and mechanisms. The symptomatic features of ribosomopathies often include craniofacial defects, digestive organs dysplasia, hematological abnormalities, and the increased risk of some blood cancers (Narla and Ebert, 2010). Ribosomopathies with defects in digestive organs and/or hematological abnormalities include Shwachman-Diamond syndrome (SDS), 5q-syndrome, Diamond-Blackfan anemia (DBA), X-linked dyskeratosis congenita (DC), Treacher Collins syndrome (TCS), and North American Indian childhood cirrhosis (NAIC) (Armistead and Triggs-Raine, 2014).
Numerous genetic models have been established for the investigation of the mechanisms underlying ribosomopathies. In mice, conditional deletion of the syntenic region, including Rps14, absent in 5q-syndrome leads to macrocytic anemia, which is the key clinical feature of the disease (Barlow et al., 2010). In zebrafish, knockdown of rps19 expression causes hematopoietic and developmental abnormalities that is similar to the symptoms of DBA (Danilova et al., 2008). Besides DBA, zebrafish models of SDS (Provost et al., 2012;Carapito et al., 2017;Oyarbide et al., 2020), 5q-syndrome (Ear et al., 2016), NAIC (Wilkins et al., 2013), and DC (Zhang et al., 2012;Anchelin et al., 2013) were generated and characterized. These models are valuable resources to develop potential therapies of ribosomopathies according to the underlying mechanisms. In addition to causative genes in ribosomopathies, some other genes involved in ribosome biogenesis were either mutated or knocked down in zebrafish, such as bms1l (Wang et al., 2012(Wang et al., , 2016, kri1l (Jia et al., 2015), nol9 (Bielczyk-Maczynska et al., 2015), nom1 (Qin et al., 2014), and pwp2h (Boglev et al., 2013). Depletion of these genes causes disrupted rRNA processing and leads to defects in digestive organs and/or hematological abnormalities, which suggest common roles for ribosome biogenesis factors in organogenesis.
LTV1 ribosome biogenesis factor (Ltv1) is a non-ribosomal factor required for the processing of 40S ribosomal subunit (Ameismeier et al., 2018;Collins et al., 2018). Alterations of LTV1 can cause aberrant processing of 18S rRNA in yeast, fruit fly and human cells (Seiser et al., 2006;Tafforeau et al., 2011;Ghalei et al., 2015;Kressler et al., 2015). In LTV1 yeast cells, the accumulation of 18S rRNA precursors (20S, 21S, and 23S rRNA) is evident, accounted by the decreased pre-rRNA cleavage at sites A0, A1, and A2 (Seiser et al., 2006). Similarly, in fruit fly and human cells, LTV1 deficiency leads to increased level of 21S rRNA, hence a reduced production of the final 18S rRNA and eventually a higher than expected ratio of 28S/18S rRNA (Tafforeau et al., 2011;Kressler et al., 2015). Cell growth is inhibited in LTV1 loss-of-function yeast strains (Seiser et al., 2006). The fruit fly LTV1 mutant larvae exhibit development delay and lethality at the second larvae stage (Kressler et al., 2015). These studies suggest the conserved role of LTV1 in ribosome biogenesis and cell growth from yeast to multicellular animals. However, the function of LTV1 in vertebrate development remains poorly understood.
Here, we reported that knockout of ltv1 in zebrafish embryos disrupted ribosome biogenesis. The zebrafish ltv1 ∆14/∆14 larvae displayed aberrant cartilage structure, defects in digestive organs, characterized by a smaller size of liver, intestine and exocrine pancreas, and impaired definitive hematopoiesis. Further characterization of ltv1 ∆14/∆14 larvae showed that the decreased proliferation gave rise to the dysplastic features of the exocrine pancreas and hematopoietic stem and progenitor cells (HSPCs). Although P53 and its target genes Δ113p53 and p21 were upregulated, knockdown of p53 failed to rescue the developmental abnormalities in the ltv1 ∆14/∆14 mutant.
RESULTS
Craniofacial Cartilage Was Defective in ltv1 ∆14/∆14 Zebrafish Mutant Embryo
Ltv1 is highly conserved by amino acid sequence homology between human and zebrafish, with approximately 60.9% identity and 76.2% similarity (Supplementary Figure 1). To determine the function of ltv1, zebrafish ltv1 −/− mutants were generated using the clustered regularly interspaced short palindromic repeat (CRISPR)/CRISPR associated protein 9 (Cas9)-mediated approach, and a guide RNA (gRNA) was designed to target exon 7 of ltv1. Two F1 mutant alleles were identified, with deletions of 14 and 7 bp, respectively, in the coding region (Figure 1A). Both mutations were predicted to result in frame shifts and premature stop codons in the mutant transcripts, encoding two truncated Ltv1 proteins with 284 and 283 N-terminal amino acids followed by 32 and 31 missense amino acids, respectively (Figure 1B). These two mutant alleles could not genetically complement each other, and the ltv1 ∆14/∆14 mutant allele was used for the following experiments. RNA whole mount in situ hybridization (WISH) showed that ltv1 transcripts were almost absent in the ltv1 ∆14/∆14 mutant at 3 days post fertilization (dpf), indicating that the knockout of ltv1 was successful (Figure 1C). The mutant mRNA probably underwent nonsense-mediated decay.
Digestive Organs Were Hypoplastic in ltv1 ∆14/∆14 Mutant Embryo
To further characterize the digestive organ phenotype observed in bright field, WISH was performed to analyze the specific organ formation. Both the liver (marked by fabp10) and exocrine pancreas (marked by trypsin) of ltv1 ∆14/∆14 displayed a smaller size compared with sibling at 3 dpf (Figures 2A-C). However, no visible defect was found in the endocrine pancreas (marked by insulin) (Figure 2D). In zebrafish, differentiated intestinal cells include three types: enterocytes, goblet cells, and enteroendocrine cells (Chen et al., 2009). In ltv1 ∆14/∆14 mutant, enterocytes (marked by fabp2) at 3 dpf ( Figure 2E) and goblet cells (Alcian blue-stained) at 5 dpf ( Figures 2F,H) were substantially decreased in number. In zebrafish, both the enteroendocrine and goblet cells of the intestine could be labeled by 2F11 monoclonal antibody (Roach et al., 2013). In zebrafish, goblet cells are only distributed in the posterior part (Roach et al., 2013), not in the intestine bulb, so the 2F11 antibody marked cells in the intestine bulb are enteroendocrine cells. The number of enteroendocrine cells in the intestine bulb was reduced significantly in ltv1 ∆14/∆14 mutant at 4 dpf ( Figures 2G,H). To examine the gut morphology, DCFH-DA, a dye that could label zebrafish gut lumen, was used to visualize the intestine. At 5 dpf, although the overall shape of the mutant intestine resembled that of the sibling, the lumen was narrower than that of the sibling (Figure 2I).
Developmental defects of digestive organs could be due to impaired differentiation of endodermal cells. The genes foxA1, foxA3, and gata6 are early endodermal markers that can also label digestive organ primordia in zebrafish (Tao and Peng, 2009). These three genes expressed normally in the ltv1 ∆14/∆14 mutant endoderm at 1 dpf (data not shown). Both liver and pancreatic buds were found to be smaller in the mutant than that in the sibling, while the intestine seemed normal at 2 dpf (Figures 2J,K and Supplementary Figures 2A,B). These data suggested that the process from the endoderm to bud initiation was intact whereas bud expansion, taking place at a later stage, was affected in the mutant. To test whether liver specification was impaired in the ltv1 ∆14/∆14 mutant, two of the earliest markers of hepatoblasts, prox1 and hhex, were analyzed (Ober et al., 2006). Consistent with foxA1, foxA3, and gata6, the expression of prox1 and hhex revealed a slightly smaller liver bud in the mutant compared with the sibling at 2 dpf (Supplementary Figures 2C,D). A noticeable hypoplastic liver phenotype in the ltv1 ∆14/∆14 mutants could be observed at 34 hours post fertilization (hpf) by tracing the prox1 expression at earlier developmental time points (Supplementary Figures 2E,F). There are two types of glandular tissue in the zebrafish pancreas: exocrine pancreas and endocrine pancreas (Field et al., 2003). By checking pdx1 (precursor cell of endocrine pancreas), gcga (alpha cell), insulin (beta cell), and sst2 (delta cell) expression at 2 dpf, no obvious defect was observed in ltv1 ∆14/∆14 mutants (Supplementary Figures 2G-J). These data suggested that cell differentiation of endocrine pancreas was not affected in the mutant. However, the number of ptf1a + cells (exocrine pancreas progenitor cells) decreased significantly at 2 dpf in the ltv1 ∆14/∆14 mutant on the ptf1a:gfp background (Supplementary Figures 2K,L). Thus, hypoplasia of digestive organs in ltv1 ∆14/∆14 mutant embryos could be a consequence of impaired organ progenitor cell expansion.
Two waves of hematopoiesis are involved in zebrafish: the primitive wave and the definitive wave (Jagannathan-Bogdan and Zon, 2013). To assess the status of primitive hematopoiesis in the ltv1 ∆14/∆14 mutant, two genes regulating the primitive erythroid and myeloid fates, gata1 and pu.1, were examined using WISH at 20 and 22 hpf, respectively, and no visible defect was observed in the mutant (Supplementary Figures 3A,B). The results of c-myb expression from 2 to 4 dpf showed that a decreased c-myb expression was detectable starting from 3 dpf ( Figure 3A and Supplementary Figures 4A-C). Taken together, the primitive hematopoiesis was unaffected while the definitive hematopoiesis was impaired in the ltv1 ∆14/∆14 mutant, probably due to the reduced HSPCs.
To confirm whether ltv1 mutation is indeed responsible for the mutant phenotypes observed, zebrafish wild type and mutant form ltv1 mRNAs were used for rescue experiments. At 3 dpf, both the smaller liver and reduced HSPC phenotypes in the ltv1 ∆14/∆14 mutant were rescued by zebrafish wildtype ltv1 mRNA efficiently but not by the mutant form ( Supplementary Figures 5A-D).
Proliferation of Exocrine Pancreas Progenitor Cells and Hematopoietic Stem and Progenitor Cells in ltv1 ∆14/∆14 Mutant Embryo Was Significantly Reduced
Disrupted cell proliferation and/or enhanced apoptosis may account for the digestive organs and hematopoiesis defects. The terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay revealed no apoptotic cell in the pancreas region in sectioned ltv1 ∆14/∆14 mutant and sibling at 3 dpf, indicating that apoptosis is not the reason of the dysplastic exocrine pancreas in the mutant (Supplementary Figure 6A). In order to detect the level of cell proliferation, phospho-Histone H3 (pH3) immunostaining and bromodeoxyuridine (BrdU) labeling experiments were performed in mutants and siblings on ptf1a:gfp background at 2 dpf. In ltv1 ∆14/∆14 mutants, both pH3 and BrdU-labeled ptf1a + cells were significantly reduced after normalizing for total pancreatic cells ( Figures 4A,B,A' ,B'). Thus, the impaired exocrine pancreas in ltv1 ∆14/∆14 mutants would be most likely due to the decreased cell proliferation of the exocrine pancreas progenitor cells.
Consistent with the results observed in the exocrine pancreas, the TUNEL assay revealed similar apoptotic levels of HSPCs in the CHT between ltv1 ∆14/∆14 mutants and siblings at 2.5 dpf (Supplementary Figures 6B,C). The proliferation of HSPCs was also reduced in ltv1 ∆14/∆14 mutants, as indicated by the decreased pH3 and BrdU signals of HSPCs in the CHT at 2.5 dpf (Figures 4C,D,C',D'). Thus, the defects in definitive hematopoiesis were most likely attributed to decreased proliferation of HSPCs, instead of cell death.
ltv1 Expression Was Enriched in Digestive Organs During Embryogenesis
To investigate the reason behind the tissue specificity of the observed mutant phenotypes, the expression pattern of ltv1 in zebrafish embryos was examined by WISH using the antisense ltv1 RNA probe. The sense probe was used as a negative control. At the one-cell stage, ltv1 mRNA was easily detected (Figures 5A,E), which suggested that ltv1 is a maternally expressed gene. From 50% epiboly to 13 hpf, ltv1 transcripts were distributed ubiquitously (Figures 5B,C), while no positive staining was detected for the sense probe (Figures 5F,G). At 24 hpf, ltv1 transcripts were found in the eyes and pharyngeal primordia (Figures 5D,H,I). Between 48 and 72 hpf, ltv1 transcripts were abundant in the eyes, liver, intestine, and pancreas (Figures 5J,K). At 96 and 120 hpf, ltv1 was highly expressed in the exocrine pancreas (Figures 5L,M). The digestive organ and pharyngeal primordia-specific expression pattern of ltv1 was consistent with the hypoplastic phenotypes of these tissues during embryogenesis in the mutant. HSPCs and differentiated hematopoietic lineages were also severely affected in ltv1 ∆14/∆14 mutants; however, no clear ltv1 mRNA signal was detected in the aorta-gonad-mesonephros (AGM) or CHT by WISH using the ltv1 RNA probe from 24 to 120 hpf.
Ribosome Biogenesis Was Disrupted in ltv1 ∆14/∆14 Mutant Embryo
In eukaryotic cells, the 28S, 18S, and 5.8S rRNAs are cleaved by various nucleases from a single primary transcript, known as the pre-rRNA. It was reported that deletion of ltv1 could lead to aberrant processing of 18S rRNA in yeast, fruit fly, and human cells and accumulation of its precursor, 20S rRNA (yeast) or 21S rRNA (fruit fly and human cells), implying a conserved role of ltv1 in 18S rRNA processing. To test whether this is also the case in zebrafish, Northern blot was used to analyze rRNA processing using probes that hybridize to the ETS, ITS1, ITS2 (ETS/ITS: external/internal-transcribed spacer region), and 18S rRNA (Azuma et al., 2006). The ETS, ITS1, and ITS2 probes could mark the rRNA precursor, the intermediates and some minor products (Figure 6A). The ETS and ITS1 probes revealed that the full-length precursor "a" accumulated significantly in ltv1 ∆14/∆14 mutants, indicating the disruption of rRNA processing (Figure 6B). The "d" intermediate, which might correspond to the 20S rRNA in yeast or 21S rRNA in human cells, accumulated while the "c" decreased, showing the impaired 18S rRNA processing in the mutants (Figure 6B). Consistently, the amount of 18S rRNA declined slightly in the mutants (Figure 6C). Although the "e" increased slightly, the "b" showed no obvious difference in the mutants (Figure 6B), suggesting intact 28S rRNA processing in ltv1 ∆14/∆14 mutants. To quantify the amounts of 18S and 28S rRNA, E-Bioanalyzer analysis was performed, and the results showed that the 18S rRNA was obviously reduced in ltv1 ∆14/∆14 mutants at 5 dpf, while the amount of 28S rRNA remained comparable (Figure 6D). The altered quantity of 18S rRNA therefore caused an imbalance of the 28S/18S ratio in mutants, which was 3.1, compared with 2.0 in siblings (Figure 6E). Consistent with the rRNA quantification data, the ribosome fractionation results showed that the amount of 40S subunits and 80S monosomes decreased, while that of the 60S subunits increased about twofold (Figures 6F,G).
Phenotypes in ltv1 ∆14/∆14 Mutant Were Independent of P53
[Figure 4 (C,D,C',D') legend: Representative confocal images of pH3 immunostaining (C) and BrdU labeling (D) in ltv1 ∆14/∆14; runx1:en-gfp mutants and siblings at 2.5 dpf. (C',D') The percentage of pH3+ (C', siblings, N = 6; mutants, N = 7) and BrdU+ (D', siblings, N = 6; mutants, N = 11) cells within the runx1+ population in ltv1 ∆14/∆14 mutants and siblings at 2.5 dpf. Bars represent means with SD. White arrow: merged cell. Scale bar: 10 µm.]
A growing number of studies suggest that P53 may play a vital role in phenotypes relevant to ribosome dysfunction (Armistead and Triggs-Raine, 2014). In ltv1 ∆14/∆14 mutants, there was a clear increase in the expression level of p53 at 3 dpf, as indicated by WISH using a p53 probe which can detect both p53 and ∆113p53 (Figure 7A). In addition, the P53 protein level was clearly upregulated in mutants (Figure 7B). The mRNA levels of ∆113p53 and p21, downstream genes of p53, were then evaluated by quantitative polymerase chain reaction (PCR). Consistently, both ∆113p53 and p21 mRNA levels were increased significantly in the ltv1 ∆14/∆14 mutant, which further suggested activation of the p53 pathway (Figure 7C). To determine if downregulation of p53 could rescue the mutant phenotypes, knockdown of p53 was achieved by p53 ATG morpholino injection. The increased P53 expression was attenuated in the mutant, which validated the efficacy of the p53 knockdown (Figure 7B). However, neither the smaller liver nor the reduced HSPC phenotype in the ltv1 ∆14/∆14 mutant could be alleviated by p53 knockdown (data not shown), suggesting that the mutant phenotypes were independent of P53.
DISCUSSION
Ltv1 is a non-ribosomal protein essential for 18S rRNA processing in yeast, fruit fly, and human cells (Seiser et al., 2006; Tafforeau et al., 2011; Ghalei et al., 2015; Kressler et al., 2015). In this report, Ltv1 was demonstrated to be functionally conserved in zebrafish, as illustrated by the disrupted 18S rRNA processing in the ltv1 mutants. Deletion of zebrafish ltv1 resulted in defective growth of the liver, exocrine pancreas and intestine, abnormal craniofacial structures, and impaired development of HSPCs, definitive erythrocytes, myeloid cells, and lymphocytes. These phenotypic features resembled some specific ribosomopathy models in zebrafish studies (Provost et al., 2012; Carapito et al., 2017; Oyarbide et al., 2019, 2020). Ltv1 is an assembly factor that can facilitate the incorporation of Rps3 and Rps10 into the small ribosomal subunit in yeast. Ltv1 deficiency led to mispositioned Rps3 in ribosomes (Collins et al., 2018). In zebrafish, knockdown of rps3 could result in morphological defects, including reduced head size, pericardial edema, and erythropoiesis failure (Yadav et al., 2014). These phenotypic features are consistent with those in ltv1 ∆14/∆14 zebrafish mutants. Hence, it is interesting to investigate whether ltv1 functions through rps3 in digestive system development and hematopoiesis. However, it should be noted that the morphological defects of rps3 morphants could be rescued by knockdown of p53, while the erythroid failure could not be alleviated (Yadav et al., 2014). In ltv1 ∆14/∆14 mutants, none of the defects in morphology, digestive organogenesis, or hematopoiesis could be rescued by p53 knockdown. It was reported that Rps3 could directly interact with P53 and MDM2 (Yadavilli et al., 2009). This may underlie the P53-dependent recovery of the morphological deformities of the rps3 morphants. Further genetic investigation is required to validate the relationship between ltv1 and rps3. Ribosomes from Ltv1-deficient yeast harbored less Rps10 protein (Collins et al., 2018). Rps10 was found to be mutated in 6.4% of patients with DBA (Doherty et al., 2010). To the best of our knowledge, no zebrafish mutant of rps10 has been constructed. It will be meaningful to analyze the phenotypes of rps10 mutants and investigate the genetic interaction among ltv1, rps3, and rps10 in zebrafish.
Why does dysfunction of a macromolecule as ubiquitous and essential as the ribosome cause ribosomopathies with defects only in selective tissues? Xue and Barna (2012) proposed that the tissue-specific expression of ribosome biogenesis genes is one cause. In agreement with this point, ltv1 was found to be highly expressed in digestive organs during embryogenesis, which may partially explain the phenotypes of ltv1 mutants in these organs. However, despite no detectable expression of ltv1 in the AGM or CHT, HSPCs and differentiated hematopoietic lineages were severely impaired. Some zebrafish models of ribosomopathy, such as sbds (Provost et al., 2012), rpl11 (Danilova et al., 2011), rpl24, and rpl35a (Yadav et al., 2014), all displayed hematopoietic defects at different levels. While all the respective genes were highly expressed in digestive organs, no description of gene expression in the AGM or CHT was reported (Venkatasubramani and Mayer, 2008; Provost et al., 2013), similar to what was observed for ltv1. One possible explanation is that deficiencies in these genes may impair hematopoiesis indirectly, probably through impairment of the HSPC niche. Like ltv1, zebrafish nol9 encodes a non-ribosomal protein, and nol9 mutants displayed defects in both digestive organs and hematopoiesis. Transmission electron microscopy (TEM) analysis revealed great changes in the CHT niche in nol9 mutants, including changes in the extracellular matrix (ECM) and endothelial cells (Bielczyk-Maczynska et al., 2015).
It has been demonstrated here that Ltv1 is essential for ribosome biogenesis, digestive system organogenesis and hematopoiesis. Among the existing zebrafish models with deficient ribosome biogenesis, most exhibited hypoplasia of the liver, pancreas, and intestine, including nil per os (npo) (Mayer and Fishman, 2003), titania (tti) (Boglev et al., 2013), bms1-like (bms1l) (Wang et al., 2012), and nucleolar protein with MIF4G domain 1 (nom1) (Qin et al., 2014), while some displayed defects in definitive hematopoiesis, for example, kri1l (Jia et al., 2015). To the best of our knowledge, only one mutant, nol9 (Bielczyk-Maczynska et al., 2015), has been described with both phenotypes. Interestingly, in line with ltv1 ∆14/∆14 mutants, nol9 mutants showed arrested development of the exocrine pancreas and HSPCs as a result of a reduced proliferative rate. Although nol9 and ltv1 are involved in 28S and 18S rRNA processing, respectively, the similar phenotypes in these two models suggest a conserved function of ribosome biogenesis genes during embryogenesis.
Several studies revealed that excess free ribosomal proteins, produced while ribosome biogenesis was impaired, could outcompete P53 in binding the E3 ubiquitin ligase MDM2, consequently protecting P53 from degradation (Zhang et al., 2003). In some ribosome biogenesis-deficient models, phenotypes could be rescued by inhibition of P53 (Zhang et al., 2012; Bielczyk-Maczynska et al., 2015; Ear et al., 2016). However, in some other cases, P53-independent cell apoptosis and cell proliferation arrest have also been described (Provost et al., 2012; Boglev et al., 2013; Qin et al., 2014; Yadav et al., 2014; Jia et al., 2015). Although P53 protein and the target genes ∆113p53 and p21 were upregulated in ltv1 ∆14/∆14 mutants, knockdown of p53 could not rescue the defects of the liver or HSPCs, suggesting that a P53-independent mechanism was involved, which agrees with the fact that the abnormal rRNA processing in LTV1-deficient human cells was P53 independent (Tafforeau et al., 2011). In addition to P53, some other pathways were reported to be involved in ribosome-deficient zebrafish models. In zebrafish kri1l mutants, an increased level of autophagy was observed, and blocking autophagy could significantly restore the definitive hematopoiesis (Jia et al., 2015). In contrast, inhibition of autophagy reduced the lifespan of zebrafish mutants of the pwp2h gene, which encodes a protein promoting small ribosomal subunit processing. In pwp2h mutants, autophagy was considered a survival mechanism triggered by ribosomal deficiency (Boglev et al., 2013). Whether autophagy is involved in ltv1 function in zebrafish requires further validation. Rpl35a is mutated in 3.3% of patients with DBA (Farrar et al., 2008). In zebrafish rpl35a knockdown embryos, upregulation of mammalian target of rapamycin (mTOR) could rescue the morphological defects and the erythroid failure (Yadav et al., 2014). This case suggested that mTOR functions downstream of rpl35a. Urb1, a protein promoting large ribosomal subunit assembly in zebrafish, was demonstrated to play a role downstream of mTOR in digestive organ formation (He et al., 2017). It is also possible that the mTOR pathway is involved in ltv1-dependent digestive organ development and hematopoiesis.
In ltv1 ∆14/∆14 mutants, proliferation was inhibited and p53 was activated. It was reported that activation of p53 could lead to cell cycle arrest via p21 upregulation (Georgakilas et al., 2017). However, this is possibly not the case in ltv1 ∆14/∆14 mutants, because inhibition of p53 could not restore the growth of the liver and HSPCs. The P53-independent mechanism underlying the cell cycle arrest might be the key route through which ltv1 functions in zebrafish. Pescadillo is a protein that plays an essential role in 28S rRNA processing, and zebrafish pescadillo-deficient embryos displayed an underdeveloped liver, gut, and craniofacial cartilage (Allende et al., 1996; Lapik et al., 2004; Provost et al., 2012). Cyclin D1 is indispensable for cell proliferation. As a cyclin-dependent kinase inhibitor, P27 can decrease the catalytic activity of cyclin D1 through direct interaction, thereby leading to cell cycle arrest (Razavipour et al., 2020). It was reported that the cell cycle arrest in pescadillo-deficient cells was due to cyclin D1 downregulation and activation of P27, which was independent of P53 (Li et al., 2009). In erythroid cell lines, ribosome synthesis defects could lead to a decreased level of PIM1, a kinase implicated in cell proliferation. The reduction of PIM1 induced cell cycle arrest via accumulated P27 in a P53-independent way (Iadevaia et al., 2010). It would be interesting to investigate whether the ltv1 ∆14/∆14 mutants share with these two cases the same P27-regulated, P53-independent mechanism underlying the inhibition of cell proliferation.
It should be highlighted that the phenotypes of ltv1 ∆14/∆14 and sbds-deficient embryos share some similar features, such as the affected exocrine pancreas and hematopoiesis (Burroughs et al., 2009). Although the Sbds protein plays a role in 60S ribosomal subunit biogenesis (Menne et al., 2007), Ltv1 is required for the assembly of the 40S ribosomal subunit (Ameismeier et al., 2018; Collins et al., 2018). The expression pattern of sbds is very similar to that of ltv1 in zebrafish (Venkatasubramani and Mayer, 2008). In line with ltv1 ∆14/∆14 larvae, morpholino knockdown of sbds in zebrafish causes defects in the exocrine pancreas, which could not be rescued by p53 downregulation, while the endocrine pancreas is normal (Provost et al., 2012). Together with other animal models with ribosome biogenesis dysfunction, the ltv1 zebrafish mutant not only provides a new candidate gene for the screening of ribosomopathies with unknown genetic causes but also serves as a tool to investigate the molecular and cellular mechanisms underlying the phenotypes of ribosomal gene deficiency. As a powerful model for chemical screening, the corresponding zebrafish mutants may be used for the identification of potential compounds for treating specific ribosomopathies.
Genomic DNA Extraction
Embryos or fish scales were lysed in the buffer (10 mM Tris-HCl, 50 mM KCl, 0.3% Tween-20, 0.3% NP40, and 1/10 volume proteinase K, Invitrogen, Waltham, MA, United States) at 55 °C for 12 h and then the reaction was inactivated by increasing the temperature to 95 °C for 20 min. The crude lysate could be used as the template for PCR directly.
Generation of ltv1 Mutants by Clustered Regularly Interspaced Short Palindromic Repeat/Cas9 System
The gRNA was designed to target a site in exon 7 of ltv1 at the sequence GGACAGTGCTCGGCTGGAGG (PAM site in italics). Zebrafish Cas9 mRNA and the ltv1 gRNA were synthesized as described (Chang et al., 2013; Ear et al., 2016). At the one-cell stage, Cas9 mRNA (300 pg) and gRNA (50 pg) were injected into wild-type embryos. At 36 hpf, about 10 embryos were pooled and lysed, and the primers (ltv1 fw: 5′-TGGTAAGGAGTCTGATTATC-3′ and ltv1 rv: 5′-CCAATCCATGTGATGCATAC-3′) were used to amplify the DNA fragment harboring the gRNA target site. PCR products were subjected to sequencing to identify potential indels in the region. Upon detection of mutations, the rest of the embryos were raised to adults (F0). Pooled F1 embryos obtained by crossing F0 with wild-type fish were examined for indels in the ltv1 gene using the PCR method described above. The nature of the indels was determined by sequencing, and mutant allele-specific primers were then designed according to the specific indels.
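For routine genotyping of pooled clutches, a small script can flag amplicons that still carry the unedited target sequence and give a rough indel size from the amplicon length. The sketch below is such a helper under stated assumptions; the example sequences are hypothetical and it is not part of the authors' protocol.

```python
# Minimal genotyping helper: check whether a sequenced amplicon still carries the
# unedited gRNA target and estimate the indel size from the amplicon length.
TARGET = "GGACAGTGCTCGGCTGGAGG"  # gRNA target in ltv1 exon 7, as given in the text


def has_intact_target(amplicon: str) -> bool:
    """True if the amplicon still contains the unedited target sequence."""
    return TARGET in amplicon.upper()


def indel_size(amplicon: str, reference: str) -> int:
    """Crude indel estimate from the length difference (negative = deletion)."""
    return len(amplicon) - len(reference)


if __name__ == "__main__":
    # Hypothetical sequences for illustration only, not the real ltv1 locus.
    reference = "ACGTACGTACGTACGTACGT" + TARGET + "TGGACGTACGTACGT"
    delta14 = reference.replace(TARGET[3:17], "")  # mimic a 14-bp deletion over the target
    for name, seq in [("reference", reference), ("putative delta14", delta14)]:
        print(name, "| intact target:", has_intact_target(seq),
              "| indel vs reference:", indel_size(seq, reference))
```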
Mutant Rescue
Zebrafish wild-type ltv1 cDNAs were cloned into the pCS2+ vector. The mutant-form cDNA was obtained by site-directed mutagenesis. Microinjection was performed at the one-cell stage; 0.5 ng of in vitro transcribed zebrafish wild-type ltv1 mRNA or ltv1 ∆14 mRNA was used to try to rescue the mutant phenotypes. At 3 days post injection (dpi), embryos were fixed for WISH using the liver-specific fabp10 or c-myb probe.
Bromodeoxyuridine Labeling
For BrdU labeling, BrdU (Roche Diagnostics, Indianapolis, IN, United States; 1 nl, 30 mM) was injected into the pericardium of embryos. Subsequently, the embryos were incubated for 1.5-2 h at 28.5 °C. After three washes with PBST, the embryos were fixed using 4% PFA. After being treated with 2 N HCl for 1 h, the embryos were incubated with mouse anti-BrdU (Roche Diagnostics, United States; 1:50) and goat anti-GFP (Abcam, United States; 1:400) antibodies at 4 °C overnight, and finally visualized with Alexa Fluor 555 donkey anti-mouse (Life Technology, Carlsbad, CA, United States; 1:400) and Alexa Fluor 488 donkey anti-goat (Life Technology, United States; 1:400) antibodies.
TUNEL Assay
Whole-embryo and cryosectioned samples were prepared for the TUNEL assay, and the in situ cell death detection kit, TMR red (Roche Diagnostics, United States), was used following the provided manual.
Northern Blot
Total RNA was extracted from mutant and sibling embryos at 120 hpf using TriPure Isolation Reagent (Roche Diagnostics, United States). The DIG-labeled DNA probes were PCR-amplified using previously described primers (Azuma et al., 2006). Equal amounts of total RNA were subjected to electrophoresis. The probe hybridization and detection were carried out as previously described (Chen et al., 2005).
Quantification of 18S and 28S Ribosomal RNA
Total RNA was extracted from ltv1 ∆14/∆14 mutants and siblings at 5 dpf. Then, the RNA was subjected to E-Bioanalyzer (Agilent 2100) analysis according to the manual.
Statistical Methods
The experimental data were analyzed in GraphPad Prism 6.0. The unpaired Student's t-test was used for comparing the means of two groups.
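As a Python equivalent of the GraphPad analysis, the unpaired two-sample t-test can be reproduced with SciPy. The numbers below are hypothetical placeholders, not the published measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical percentages of pH3+ cells; the values are placeholders, not the published data.
siblings = np.array([12.1, 10.8, 13.5, 11.9, 12.6, 10.2])
mutants = np.array([6.3, 7.1, 5.8, 6.9, 7.4, 6.0, 5.5])

t_stat, p_value = stats.ttest_ind(siblings, mutants)  # unpaired two-sample t-test
print(f"siblings: {siblings.mean():.2f} ± {siblings.std(ddof=1):.2f} (mean ± SD)")
print(f"mutants:  {mutants.mean():.2f} ± {mutants.std(ddof=1):.2f} (mean ± SD)")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```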
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The animal study was reviewed and approved by Institutional Animal Care and Use Committee in Southwest University, China.
AUTHOR CONTRIBUTIONS
HH and HR designed the project. CZ, RH, XM, JC, and XH performed the experiments. CZ and HH wrote the manuscript. LLi and LLu commented on the manuscript. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
We thank Li Jan Lo (Zhejiang University, China) for suggestion on the manuscript, LLi (Southwest University, China) for blood cell markers, and Bo Zhang and Jingwei Xiong (Peking University, China) for plasmids for CRISPR/Cas9 system.
704730/full#supplementary-material
Supplementary Figure 1 | Ltv1 is conserved among human, mouse and zebrafish. Alignment of Ltv1 protein sequence from human, mouse, and zebrafish. | 7,161 | 2021-10-07T00:00:00.000 | [
"Biology"
] |
An Object-Based Bidirectional Method for Integrated Building Extraction and Change Detection between Multimodal Point Clouds
Building extraction and change detection are two important tasks in the remote sensing domain. Change detection between airborne laser scanning data and photogrammetric data is vulnerable to dense matching errors, mis-alignment errors and data gaps. This paper proposes an unsupervised object-based method for integrated building extraction and change detection. Firstly, terrain, roofs and vegetation are extracted from the precise laser point cloud, based on “bottom-up” segmentation and clustering. Secondly, change detection is performed in an object-based bidirectional manner: Heightened buildings and demolished buildings are detected by taking the laser scanning data as reference, while newly-built buildings are detected by taking the dense matching data as reference. Experiments on two urban data sets demonstrate its effectiveness and robustness. The object-based change detection achieves a recall rate of 92.31% and a precision rate of 88.89% for the Rotterdam dataset; it achieves a recall rate of 85.71% and a precision rate of 100% for the Enschede dataset. It can not only extract unchanged building footprints, but also assign heightened or demolished labels to the changed buildings.
Introduction
Object extraction and change detection are two of the most important tasks in remote sensing [1,2]. Object extraction derives topographic information from one single epoch, whereas change detection compares remote sensing data from two epochs to derive change information. Airborne photogrammetry and airborne laser scanning (ALS) are two widely-used techniques with respect to the acquisition of remote sensing data over urban scenes. Remote sensing data, acquired from different techniques, vary in data dimensionality, accuracy, noise level and data gap level [3,4].
A comparison of the data quality between ALS and airborne photogrammetry is given in [5,6]. The main product from laser scanning is usually three-dimensional (3D) point clouds. In contrast, airborne photogrammetry produces geo-referenced imagery, point clouds, Digital Surface Models (DSMs) and orthoimages. In photogrammetry, point clouds are generated through dense image matching (DIM) [7,8]. Generally, ALS point clouds are more accurate than DIM point clouds in terms of vertical accuracy. The former usually contain less noise than the latter. However, DIM provides not only geometric information in point clouds or DSMs, but also spectral information in the orthoimage. Previous work suggests that the spectral information from the orthoimage is complementary to the geometric information for object extraction or change detection tasks [9][10][11]. That is, even though the accuracy and noise level of DIM data are less satisfying than those of ALS data, the spectral information can fill the gap to some extent.
Extracting topographic objects and detecting topographic changes in urban scenes are fundamental tasks in urban planning and environmental monitoring. This paper aims to extract building footprints and detect building changes between ALS data and photogrammetric data. This is applicable to the situation of several mapping agencies, where laser scanning data are already available as archive data, while aerial images are routinely acquired every one or two years for updates. When the remote sensing data, available at different epochs, are heterogeneous (i.e., with different platforms and sensor characteristics), such heterogeneity makes change detection challenging.
The tasks of building extraction and change detection are closely associated. Building changes include new buildings, demolished buildings and heightened buildings. Tran et al. [1] suggest that most change detection methods apply two steps: Firstly, extract objects from both epochs; secondly, compare the two epochs for change information. In this case, object extraction is explicitly implemented before change detection. Since change detection aims to detect the change "from object A to object B", it is necessary to identify what the object is in both epochs in an explicit or implicit manner. Moreover, the accuracy of object extraction directly affects the subsequent change detection results. Therefore, this paper aims at integrating building extraction and change detection in a single workflow. The contributions are as follows: • We propose an unsupervised method for integrated building extraction and change detection between ALS data and photogrammetric data. The outputs contain not only building footprints, but also building change information. This method fuses geometric and spectral features for object extraction, and applies bidirectional object-based analysis for change detection.
•
We propose the Vertical Plane-to-Plane Distance (VPP) measure to indicate the height change between two heterogeneous point clouds. This measure proves effective in indicating vertical building changes.
•
The acquired building footprints and change information are visualized in a single map. The experimental results on two data sets are evaluated at the pixel level and object level. Despite data noise and the differences between multimodal point clouds, the proposed method is capable of extracting buildings and detecting changes with high accuracy.
This paper is organized as follows: Section 2 reviews the related work on point cloud-based semantic segmentation and change detection. Section 3 presents our method. Section 4 provides details on the study areas and experimental settings. Section 5 presents the results and discussion. Section 6 concludes the paper.
Point Cloud Classification
Point cloud classification refers to assigning a category label to each point in a point cloud. Point cloud classification methods can be divided into four categories: Rule-based classification, classification based on handcrafted features, classification with contextual features and deep learning-based classification.
Rule-based classification takes handcrafted features as geometric constraints and statistical rules [12][13][14][15][16][17]. Vosselman et al. [13] extract parameterized shapes (i.e., planes, spheres, cylinders) from the laser points using 3D Hough transform. For example, planes are extracted by 3D Hough transform aided by normal vectors calculated on the point cloud surface. A sequence of surface growing, connected segment merging and majority filtering is applied to cluster the laser points into ground, vegetation and buildings. After parameterized shapes are extracted, the classification is implemented on the extracted shapes instead of individual 3D points. Axelsson [18] classifies the ALS point clouds into ground and non-ground points using geometry-based analysis: First, the lowest point clouds are selected as seed ground points. Then, the neighboring points are added to the initial ground surface if their distances to the surface, and angles to the plane, meet certain criteria.
The supervised classification, based on handcrafted features, is the most widely-used classification method. The principle is to extract multiple features from the point cloud and then use a classifier for recognition. Compared with the rule-based methods, its advantage is that the complex classification rules and thresholds are automatically designed by the classifier. Guo et al. [19] extract echo features and full-waveform features from the laser points, and classify the features with Random Forests. Weinmann et al. [20] comprehensively analyze the effects of different feature combinations, neighborhood sizes for feature extraction, classifiers and feature selection. Hackel et al. [21] propose an efficient point cloud classification method, which takes full account of the randomness of the point cloud distribution, uses K-nearest point search to determine the optimal proximity distance, and extracts features based on feature vectors. However, the main question with these methods is how to select contributive features and proper classifiers.
In order to make the classification map smooth and preserve the details, contextual information is added to the classification model [22]. In this manner, the mutual influence of neighboring objects is explicitly incorporated into the model. Niemeyer et al. [23] use the Conditional Random Field (CRF) framework to classify laser points. The unary term of the CRF is calculated by a Random Forest with point features, and the binary term is built from the relationship features of the extracted neighboring points. The results are improved when contextual features are incorporated into the model. Vosselman et al. [24] propose a classification method based on a CRF model with the features of a single segment and the relationships between two segments as inputs. The classification results are better than those of the point-based classification with Random Forest, due to the full consideration of neighborhood features.
The advantage of deep learning-based classification is that it exempts the process of manual feature extraction, feature selection and classification. Deep learning-based classification is divided into five categories based on different point cloud representations: Multi-view image, 2.5D DSM, voxel, raw point cloud and point cloud graph. Among the five categories, multi-view image, 2.5D DSM and voxel-based methods are indirect methods, where the Convolutional Neural Networks (CNNs) are working on multi-view images [25,26], voxels [27] or 2.5D [28,29] data rather than raw point clouds. The classification results are then transformed to the raw point clouds. Obviously, point cloud transformation requires more computational effort and causes information loss, which hinders accurate classification. Deep learning can also work directly on the raw point cloud or graphs [30,31]. Qi et al. [32] propose PointNet for the classification and recognition of point clouds. The basic idea is to use Multi-layer Perceptron (MLP) to extract the point cloud features layer by layer, and then connect the features for classification. PointNet++ [33] differs from PointNet in that it extracts not only global features but also multi-scale local features.
Although deep learning-based methods exempt the selection of features and classifiers, a large number of training samples are still required and the hyper-parameters in the neural networks should be determined, which are labor-intensive and complicated. This paper aims to extract objects with an unsupervised method based on geometric features.
Point Cloud-Based Change Detection
Change detection is the process of identifying differences in an object by analyzing it at different epochs [34]. Change detection can be performed either between 3D data or by comparing 3D data of a single epoch to a 2D map [12,35]. Zhan et al. [36] classify the change detection methods into two categories, based on the workflow: Post-classification comparison and change vector analysis.
In post-classification comparison, independent classification maps are required for both epochs. Change detection is then performed by comparing the response at the same location between the two epochs. When the data of two epochs are of different modalities, both training and testing have to be performed at each epoch separately, thus requiring a large computational effort. Vosselman et al. [12] propose a method to update 2D topographical maps with ALS data. The ALS data were first segmented and classified. The building segments were then matched against the building objects in the maps to detect the building changes.
Change vector analysis relies on extracting comparative change vectors between the two epochs and fuses the change indicators in the final stage [37][38][39]. Change vector analysis conducts a direct comparison between two epochs, which is different from post-classification comparison. The most widely-used change vector analysis between 3D data sets is DSM surface differencing, followed by point-to-point or point-to-mesh comparison [40][41][42]. However, traditional change vector analysis is sensitive to data problems and usually causes many false detections, especially when the data of two epochs are in different modalities.
Du et al. [43] detect building changes in the outdated DIM data using new laser points, which is the reverse setup compared with our work. Height difference and gray-scale dissimilarity are used with contextual information to detect changes in the point cloud space. Finally, the preliminary changes are refined based on handcrafted features. The limitation of the approach raised by the authors is that the boundary of changed buildings could not be determined accurately. Additionally, the method requires human intervention and prior knowledge in multiple steps. Zhou et al. [44] propose a two-step method to detect and update building changes between ALS data and multi-view images. Firstly, LiDAR-guided edge-aware dense matching is proposed to derive accurate partial changes. Secondly, hierarchical dense matching is applied to derive complete changes and update 3D information. This method omits some new or demolished buildings due to the failure of disparity extraction in the repetitive texture.
Recently, deep CNNs have demonstrated their superior performance in image-based change detection ( [36,45,46]). Our previous work [29] applies Siamese CNN in change detection between ALS data and DIM data. The two types of point clouds are converted to raster images and are then fed into a Siamese CNN for change detection. However, this method can only extract zigzag change boundaries instead of fine object boundaries. This method also requires many training samples, which may not be available in some applications. In contrast, this paper aims for an unsupervised method for integrated building extraction and change detection. There is no need for large training samples and sharp change boundaries can be obtained.
Materials and Methods
This paper aims at an object-based method for integrated building extraction and change detection. The inputs and outputs of our method are shown in Figure 1. The old epoch contains an ALS point cloud. The new epoch contains a DIM point cloud and orthoimage. The output is the integrated map which contains building footprints and change information. To be specific, this paper detects three types of building changes in real scenarios: building heightened, building new and building demolished, as shown on the right of Figure 1. Building heightened indicates that a building exists in both epochs but has changed in height due to construction work. Building new indicates that no building exists in the old epoch and a building is newly built in the new epoch. Building demolished indicates that a building exists in the old epoch and is demolished in the new epoch.
The proposed method for integrating building extraction and change detection is shown in Figure 2. Suppose that the ALS and DIM point clouds are already registered to the uniform world coordinate [47]. The proposed method is designed based on the characteristics of ALS data and DIM data. ALS point cloud and DIM point cloud show heterogeneous characteristics. Object-based change detection is more robust to point cloud noise and data gaps compared to surface differencing method [41]. The major steps are object extraction and bidirectional object-based analysis.
ALS point clouds are usually more precise and contain less noise than DIM point clouds. Therefore, point cloud segmentation and object extraction are expected to perform well on the ALS data. Object extraction provides initial building footprints, terrain and vegetation locations. When the ALS data are compared to the DIM data on these building footprints, unchanged buildings, heightened buildings and demolished buildings are detected. The remaining regions may still contain newly-built buildings, where buildings do not exist in the old epoch but appear in the new epoch. Therefore, only in the remaining regions, new buildings are detected by comparing the DIM data to the ALS data. The four major steps are as below:
•
Step I: The method starts from the ALS point cloud and performs point cloud filtering and surface-based segmentation. Terrain (T), roof segments (R) and vegetation (V) are extracted from the laser points. The whole study area is denoted as A and the remaining irrelevant points as O, so that A = T ∪ R ∪ V ∪ O (Equation (1)).
•
Step II: By comparing the ALS point-based segments to the DIM points, the unchanged building (UB), unchanged terrain (UT), unchanged vegetation (UV), heightened building (HB) and demolished building (DB) are detected. This step is named "Change detection ALS->DIM" since the ALS data are used as the reference data. Then the complementary set (CS) of the study area A is calculated as CS = A − (UB ∪ UT ∪ UV ∪ HB ∪ DB) (Equation (2)). The complementary set consists of uncertain regions where new buildings might be detected in the following steps.
•
Step III: In the complementary set, the newly-built buildings (NB) are detected by comparing the DIM data to the ALS data. This step is named "Change detection DIM->ALS" since the DIM data are used as the reference data.
•
Step IV: The newly-built building masks are post-processed by morphological operation. The change map (CM) is acquired by taking the union of heightened buildings (HB), demolished buildings (DB) and new buildings (NB), i.e., CM = HB ∪ DB ∪ NB (Equation (3)). The four key steps are explained in detail in the following four sub-sections. The proposed method not only extracts unchanged building footprints, but also detects building changes. The heightened buildings (HB) and demolished buildings (DB) are detected in Step II "Change detection ALS->DIM"; the new buildings (NB) are detected in Step III "Change detection DIM->ALS". All three types of building changes can be detected after this bidirectional change detection.
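As an illustration of how the four steps fit together, the following minimal sketch tracks the sets as boolean rasters with NumPy. All mask shapes and contents are hypothetical placeholders, and the extraction and comparison procedures described in the following sub-sections are stubbed out rather than implemented.

```python
import numpy as np

# Minimal bookkeeping sketch of Steps I-IV on a rasterized study area.
H, W = 200, 200
A = np.ones((H, W), dtype=bool)                 # whole study area

# Step I: object extraction from the ALS data (placeholder masks).
T = np.zeros_like(A); R = np.zeros_like(A); V = np.zeros_like(A)
R[50:80, 50:90] = True                          # pretend roof footprint
O = A & ~(T | R | V)                            # remaining irrelevant points, A = T ∪ R ∪ V ∪ O

# Step II: ALS -> DIM comparison on the extracted objects (placeholder results).
UB = np.zeros_like(A); UT = np.zeros_like(A); UV = np.zeros_like(A)
HB = np.zeros_like(A); DB = np.zeros_like(A)
UB[50:80, 50:70] = True; HB[50:80, 70:90] = True

# Complementary set: everything not explained by the ALS-referenced comparison.
CS = A & ~(UB | UT | UV | HB | DB)

# Step III: DIM -> ALS comparison inside CS only (placeholder result).
NB = np.zeros_like(A); NB[120:150, 30:60] = True
NB &= CS                                        # new buildings can only appear inside CS

# Step IV: final change map as the union of the three change types.
CM = HB | DB | NB
print("changed pixels:", int(CM.sum()))
```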
Object Extraction from ALS Point Cloud
Firstly, terrain, roofs and vegetation are extracted from the ALS points based on their geometric properties. The terrain points are usually low and form a smooth surface. The roofs are high and often planar or smooth surfaces. The vegetation canopy usually forms clusters with unordered normal vectors. The object extraction contains four steps: (1) point cloud filtering; (2) surface-based segmentation; (3) segment-based screening; and (4) connected component analysis. Point cloud filtering is to separate non-ground points from ground points. Surface-based segmentation is applied to extract planar or smooth segments from the non-ground points. Segment-based screening is applied to select roof segments. Connected component analysis is applied to extract vegetation clusters.
Progressive TIN Densification is used for ALS point cloud filtering [18]. The method proves effective and robust in filtering non-ground points while maintaining the local terrain details. The method first selects some initial topographic points as seed points. Then other points are included in or excluded from the terrain points based on the geometric relationship between the neighboring points and the initial topographic surface. The main steps are as follows: (1) Select the initial seed points. Construct a square grid with side length L, and select the lowest point in each grid cell as an initial seed point. Construct a Triangular Irregular Network (TIN) based on the seed points; (2) Iterate over each laser point until all the non-seed points have been considered. If its distance d to the nearest TIN surface and the included angles α i (i ∈ {1, 2, 3}) to the TIN surface are both less than their thresholds, it is classified as a ground point.
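To illustrate the filtering idea, the sketch below implements a single, simplified densification pass with SciPy's Delaunay triangulation: grid-lowest seed points, then a distance-and-angle test of every remaining point against its containing TIN facet. The real algorithm iterates and re-triangulates until no more points are added; the function name, parameter values and test data here are hypothetical, not the published implementation.

```python
import numpy as np
from scipy.spatial import Delaunay


def tin_filter_single_pass(points, cell=20.0, d_max=1.0, angle_max_deg=30.0):
    """One simplified densification pass: seed TIN from grid-lowest points, then
    accept points close to their containing facet in both distance and angle.
    points is an (N, 3) array; returns a boolean ground mask."""
    xy, z = points[:, :2], points[:, 2]
    keys = np.floor(xy / cell).astype(int)
    seeds = {}
    for i, k in enumerate(map(tuple, keys)):        # lowest point per grid cell
        if k not in seeds or z[i] < z[seeds[k]]:
            seeds[k] = i
    seed_idx = np.array(sorted(seeds.values()))
    ground = np.zeros(len(points), dtype=bool)
    ground[seed_idx] = True

    tri = Delaunay(xy[seed_idx])
    simplex = tri.find_simplex(xy)                  # containing facet of every point
    for i in np.where(~ground & (simplex >= 0))[0]:
        verts = points[seed_idx[tri.simplices[simplex[i]]]]
        n = np.cross(verts[1] - verts[0], verts[2] - verts[0])
        norm = np.linalg.norm(n)
        if norm == 0:                               # degenerate facet, skip
            continue
        n /= norm
        d = abs(np.dot(points[i] - verts[0], n))    # distance to the facet plane
        vecs = verts - points[i]                    # directions towards the three vertices
        sines = np.abs(vecs @ n) / np.linalg.norm(vecs, axis=1)
        angles = np.degrees(np.arcsin(np.clip(sines, 0.0, 1.0)))
        if d < d_max and np.all(angles < angle_max_deg):
            ground[i] = True
    return ground


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.c_[rng.uniform(0, 100, (2000, 2)), rng.normal(0, 0.2, 2000)]
    pts[:200, 2] += 10.0                            # pretend building points
    print("ground points found:", int(tin_filter_single_pass(pts).sum()))
```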
After filtering the ground points from the ALS point cloud, the remaining points are mainly building roofs, walls and vegetation; a small number of remaining points might be cars, railings, or street lights. Since our goal is to detect building footprints and building changes, the roof points need to be extracted. In real urban scenarios, most roof points can be characterized geometrically by planes or smooth surfaces [13]. A surface-based segmentation method is used for planar surface extraction. The method not only extracts complete planes, but also breaks curved surfaces into small planes. This is especially useful for non-planar roofs, for example a dome.
Surface-based segmentation is a "bottom-up" clustering method. Firstly, 3D Hough Transform is used to extract the "seed planes" from the point cloud. These seed planes are point segments located in the same plane, which are mainly roofs or walls. Due to the impact of point cloud noise or data gaps, a roof might be split into multiple plane segments. Then, the surface growing algorithm is used to analyze the points close to each "seed plane". If its distance to the nearest point on the plane is less than D and its distance to the fitted plane is less than D 0 , the neighboring point is added to this plane. After new points are added to the plane, the plane parameters are recalculated before testing the next point. After surface growing, the initial seed planes are expanded and large segments are obtained. Figure 3a is the initial ALS point cloud. Figure 3b shows the segmentation results after surface growing where different colors indicate different planes. Some scattered points or vegetation points are not segmented because they do not belong to any plane. These points are displayed in white.
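A minimal sketch of the surface-growing idea is given below, using a KD-tree for neighbor queries and a least-squares plane fit. For brevity the seed planes are taken from local neighborhoods rather than from a 3D Hough transform, and the function names, thresholds and test data are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree


def fit_plane(pts):
    """Least-squares plane through pts: returns (centroid, unit normal)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[-1]


def surface_growing(points, radius=1.0, d_plane=0.2, min_seed=10):
    """Grow planar segments from local seed planes; returns labels (-1 = unsegmented)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for start in range(len(points)):
        if labels[start] != -1:
            continue
        seed = [i for i in tree.query_ball_point(points[start], radius) if labels[i] == -1]
        if len(seed) < min_seed:
            continue
        c, n = fit_plane(points[seed])
        if np.abs((points[seed] - c) @ n).mean() > d_plane:   # neighbourhood not planar enough
            continue
        segment, queue = set(seed), list(seed)
        while queue:
            i = queue.pop()
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] == -1 and j not in segment and abs((points[j] - c) @ n) < d_plane:
                    segment.add(j)
                    queue.append(j)
            c, n = fit_plane(points[list(segment)])           # refit plane as the segment grows
        labels[list(segment)] = current
        current += 1
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    roof = np.c_[rng.uniform(0, 10, (500, 2)), 5.0 + rng.normal(0, 0.02, 500)]
    print("segments found:", surface_growing(roof).max() + 1)
```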
In addition, all the points that were not previously segmented are also discarded, i.e., the white points in Figure 3b. Roof extraction results are shown in Figure 3c. Comparison between Figure 3b and Figure 3c shows that both horizontal roofs and gable roofs can be extracted correctly. Small segments, such as car points and small vegetation segments are excluded. After segmentation, segment-based screening is applied to select roof segments. The planar segments in Figure 3b contains, not only real roof segments, but also wall segments and vegetation points that happen to be located in a plane. Segment-based features are calculated for each segment. Suppose that a segment contains N points, the coordinates of points are (X i , Y i , Z i ), i∈ [1, N]. The 3D coordinates are used to calculate a covariance matrix M and three eigenvalues λ 1 , λ 2 , λ 3 (λ 1 ≥ λ 2 ≥ λ 2 ). Some features calculated from the three eigenvalues can characterize the geometry of the segment [5]. These features are used to identify roof segments. In this paper, four features are used to select the roof segments: the segment size N, the inclination angle θ, the normalized height nH, and the residual of plane fitting (RPF) σ. Segment size, or number of points in this segment, is used to eliminate small segments. Inclination angle is used to eliminate wall segments. Normalized height is used to distinguish low segments from high segments. RPF is used to eliminate noisy segments, where Z i0 is the ground height interpolated from the neighboring ground points. While d i is the distance from each point to the fitted plane. A segment is regarded as roof only if it passes the check based on the four features. The thresholds for the features are set based on trial tests and ALS data quality. The threshold values will be given in the experimental setup section.
In addition, all the points that were not previously segmented are also discarded, i.e., the white points in Figure 3b. Roof extraction results are shown in Figure 3c. Comparison between Figure 3b,c shows that both horizontal roofs and gable roofs can be extracted correctly. Small segments, such as car points and small vegetation segments are excluded.
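The segment screening can be sketched as follows: the eigen-decomposition of the covariance matrix gives the plane normal, from which the inclination angle is derived, and nH and RPF follow the definitions above. The exact published forms of nH and RPF may differ slightly; the default thresholds below are taken from the experimental setup section, and the test data are hypothetical.

```python
import numpy as np


def roof_candidate(segment_xyz, ground_z, n_min=50, max_angle_deg=70.0,
                   min_nh=3.0, max_rpf=0.2):
    """Apply the four screening rules (size, inclination, normalized height, RPF)
    to one planar segment; returns True if the segment is kept as a roof."""
    n = len(segment_xyz)
    c = segment_xyz.mean(axis=0)
    cov = np.cov((segment_xyz - c).T)
    _, eigvecs = np.linalg.eigh(cov)               # eigenvectors, ascending eigenvalues
    normal = eigvecs[:, 0]                         # plane normal = smallest-eigenvalue direction
    theta = np.degrees(np.arccos(abs(normal[2])))  # inclination of the plane to the horizontal
    nh = float(np.mean(segment_xyz[:, 2] - ground_z))   # mean height above interpolated ground
    d = (segment_xyz - c) @ normal                       # point-to-plane residuals
    rpf = float(np.sqrt(np.mean(d ** 2)))                # residual of plane fitting
    return n >= n_min and theta <= max_angle_deg and nh >= min_nh and rpf <= max_rpf


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    flat_roof = np.c_[rng.uniform(0, 20, (300, 2)), 8.0 + rng.normal(0, 0.03, 300)]
    print(roof_candidate(flat_roof, ground_z=np.zeros(300)))  # expected: True
```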
The next step is to extract vegetation points from the unsegmented points in Figure 3b. Canopy points are adjacent to each other and form clusters. Canopy clusters can be easily extracted by the connected component analysis method [13]. This method takes the neighboring points into consideration and groups all the points within a certain distance into the same cluster. After that, small clusters with fewer points than a threshold, which is set according to the point cloud density, are considered other objects and discarded.
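A simple sketch of this clustering is shown below: points within a fixed distance of each other are grouped with a breadth-first search over a KD-tree, and small clusters are discarded. The distance and minimum-size values are placeholders rather than the paper's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree


def connected_components(points, max_dist=1.0, min_size=30):
    """Group points so that any two points closer than max_dist share a cluster;
    clusters smaller than min_size are discarded (label -1)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for start in range(len(points)):
        if labels[start] != -1:
            continue
        queue, members = [start], [start]
        labels[start] = current
        while queue:
            i = queue.pop()
            for j in tree.query_ball_point(points[i], max_dist):
                if labels[j] == -1:
                    labels[j] = current
                    queue.append(j)
                    members.append(j)
        if len(members) < min_size:
            labels[members] = -2                    # too small: mark for discarding
        else:
            current += 1
    labels[labels == -2] = -1
    return labels
```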
In the ideal case, each canopy forms a cluster of points. However, the canopy clusters extracted in Figure 3d show that some canopy clusters are fragmented due to missing data or low point density. Note that even if some vegetation points cannot be grouped and extracted in this step, our bidirectional change detection method can still guarantee a correct building change detection result. This is because these uncertain remaining regions are re-analyzed in the change detection from DIM to ALS step.
Change Detection: ALS -> DIM
Object extraction divides the ALS point cloud into terrain, roof, vegetation and other classes. When projecting the roof points onto the ground, building footprints are obtained in the ALS data. When a building exists in the old epoch, it could be unchanged, demolished or heightened compared to the DIM data. In this step, the roof points in the ALS data are taken as reference. The neighboring DIM points are compared to each ALS roof segment to detect possible building changes.
To make the change detection robust to point cloud noise and data gaps, the Vertical Plane-to-Plane Distance (VPP) D_p is proposed as an indicator to measure the change scale between an ALS roof segment and its corresponding DIM points. For each point (X_i, Y_i, Z_i), i ∈ [1, N] on the ALS roof segment, the elevation on the DIM surface at location (X_i, Y_i) is calculated. Since the DIM point cloud is relatively noisy, a cube with a side length of l is constructed at the center (X_i, Y_i), and the average height of all DIM points in this cube is taken as the DIM elevation Z_dim,i at (X_i, Y_i). Next, the vertical elevation change Z_dim,i − Z_i is calculated for each point on the ALS segment, and the average over all N points is taken as the VPP distance D_p. The segments with VPP distance D_p greater than a threshold are considered changed building segments; otherwise they are unchanged. Figure 4 shows the schematic diagram of the VPP distance measure between an old roof (blue) and a new roof (red). In contrast, the point-to-fitted-plane distance (PFP) is also widely used to represent the distance between two planes. The point-to-nearest-point distance (PNP) is calculated by taking the distance from each ALS point to its nearest DIM point and averaging over all the ALS points. The PNP distance is more prone to point cloud noise and is largely affected by data gaps, since the nearest neighbor may be far away or unrepresentative when points are missing. In our case, a building change means that the height changes in the vertical direction, so the VPP distance better reflects this practical meaning than the PNP or PFP distance.
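A sketch of the VPP computation under the description above (square neighborhood of side l, signed mean of the vertical differences) is given below; the sign convention and the handling of empty neighborhoods are choices made here, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree


def vpp_distance(als_segment, dim_points, l=1.0):
    """Mean signed vertical difference between the DIM surface and one ALS roof segment.

    For every ALS point, the DIM elevation is the mean height of the DIM points whose
    (X, Y) fall inside a square of side l centred on the ALS point; positive values
    suggest a heightened roof, negative values a demolished one."""
    tree = cKDTree(dim_points[:, :2])
    diffs = []
    for x, y, z in als_segment:
        idx = tree.query_ball_point([x, y], l * np.sqrt(2) / 2)   # circumscribed circle
        nb = dim_points[idx]
        nb = nb[(np.abs(nb[:, 0] - x) <= l / 2) & (np.abs(nb[:, 1] - y) <= l / 2)]
        if len(nb):
            diffs.append(nb[:, 2].mean() - z)
    return float(np.mean(diffs)) if diffs else np.nan
```

A segment would then be flagged as heightened or demolished when the magnitude of this value exceeds the 3 m height-change threshold mentioned in the experimental setup.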
The heightened buildings (HB), demolished buildings (DB) and unchanged buildings (UB) are detected based on VPP distance measure. Then unchanged terrain (UT) is detected based on the PFP distance for each terrain point in the ALS data. If the distance from a terrain point to the fitted DIM plane is less than a threshold, this laser point is classified into UT; otherwise it is uncertain and will be judged in the next step.
The unchanged vegetation (UV) is detected by identifying a point as vegetation in both epochs. The previous connected component analysis has detected vegetation in the ALS data. At the vegetation locations, whether the objects are still vegetation is judged with the normalized height (nH) and the normalized excess green index (nEGI). Object-based analysis is more robust in vegetation detection than point-based analysis, since the former considers a larger area and is less prone to data noise. For each vegetation cluster, whether it is changed or not is determined by the following steps: Firstly, the vegetation points from the same cluster are projected to the ground, and a bounding polygon is constructed with these points in horizontal space. The DIM points within this bounding polygon are used to verify whether they are vegetation or not. The nH and nEGI are calculated for each DIM point in this polygon and then averaged, where nEGI is computed from the R, G and B values (red, green and blue) of each orthoimage pixel. If nH and nEGI are both larger than their thresholds, the object in the DIM data is classified into vegetation and this is thus unchanged vegetation (UV).
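The vegetation check can be sketched as below. Since the exact nEGI formula is not reproduced in this text, the common normalized excess-green form (2G − R − B) / (R + G + B) is assumed; the nH threshold is likewise a placeholder, while the nEGI threshold of 0.1 follows the experimental setup section.

```python
import numpy as np


def negi(rgb):
    """Greenness index per pixel; the form (2G - R - B) / (R + G + B) is an assumption,
    since the exact nEGI definition is not reproduced in this text."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2.0 * g - r - b) / np.maximum(r + g + b, 1e-6)


def is_unchanged_vegetation(dim_heights, ground_z, rgb_pixels,
                            nh_thresh=3.0, negi_thresh=0.1):
    """Object-level check inside one ALS canopy polygon; nh_thresh is a placeholder,
    negi_thresh follows the 0.1 value given in the experimental setup."""
    nh = float(np.mean(dim_heights - ground_z))       # mean normalized height of the DIM points
    mean_negi = float(np.mean(negi(rgb_pixels)))      # mean greenness of the orthoimage pixels
    return nh > nh_thresh and mean_negi > negi_thresh
```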
Change Detection: DIM -> ALS
The remaining regions are obtained by taking the complementary set (CS) of UT, UV, UB, HB and DB (see Equation (2)). The CS is an uncertain region, where newly-built buildings (NB) are detected, based on the object-based analysis with the DIM data as reference. The key is to extract buildings in the remaining regions. The remaining regions contain, not only newly-built buildings, but also other disturbances. These disturbances are mainly caused by natural vegetation growth or building mis-registration errors. When a vegetation canopy is detected in the ALS data, its boundary is usually larger in the DIM data due to growth. Vegetation change detection in the previous step can only detect the overlapping vegetation regions, but the grown regions cannot be detected. Linear building boundaries might remain in the complementary set due to mis-registration errors between the two point clouds.
The appearance of the complementary set will be shown in the experimental result section. The complementary set shows that false alarms, such as vegetation boundary changes or building boundary changes, form a weak response in the CS map, whereas a real new building forms a strong response. Therefore, new buildings can be detected on the binary CS map based on point-based features and morphological operation. First, for each undetermined pixel on the CS map, nH and nEGI are applied to judge whether it is a building pixel or a vegetation pixel. The VPP distance is also calculated between the ALS data and DIM data to indicate height change. To make the two features and the VPP distance more robust to noise, they are calculated with all the neighboring points within a circular neighborhood. Pixels on the CS map are excluded if they meet the following criteria: (1) If a pixel is classified into vegetation in the new epoch based on nH and nEGI, it cannot be a newly-built building and is thus excluded. (2) If the VPP distance is smaller than a threshold, the pixel is not changed and is excluded.
In this way, many irrelevant pixels are excluded from the CS map. Next, the remaining pixels are further processed with morphological operation to extract newly-built buildings.
Morphological Operation
The remaining CS map contains not only strong responses for newly-built buildings, but also false alarms. The false alarms present the following patterns: small isolated clusters, elongated artefacts along building edges, and small holes in the new building masks. A combination of morphological closing and opening is applied to fill holes and eliminate small or elongated artefacts [48]. The proposed workflow is as follows. Firstly, process the binary CS map with morphological closing and opening in sequence; their thresholds are T_close and T_open, respectively. Secondly, connect the neighboring pixels in their 8-neighborhood to form complete changed objects. Remove those objects whose length is smaller than T_length, where T_length is determined by the minimum size of the changed buildings we aim to detect. In this way, new buildings are detected by morphological operation and disturbances are eliminated.
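A sketch of this refinement with scikit-image is shown below. The structuring elements are assumed to be disks with radii T_close and T_open, objects are formed with 8-connectivity, and the object "length" is approximated by the major axis length of each component; the numeric parameter values and test data are placeholders, not the ones used in the paper.

```python
import numpy as np
from skimage.morphology import binary_closing, binary_opening, disk
from skimage.measure import label, regionprops


def refine_cs_map(cs_map, t_close=3, t_open=3, t_length=20):
    """Morphological refinement of the binary CS map into new-building masks."""
    mask = binary_closing(cs_map, disk(t_close))   # fill small holes
    mask = binary_opening(mask, disk(t_open))      # remove small / elongated artefacts
    labelled = label(mask, connectivity=2)         # 8-neighbourhood components
    keep = np.zeros_like(mask, dtype=bool)
    for region in regionprops(labelled):
        if region.major_axis_length >= t_length:   # drop objects shorter than t_length
            keep[labelled == region.label] = True
    return keep


if __name__ == "__main__":
    cs = np.zeros((100, 100), dtype=bool)
    cs[20:60, 20:55] = True                        # candidate new building
    cs[80:83, 5:90] = True                         # elongated boundary artefact
    print("new-building pixels:", int(refine_cs_map(cs).sum()))
```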
Descriptions of the Experimental Data
The experiments are implemented on two study areas in The Netherlands. The specifications of the study areas are shown in Table 1. The first study area is located in Rotterdam, a densely-built port city mainly covered by residential buildings, skyscrapers, vegetation, roads and water. The second study area is located in Enschede, which is also a densely-built urban area. Dense image matching for both study areas is performed in Pix4Dmapper [49] to obtain the point clouds and orthoimages. The ALS point clouds, DIM point clouds and orthoimages for the Rotterdam and Enschede data are visualized in Figures 5 and 6, respectively.
The ALS and DIM data should be registered under the unique coordinate system beforehand. The ALS point cloud was provided under the Dutch national coordinate system (Amersfoort-RD New). The GCP coordinates used in the bundle adjustment were also in the same coordinate system, so the generated DIM point cloud was under the same coordinate system, which guarantees the registration between ALS data and DIM data.
Experimental Setup
The hyper-parameters listed in Table 2 are set based on the data quality and trial experiments. In the point cloud filtering, since the terrain of the two study areas is generally smooth, the threshold d in the progressive TIN densification is set to 1 m and α is set to 30°. Only candidate points whose distance to the TIN plane is smaller than 1 m and whose angle is smaller than 30° are classified as terrain. These two thresholds separate non-ground points from ground points while preserving terrain details. In the segmentation step, D and D_0 are set based on trial experiments to extract planes: a candidate point is assigned to a segment only when the distance to its closest point on the segment is less than 1 m and its distance to the fitted segment plane is less than 0.2 m.
Planar segments are then screened based on the following rules (see the sketch after this list): (1) When N < 50, the candidate segment is too small and not likely to be a roof segment. (2) When θ > 70°, the segment is likely to be a vertical wall, a railing, etc. (3) When nH < 3 m, the segment is too close to the terrain and not likely to be a roof; nH here reflects the height of the shortest roofs we aim to detect. (4) When the RPF residual σ is larger than 0.2, the segment contains too much noise and is not taken as a roof segment.
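These four rules translate directly into a screening predicate. The sketch below is an illustrative restatement; the attribute names in the commented usage line are hypothetical.

```python
def is_valid_roof_segment(n_points, theta_deg, nh, sigma,
                          min_points=50, max_angle=70.0, min_height=3.0, max_rpf=0.2):
    """Keep a planar segment only if it plausibly belongs to a roof."""
    if n_points < min_points:   # (1) too small to be a roof segment
        return False
    if theta_deg > max_angle:   # (2) near-vertical: wall, railing, etc.
        return False
    if nh < min_height:         # (3) too close to the terrain
        return False
    if sigma > max_rpf:         # (4) plane fit too noisy
        return False
    return True

# Illustrative usage, assuming each segment carries these attributes:
# roofs = [s for s in segments if is_valid_roof_segment(s.n, s.theta, s.nh, s.sigma)]
```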
During change detection, segments with dH larger than 3 m are taken as roof height changes; considering the DIM data quality, we only detect building changes with a height change larger than 3 m. nEGI assists the discrimination between vegetation and non-vegetation in the change detection step: when the nEGI of a segment, or of a pixel calculated on the orthoimage, is larger than 0.1, it is taken as vegetation.
In the morphological filtering, morphological closing and opening are applied to preserve the major true positives and eliminate false positives. T_close and T_open are set based on trial experiments. Considering the DIM data quality, we aim to detect building changes with side lengths greater than 5 m in real scenarios, which is equivalent to 50 pixels on the orthoimages.
In addition, the proposed method is compared with surface differencing as a baseline. Surface differencing is a classic point cloud-based change detection method and has also been used as a baseline in the previous literature [40][41][42]. Firstly, the two point clouds are converted into DSMs and one DSM is subtracted from the other. Then, a morphological operation similar to the one in our method is applied to post-process the heightened map and the lowered map separately. Locations where the height difference exceeds 3 m are determined as candidate building changes. The morphological operation starts with closing and then performs opening; small connected change masks with a length smaller than 100 px (i.e., 10 m) are then eliminated. Finally, the heightened and lowered masks are merged into the final change map. Note that the thresholds for the morphological operation in surface differencing are coarser than those used in the proposed method; our trial experiments show that this setting gives a proper balance between true positives and false positives.
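A condensed sketch of this baseline, assuming the two point clouds have already been rasterized into DSMs on a common grid; it reuses the clean_change_mask helper from the earlier morphology sketch with the coarser 100 px length threshold.

```python
def surface_differencing(dsm_als, dsm_dim, dh=3.0, min_len=100):
    """Baseline change detection: DSM differencing followed by morphological cleanup.

    dsm_als, dsm_dim : 2D numpy arrays of heights on the same grid
    dh               : minimum height difference (m) for a candidate change
    min_len          : coarser minimum object length in pixels (100 px = 10 m here)
    """
    diff = dsm_dim - dsm_als                       # positive: heightened, negative: lowered
    heightened = clean_change_mask(diff > dh, t_length=min_len)
    lowered = clean_change_mask(diff < -dh, t_length=min_len)
    return heightened | lowered                    # merged final change map
```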
Evaluation Metrics
The results are evaluated qualitatively and quantitatively. The building extraction and change detection results are evaluated separately. Our final building footprints and change maps are both 2D products. The ground truth (GT) is prepared by careful visual inspection of the point cloud differencing map, aided by the point clouds and orthoimages. The three evaluation measures applied in this paper are taken from the ISPRS benchmark on urban object detection [11]: recall, precision and F1-score. Recall indicates the ability of a model to detect all the real changes; precision indicates the ratio of true changes among all the detected changes; the F1-score combines recall and precision using their harmonic mean.
For example, considering the pixel-based change detection evaluation, True positive (TP) is the number of changed pixels detected correctly. True negative (TN) is the number of unchanged pixels detected as unchanged. False positive (FP) is the number of pixels detected by the algorithm, which are not changes in the real scene. False negative (FN) is the number of undetected changes.
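For concreteness, the pixel-level measures can be computed from two binary maps as in the following straightforward sketch.

```python
import numpy as np

def pixel_metrics(pred, gt):
    """Precision, recall and F1-score of a binary change map against the ground truth."""
    tp = np.sum(pred & gt)     # changed pixels detected correctly
    fp = np.sum(pred & ~gt)    # detected pixels that are not real changes
    fn = np.sum(~pred & gt)    # real changes that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```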
In addition, the change detection results are also evaluated at the object level. The building extraction results, however, are evaluated only at the pixel level: object-level evaluation of building extraction is difficult because many buildings in our two study areas are closely adjacent and the boundaries between individual buildings are hard to recognize.
Results and Discussion of Rotterdam Data
Qualitative results: The results of the intermediate steps for the Rotterdam data are shown in Figure 7. From the ALS point cloud, progressive TIN densification is performed, and surface-based segmentation generates 3958 segments. After segment screening, 819 roof segments remain valid, as shown in Figure 7a. Some roofs are represented by a complete segment, while others are broken into several sub-segments. The VPP distance is calculated for each valid segment, and segments with a height change larger than 3 m are considered changed buildings, as shown in Figure 7b. The binary change masks after morphological processing are shown in Figure 7c and include two types of changes: demolished and heightened buildings. Figure 7g is further processed by morphological closing and then opening to eliminate small false positives and fill gaps.

The final results are shown in Figure 8. Figure 8a is the ground truth for integrated building extraction and change detection: yellow indicates building masks, magenta indicates heightened buildings (incl. newly-built buildings) and cyan indicates demolished buildings. Figure 8b shows the results of our method. Figure 8c visualizes the errors in building extraction, with red indicating false positives and blue indicating false negatives. In order to visualize the change detection results separately, Figure 8d shows the ground truth for change detection, Figure 8e the change detection result from our method and Figure 8f the result from surface differencing. Figure 8c shows that the proposed method successfully extracts most building footprints with a few FPs and FNs. Comparing Figure 8d,e shows that most building changes are detected successfully, although some FPs appear. It is clear that the detected demolished buildings show sharp boundaries, while some heightened buildings show fuzzy boundaries. The reason is that the boundaries of demolished buildings are determined from the precise ALS data, while the boundaries of newly-built buildings are determined from the relatively noisy DIM data. In contrast, Figure 8f shows that surface differencing brings many more FPs than our method, such as along building edges, on vegetation surfaces or in shadow.
Six examples of building extraction are visualized in Figure 9. Figure 9a shows that inclined roofs with complicated roof structures are correctly detected. Even though surface-based segmentation breaks the planes into small broken segments, the roof segments are merged into complete roof masks during the morphological operation. Figure 9b,c show two bridges incorrectly detected as roofs. This error is caused by mistakes in point cloud filtering: the bridges are mis-classified into non-ground points and thus remain in the following steps. Similarly, Figure 9d shows some containers or sheds mis-classified into buildings, and Figure 9e shows some hedges or box-like structures mis-classified into buildings. Figure 9f shows that false negatives occur on wavelike roofs: the roof segments are small and contain data gaps, so they are eliminated in the segment screening, which leads to omission errors.

Four examples of change detection results are shown in Figure 10. Each example from left to right shows the ALS point cloud, the DIM point cloud, the result from our method and the result from surface differencing. Figure 10a shows a demolished building group. Our method detects the change with sharp boundaries, while surface differencing takes the neighboring vegetation changes as building changes and also omits one building change. Figure 10b shows that both our method and surface differencing can detect an independent demolished building. Figure 10c shows a building that is partly heightened and partly demolished. Our method detects the complicated changes correctly, while surface differencing omits the demolished building; its area is rather small and is thus eliminated in surface differencing. Figure 10d shows that both methods bring FPs in a courtyard. In this shaded region, the DIM point cloud is noisy and its height usually deviates from the true height, so FPs are more likely to appear.

Quantitative results: Our building extraction results are evaluated at the pixel level. The method achieves a precision of 91.94%, a recall of 82.64% and an F1-score of 87.04%. Although we do not have comparative methods for building extraction, a glimpse at [11] can still give hints on the performance of our method. The ISPRS benchmark on urban object detection [11] reports that high-ranking building extraction methods achieve F1-scores of 89.8% and 88.9%, depending on the data quality and applied method; additionally, multispectral features are available to them. Without multispectral features, we achieve an F1-score of 87.04% with merely geometric features, which is relatively satisfactory.

The change detection results are evaluated at the pixel level and object level, as shown in Tables 3 and 4, respectively. In Table 3, the recalls for heightened buildings and demolished buildings from our method are both above 85%, indicating that most of the two types of changes are detected successfully. The precision for heightened buildings is 69.26%, which is much lower than the 95.20% for demolished buildings. As explained before, the demolished buildings are determined on the ALS data, which are more precise, while most of the heightened buildings are determined on the DIM data, which are noisier. Considering the results of surface differencing in Table 3, the recall of heightened buildings is much higher than that of demolished buildings. Figure 8f shows that surface differencing brings many FPs in vegetation and shadow, where DIM point clouds are noisier and higher than the ALS data due to dense matching errors or natural growth; surface differencing thus tends to over-classify these regions as heightened buildings. The precisions for heightened and demolished buildings are only 61.41% and 53.00%, respectively, which are much lower than those of the proposed method. Table 4 shows the object-based evaluation results. Since the areas and shapes of buildings vary widely, the object-based evaluation measures do not simply track the pixel-based measures. There are 27 heightened buildings and 25 demolished buildings in total. Our method misses one change and brings five FPs with respect to heightened buildings; it misses three demolished buildings and brings one FP with respect to demolished buildings. False alarms are more likely to occur on heightened buildings. In these areas, the height of the DIM data is often higher than the true object height due to dense matching errors, so the method incorrectly detects them as new buildings. In addition, Figure 8e shows that most of the missed buildings are small; such small buildings may be omitted in the object extraction or morphological operation steps. Considering the object-based evaluation of surface differencing, the recall and precision of surface differencing for heightened, demolished and overall changes are all lower than the measures of the proposed method. The recall of surface differencing is lower than that of the proposed method by 23.08%, and the precision is lower by 12.29%. Specifically, the recall for demolished buildings is only 52%, indicating that nearly half of the demolished buildings are missed. Figure 8f shows that surface differencing also misses many small building changes. Compared to our object-based change detection, surface differencing is more vulnerable to noise.
Results and Discussion of Enschede Data
Qualitative results: The intermediate results of the Enschede data are shown in Figure 11. Surface-based segmentation is performed on the ALS data to generate 1866 segments. After segment screening, 128 valid roof segments remain, as shown in Figure 11a. Some roofs are represented by a complete segment, while others are broken into several sub-segments. Segments with a height change larger than 3 m are considered changed buildings, as shown in Figure 11b. The binary change masks after morphological processing are shown in Figure 11c.

The final results are shown in Figure 12. For the definition of each sub-figure and the color coding, the readers are referred to Figure 8. Figure 12c shows that the proposed method can successfully extract most building footprints for the Enschede data. A few FPs and FNs lie along the building boundaries or inside the narrow courtyards. Considering change detection, the ALS and DIM data were acquired with an interval of four years, and only four buildings are heightened and three buildings are demolished. Nonetheless, Figure 12b,e demonstrate that the proposed method is effective in change detection and robust to data noise. In contrast, Figure 12f shows that surface differencing brings more FPs than our method.

Six examples of building extraction are visualized in Figure 13. Figure 13a shows that the footprints of a church are correctly extracted, except for some elongated FPs and FNs along the building boundaries. Note that some pixels in the middle of the building footprint are missed due to data gaps in the raw ALS point cloud. Figure 13b shows FPs and FNs along the building boundaries. The ground truth for building footprints is delineated on the orthoimage, so these errors might be caused by mis-registration between the ALS data and DIM data. Figure 13c shows a shed with a height of 1.7 m between two buildings; this short and small segment is eliminated in the segment screening step, which leads to a FP. Figure 13d shows that a truck with a height of 3.3 m is mistaken as a building. There is another truck in the same place in the DIM data, so it is taken as an unchanged building instead of a changed building. Figure 13e shows another church with steep roofs; Figure 11a shows that the steep roofs are already eliminated in the segment-based screening since the roof segments are broken and small. Figure 13f shows that some pixels on a roof are missed because the data are missing in the ALS point cloud.

Four examples of change detection results for the Enschede data are shown in Figure 14. Figure 14a shows a demolished building. Our method detects the change with sharp boundaries, while surface differencing misses this changed building. Figure 14b shows two demolished buildings. Our method detects both changes, while surface differencing misses one changed building: this building is small and is eliminated in the morphological opening. Figure 14c shows that FPs appear in the narrow courtyard in the surface differencing result when the ALS data contain many data gaps. Figure 14d shows that surface differencing causes FPs in the shaded region adjacent to a wall. The FPs in Figure 10d are heightened buildings, while the FPs here are demolished buildings. This can be explained by the failure of dense matching in this area covered with shadow and vegetation, so the height represented by the DIM data is lower than the true height, which results in a false demolished building.

Quantitative results: The building extraction results of the Enschede data are also evaluated at the pixel level. The precision reaches 96.28%, the recall 86.63% and the F1-score 91.20%. The precision of building extraction for the Enschede data is higher than that of the Rotterdam data by 4.34%, and the recall is higher by 3.99%. Comparing the error maps in Figures 8c and 12c for the two study areas shows that more building footprints are missed in the Rotterdam data, such as small wavelike roofs with complicated shapes, which leads to a lower recall for the Rotterdam data. Considering the precision, some bridges, containers or trucks are mistaken as buildings in the Rotterdam data, which results in a relatively low precision. In contrast, although there are also some FPs and FNs in the Enschede results, most of the errors are small dots or elongated lines along the building boundaries. Therefore, the building extraction measures for the Enschede data are better than those for the Rotterdam data.
The change detection results are evaluated at the pixel level as shown in Table 5. In Table 5, the recalls for heightened and demolished buildings from our method are both above 85%, indicating that most of the two types of changes are detected successfully. The precision for heightened buildings is 69.26%, which is much lower than the 95.20% for demolished buildings. As explained before, the demolished buildings are determined on the ALS data, which are more precise, while the newly-built buildings are determined on the DIM data, which are noisier. Considering the results of surface differencing in Table 5, the recall of heightened buildings is much higher than that of demolished buildings. Figure 8f shows that surface differencing brings many FPs in vegetation and shadow, where DIM point clouds are noisy and higher than the ALS data due to dense matching errors or natural growth, so surface differencing tends to over-classify these plausible regions as heightened buildings. Therefore, the F1-scores for heightened and demolished buildings are much lower than those of the proposed method.

Table 6 shows the object-based evaluation results of change detection. There are four heightened buildings and three demolished buildings in total. Our method misses one small heightened building and successfully detects all the demolished buildings. Surface differencing misses one heightened building and two demolished buildings; it also brings four heightened FPs and one demolished FP. Figure 12f shows that the FN errors mainly appear on small building changes, while FPs usually occur on the vegetation canopy, in shadow or along the building boundaries. The recall achieved by surface differencing is 57.14% and the precision is only 44.44%, which are much lower than the measures from our method. Our method is more effective and more robust to noise than surface differencing.
Discussion on Data Quality and Error Sources
Comparing the building extraction results in the two study areas shows that both the precision and recall for the Enschede data are better than the measures for the Rotterdam data. Even though more data gaps exist in the Enschede data compared with the Rotterdam data (see Figures 5a and 6a), the effect of data gaps can be remedied by the morphological operation. However, the scene complexity of the Rotterdam data is higher than that of the Enschede data owing to the presence of bridges, containers and trucks. Our segment-based building extraction method tends to misclassify these square objects into buildings and to omit buildings with small or wavelike roofs.
In our method, building extraction and change detection are performed in sequence, so the errors of building extraction propagate to the change detection. When building extraction is complete and correct, the fine building footprints from the ALS data serve as a good reference for the "ALS->DIM" change detection. The example in Figure 10c shows that when a demolished building is missed in building extraction, this error finally results in an omission error in change detection.
Concerning the errors in the change maps shown in Figures 8e and 12e, FPs often appear on containers, trucks, vegetation, etc. These objects are often mistaken as buildings during building extraction, so the errors remain in the final change map. FNs often appear when the roofs are small or broken. When large data gaps exist on the changed buildings in the ALS point clouds, FNs may also occur in the change map (e.g., Figure 13f). In shaded regions, such as a narrow road or a narrow courtyard, dense matching may fail and produce no points, or it may generate inaccurate points whose heights are higher than the true values. Therefore, both FPs and FNs may appear in shaded regions. When such an error occurs, our results show that it can correspond to a heightened building or a lowered building, depending on whether dense matching failed or merely produced inaccurate heights.
Comparing our object-based bidirectional method with surface differencing, our method performs better because the change detection is implemented on each building object instead of on individual pixels, which allows more contextual information to be taken into consideration. Du et al. [39] detect building changes between DIM data and ALS data, with recall and accuracy rates of 93.50% and 92.15%, respectively. Although their evaluation measures are slightly higher than ours, their workflow is more complicated, involving energy minimization and more human intervention. In addition, the outputs of our method contain not only change maps but also building footprints.
In addition, the quality of the DIM data affects the change detection results. (1) The vertical accuracy of the DIM data determines the minimum height change our method can detect. (2) The horizontal inaccuracy of the DIM data affects the registration between the ALS and DIM data; this mis-registration results in linear artefacts along building edges during change detection. (3) Sparse DIM point clouds make accurate localization of building edges impossible, which in turn affects the localization of building changes. (4) The precision of the DIM data also matters: noise in the point clouds hinders accurate representation of the objects and causes false positives and false negatives in the final change detection results.
Conclusions
We propose an object-based bidirectional method for integrated building extraction and change detection. Change detection between the heterogeneous ALS data and DIM data is vulnerable to dense matching errors, mis-alignment errors and data gaps. Our method starts from object extraction on the precise ALS data. Firstly, progressive TIN densification, surface-based growing, segment screening and connected component analysis are used to extract terrain, roofs and vegetation. Secondly, change detection is performed in a bidirectional manner; heightened buildings and demolished buildings are detected by taking the ALS data as reference, while newly-built buildings are detected by taking the DIM data as reference in the complementary sets. Vertical Plane-to-Plane Distance is proposed as the indicator of height change. Experiments are implemented on two data sets and the results are evaluated at both pixel level and object level. In the object-based evaluation, the recall reaches 92.31% and the precision reaches 88.89% for the Rotterdam data. The recall reaches 85.71% and the precision reaches 100% for the Enschede data.
There are two advantages within our method: Firstly, the building extraction and change detection are achieved in one workflow, and the results are visualized in one merged footprint-and-change map. By means of proper fusion of geometric and spectral features, the method classifies the changed buildings into heightened or lowered with relatively sharp boundaries. Secondly, the method is object-based. Since change detection is performed on the whole roof segment instead of point-to-point comparison, it is relatively robust to point cloud noise and data gaps.
However, our method also has some limitations. The change detection results depend strongly on the quality and resolution of the input data: when the data quality and resolution deteriorate, not only does the change detection accuracy decrease, but the sizes of the detected buildings also decrease. Additionally, the errors in filtering and building extraction are propagated to the final change detection results. The false positives and false negatives of our method are mainly caused by dense matching errors on vegetation or in shadow; when the accuracy of object recognition from the DIM data improves, the change detection results will improve as well. Considering future work, spectral and textural features derived from the original multi-view imagery might be used to improve the accuracy of object recognition from the DIM data. Change detection might also be conducted between ALS data and multi-view imagery directly, instead of between two types of point clouds. The proposed method will also be validated on other data sets from various urban scenes.
Author Contributions: C.D. and Z.Z. designed and implemented the experiments. D.L. contributed to the analysis and interpretation of the results. Z.Z. and D.L. wrote the manuscript. C.D. conceived of the paper and revised the manuscript. All authors read and approved the final manuscript.
Funding: This research was funded by the China Scholarship Council and National Natural Science Foundation of China (Grant No. 41501482), which are gratefully acknowledged. | 16,444.4 | 2020-05-24T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
BioWord: A sequence manipulation suite for Microsoft Word
Background The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. Results BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. Conclusions BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms.
Background
In a relatively short time, editing and processing of DNA and protein sequences have left the realm of molecular biology to become a routine practice for biologists working in myriad different fields. At the same time, the number of tools and servers for performing analyses on biological sequences and related data has exploded, creating a need for resource integration [1]. There have been several attempts to reconcile this vast and expanding array of services with data and service integration. Many of these approaches have relied on the creation of web-based service portals that seek to integrate and simplify data collection analysis with a wide variety of available tools [2][3][4], while other efforts have focused on service and data integration through the use of browser-enabled interoperability between services, data providers and even desktop applications [5][6][7].
The sheer scope and power of data and service integration portals and browser add-ons is also one of the main obstacles to their wide acceptance, since many users rarely need to use more than one or two services (e.g. BLAST and Entrez search) and lack the necessary training in bioinformatics to navigate easily through interconnected repositories of data and services [1]. Still, a wide range of practicing biologists must routinely perform relatively simple manipulation, editing and processing of DNA and protein sequences on a daily basis. To perform these routine manipulations, this substantial segment of users has resorted to proprietary desktop software, like DNAStar or the GCG Wisconsin Package [8,9], ingenious bookmarking of specific web servers, or to services that integrate several tools for sequence manipulation, like the Molecular Toolkit or the Sequence Manipulation Suite (SMS) [10,11].
Web-based sequence editing toolkits like SMS have enjoyed wide acceptance because they provide a simple interface for many routine sequence manipulation tasks and because, running on JavaScript, they are essentially platform independent. Nonetheless, the use of JavaScript also results in some limitations, such as the inability to access files on the client computer, which forces the user to rely on copying and pasting data in text format. This not only adds overhead and complicates the organization and storage of data and analysis results, but it also requires that the user have access to raw text data, which may not be the case due to the specific handling of native file formats by the operating system. Last, but not least, the use of JavaScript requires embedding in an HTML file, which many users may find difficult to implement, thus reducing the likelihood of community-based code expansion. To address these shortcomings, here we introduce BioWord, an extensive suite of sequence manipulation tools integrated within the familiar Microsoft Word interface. Using a macro-enabled document template, BioWord provides direct and easy access to an array of tools for sequence manipulation, allowing the integration of functionality and data storage within a single interface. Its object-oriented design, implemented in the standard Visual Basic for Applications (VBA) scripting language, facilitates customization, and its integration into a well-known interface provides the means for efficient code-sharing and development.
Class structure
The object-oriented implementation of BioWord is based on two main classes that handle the key elements BioWord is designed to process: sequences and collections of sequences ( Figure 1). The Sequence class is used to hold and process DNA, RNA and protein sequences. To simplify the architecture, an instance variable in the class determines sequence type (either DNA/RNA or amino acid sequence) and the sequence itself is stored as a character string. During instantiation, the Sequence object determines its type according to a user-specified percentage of nucleic acid characters [A, C, G, T/U]. The class thus consolidates access to the methods and properties that can be used to process biological sequences and cross-checks their applicability according to the specific sequence type. The ColSequences class is designed to handle the serial manipulation of sequences and those applications requiring the simultaneous processing of more than one sequence, such as sequence alignments. Based on the native VBA Collection object, the ColSequences class is used to store multiple Sequence objects and define processing methods for them. The ColSequences class thus implements generic methods to serialize single-sequence processes (e.g. reverse) and methods to process the collection as a whole, such as computing a position-specific frequency matrix (PSFM) or implementing a greedy pattern search on a collection of sequences. Because single sequences are instantiated as unitary ColSequences objects, this class effectively centralizes all interactions with Sequence objects. This primary class outline is complemented by three additional classes that define generic objects used in sequence processing. The GCode class implements a variable genetic code model able to incorporate codon usage data, and is used in any operations involving DNA-protein translation or the use of codon usage tables (e.g. detection of Open Reading Frames (ORF)). The AlignmentCell class is designed exclusively for use in alignment algorithms and provides the means to define all the relevant fields in a dynamic programming alignment matrix. Finally, the ScoreMatrix class consolidates the different scoring rules used by pattern matching and alignment algorithms into a single type of object (the scoring matrix) which defines the methods used to set and use scoring matrices in these different settings.
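BioWord itself is implemented in VBA, but the class design is easy to mirror in other object-oriented languages. The following Python analogue is purely illustrative (it is not the BioWord code); it shows the type-detection idea of the Sequence class and the serializing role of ColSequences, with the 90% nucleotide threshold chosen arbitrarily here.

```python
class Sequence:
    """Holds one DNA/RNA or protein sequence; the type is inferred at construction."""

    def __init__(self, seq, nucleic_threshold=0.9):
        self.seq = seq.upper()
        # Classify as nucleic acid if the fraction of A/C/G/T/U characters is high enough.
        frac = sum(c in "ACGTU" for c in self.seq) / max(len(self.seq), 1)
        self.is_nucleic = frac >= nucleic_threshold

    def reverse(self):
        return Sequence(self.seq[::-1])

    def complement(self):
        # DNA-style complement; only meaningful for nucleic acid sequences.
        if not self.is_nucleic:
            raise ValueError("complement is only defined for DNA/RNA sequences")
        return Sequence(self.seq.translate(str.maketrans("ACGTU", "TGCAA")))


class ColSequences(list):
    """A collection of Sequence objects; serializes single-sequence operations."""

    def map(self, method_name, *args):
        return ColSequences(getattr(s, method_name)(*args) for s in self)
```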
Module structure
The class structure is functionally wrapped within a module structure that basically handles the interface with Microsoft Word document objects. This design strategy is aimed at decoupling the basic BioWord objects from their running environment, thus avoiding the need for derivation of specialized classes when, for instance, specific output formats are desired. The RibbonControl module handles basic communication between the ribbon, the ColSequences objects and the document. It contains the methods the ribbon buttons are linked to, thereby defining the functionality of the ribbon. Upon capture of a button-click event, the RibbonControl parses the user selection, instantiates the necessary ColSequences object and calls the appropriate ColSequences method to process the selected sequences, thus implementing the fundamental control flow of BioWord (Figure 1). The RibbonControl module also centralizes reception of ColSequences method results and calls the appropriate method to handle their output according to sequence type and formatting options. Methods for output generation are stored in the Resources module, which handles both the specific format (e.g. FASTA or table) and destination of the output. BioWord allows output to be redirected to the clipboard, a new document, immediately following the selection or overwriting it. In addition, the Resources module defines a broad set of handy functions to manipulate both sequence and non-sequence objects, like sorting or removing duplicates from a collection. Two additional modules complement this basic module architecture. The XMLHandler module manages the interaction with the XML Options file (which defines the option fields for BioWord) and handles the loading, saving and updating of the option fields available in the ribbon.
Integration, editing and distribution
BioWord is written fully in VBA and is compliant with the Visual Basic 6 standard, thus maintaining backwards compatibility with earlier versions of Microsoft Office. Due to its explicit detachment of basic Sequence and ColSequences classes, which encode sequence processing functionality, from the document interface, the core of the code is readily adaptable to all versions of Microsoft Word supporting VBA, as well as to other Microsoft Office programs, such as Excel. BioWord is fully encapsulated within a macro-enabled (.dotm) template facilitating its distribution and installation through the use of the Open XML format [12]. The code and the XML Options file are embedded within the .dotm structure, which also contains the ribbon stored as a XML file. BioWord code can be edited with any text editor or, more conveniently, within the integrated VBA editor of Microsoft Word. The XML Options file and the XML ribbon can be edited also with any text/XML editor. For convenience, the XML ribbon can also be edited with the freely available Open XML Custom UI Editor [13].
Results and discussion
BioWord provides an easily accessible and expandable toolkit for the manipulation and editing of biological sequences embedded within a Microsoft Word ribbon ( Figure 2). To facilitate user interaction, the ribbon is divided into several functional groups that are discussed in the following sections.
Format and sequence manipulation
In its current implementation, BioWord can parse and convert to and from three widespread formats for biological sequences: FASTA [14], GenBank Flat File [15] and bare/raw sequence. Conversion buttons are available in the Manipulation group, along with reverse and complement (DNA/RNA) buttons, but output conversion can also be made implicit by setting the Format option of the Basic Options group to the desired format.
Translation and sequence statistics
BioWord features frame-dependent DNA to protein translation and translation maps using different genetic codes, as well as reverse translation using a variety of approaches (Figure 3). Reverse translation can be performed assuming a uniform codon distribution and using IUB characters to encode redundancy, or following a codon usage table provided by the user in GCG Wisconsin Package format, as generated by the Codon Usage Database [8,16,17]. Basic statistics for DNA and protein sequences are also implemented in this distribution of BioWord. Among others, the toolkit can provide n-gram statistics and window-based analyses of DNA %GC content, as well as protein-specific indices such as the GRAVY score [18]. The output for these analyses is generated in table format and can be readily pasted into spreadsheet software for graph generation.
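As an example of the kind of window-based statistic reported by BioWord, a sliding-window %GC calculation can be sketched as follows (plain Python; the window and step sizes are arbitrary defaults, and this is not the VBA implementation).

```python
def gc_content_windows(dna, window=100, step=10):
    """Return (start_position, %GC) pairs over a sliding window along a DNA sequence."""
    dna = dna.upper()
    out = []
    for start in range(0, max(len(dna) - window + 1, 1), step):
        win = dna[start:start + window]
        if not win:
            break
        gc = sum(base in "GC" for base in win)
        out.append((start + 1, 100.0 * gc / len(win)))  # 1-based start, percentage
    return out
```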
Search methods and consensus logos
String and pattern-based search methods comprise a significant part of BioWord's functionality. The output for search methods can be overlaid on the sequence (highlighted) or provided in table format. BioWord provides a simple-to-use ORF search tool, which can maximize ORF length alone or combined with a supplied codon usage table from a reference genome. Basic string search methods (Substring Search) enable mismatch-based search for sequences and the ability to specify variable spacers in Gapped search. Mismatch-based search can operate on DNA sequences incorporating IUB redundancy codes or apply standard (e.g. BLOSUM62) scoring matrices to weigh matches in amino acid sequences. Pattern-based methods (Site Search) provide a more robust approach to sequence search by incorporating PSFM models and using Shannon's mutual information or relative entropy derived methods to score putative sites [19][20][21]. PSFM models are built from collections of sites and/or IUB consensus sequences provided by the user either in raw or FASTA sequence format. Like mismatch-based methods, pattern-based methods allow (Dyad Pattern) searching for variable spacer motifs based on direct or inverted repeats of a provided pattern (Figure 4).
Figure 3 Comparison between reverse translation of the Escherichia coli K-12 MG1655 LexA protein (NP_418467) assuming a uniform codon distribution (RT UNIF) and using the E. coli codon usage table (RT CUT), shown together with the native DNA sequence.
Figure 4 (Top) Table format results for a Gapped substring search with CTGW and WCAG as substrings, maximum mismatch of 2 and 6-10 variable spacer. The overall score is the sum of dyad mismatch scores. (Bottom) Superimposed results for a pattern Search using the LexA-binding motif. In this output mode, the grey-scale shading intensity that highlights located sites is based on the information score (R_i), with darker shades indicating higher-scoring sites.
BioWord also exploits the ability to handle PSFM models to address a pressing need in the representation of sequence motifs. It is well known that consensus sequences are an unsuitable representation of sequence motifs because they omit information on the importance of consensus bases and the relative frequency of non-consensus bases at each position of the motif [23]. Sequence logos are able to integrate these two missing elements, together with the consensus, in an encapsulated representation and are therefore a superior and preferred method for the representation of sequence motifs [24]. Unfortunately, sequence logos are graphic elements and many authors continue to use consensus sequences to represent motifs in order to avoid the need for additional figures or to allow in-text discussions about the motif. BioWord provides a solution to this problem by allowing the representation of sequence motifs in text format using the consensus sequence, but depicting simultaneously its information content. For instance, the LexA-binding motif of Escherichia coli [22] would be represented as (2 bits)|TACTGTATATATATACAGTA . In this representation (the consensus logo), the vertical bar character is used to represent the y-axis scale, with the maximum value, in bits, provided next to it. The height of the consensus letter at each position corresponds to the positional information content of that position (using either mutual information or relative entropy measures). This representation does not provide frequency information of non-consensus bases and, therefore, a sequence logo should be used preferentially whenever possible. Nonetheless, the consensus logo provides the means to convey information about positional conservation in text format and its use of information theory units allows straightforward comparison of motifs (e.g. the LexA-binding motif of E. coli (2 bits)|TACTGTATATATATACAGTA can be directly compared to that of the α-Proteobacteria (2 bits)|AAGAACAAAACAAGAACAT [25]).
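Numerically, the consensus logo can be sketched as follows: from a set of aligned DNA sites, compute the per-position information content (here the simple relative entropy 2 - H_i in bits, without small-sample corrections) and report the consensus string together with its scale in the "(2 bits)|CONSENSUS" style used above. The code is an illustration of the idea, not the BioWord implementation.

```python
import math
from collections import Counter

def consensus_logo(sites):
    """Text consensus logo from equal-length aligned DNA sites: (scale, consensus, per-position bits)."""
    length = len(sites[0])
    consensus, info = [], []
    for i in range(length):
        counts = Counter(site[i].upper() for site in sites)
        total = sum(counts.values())
        freqs = [c / total for c in counts.values()]
        entropy = -sum(f * math.log2(f) for f in freqs if f > 0)
        info.append(max(2.0 - entropy, 0.0))           # information content in bits (DNA)
        consensus.append(counts.most_common(1)[0][0])  # most frequent base at this position
    # In BioWord the letter height encodes info[i]; here only the flat text form is returned.
    return "(2 bits)|" + "".join(consensus), info
```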
Motif discovery and alignment
BioWord supports several methods for motif discovery. The user can apply a greedy search strategy or Gibbs sampling to a collection of unaligned DNA or protein sequences [26,27] in order to locate underlying motifs of a given length (Figure 5). Both greedy search and Gibbs sampling are initialized randomly and iterated as many times as specified by the user. The reported motif is the one yielding the largest information content across all iterations. The current distribution of BioWord also incorporates a Dyad Motif search tool. This is a string-based motif search tool for bipartite motifs that reports all the occurrences of direct or inverted repeats with a maximum number of mismatches on the dyad and variable spacing (Figure 5). In addition, the package incorporates global and local pair-wise sequence alignment by implementing the Needleman-Wunsch and Smith-Waterman algorithms [28,29]. Memory management and computing power are constrained in BioWord by the use of Microsoft Word-embedded VBA code. As a result, computationally or memory intensive methods in BioWord, such as motif discovery, cannot match the capabilities of equivalent specialized resources, like MEME [30]. Nonetheless, benchmarking of the BioWord greedy search algorithm on several known E. coli transcription factor-binding motifs indicates that BioWord motif discovery algorithms can provide results that are qualitatively comparable to those obtained by MEME, locating the known motif in nearly all instances (Figure 6), and alignment of relatively long sequences (e.g. 2,500 aa) can be performed seamlessly within BioWord.
Figure 5 (Top) Motif discovery with Gibbs sampling on a set of LexA protein sequences from different bacterial phyla. Instances of the discovered motif are highlighted on the sequences using the superimposed output option. The detected 10 amino acid-long motif, shown in the consensus logo (4 bits)|GRVAAGEPIL, is centered on the well characterized Ala-Gly cleavage site of LexA [31]. (Bottom) Dyad Motif search on the E. coli K-12 MG1655 lexA (b4043) promoter region (see Figure 4), with 4±1 bp dyad, 8±1 bp spacer and 2 allowed mismatches. The reported score is the sum of dyad mismatch scores.
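For reference, the dynamic-programming core of a Needleman-Wunsch global alignment can be written compactly. The sketch below uses a simple +1/-1 match/mismatch scheme with a linear gap penalty instead of a full scoring matrix, so it is only a schematic stand-in for the BioWord implementation.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score and aligned strings for two sequences (simple scoring)."""
    n, m = len(a), len(b)
    # Score matrix with the standard first row/column initialization.
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            S[i][j] = max(diag, S[i - 1][j] + gap, S[i][j - 1] + gap)
    # Traceback from the bottom-right corner.
    ali_a, ali_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and S[i][j] == S[i - 1][j - 1] + sub:
            ali_a.append(a[i - 1])
            ali_b.append(b[j - 1])
            i -= 1
            j -= 1
        elif i > 0 and S[i][j] == S[i - 1][j] + gap:
            ali_a.append(a[i - 1])
            ali_b.append("-")
            i -= 1
        else:
            ali_a.append("-")
            ali_b.append(b[j - 1])
            j -= 1
    return S[n][m], "".join(reversed(ali_a)), "".join(reversed(ali_b))
```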
Conclusions
BioWord integrates many commonly used methods for sequence manipulation and editing in a single add-on for Microsoft Word, providing a powerful and easily accessible toolkit for biological sequence processing in an environment familiar to most practicing biologists. Among other functions, the current version of BioWord implements bi-directional translation, ORF detection, consensus logos, Gibbs sampling and several powerful sequence search methods. Its simple class structure and modular design, based on an accessible object-oriented language (VBA), facilitate customization, code expansion and sharing. Together with its encapsulation within a single macro-enabled template, these features make BioWord easy to distribute and to extend.
Figure 6 Benchmark of BioWord and MEME motif discovery against E. coli transcription factor binding sites downloaded from the Prodoric database [30,32]. Each binding site was expanded 50 bp on each side using adjacent E. coli genome sequence to generate motif discovery input data. Motif discovery results for BioWord are from the greedy search algorithm. MEME searches were conducted using the San Diego Supercomputing Center (SDSC) MEME web service. For both MEME and BioWord, parameters were made as similar as possible: Prodoric site length, one site per sequence, search given strand only, 3 reported motifs. In BioWord, the iteration number was set to 100. For both methods, the motif shown corresponds to the best fit with the Prodoric motif. The transcription factor (TF) and length of its binding motif are provided in the leftmost columns. In each block, the number of sites (available in the database or reported by the method), the consensus logo and the information content (IC) of the motif are shown. The rank of the best-fitting motif (based on e-value for MEME, information content for BioWord) among the three reported motifs is also indicated. All logos are in the same scale, with cell height corresponding to 2 bits of information. Input sequences for motif discovery and site sequences for all reported motifs can be found in Additional file 1. | 4,268.6 | 2012-06-07T00:00:00.000 | [
"Biology",
"Computer Science"
] |
SUPERCONVERGENCE AND POSTPROCESSING OF THE CONTINUOUS GALERKIN METHOD FOR NONLINEAR VOLTERRA INTEGRO-DIFFERENTIAL EQUATIONS
We propose a novel postprocessing technique for improving the global accuracy of the continuous Galerkin (CG) method for nonlinear Volterra integro-differential equations. The key idea behind the postprocessing technique is to add a higher order Lobatto polynomial of degree 𝑘 + 1 to the CG approximation of degree 𝑘. We first show that the CG method superconverges at the nodal points of the time partition. We further prove that the postprocessed CG approximation converges one order faster than the unprocessed CG approximation in the 𝐿 2 -, 𝐻 1 - and 𝐿 ∞ -norms. As a by-product of the postprocessed superconvergence results, we construct several a posteriori error estimators and prove that they are asymptotically exact. Numerical examples are presented to highlight the superconvergence properties of the postprocessed CG approximations and the robustness of the a posteriori error estimators.
Introduction
In this paper, we consider the nonlinear Volterra integro-differential equation (VIDE) of the form (1.1). VIDEs arise widely in the mathematical modelling of physical, biological, engineering and other phenomena that are governed by memory effects [9,19,26]. During the past few decades, various numerical methods have been studied for VIDEs, such as Runge-Kutta methods [4,20,27,32], collocation methods [8,14,24,25,28], continuous Galerkin (CG) methods [17,18,29-31] and discontinuous Galerkin (DG) methods [7,21]. We refer the readers to the monographs [5,6] and the literature given therein. Among the above mentioned numerical methods, the Galerkin type methods have received considerable attention due to their (possibly) arbitrary high-order convergence. In the context of Galerkin methods, postprocessing techniques are attractive ways to improve the accuracy of an already obtained Galerkin approximation. Several postprocessing techniques have been introduced for VIDEs. For example, defect correction methods (based on interpolation and iteration) [18,33] and the Richardson extrapolation method [34] were studied for the CG approximations of nonlinear VIDEs with smooth kernels; a superconvergence extraction technique based on Lagrange interpolation was developed in [22] for the DG approximations of linear VIDEs with smooth and non-smooth kernels. For other types of integral equations, some postprocessing techniques for Galerkin approximations were also investigated; see, e.g., [15,16] and references therein.
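The displayed equation (1.1) did not survive extraction. For orientation only, a standard form of a nonlinear VIDE of the kind treated in this literature (our notation, not necessarily the paper's exact statement) is:

```latex
% A generic nonlinear Volterra integro-differential equation (illustrative
% notation; the paper's exact equation (1.1) is not reproduced here).
\begin{equation*}
  u'(t) = f\bigl(t, u(t)\bigr) + \int_{0}^{t} K\bigl(t, s, u(s)\bigr)\,\mathrm{d}s,
  \qquad t \in (0, T], \qquad u(0) = u_0 .
\end{equation*}
```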
The aim of this paper is to propose and analyze a novel postprocessing technique to improve the accuracy of the CG method for the VIDE (1.1) with regular solutions. The key idea of our postprocessing technique is to add a higher order Lobatto polynomial of degree $k+1$ to the CG approximation of degree $k$ (see (4.3)), which can be regarded as a simple correction for the CG approximation (an illustrative sketch of such a correction is given after the following list). The main contributions and features of this paper can be summarized as follows:
- We show that the CG method superconverges at the nodal points of the time partition with respect to the step-size.
- We prove that the proposed postprocessing technique improves the convergence rates of the CG method in the $L^2$-, $H^1$- and $L^\infty$-norms by one order.
- Based on the postprocessed superconvergence results, we construct asymptotically exact a posteriori error estimators for the CG method as the step-size decreases.
- The postprocessing is local, in the sense that it can be done independently on each local time interval, which enables the design of parallel numerical algorithms.
- The postprocessing is very easy to implement and can achieve global superconvergence at a small cost, which only requires computing an integral on each local time interval.
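As an illustration only (the paper's numbered definition (4.3) is not reproduced in this extract), a local Lobatto-polynomial correction of this type generally takes the form of the CG solution plus a scaled higher-degree Lobatto polynomial on each interval:

```latex
% Generic form of a local Lobatto-polynomial correction (illustrative; the
% coefficient a_n is fixed in the paper by a specific local integral, (4.3)).
\begin{equation*}
  U^{*}\big|_{I_n}(t) \;=\; U\big|_{I_n}(t) \;+\; a_n\, \phi_{k+1,n}(t),
  \qquad t \in I_n,\quad n = 1, \dots, N,
\end{equation*}
```

where $\phi_{k+1,n}$ denotes the shifted Lobatto polynomial of degree $k+1$ on $I_n$ and $a_n$ is a locally computable coefficient; this is an assumed generic form, not the paper's exact formula.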
This paper is organized as follows. In Section 2, we introduce the CG scheme for the VIDE (1.1) and state the a priori error estimates. In Section 3, we prove the nodal superconvergence estimates and obtain some superclose results. In Section 4, we describe the postprocessing technique and analyze its superconvergence properties. In Section 5, we construct several a posteriori error estimators and prove that they are asymptotically exact. In Section 6, we present some numerical experiments to validate the theoretical results. Finally, we give some concluding remarks in Section 7.
Continuous Galerkin method
Let $\mathcal{T}_h$ be a partition of $[0,T]$ into time intervals $I_n = (t_{n-1}, t_n]$, $1 \le n \le N$. We define the length of $I_n$ by $h_n = t_n - t_{n-1}$ and the maximum step-size by $h = \max_{1 \le n \le N}\{h_n\}$. For simplicity, we assume that the time mesh is quasi-uniform, i.e., there exists a positive constant $c$ such that $h \le c\, h_n$ for all $n$, although the postprocessing technique proposed in this work can be applied to an arbitrary mesh.
The trial and test function spaces are given by the globally continuous piecewise polynomials of degree $k$ and the (possibly discontinuous) piecewise polynomials of degree $k-1$ on $\mathcal{T}_h$, respectively. Here, $P_k(I_n)$ denotes the space of polynomials of degree at most $k$ on $I_n$, and the space $P_{k-1}(I_n)$ is defined analogously.
To describe the CG method, we define an integral operator $\mathcal{V} : C([0,T]) \to C([0,T])$ associated with the Volterra kernel. The CG approximation of the VIDE (1.1) then reads: find the CG solution $U$ in the trial space such that the variational equation (2.1) holds for any test function $\varphi \in S^{k-1,0}(\mathcal{T}_h)$. Since the CG scheme (2.1) employs different trial and test function spaces, it can be regarded as a Petrov-Galerkin scheme. The well-posedness of the CG solution defined by (2.1) has been proved in [18,29].
Due to the discontinuous character of the test space $S^{k-1,0}(\mathcal{T}_h)$, the CG scheme (2.1) can be decoupled into local problems on each time step: if $U$ is given on $[0, t_{n-1}]$, then $U|_{I_n}$ is determined by requiring the local variational equation to hold for all $\varphi \in P_{k-1}(I_n)$. Here, we set the initial value $U|_{I_1}(t_0) = u_0$.
The following a priori error estimates have been established in [18,29].
Natural superconvergence of the CG method
In this section, we show that the CG method for the VIDE (1.1) superconverges at the nodal points of the time partition. We further prove that the projection $\Pi u$ of the exact solution (see (3.12)) is superclose to the CG solution $U$.
Preliminaries
Let $L_m$ be the Legendre polynomial of degree $m$ on $[-1,1]$. It is well known that the orthogonality relation $\int_{-1}^{1} L_m(x)\,L_l(x)\,\mathrm{d}x = \frac{2}{2m+1}\,\delta_{m,l}$ holds, where $\delta_{m,l}$ is the Kronecker symbol. Let $\phi_m$ be the Lobatto polynomial of degree $m$ on $[-1,1]$, namely (see, e.g., [11]) $\phi_m(x) = \int_{-1}^{x} L_{m-1}(s)\,\mathrm{d}s$. It is easy to verify that $\phi_m(\pm 1) = 0$ for $m \ge 2$. For our purpose, we also define the shifted Legendre and Lobatto polynomials $L_{m,n}$ and $\phi_{m,n}$ on $I_n$ by the affine mapping of $[-1,1]$ onto $I_n$, respectively. Combining (3.1)-(3.5), we can obtain the following properties of the shifted Legendre and Lobatto polynomials $L_{m,n}$ and $\phi_{m,n}$ in a straightforward way.
Lemma 3.1. For the polynomials $L_{m,n}(t)$ and $\phi_{m,n}(t)$, the orthogonality and endpoint properties stated in (3.6) hold.
Let $\mathcal{T}_h$ be a given partition of $[0,T]$ with subintervals $\{I_n\}_{n=1}^{N}$. For any $u \in H^1(I_n)$, we have $u' \in L^2(I_n)$. Since the Legendre polynomials form a complete orthogonal basis of the $L^2$ space, by using the Riesz-Fischer Theorem (see Theorem 3 in Chapter 7.3 of [23]), we can expand $u' \in L^2(I_n)$ into the Fourier-Legendre series (3.7), with Fourier coefficients $\hat{u}_{m,n} = \frac{2m+1}{h_n}\int_{I_n} u'\,L_{m,n}\,\mathrm{d}t$; here the "=" means that the partial sums of the Fourier-Legendre series of the function $u'$ converge to $u'$ in the sense of the metric in $L^2(I_n)$. Moreover, due to the generalized Parseval's identity (see Corollary of Theorem 1 in Chapter 7.3 of [23]), the Fourier-Legendre series (3.7) can be integrated term by term over the interval $(t_{n-1}, t) \subset I_n$, which implies the expansion (3.9), in which the leading coefficients involve the endpoint values $u(t_{n-1})$ and $u(t_n)$. We now define a projector $\Pi_k : H^1(I_n) \to P_k(I_n)$ with $k \ge 1$ by (3.12). It is worth noting that this projection has been frequently used for the superconvergence analysis of finite element methods and finite volume methods for various partial differential equations; see, e.g., [11,12] and the references therein. In this paper, we shall also use this projection for the superconvergence analysis of the CG method for VIDEs.
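For reference, the standard relations alluded to above are as follows (our notation; the paper's numbered displays (3.1)-(3.5) are not reproduced in this extract, so these are assumed textbook identities rather than the source's exact statements):

```latex
% Standard Legendre orthogonality and Lobatto polynomials on [-1,1]
% (illustrative notation L_m, \phi_m; assumed, not copied from the source).
\begin{align*}
  \int_{-1}^{1} L_m(x)\, L_l(x)\,\mathrm{d}x &= \frac{2}{2m+1}\,\delta_{m,l}, \\
  \phi_m(x) &= \int_{-1}^{x} L_{m-1}(s)\,\mathrm{d}s
             = \frac{1}{2m-1}\bigl(L_m(x) - L_{m-2}(x)\bigr), \qquad m \ge 2, \\
  \phi_m(\pm 1) &= 0, \qquad m \ge 2 .
\end{align*}
```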
The constants $C > 0$ are independent of the step-size $h$.
Superconvergence at the nodes
In this section, we will prove that the CG method superconverges at the nodes of the time partition $\mathcal{T}_h$ with respect to the step-size $h$.
Postprocessed superconvergence analysis of the CG method
In this section, we propose a postprocessing technique for the CG method. Based on the superclose results established in Theorem 3.5, we prove that the postprocessed CG solution $U^*$ is superconvergent to the exact solution $u$.
Postprocessed superconvergence results
The main results of this section are stated in the following theorem.
Remark 4.2. According to Lemma 2.1 and Theorem 4.1, we observe that the convergence rates of the $L^2$-, $H^1$- and $L^\infty$-error estimates are improved by one order. Moreover, using (4.3), the fact that $\phi_{k+1,n}(t_n) = 0$ and (3.16), we have $U^*(t_n) = U(t_n)$, which implies that the postprocessed CG solution $U^*$ keeps the same nodal superconvergence as the CG solution.
Asymptotically exact a posteriori error estimators
In this section, we construct several asymptotically exact a posteriori error estimators for the CG method based on the postprocessed superconvergence results.
In view of the definition (4.3) of the postprocessed solution $U^*$, we define the a posteriori error estimators by (5.1). For convenience, we also define the unprocessed and postprocessed $L^2$-, $H^1$- and $L^\infty$-errors by (5.2). In the following theorem, we show that the a posteriori error estimators $\eta_i$, $i = 0, 1, \infty$, are asymptotically exact as $h \to 0$ under reasonable assumptions.
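The displayed definitions (5.1)-(5.2) are not reproduced in this extract. A natural construction of this type (our hedged reading, not necessarily the paper's exact formulas) measures the difference between the postprocessed and unprocessed solutions:

```latex
% Illustrative a posteriori error estimators built from the postprocessed
% solution (assumed form; the paper's (5.1)-(5.2) may differ in detail).
\begin{equation*}
  \eta_0 := \|U^{*} - U\|_{L^2(0,T)}, \qquad
  \eta_1 := \|U^{*} - U\|_{H^1(0,T)}, \qquad
  \eta_\infty := \|U^{*} - U\|_{L^\infty(0,T)} .
\end{equation*}
```

The corresponding unprocessed and postprocessed errors would then be $e_i := \|u - U\|$ and $e_i^{*} := \|u - U^{*}\|$ in the matching norms, $i = 0, 1, \infty$.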
Remark 5.2. Proving the assumption (5.3) remains an open issue in our situation, which is beyond the scope of the present paper. However, let us make some comments. First, for one-dimensional elliptic problems, an estimate similar to (5.3) has been proved in [2]. Second, in order to prove the asymptotic exactness of a posteriori error estimators, such an assumption has been frequently used in the existing literature (see, e.g., [1] and the references therein). Third, numerical results show that the convergence rates predicted by Lemma 2.1 are sharp, i.e., $e_i = \mathcal{O}(h^{k+1})$ for $i = 0, \infty$ and $e_1 = \mathcal{O}(h^{k})$, which also implies the lower bounds stated in (5.3). In that sense, the assumption (5.3) is reasonable.
Numerical experiments
In this section, we present some numerical results to verify the theoretical findings. The test problems are solved by the CG method with uniform degree $k$ on uniform or nonuniform time partitions. Let $U$ be the CG solution and $U^*$ be the postprocessed CG solution. We denote by $e := u - U$ and $e^* := u - U^*$ the error functions, and by 'order' the actual rate of convergence of the unprocessed or postprocessed CG method with respect to $N$, where $N$ is the number of elements in the time partition. Clearly, $N \simeq 1/h$ for the uniform and quasi-uniform time partitions. Throughout, all integrals (including the $L^2$- and $H^1$-error norms) are numerically evaluated by the Gauss-Legendre quadrature formula with $k+1$ points unless otherwise specified. Let $\{t_n\}_{n=0}^{N}$ be the nodes of a given time partition of $[0,T]$. We calculate the $L^\infty$-errors by sampling the error at the points $t_{n,j} = t_{n-1} + j\,h_n/20$ with $0 \le j \le 20$.
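A minimal sketch (illustrative only, with a placeholder "numerical solution" rather than the paper's CG solver) of how such sampled maximum errors and empirical convergence orders can be computed:

```python
# Illustrative computation of sampled L-infinity errors and empirical
# convergence orders; u_exact and U_h below are placeholders, not the CG solver.
import numpy as np

def linf_error(u_exact, U_h, t_nodes, samples_per_interval=20):
    """Max |u - U_h| over 21 equally spaced points in each interval."""
    err = 0.0
    for t0, t1 in zip(t_nodes[:-1], t_nodes[1:]):
        ts = np.linspace(t0, t1, samples_per_interval + 1)
        err = max(err, np.max(np.abs(u_exact(ts) - U_h(ts))))
    return err

def convergence_order(errors):
    """Empirical orders from errors on successively halved meshes (N -> 2N)."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Toy example: "numerical solution" = exact solution plus an O(h^2) perturbation.
u = np.cos
errs = []
for N in (8, 16, 32, 64):
    t = np.linspace(0.0, 1.0, N + 1)
    Uh = lambda s, h=1.0 / N: np.cos(s) + h**2 * np.sin(5 * s)
    errs.append(linf_error(u, Uh, t))
print(errs)
print(convergence_order(errs))   # close to 2 for this toy perturbation
```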
For measuring the efficiency of the a posteriori error estimators, we define the global effectivity indices by $\theta_i := \eta_i / e_i$, $i = 0, 1, \infty$, where $\eta_i$ and $e_i$ are defined by (5.1) and (5.2), respectively.
Consider the problem (6.1) with $T = 1$. Following [13], we use the nonuniform time partition of $[0,T]$ whose nodal points are obtained by randomly perturbing a uniform mesh, where $\mathrm{rand}()$ returns a uniformly distributed random number in $(0,1)$. Obviously, this mesh is generated based on a random perturbation of the uniform mesh. We first consider the maximum nodal errors. In Table 1, we list the maximum nodal errors and convergence rates for $k = 1, 2, 3$ and $4$. Here, we denote by $\max_n |e(t_n)|$ the maximum absolute errors at the nodal points $\{t_n\}_{n=1}^{N}$ of the time partition. It can be seen that the CG method superconverges at the nodal points with order $2k$, which confirms the theoretical result in Theorem 3.3.
We next consider the performance of the postprocessing technique as defined by (4.3). In Tables 2-4, we list the unprocessed and postprocessed $L^2$-, $H^1$- and $L^\infty$-errors and their convergence orders for $k = 1, 2, 3$ and $4$. Clearly, the convergence of the $L^2$- and $L^\infty$-errors is improved by one order for $k \ge 2$, while the convergence of the $H^1$-errors is improved by one order for $k \ge 1$, which coincides well with Theorem 4.1. We also observe that the global effectivity indices $\theta_0$ and $\theta_\infty$ always approach 1 for $k \ge 2$ and the index $\theta_1$ approaches 1 for $k \ge 1$, which implies that the error estimators are asymptotically exact and confirms the results in Theorem 5.1. Moreover, we find that although the convergence orders of the $L^2$- and $L^\infty$-errors for $k = 1$ are not improved after postprocessing, the errors are slightly improved, and the effectivity indices $\theta_0$ and $\theta_\infty$ are around 1.63 and 1.33, respectively, which are within a reasonable range.
Example 2
We consider the nonlinear VIDE (cf. [3]) given by (6.2), where $f(t, u(t)) = -\ln(1+t) - \frac{1}{(1+t)^2} - u(t)$ and the solution of (6.2) is $u(t) = \frac{1}{1+t}$. Consider the problem (6.2) with $T = 10$. Uniform time partitions consisting of $N$ elements are used for this example. In Table 5, we show the numerical convergence rates of the maximum nodal errors. It can be seen that convergence rates of order $2k$ are observed in Table 5, thereby confirming the theoretical result in Theorem 3.3.
In Tables 6-8, we list the unprocessed and postprocessed $L^2$-, $H^1$- and $L^\infty$-errors and their convergence orders, as well as the global effectivity indices for different $k$. We observe the same superconvergence results of the postprocessed CG approximations as those presented in Example 1. Additionally, we note that the global effectivity indices $\theta_i$ with $i = 0, 1, \infty$ always approach 1 (except the case of $k = 1$ for $\theta_0$), which implies that the error estimators are asymptotically exact.
In Table 9, we list the CPU time (in seconds) for obtaining the unprocessed and postprocessed CG approximations, where CPUT and CPUT-P denote the CPU time cost of the unprocessed and postprocessed CG approximations, respectively. It can be seen that the CPU time cost of the postprocessing step is far less than the cost of the original CG approximation and is almost negligible.
Example 3
Although the present theory does not apply to weakly singular VIDEs with nonsmooth solutions, we expect good results for the postprocessed CG approximation when (nonuniform) graded meshes are used. Thus, we consider the linear VIDE with weakly singular kernel (6.3). We choose the right-hand side such that the solution of (6.3) is given by $u(t) = t^{3/2} - t$. Obviously, $u \in H^{2-\epsilon}(0,T)$ (with arbitrary $\epsilon > 0$) and the second-order derivative of $u$ is unbounded near $t = 0$.
Consider the problem (6.3) with $T = 1$. We use the $h$-version of the composite Gauss-Legendre quadrature developed in [35] to evaluate the involved weakly singular integrals. Moreover, in order to capture the initial singularity of the solution at $t = 0$, we use the graded mesh with nodes given by $t_n = T\left(\frac{n}{N}\right)^{r}$, $0 \le n \le N$.
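A one-line illustration of generating such graded nodes (assumed notation; the grading exponent $r$ below is a placeholder value):

```python
# Graded mesh nodes t_n = T*(n/N)**r, clustering near t = 0 (illustrative).
import numpy as np

def graded_mesh(T, N, r):
    return T * (np.arange(N + 1) / N) ** r

print(graded_mesh(1.0, 8, 4.0))   # e.g. r = k + 1 with k = 3
```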
Throughout this example, we set the grading parameter $r = k + 1$. It is worth pointing out that the selection of the optimal grading parameter has been studied for the collocation method [8] and the DG method [21] for weakly singular VIDEs. However, in our situation of the CG method, it still needs further investigation. In Tables 10-12, we list the unprocessed and postprocessed $L^2$-, $H^1$- and $L^\infty$-errors and their convergence orders, as well as the global effectivity indices for different $k$. From these tables, we observe the same superconvergence results of the postprocessed CG approximations as those reported in Examples 1 and 2 for smooth solutions. In addition, we see again that the global effectivity indices $\theta_i$ with $i = 0, 1, \infty$ always approach 1 (except the case of $k = 1$ for $\theta_0$ and $\theta_\infty$).
Concluding remarks
In this paper, we propose and analyze a simple but efficient postprocessing technique for the CG method for nonlinear VIDEs with smooth kernels. We prove that the postprocessing improves the convergence of the CG method in the $L^2$-, $H^1$- and $L^\infty$-norms by one order for regular solutions. As a result, we construct several a posteriori error estimators that are asymptotically exact as the step-size approaches zero. Numerical results show that, for weakly singular VIDEs with nonsmooth solutions, the convergence rates of the CG method with graded meshes can also be improved by one order after postprocessing, but this is not covered by our theoretical results. Thus, the superconvergence analysis of the postprocessed CG method for weakly singular VIDEs will be a topic of our future research.
The a posteriori error estimators developed in this paper can be used for an adaptive implementation of the CG time-stepping method for VIDEs, although the efficiency of the local error estimators still needs further study.
Table 5 .
Example 2: maximum nodal errors and convergence orders.
Table 9 .
Example 2: CPU time (in seconds) of the unprocessed and postprocessed CG approximations. | 3,802.4 | 2022-12-08T00:00:00.000 | [
"Mathematics",
"Engineering"
] |
Measuring the Efficiency of School System in all Provinces in Indonesia
Human resources play an important role in the economy, and human resource quality is obtained through a quality education system. Education is one of the important concerns of the government, as proved by the size of its allocation in the budget. Therefore, evaluating the efficiency of its implementation in Indonesia is needed, using the Data Envelopment Analysis (DEA) method. This paper attempts to develop a new efficiency model of the Indonesian education system and implement it at all school levels: primary school, junior high school, and senior and vocational high school, in 34 provinces in Indonesia. The results show that only one or two provinces at each level of education have already achieved cost, technical and overall efficiency. Regarding the managerial implications, teacher equity is a top priority in improving the quality of the education system in Indonesia.
INTRODUCTION
Education is one of the top public sector decision-making concerns and currently gains much attention. At the national level, some economic models link education with economic growth. In Indonesia, education is considered important by the government, as stated in the 1945 Constitution: "Every citizen has the right to education". It indicates that efforts are needed to expand access and equity. The Ministry of Education and Culture of the Republic of Indonesia has released the results of the Program for International Student Assessment (PISA) 2018, in which Indonesia is ranked 74th (score 379, average: 489). The PISA ranking is a study conducted by the Organization for Economic Cooperation and Development (OECD) which evaluates the ability of 15-year-old students in the fields of mathematics, science and reading. Based on this result, the quality of education in Indonesia is not yet optimal. Therefore, evaluating the education system in Indonesia is highly needed as a further action. This research is expected to represent educational assessment in developing countries. The law on national education (No. 20/2003) describes the school system in Indonesia as consisting of three stages: basic, secondary, and higher education. The first stage is basic education, which consists of primary school (PS) and junior high school (JHS). The second represents secondary education or senior high school (SHS), which can be either a general high school or a vocational high school. The final stage is higher education (HE), which can take the form of a polytechnic, institute, or university. Our study focuses on the primary and secondary education stages. This article attempts to create a model for the education system in Indonesia to be more efficient using Data Envelopment Analysis (DEA). Efficiency is one of the performance parameters that underlies the entire performance of an organization; performance capability is used to produce maximum output from existing inputs [1]. Efficiency is measured by calculating the optimal level of output with the available input levels, or by assessing the minimum level of input needed to produce a certain output. During the process, the causes of inefficiency in the activity are also identified. The DEA method has been widely used to model the school production process. Huguenin [2] and Fatimah & Mahmudah [3] evaluated the efficiency of primary schools in Switzerland and Indonesia, respectively. Meanwhile, Badri et al. [4], Yuan & Shan [5] and Minuci et al. [6] performed efficiency measurements of secondary schools in Abu Dhabi, China and West Virginia. DEA is a non-parametric methodology based on linear programming, through a mapping of the production frontier, that is also used to analyze production functions [7]. The DEA model was first introduced by Charnes et al. [8]. This DEA model assumes a constant returns to scale (CRS) condition, under which each DMU operates at an optimal scale. The model was later developed by Banker et al. [9], which is known as DEA-BCC. The DEA-BCC model assumes that changes in a unit's inputs or outputs affect its productivity, a condition known as Variable Returns to Scale (VRS). The work at hand aims to develop an efficiency model of the Indonesian education system and to provide improvement targets for the inefficient provinces.
The focus of the efficiency measurement includes various education levels, from primary school and junior high school to senior and vocational high school, in order to capture a real picture of the education efficiency model at the national level.
EFFICIENCY MODEL OF INDONESIAN EDUCATION SYSTEM
Related studies that evaluate efficiency in education differ broadly with respect to their variables and methods. Some proxies are derived from the objective statement of the Indonesian government to increase the availability, quantity and service level of the national educational infrastructure. The formulation of the DEA model in this study refers to the vision and mission of the Indonesian Ministry of Education and Culture, as well as to prior research, adjusted to data availability. From the input side, the school budget allocation is used in this study since it constitutes a large portion of government spending; however, each province obtains a different amount of budget allocation. Some studies used education cost (EC) as an important resource [10]; [11]; [12]. The input variable in this study is used to measure cost efficiency. It cannot be denied that the education cost is the initial capital to operate the whole facilities. This variable selection is also strengthened by the government's obligation to allocate an education budget of at least 20 percent. The other variable, the intermediate output, is used to capture the indirect relationship between the input variables and the output variables. This variable involves the number of teachers, the number of classrooms and the school-age population (SAP) [2]. Since the provinces in Indonesia differ in terms of the number of schools and their sizes, this study controls for both factors by dividing each measure by the number of students. Teacher quality and effectiveness are measured by the teacher-student ratio (TSR) and the classroom-student ratio (CSR). The intermediate output is a manifestation of the education funding allocation input into the educational facilities available to students. The numbers of students and teachers are the two main actors of education, and the classroom is the physical place of the educational process. Therefore, student attendance, available teachers and classrooms are included in the intermediate output. From the output side, some studies used the number of graduates [13], whilst many others also included the number of students [10]; [14]. We follow Haerlermans and Ruggiero [12] and Brennan et al. [11], who used the number of students as the output of the educational process. The number of students is calculated by reducing it by the percentage of students who drop out (100-DR). In addition, the pursuing rate (PR), which indicates the number of graduates who pursue a higher school level, is also taken into account as an intermediate factor of each educational level, except for the final stage. The number of graduates (NG) is defined as the output of the last stage (senior and vocational high school). The number of students, the pursuing rate and the number of graduates are concordant with the mission of the Indonesian Ministry of Education and Culture to realize widespread and equitable access and a fair educational process. The school production process in Indonesia's education system is summarized in Figure 1.
MODEL IMPLEMENTATION
We implement the newly built model to evaluate Indonesia's education system using the latest school year, 2018/2019. The data were obtained from the official publication 'Indonesia Educational Statistics in Brief', which is published annually by The Center for Educational Data and Statistics, Ministry of Education and Culture. Decision Making Units (DMUs) are defined as the units to be analyzed in this study. The number of DMUs is determined based on the total number of provinces in Indonesia, which is 34. The efficiency of each province is analyzed at each level of education: primary school, junior high school, senior high school and vocational school. We use output-oriented DEA with the aim of optimizing the existing input variables. On the other hand, the possibility that changes in the input variables could form recommendations for improvement is not ruled out. DEA-VRS is also selected in this study under the assumption that the scale of production may affect efficiency.
The efficiency scores
In this study, we use the output-oriented DEA method with the calculation of cost efficiency (CE), technical efficiency (TE), overall efficiency (OE) and scale efficiency (SE). A DMU is considered efficient when the value of its efficiency score equals 1.
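To make the computation concrete, the following is a minimal sketch of an output-oriented DEA-BCC (VRS) linear program solved with SciPy. It is illustrative only: the single-output setup, variable names and the toy data are placeholder assumptions, not the paper's actual multi-efficiency model or provincial dataset.

```python
# Minimal sketch of output-oriented DEA-BCC (VRS) for one DMU, using SciPy.
# X (n_dmus x n_inputs) and Y (n_dmus x n_outputs) are illustrative data.
import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(X, Y, o):
    """Return phi for DMU o; the VRS technical efficiency score is 1/phi."""
    n, m = X.shape          # number of DMUs, number of inputs
    _, s = Y.shape          # number of outputs
    # Decision variables: [phi, lambda_1, ..., lambda_n]; maximise phi.
    c = np.concatenate(([-1.0], np.zeros(n)))
    A_ub, b_ub = [], []
    for i in range(m):      # sum_j lambda_j * x_ij <= x_io
        A_ub.append(np.concatenate(([0.0], X[:, i])))
        b_ub.append(X[o, i])
    for r in range(s):      # phi * y_ro - sum_j lambda_j * y_rj <= 0
        A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r])))
        b_ub.append(0.0)
    A_eq = [np.concatenate(([0.0], np.ones(n)))]   # VRS: sum lambda = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Toy example: 4 "provinces", 2 inputs (cost, teacher-student ratio),
# 1 output (students remaining in school).
X = np.array([[5.0, 2.0], [6.0, 2.5], [4.0, 3.0], [7.0, 1.5]])
Y = np.array([[90.0], [85.0], [80.0], [95.0]])
print([round(bcc_output_efficiency(X, Y, o), 3) for o in range(4)])
```

A DMU with phi = 1 lies on the efficient frontier; phi > 1 indicates the proportional output expansion that would be required to reach it.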
Improvement Targets
The determination of these improvements uses two types of targets: one refers to the strong efficient frontier and the other to the weak efficient frontier. Improvement targets for all inefficient provinces are obtained, but in this paper one example of improvement targets for overall efficiency is given for the province of Gorontalo, as seen in Table 2. In Table 2, it can be seen that Gorontalo can increase its efficiency by projecting onto the strong efficient frontier: reducing the classroom-student ratio to 45.52 (which certainly needs to be considered), raising the pursuing rate target to 88.56 and lowering the dropout rate by as much as 0.31. In the weak projection, it is necessary to increase the pursuing rate to 81.95 and lower the dropout rate by as much as 0.37; this projection is considered to be more realistic.
CONCLUSION
This study contributes theoretically to research on education efficiency by adopting DEA at the national level. Maximum overall efficiency was achieved by 16 provinces in primary schools, 15 provinces in junior high schools, 20 provinces in senior high schools and 14 provinces in vocational high schools. Inefficiency derived from the cost value is relatively low, which indicates that technical efficiency is better than cost efficiency. This empirical finding leads to the conclusion that extra attention should be given to the teacher-student ratio, in order to equalize the number of teachers across regions.
Furthermore, the national best-performing variable is the number of students remaining in school, which requires only a relatively small improvement; this good condition therefore needs to be maintained. Further research can be expanded to other variables, such as student quality measured by school grades and teacher quality based on their most recent education; in addition, combining DEA with other methods can lead to more comprehensive future research.
"Education",
"Economics"
] |
A Dominant-negative Receptor for Type β Transforming Growth Factors Created by Deletion of the Kinase Domain*
To prove the postulated role of type β transforming growth factors (TGFβ) in cardiac development and other events, specific inhibitors of TGFβ signal transduction are needed. We truncated the type II TGFβ receptor cDNA (ΔkTβRII) to delete the predicted serine/threonine kinase cytoplasmic domain. ΔkTβRII was co-transfected into neonatal cardiac myocytes, together with reporter constructs for two cardiac-restricted genes that are regulated antithetically by TGFβ. ΔkTβRII impaired activation of the skeletal α-actin promoter by TGFβ1, -2, and -3 and, conversely, impaired TGFβ inhibition of α-myosin heavy chain transcription. Thus, a kinase-defective TβRII blocks signaling by all three mammalian TGFβ isoforms, and can disrupt both positive and negative control of transcription by TGFβ.
(11), and to control the expression of at least six cardiac-restricted genes (12,13). Unlike the global suppression of differentiated gene expression by TGFβ in skeletal muscle (14-16), neonatal cardiac myocytes possess a continuum of responses to TGFβ1: up-regulation of a gene ensemble, including skeletal α-actin (SkA), expressed preferentially in fetal myocardium, concurrent with down-regulation of genes including α-myosin heavy chain (αMHC) that are associated with adult ventricular muscle (12,13), dichotomous responses which correspond to the generalized "fetal" phenotype produced by mechanical load (4,17). Positive and negative control of developmentally regulated genes thus coexist in this system, making the cardiac myocyte particularly intriguing as a model for studies of TGFβ signal transduction. Investigations of pluripotent cell lines (18), amphibian cardiac progenitor cells (19), and avian cardiac endothelium (20) also suggest that TGFβ-related peptides might regulate cardiac organogenesis itself. However, mechanistic tests of this hypothesis require a suitable inhibitor of the TGFβ signaling cascade and TGFβ-dependent gene expression.
Neonatal cardiac myocytes possess all three of the characteristic cell surface receptors for TGFβ (TβR) visualized by receptor cross-linking (21). Expression cloning proved the type II TβR, a 75-kDa glycoprotein, to possess an intracellular domain distinct from the four classes of tyrosine kinase found in the receptors for platelet-derived, epidermal, insulin-like, and fibroblast growth factors (22). Instead, TβRII resembles the type II receptor for activin, a distant member of the TGFβ superfamily, and Daf-1, a protein controlling larva formation in Caenorhabditis elegans; all three constitute a novel class of transmembrane protein with a consensus serine/threonine kinase as the predicted cytoplasmic signaling domain (22-25).
TβRII is fully functional in the absence of the type III receptor (β-glycan), illustrated by the absence of this proteoglycan from L6 myoblasts (14,26,27). TβRII is competent to bind TGFβ in the absence of the type I TβR, a 53-kDa protein whose structure has not yet been defined, but signal generation apparently requires a heteromeric protein complex involving both receptors I and II (28).
Kinase-defective mutations of receptor tyrosine kinases are known to inhibit the function of wild-type receptors, possibly by a block to the intermolecular autophosphorylation that follows ligand-induced dimerization (29-31). Although the corresponding initial aspects of TGFβ signal transduction are less completely understood, we reasoned that a truncated TβRII, lacking the serine/threonine kinase domain, would function as a dominant inhibitor of TGFβ-dependent transcription. We have used the cardiac myocyte model to demonstrate that the truncated TβRII confers resistance to TGFβ control of developmentally regulated cardiac genes.
EXPERIMENTAL PROCEDURES
Plasmids-To generate the truncated human TβRII by PCR amplification, each 100-µl reaction mixture contained 10 ng of TβRII clone H2-3FF (22), 600 ng of the primers shown, 200 µM of each dNTP, 50 mM KCl, 1.5 mM MgCl2, 10 mM Tris-HCl, pH 8.0, and 5 units of Taq polymerase (Promega). Amplification comprised 5 min of initial denaturation at 94 °C, then 30 cycles (1 min at 94 °C, 1.5 min at 72 °C, and 1 min at 60 °C) using a Perkin-Elmer Cetus DNA thermal cycler. The final extension reaction was for 7 min at 72 °C. The resulting PCR product was analyzed on an 8% polyacrylamide gel and had the expected size of 883 nucleotides. For directional subcloning, the products of three PCR reactions were combined, purified with a Centricon 100 spin column, digested with EcoRI and HindIII, and loaded on a 1.2% agarose gel. The DNA band was excised, and DNA was isolated with the QIAEX gel extraction kit (Qiagen). For expression in eukaryotic cells, ΔkTβRII was subcloned between the EcoRI and HindIII sites of pSV-Sport1 (GIBCO/BRL), under the control of the SV40 early promoter and enhancer. The truncated activin receptor cDNA comprised nucleotides 36-787 of pmActR2 (23). Clones were sequenced by the dideoxy method using Sequenase 2.0 (U.S. Biochemical Corp.).
The skeletal α-actin reporter, -394/+24SkALuc, was constructed by subcloning nucleotides -394 to +24 of the chicken SkA gene as an RsaI-HindIII fragment between the SmaI and HindIII sites of the firefly luciferase reporter expression vector pXP1 (32). The αMHC-luciferase reporter, 5500αMHCLuc, was prepared by subcloning the intergenic region between the murine βMHC and αMHC gene loci as a 5.5-kilobase pair KpnI-HindIII fragment (33) into pXP1 (Fig. 2B). This fragment was selected because previous studies had proven its fidelity to tissue-restricted, stage-specific, and thyroid hormone-dependent expression of endogenous αMHC (33), and because TGFβ-inhibited elements cannot yet be ascribed to smaller portions of the gene. The constitutive β-galactosidase expression vector, pCMVβ, places the Escherichia coli lacZ gene under the transcriptional control of the cytomegalovirus (CMV) immediate-early promoter (34).
Cell Culture and Transfection-Neonatal cardiac myocytes were isolated as previously described from 1-2-day-old rats (12,13). Myocytes were purified by density centrifugation through a Percoll step gradient (1.050 g·ml⁻¹, 1.060 g·ml⁻¹ and 1.082 g·ml⁻¹ Percoll in 116 mM NaCl, 406 µM MgCl2, 11 mM NaH2PO4, 5.5 mM glucose, 39 mM HEPES, pH 7.3, 0.002% phenol red) and were plated at a density of 1 × 10⁶ cells/35-mm dish (Primaria, Falcon). Cells were cultured overnight in Dulbecco's modified Eagle's medium/Ham's nutrient mixture F-12 (1:1), 17 mM HEPES, 3 mM NaHCO3, 2 mM L-glutamine, 50 µg·ml⁻¹ gentamicin, 10% horse serum, and 5% fetal bovine serum. Cells were transfected 24 h after plating by a diethylaminoethyl-dextran sulfate method (10 µg of ΔkTβRII or pSV-Sport1, 7.5 µg of a luciferase reporter construct, 2.5 µg of CMV-lacZ). Cells were incubated with the DNA-DEAE-dextran complex for 3 h, then for 60 s with 10% dimethyl sulfoxide in Dulbecco's modified Eagle's medium. Cells were cultured overnight in the medium described above, which was replaced on the following day by serum-free medium containing 1 µg·ml⁻¹ insulin, 5 µg·ml⁻¹ transferrin, 1 nM Na2SeO4, 1 nM LiCl, 25 µg·ml⁻¹ ascorbic acid, and 0-1.0 nM thyroxine (12,13). Porcine TGFβ1 and -2 and chicken TGFβ3 (R&D Systems) were added at a final concentration of 1 ng·ml⁻¹, and the medium and growth factor were replaced at 16 h.
Luciferase and β-Galactosidase Assays-Cells were harvested after 36 h in the presence or absence of TGFβ, in 150 µl of 25 mM Tris phosphate, pH 7.8, 2 mM dithiothreitol, 2 mM CDTA, 10% glycerol, 0.1% Triton X-100. Luciferase activity was monitored as the oxidation of luciferin in the presence of coenzyme A (35), using an Analytical Luminescence model 2010 luminometer. For lacZ determinations, extracts were incubated with 4.85 mg·ml⁻¹ chlorophenol red-β-D-galactopyranoside (Boehringer Mannheim), 62.3 mM Na2HPO4, 1 mM MgCl2, 45 mM β-mercaptoethanol for 1-4 h at 37 °C (36), and activity was measured as absorbance at 575 nm. Results were compared by Scheffé's multiple comparison test for analysis of variance and the unpaired two-tailed t test, using a significance level of p < 0.05.
RESULTS
To construct the truncated TβRII (ΔkTβRII), we subjected the human TβRII cDNA H2-3FF (22) to PCR amplification, using primers that correspond to nucleotides 306-326 and 1153-1172 and incorporated asymmetric linkers for directional cloning (Fig. 1). To determine whether ΔkTβRII could prevent up-regulation of a "fetal" cardiac gene by TGFβ1, we co-transfected neonatal rat cardiac myocytes with (i) ΔkTβRII or the equivalent vector lacking insert, to control for promoter competition by SV40 sequences; (ii) the SkA-luciferase reporter, -394/+24SkALuc; and (iii) the constitutive lacZ gene, pCMVβ, to correct for transfection efficiency, cell recovery, and potential global effects of the growth factors (Fig. 2). Numerical values for expression of the actin and myosin reporter genes shown below (luciferase, corrected for lacZ) are normalized to that in simultaneous cultures in the absence of TGFβ and ΔkTβRII. In overall agreement with previous results using a related SkA reporter (13), the -394/+24SkALuc construct was expressed at levels at least 100-fold greater than in parallel cultures of cardiac fibroblasts and was up-regulated by 1 ng·ml⁻¹ TGFβ1 (2.748 ± 0.382, compared to the vehicle control; p = 0.0005; Fig. 2A). ΔkTβRII reduced SkA expression in the presence of TGFβ1 to the basal level found in vehicle-treated cells (p = 0.0005, versus the vector control). By contrast to expression triggered by exogenous TGFβ1, whose suppression by the truncated TGFβ receptor was virtually complete, ΔkTβRII had little effect on basal, tissue-specific transcription of the SkA promoter. The modest inhibition which was observed was reproducible and significant (0.670 ± 0.072; p = 0.0156), consistent with findings by Roberts et al. (11) that TGFβ is secreted in culture by neonatal rat cardiac myocytes and acts in an autocrine fashion on the cells. The block to SkA induction was specific to ΔkTβRII; no inhibition of basal or TGFβ-induced transcription resulted from ΔkAcRII, a corresponding truncation of the murine type II activin receptor. By contrast, as shown in Fig. 3, exogenous full-length TβRII amplifies the induction of SkA transcription by TGFβ1. Thus, the block to TGFβ-dependent transcription by ΔkTβRII is contingent on truncation of the cytoplasmic domain. One stringent test for the specificity of dominant-negative mutations is whether exogenous wild-type protein can rescue the mutant phenotype. Increasing the amount of full-length TβRII cDNA progressively restored the responsiveness of cardiac muscle cells to TGFβ1, despite a constant amount of the expression vector encoding ΔkTβRII. As was true for the truncated activin receptor (37), complete rescue required less than stoichiometric amounts of the wild-type receptor (cf. Fig. 3). Biological actions of the TGFβ isoforms, while often similar, differ drastically in some systems. As one illustration, only TGFβ3 is implicated in the epithelial-mesenchymal transformation required for creation of cardiac valves (20). Therefore, to ascertain whether ΔkTβRII might disrupt signaling by more than one form of TGFβ, we first tested if neonatal rat cardiac myocytes in fact possess transcriptional responses to TGFβ2 and -3 (Fig. 2). TGFβ2 and -3 each induced the SkA promoter to at least the same extent as TGFβ1 (3.882 ± 0.506 and 3.910, respectively). To facilitate analysis of a TGFβ-inhibited pathway in cardiac muscle cells, we generated an αMHC-luciferase construct, since αMHC is the cardiac gene whose expression, at the mRNA level, is repressed most completely by TGFβ1 (12,13).
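As a small illustration of the normalization just described (with hypothetical numbers, not the study's data), reporter activity can be computed as luciferase corrected for lacZ and then scaled to the untreated control cultures:

```python
# Hypothetical illustration of reporter normalization: luciferase corrected
# for beta-galactosidase (lacZ) and expressed relative to control cultures.
import statistics

def normalized_activity(luc, lacz, control_luc, control_lacz):
    """(luc/lacZ) for a sample divided by the mean (luc/lacZ) of controls."""
    control_ratio = statistics.mean(l / z for l, z in zip(control_luc, control_lacz))
    return (luc / lacz) / control_ratio

# Example: one treated well versus three vehicle-treated control wells.
print(normalized_activity(5400.0, 210.0,
                          control_luc=[2000.0, 1900.0, 2100.0],
                          control_lacz=[205.0, 195.0, 200.0]))
```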
As shown in Fig. 4, αMHC-luciferase activity was highly dependent on thyroid hormone (1.000 ± 0.117 versus 0.093 ± 0.028 at 1 and 0 nM; p = 0.0017) and was inhibited nearly 70% by TGFβ1 (0.344 ± 0.088; p = 0.0110). ΔkTβRII specifically abolished down-regulation by TGFβ1, with no effect on up-regulation by T3. Thus, ΔkTβRII impairs TGFβ-dependent signals for both negative and positive control of gene expression, without spurious effects on a TGFβ-independent pathway. In agreement with this evidence that ΔkTβRII specifically disrupts TGFβ-dependent transcription, activity of the CMV-driven lacZ gene was indistinguishable in ΔkTβRII- and vector-transfected cells.
DISCUSSION
Identification of the cDNA sequence for TβRII has provided a critical opportunity to construct a truncated receptor variant, as a reagent to interdict TGFβ signal transduction at the level of the receptor itself. These experiments indicate that deletion of the serine/threonine kinase domain generates a trans-dominant inhibitor of TGFβ signal transduction. In overall agreement with this conclusion, Melton and colleagues (37) have recently shown that a kinase-defective form of the homologous type II activin receptor can block activin-dependent events in early Xenopus embryos. Thus, alteration of the cytoplasmic signaling domain may be a generic strategy for producing loss-of-function mutations, not only in receptor tyrosine kinases but also in those whose action depends on a serine/threonine kinase domain. An additional inference to be drawn from both studies is that mutation of the respective type II receptors is sufficient to repress signal transduction with no need for concomitant mutation of other proteins in the ligand-binding complex (25,28). By analogy to receptor tyrosine kinases, mutations of TβRII and the type II activin receptor might be expected to act through a block to autophosphorylation in trans. Recently, Massague and colleagues (28) have confirmed the prediction that kinase activity is essential for signaling but is superfluous for ligand binding by TβRII. Thus, other mechanisms are plausible to account for the dominant-negative action, including sequestration of TGFβ and activin by the truncated receptors or, conceivably, impaired expression of the normal type II receptor. An additional caveat is the potential, for which credible support exists (38), that unexpected tyrosine kinase activity could also be inherent to this class of transmembrane protein. The ability of all three isoforms of TGFβ to activate the SkA promoter in neonatal rat ventricular myocytes concurs with their shared ability to antagonize depressive effects of interleukin-1β on beating rate and their equivalence for binding to cardiac cells (11). Analogously, the fact that ΔkTβRII blocks gene activation by all three peptides agrees with their equal potency for inhibition of DNA synthesis in receptor-defective DR-27 mink lung cells transfected with the full-length human TβRII (28). Thus, our results with the dominant-negative TβRII corroborate the conclusion that TβRII acts as a receptor for all three mammalian TGFβ isoforms (28).
In contrast to other TGFβ-regulated genes, activation of SkA transcription by TGFβ1 appears to be mediated largely via a proximal serum response factor (SRF)-binding element (SRE) and a potential TEF-1 site, which are indispensable as well for basal, tissue-restricted expression. However, the 3' arm of this SRE possesses an overlapping recognition site for a second SRE-binding protein, the bifunctional transcription factor YY1 (39), a competitive antagonist for SRF at this location (40,41). It is unknown whether TGFβ acts through up-regulation of SRF, modification of SRF, or, conceivably, decreased YY1 activity. cis-Acting sequences for TGFβ repression of αMHC have not yet been delineated, but candidate elements within the 5'-flanking region that was required for cardiac-specific expression in vivo include consensus sites both for SRF and the SRF-related MADS box protein MEF-2 (42).
Genetic methods to obtain a mechanistic understanding of growth factor signal transduction can be confounded by ambiguous or counterintuitive results from conventional techniques used to create gain- or loss-of-function mutations. For example, induction of endogenous Fos and Jun by mechanical stress is associated with up-regulation of atrial natriuretic factor in ventricular muscle cells (17), not repression as seen with forced expression (43). Similarly, despite the importance of homologous recombination, an increasingly encountered shortcoming of this strategy is the risk of a misleading or false-negative outcome after disrupting only one member of a redundant multi-gene family. This may be the case for knock-out mutations in TGFβ1 (44); other recent examples include Fos (45), MyoD (46), and E2A proteins (47). Thus, dominant-inhibitory genes such as ΔkTβRII offer a crucial alternative to other procedures for generating loss-of-function mutations. Indeed, there exist corresponding dominant-negative forms of TGFβ itself (48). The finite growth capacity of cardiac muscle cells in culture precludes stable transfection as a means to uniformly modify ventricular myocytes, which would be required to assess the impact of ΔkTβRII on other aspects of the cardiac phenotype such as endogenous genes and gene products, signaling intermediaries, or DNA synthesis. This limitation can be overcome using replication-defective recombinant adenovirus to achieve efficiencies for gene transfer that approach 100% in neonatal and even adult ventricular muscle cells. However, cell culture model systems substitute only partially for investigations of cardiac organogenesis itself. By analogy to their use in Xenopus oocytes (31,37), dominant-negative genes like ΔkTβRII might serve as a generic approach, complementary to gene ablation, to create loss-of-function mutations in transgenic mammals.
T. Brand, W. R. MacLellan, and M. D. Schneider, unpublished observations.
"Biology"
] |
Modelling the Woven Structures with Inserted Conductive Yarns Coated with Magnetron Plasma and Testing Their Shielding Effectiveness
: The paper proposes the analytic modelling of flexible textile shields made of fabrics with inserted conductive yarns and metallic plasma coating in order to calculate their electromagnetic shielding effectiveness (EMSE). This manufacturing process is highly innovative, since copper plasma coating improves EMSE on the fabrics with inserted conductive yarns of stainless steel and silver with 10–15 dB in the frequency range of 0.1–1000 MHz, as shown by the measured EMSE values determined according to the standard ASTM ES-07 via the Transverse Electromagnetic (TEM) cell. On the other hand, modelling of EMSE for such conductive flexible shields gives an insight on estimating EMSE in the design phase of manufacturing the shield, based on its geometric and electrical parameters. An analytic model was proposed based on the sum of EMSE of the fabric with inserted conductive yarns and EMSE of the copper coating. The measurement results show close values to the proposed analytic model, especially in case of fabric with conductive yarns having stainless steel content.
Introduction
The shielding of electromagnetic non-ionizing radiation by means of flexible textile materials is a well-established field of research in the current context. The use of various electronic devices, mobile phones and other gadgets has yielded significant pollution from electromagnetic (EM) radiation in our environment [1]. Shielding is needed in many applications, since non-ionizing radiation from various sources may cause interference (EMI) with other electronic devices or even cause harmful effects on human health [2,3]. Due to their advantages when compared to metallic shields, such as low weight, good mechanical strength, adaptability to various shapes of objects for shielding, as well as cost-effectiveness, textile materials with electrically conductive properties offer a proper solution to these issues [4].
Several papers tackle this innovative field of research and contributions in this regard may be grouped in various topics. The main topic of research is new manufacturing methods for EMI shielding textiles. Within the cited review paper [5], the following main manufacturing technologies for EMI shielding fabrics are presented: applying intrinsic conductive polymers, incorporating metallic nanoparticles in coatings [6], embedding conductive ingredients into the spinning solutions of fibers and interweaving metallic yarns (silver, copper, steel) with other conventional textile yarns [7,8]. A second topic is given by imparting additional functionalities to EMI shielding: electroless plating was used to deposit Co and Ni coating on Tencel fabrics for enhanced EMI shielding properties and corrosion resistance properties [9]. Another new manufacturing method integrates silver nanowire networks and polyurethane protective layers into the fabrics structure, with outstanding washing durability and chemical stability properties [10].
A third topic of research provides improved or adapted methods for the determination of electric properties of conductive textiles, including electromagnetic shielding effectiveness (EMSE) [11].
A final identified topic of research in this field is modelling of the EMSE for various conductive textile structures. Shielding of EM radiation is an important topic in the field of electromagnetic compatibility [12][13][14]. The main analytic relations for modelling the shielding effectiveness are based on the models originating from the circuit method [15] and the impedance method [16]. In order to fulfill specific conditions occurring in the practical situations where electromagnetic shielding is required, additional analytic relations have been developed and adapted for different geometric shapes of electromagnetic shields under various physical premises.
Estimation of the electrical properties in the case of fabrics is of great importance for the design of applications in relation to end-user requirements [16]. Since the process of manufacturing fabric samples involves a series of preparatory processes, modelling of electromagnetic shielding effectiveness (EMSE) means savings in terms of design duration, material resources and working time [17]. Two main technologies may be distinguished for imparting electrically conductive properties to textile materials: according to [18], these are the insertion of conductive yarns within the fabric structure (woven, knitted and nonwoven fabrics) and the coating of plain fabrics with conductive pastes.
Various analytic relations for estimating the shielding effectiveness (EMSE) have been adapted for both types of technologies. For woven fabrics with inserted conductive yarns, due to their mesh grid structure, the impedance method with correction factors was adapted [19]. Moreover, another analytic relation establishes a weighted sum between the EMSE of the layer and the EMSE of the grid [20]. These relations were applied for Transverse Electromagnetic (TEM) cell measured fabric samples by [21], taking into consideration reflection as the main component of EMSE.
Another shielding model was developed for mesh grid structures, based on the analogy with an RLC electric circuit with lumped elements [22]. Research work of estimating EMSE of mesh grid structures was accomplished by analogy with small aperture antennas [23]. A contribution to model EMSE for woven fabrics in shielding the near EM field based on the circuit method (introduced by H. Kaden [12,15]) was provided within [24]. Regarding the estimation of EMSE for coated fabrics, the main research direction goes for calculating the permittivity coefficient of the coating [18]. Various coating technologies and related analytic methods for estimation of EMSE were provided by [19].
In our research, both technologies for imparting conductive properties to fabrics were combined: fabrics with inserted conductive yarns were coated by magnetron plasma sputtering from a metallic target. Silver and stainless steel yarns were inserted in cotton woven fabrics and the as-obtained textiles were coated with copper thin films. The aim of our research is to model EMSE for this new type of conductive fabric with inserted conductive yarns in warp and weft direction and conductive plasma coating based on the sum of each conductive structure contributing to EMSE, namely the woven fabric with inserted conductive yarns and the copper coating on both sides of the fabric. The validation of these proposed analytic relations was conducted through electric sheet resistivity measurements and EMSE measurements via the TEM cell according to ASTM ES-07 standard.
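To illustrate the kind of additive estimate described above, the following sketch combines two textbook far-field approximations: an aperture-based estimate for a conductive mesh and the reflection-dominated shielding of an electrically thin conductive film. The formulas, conductivities and dimensions are illustrative assumptions, not the exact analytic relations or measured parameters of this study.

```python
# Illustrative additive EMSE estimate for a conductive mesh plus a thin
# metallic coating. Textbook plane-wave approximations; all dimensions and
# material constants are assumed values, not the fabrics of this study.
import math

Z0 = 376.73                 # impedance of free space, ohm
C0 = 3.0e8                  # speed of light, m/s

def se_mesh(freq_hz, opening_m):
    """Aperture-type estimate for a square mesh: SE ~ 20*log10(lambda/(2g))."""
    lam = C0 / freq_hz
    return max(0.0, 20.0 * math.log10(lam / (2.0 * opening_m)))

def se_thin_film(freq_hz, sigma, thickness_m):
    """Electrically thin conductive film: SE ~ 20*log10(1 + Z0/(2*Rs))."""
    rs = 1.0 / (sigma * thickness_m)      # sheet resistance, ohm per square
    return 20.0 * math.log10(1.0 + Z0 / (2.0 * rs))

def se_total(freq_hz, opening_m, sigma, thickness_m):
    """Additive combination of the grid and coating contributions."""
    return se_mesh(freq_hz, opening_m) + se_thin_film(freq_hz, sigma, thickness_m)

# Example: 5 mm mesh of conductive yarns plus a 1.2 um copper coating
# (sigma_Cu ~ 5.8e7 S/m), evaluated at a few frequencies in 0.1-1000 MHz.
for f in (1e5, 1e7, 1e9):
    print(f"{f:.0e} Hz: {se_total(f, 0.005, 5.8e7, 1.2e-6):.1f} dB")
```

Note that these idealized formulas assume a continuous, perfectly bonded conductor, so they tend to overpredict the EMSE of real textile grids; this is precisely why mesh-specific correction factors and measurement-based validation, as described in this work, are needed.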
Materials and Methods
The stainless steel yarns of type Bekinox BK 50/2 were purchased from Bekaert and the silver yarns of type Statex 117/17 dtex were purchased from Statex Produktions-und Vertriebs GmbH companies and used for fabric weaving. Cotton was used as a base material for the textiles. A copper target of 8 × 4 × 0.5 inches of purity 99.999% was purchased from K.J. Lesker and used in the magnetron sputtering system at The National Institute for Laser, Plasma and Radiation Physics (INFLPR).
Materials-Weaving
The woven fabrics based on cotton yarns with inserted conductive yarns were manufactured at SC Majutex SRL, Barnova Iasi. Stainless steel yarns (Bekinox BK 50/2) and silver yarns (Statex 117/17 dtex) were inserted both in warp and weft system on the weaving loom of type SOMET width 1.90 m. The woven fabrics were designed with plain weave for a simple and efficient structure of EM shields, while the basic support yarn was of 100% cotton Nm 50/2. Two types of woven fabrics with inserted conductive yarns of stainless steel (F1) and silver (F3) resulted, having a mesh grid distance of 5 mm.
Materials-Magnetron Plasma Coating
The copper coating onto the textile fabrics was performed at INFLPR into a dedicated stainless steel spherical vacuum chamber (K.J. Lesker, East Sussex, UK), pumped out by an assembly of a fore pump and turbomolecular pump (Pfeiffer, Memmingen, Germany), which allowed the obtaining of a base pressure down to 3 × 10 −5 mbar. A constant argon flow (purity 6.0) of 50 sccm was continuously introduced into the chamber by means of a Bronkhorst mass flow controller, which allowed to establish the processing pressure around 5 × 10 −3 mbar. The chamber is provisioned with a rectangular magnetron sputtering gun from K.J. Lesker, accommodating the high purity copper target. The discharge was ignited by means of a radio frequency generator (13.56 MHz) provisioned with an automatic matching box for adapting the impedance, and the deposition time was set to ensure coating thicknesses in the range 1200-10,000 nm on each side of the textile fabrics. Enhanced deposition uniformity was achieved by rotating the samples during the deposition process (200 rotations/min). Figure 1 presents a sketch of the experimental set-up of the magnetron plasma equipment of INFLPR. Sample F2 resulted by plasma coating of F1 (stainless steel yarns) on both sides with 1200 nm of Copper, while samples F4, F5, F6 and F7 resulted by coating F3 (silver yarns) on both sides with 1200 nm, 1750 nm, 5600 nm and 10,000 nm of copper, respectively. More details regarding the experimental plan considered for the validation of the model is summarized in Figure 2.
Textile Samples
The structural and physical properties of textile samples subjected to modelling and validation of electromagnetic shielding effectiveness (EMSE) are presented in Table 1 for the yarns and Table 2 for the fabrics, emphasizing the data of particular significance for the modelling. The corresponding scheme of the textile samples subjected to modelling and validation of electromagnetic shielding effectiveness (EMSE) is presented in Figure 2.
Morphology and Structure of the Textile Sample
The scheme of woven fabric with inserted metallic yarns and plasma coating is presented in Figure 3. The copper coating on the fabric with inserted metallic yarns does not create a continuous surface layer, but rather an additional electrically conductive grid over the fabric yarns, increasing the fabric's conductivity and its shielding properties. SEM images were meant to evidence the stainless steel and the silver yarns in the fabric structure (Figures 5 and 6). The investigation of copper coated samples, presented in Figure 7, shows that the film is compact and covers the yarns uniformly. A rupture of the film along one fiber allowed the film thickness to be evaluated, with variations caused by the actual positioning in the field of view. The copper coating also appears to have a columnar structure. The SEM images reveal that the average distance between adjacent yarns is around 300 microns, while the distance between two adjacent metallic yarns is 5 mm. In this manner, a combined network of rectangles is formed: a small net originating from the first-order neighbors, combining both cotton and metallic yarns covered by the copper coating, and larger rectangles formed by the metallic yarns woven into the fabric.
Electric Conductivity Measurements
The following relation was used to determine the electric conductivity (σ_m) of the washer-shaped textile shield samples, which were tailored according to the requirements imposed by the ASTM ES-07 standard for the determination of the EMSE.
where a is the inner diameter of the annulus, b the outer diameter, h the fabric thickness and R_w the resistance measured with an ohmmeter [Ω] (Figure 8).
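The relation itself is not reproduced in the extract above; the sketch below assumes the standard expression for radial conduction through an annular sample, σ_m = ln(b/a)/(2πhR_w), which involves exactly the listed quantities (a, b, h, R_w). The values in the example call are illustrative, not measurements from the paper.

```python
import math

def washer_conductivity(a, b, h, r_w):
    """Electric conductivity sigma_m (S/m) of a washer-shaped textile sample.

    Assumes the standard relation for radial conduction through an annulus,
    sigma_m = ln(b/a) / (2*pi*h*R_w), where a and b are the inner and outer
    diameters (m), h is the fabric thickness (m) and R_w is the resistance
    measured between the inner and outer edges (ohm).
    """
    return math.log(b / a) / (2.0 * math.pi * h * r_w)

# Illustrative values only: 30 mm / 100 mm diameters, 0.5 mm thickness, 10 ohm reading
print(washer_conductivity(0.030, 0.100, 0.0005, 10.0))
```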
EM Shielding Effectiveness Measurements
Electromagnetic shielding effectiveness (EMSE) measurement was accomplished according to the standard ASTM ES-07, via a transverse electromagnetic cell (TEM cell). EMSE is defined as the ratio, expressed in dB, between the signal received without and with the shielding sample inserted in the cell. The scheme of the coaxial TEM cell is presented in Figure 9, which also includes the shape of the samples tailored for testing the EMSE with this system. In order to be tested, fabric samples were tailored in an annular shape with an outer diameter of 100 mm and an inner diameter of 30 mm and were fixed onto the cell by means of colloidal Ag paste applied on their borders. The measurement system included a Keysight E8257D signal generator, an IFI SMX50 power amplifier, the coaxial TEM cell model 2000 and a Tektronix MDO3102 oscilloscope. The EMSE measurements were performed within the frequency range of 100 kHz to 1 GHz, in accordance with the ASTM ES-07 standard. EMSE was measured for each of the seven fabric samples.
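The paper's own defining equation for EMSE is not reproduced in this extract; the sketch below uses the common 20·log10 voltage-ratio form, which is consistent with TEM-cell practice but is an assumption rather than the paper's expression, and the voltage values are illustrative.

```python
import math

def emse_db(v_reference, v_shielded):
    """Shielding effectiveness in dB from TEM-cell readings taken without and
    with the fabric sample in place (assumed 20*log10 voltage-ratio definition)."""
    return 20.0 * math.log10(v_reference / v_shielded)

# Illustrative: 100 mV received without the sample, 1 mV with it -> 40 dB
print(emse_db(0.100, 0.001))
```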
Results
The results obtained for the electrical conductivity of the samples investigated according to the scheme depicted in Figure 2 are presented in Table 3. The conductivity of the fabrics containing silver yarns is systematically higher than that of the fabrics containing stainless steel yarns, by about one order of magnitude; at the same time, the conductivity increases upon copper coating of the fabrics, regardless of the type of yarns in the structure (Figure 10). The graphs evidencing the electromagnetic shielding effectiveness in the case of stainless steel-based fabrics are illustrated in Figure 11. In the frequency range from 10^5 to 10^7 Hz they show a shielding of up to 22 dB for the plain textile, with a small increase of around 4 dB upon coating both faces of the material with a 1200 nm copper layer. Figure 12 shows a comparison of the measured EMSE values for the silver-based fabrics, which exceed 45 dB in the frequency range from 10^5 to 10^8 Hz even for the uncoated material. At the same time, the additional coating of the structure leads to enhanced shielding effectiveness, which becomes more pronounced as the copper coating thickness increases, with values exceeding 60 dB for the 10 µm coating at frequencies up to 10^7 Hz. At frequencies above 10^8 Hz, the copper layer thickness has a limited influence on the shielding efficiency, which remains in the range of 30-40 dB.
The Model for Estimating EMSE
The principle that, for combinations of multiple electric shields, the overall EMSE is the sum of the EMSE of the individual shields [19] was applied in order to model the shielding effectiveness of the fabrics with inserted conductive yarns and conductive coatings. Since the structure of such shields comprises the coating of one side, the fabric with inserted conductive yarns, and the coating of the other side, the following relation is proposed for modelling the EMSE:
EMSE_total = EMSE_grid + 2 × EMSE_layer (3)
For EMSE_grid, the relation for electrically conductive grid structures according to [19] is used, and for EMSE_layer, the relation of the impedance method according to [13]. The geometric and electric parameters of both relations were related to the structure of the grid of inserted conductive yarns and to the coated layer. For EMSE_grid, the model related to the woven fabrics with inserted metallic yarns, Equation (4) applies, with the following terms:
A_a = attenuation introduced by a particular discontinuity, dB
R_a = aperture single reflection loss, dB
B_a = multiple reflection correction term, dB
K_1 = correction term to account for the number of like discontinuities, dB
K_2 = low-frequency correction term to account for skin depth, dB
K_3 = correction term to account for the coupling between adjacent holes, dB
In the relations for these terms, h is the fabric thickness (depth of the opening) [m] and r is the distance between conductive yarns (width of the rectangular opening perpendicular to the E-field) [m]. The aperture single reflection loss is
R_a = 20 log10[(1 + 4K^2)/(4K)] [dB] (6)
where K is valid for rectangular apertures and plane waves.
In these relations, S is the area of each hole [cm²] and n is the number of holes per cm². The term K_2 is the single correction factor of the summed analytic relation that incorporates the electric parameters of the yarns (electric conductivity and magnetic permeability) through the skin-depth relation; it is thus a factor with high sensitivity in the overall EMSE relation. The electric parameters were considered for the conductive yarn (not for the fabric), since the ratio p = D/δ_y is a property of the yarn, and the skin depth of the yarn, δ_y, is computed with the yarn's electric parameters. Since D is the diameter of the electric conductor and the fabric structure contains two adjacent metallic yarns (float repeat 2:6 warp and 2:5 weft) of diameter d, the resulting diameter is D = √(d_1 d_2), due to the elliptical shape of the two adjacent metallic yarns, with d_1 = d and d_2 = d + l_c, where l_c = 100/d_we [mm] and d_we is the fabric density in yarns/100 mm.
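A minimal sketch of two quantities with high sensitivity in the grid term: the equivalent conductor diameter D = √(d_1 d_2) of two adjacent metallic yarns and the ratio p = D/δ_y entering the K_2 correction. The standard skin-depth expression δ = 1/√(πfμσ) and the reading of l_c as the yarn spacing 100/d_we are assumptions, and all numeric inputs are illustrative rather than values from the paper.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

def skin_depth(f_hz, sigma_s_per_m, mu_r=1.0):
    """Standard skin depth delta = 1 / sqrt(pi * f * mu * sigma), in metres (assumption)."""
    return 1.0 / math.sqrt(math.pi * f_hz * mu_r * MU0 * sigma_s_per_m)

def equivalent_diameter(d_yarn_m, d_we_per_100mm):
    """Equivalent diameter D = sqrt(d1 * d2) of two adjacent metallic yarns,
    with d1 = d and d2 = d + l_c; l_c is taken here as the yarn spacing
    0.1 m / d_we (an interpretation of the garbled source)."""
    l_c_m = 0.1 / d_we_per_100mm
    return math.sqrt(d_yarn_m * (d_yarn_m + l_c_m))

# Illustrative inputs: 0.3 mm yarn diameter, 220 yarns per 100 mm, steel-like conductivity
d_eq = equivalent_diameter(0.3e-3, 220.0)
p = d_eq / skin_depth(1e6, 1.4e6)   # ratio p = D / delta_y at 1 MHz
print(d_eq, p)
```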
For EMSE_layer, the model related to the shielding of the copper coating is given by the general expression of the impedance method according to [13], where δ_m is the skin depth of the copper-coated fabric with inserted metallic yarns [m], γ the propagation constant, α the attenuation constant and β the phase constant:
γ = α + jβ = √(jωμ_m(σ_m + jωε_m)); for metals, since σ >> ωε, γ = √(jωμ_m σ_m) or γ = (1 + j)√(π f μ_m σ_m), so that α = β = √(π f μ_m σ_m).
The following relations are set for the impedance of the textile shields (Z_m) and the wave impedance of free space (Z_0):
Z_m = √(jωμ_m/(σ_m + jωε_m)) (15)
Z_0 = 377 Ω (16)
where ω = 2πf is the angular frequency. Since the textile shields considered in this work contain metal coatings and yarns, the conductivity is assumed to be very large compared with that of air, meaning that σ_m >> ωε. This condition is verified for the sample with the lowest electric conductivity (F1), σ_m = 45.60 S/m (Table 3), against ωε_0 = 0.0556 S/m for f = 1 GHz; hence, the condition σ >> ωε is valid for all samples. The shield impedance can then be written as Z_m = (1 + j)√(π f μ_m/σ_m). In terms of the skin depth of the coating, δ_m, the modulus of the shield impedance is |Z_m| = √2/(σ_m δ_m) (17). Skin depth is defined as the distance from the metal surface at which the current density drops to 1/e of its value at the surface; from (15) and (17), the definition of the skin depth of the copper coating is obtained as δ_m = 1/√(π f μ_m σ_m). By applying the general Equation (3) for the calculation of EMSE for the samples involved in the present study, we obtained the red curves in Figures 13 and 14 for the fabrics with inserted stainless steel yarns and the red curves in Figures 15-18 for the fabrics with inserted silver yarns with different copper layer thicknesses.
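A sketch of the quantities entering the impedance method for the coated fabric: the skin depth δ_m, the modulus of the shield impedance |Z_m| = √2/(σ_m δ_m), and the good-conductor check σ_m >> ωε_0 discussed above. The full EMSE_layer expression of [13] is not reproduced; the numeric values (the F1 conductivity, 1 GHz) follow the check in the text, and the relative-permeability default of 1 is an assumption.

```python
import math

MU0 = 4e-7 * math.pi    # vacuum permeability, H/m
EPS0 = 8.854e-12        # vacuum permittivity, F/m
Z0 = 377.0              # wave impedance of free space, ohm (Eq. 16)

def skin_depth(f_hz, sigma, mu_r=1.0):
    """delta_m = 1 / sqrt(pi * f * mu_m * sigma_m), good-conductor approximation."""
    return 1.0 / math.sqrt(math.pi * f_hz * mu_r * MU0 * sigma)

def shield_impedance_modulus(f_hz, sigma, mu_r=1.0):
    """|Z_m| = sqrt(2) / (sigma_m * delta_m), equivalent to sqrt(omega * mu_m / sigma_m)."""
    return math.sqrt(2.0) / (sigma * skin_depth(f_hz, sigma, mu_r))

# Good-conductor check from the text: sigma_m >> omega * eps_0 for sample F1 at 1 GHz
f_hz, sigma_f1 = 1e9, 45.60
omega_eps0 = 2.0 * math.pi * f_hz * EPS0            # ~0.0556 S/m, as quoted
print(sigma_f1 > omega_eps0, omega_eps0)
print(skin_depth(f_hz, sigma_f1), shield_impedance_modulus(f_hz, sigma_f1), Z0)
```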
Discussion
The novel structure of textile shields made of fabrics with inserted conductive yarns and metallic plasma coating has a geometry that is difficult to model in order to accurately calculate the EMSE. Several analytic models have been applied in the planning phase of this research study: the impedance method [12-14,16]; the circuit method [12,15]; the impedance method with correction factors for conductive grid structures [12,19]; and the impedance method for multiple shields [19].
The proposed approach to model these structures was to add the EMSE of the three conductive structures: the grid of metallic yarns inserted into the woven structure and the copper coatings on the two sides of the fabric. The proposed analytic model considers both the geometric and the electrical parameters of the fabric with inserted conductive yarns and of the conductive coating. However, the analytic model distinguishes between the skin depth of the yarn (δ_y) and the skin depth of the fabric material (δ_m). Both electrical parameters entering the skin depth (electric conductivity and magnetic permeability) were measured and calculated in a first phase for the metallic yarns and the coated fabrics. The geometric parameters with high sensitivity were the thickness of the fabric and the diameter of the metallic yarn. The equivalent diameter of the two metallic yarns was computed as the diameter of the circle having the same area as the resulting ellipse formed by the two adjacent metallic yarns in the fabric structure. The distance between the yarns was considered for computing the diameter of the ellipse, which was given by the fabric density (d_we).
All geometric and electric parameters of the achieved shields were considered within the proposed calculation of EMSE:
- Electric conductivity and magnetic permeability of the metallic yarns;
- Optical diameter of the metallic yarns and equivalent diameter of the electric conductor;
- Distance between metallic yarns of the woven fabric, depending on float repeat and weave;
- Electric conductivity and magnetic permeability of the fabric;
- Fabric thickness;
- Thickness of the plasma-coated layer.
In the case of the fabrics with stainless steel yarns (F1), the model estimates the fabric with inserted yarns quite well, with differences in the range 1-8 dB over the whole frequency range. An even better fit to the measured EMSE is obtained for the fabric with inserted yarns and copper coating (F2) by the EMSE_grid relation plus the additional EMSE_layer term, with differences between the modelled and measured values of less than 5 dB, as shown in Figure 19.
In the case of the fabric with silver yarns (F3), the model EMSE_grid underestimates the measured values, a fact which could be explained by the two parameters of the model with high sensitivity: the electric conductivity and the equivalent diameter of the silver yarn. The electrical linear resistance of the silver yarn presented different values for different measurements, which is explainable by its non-homogeneous structure and the general terms of its specification (R_l < 1.5 kΩ/m) [25]. The measured value of the silver yarn conductivity introduced into the model is a potential source of the underestimated values of the EMSE_grid relation.
The fabrics F4 and F5 show a significant difference between modelled and measured values of around 20 dB in the frequency range 10^5 to 10^7 Hz, which could be explained by the low values of the EMSE_layer model for coating thicknesses on the order of 10^3 nm: 1200 nm (F4) and 1750 nm (F5). On the other hand, the EMSE_layer model yields significantly increasing values for 5600 nm (F6) and 10,000 nm (F7), which makes EMSE_total reach the measured values for F6 and F7, as shown in Figure 20. These results show that the steady increase of the fabric conductivity upon copper coating, of 3.2 times for F6 and 3.6 times for F7 with respect to the uncoated fabric, plays an important role in the EMSE_layer model. These facts suggest a significant role played by the conductivity of the components in the model. One has to consider that this type of composite EM shield is quite difficult to model and that the proposed EMSE relation includes all the parameters of the electric structures of this composite shield.
The differences between the calculated and measured EMSE values (Figures 19 and 20) are due to the fact that the ideal conditions assumed in the theoretical model, which is based on the isomorphism between an infinite plane shield placed in free space and the washer-shaped sample placed in a coaxial line (homogeneous material sample, perfect electrical contact between sample and sample holder (TEM cell)), are difficult to achieve in practice. The electrical contact between the sample and the sample holder becomes very important at high frequencies. Moreover, when using a coaxial TEM cell for determining the EMSE of a material, higher transmission modes appear at high frequencies, which can affect the measurement results [11]. Also, given the composite structure of the proposed electromagnetic shields (textile yarns, conductive yarns, conductive coating), other phenomena may occur that would not usually occur in a perfectly homogeneous material; these could affect the EMSE.
Conclusions
This paper proposes a novel type of textile shield: fabrics with inserted conductive yarns and a metallic coating obtained by magnetron sputtering deposition. The results regarding the electromagnetic shielding effectiveness (EMSE) of these fabrics show that the metallic plasma coatings applied additionally to fabrics with inserted conductive yarns contribute 10-15 dB to the overall EMSE in the frequency range 0.1-1000 MHz and therefore significantly enhance the material functionality. The utilization of flexible textile shields would open up new practical opportunities, and the modelling of the EMSE is therefore particularly important. As such, in the present paper we considered the combination of multiple electric shields originating from the initial fabric structure with inserted metallic yarns and the coating of the fabric on both faces with a conductive copper layer of various thicknesses.
Each of the contributions to the overall EMSE was analytically determined according to Equations (3)-(19), namely EMSE_grid and EMSE_layer, and the obtained values were combined to model the shielding effectiveness of the fabrics with inserted conductive yarns and conductive coatings. The modelling approach is meant to allow the EMSE to be estimated in the design phase of the textile shield. Although there are still differences between calculated and measured results, it is considered that the analytic model based on adding the particular contributions to the EMSE of the metallic grid and of the metallic coating gives valuable guidance when designing this type of textile shield. | 10,260.6 | 2021-03-24T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
PHYSOR 2020: Transition to a Scalable Nuclear Future Cambridge, United Kingdom, March 29th-April 2nd, 2020 EXPERIMENTAL DETERMINATION OF THE ZERO POWER TRANSFER FUNCTION OF THE AKR-2
The transfer function is a basic characteristic of every nuclear reactor. It describes how a perturbation at a given place and time influences the neutron flux. In case of a known perturbation, the determination of characteristic reactor parameters is possible. The present paper shows an experimental method to determine the gain of the zero-power reactor transfer function (ZPTF) of the AKR-2 reactor at TU Dresden and the comparison to the theoretical shape of the ZPTF derived from kinetic parameters simulated with MCNP. For the experiments, a high-precision linear motor axis is used to insert an oscillating perturbation acting at frequencies smaller than the lower bound of the plateau region of the ZPTF. For higher frequencies, a rotating absorber is used. This device emulates an absorber of variable strength. The reactor response is detected with a He-3 counter. The data evaluation shows good agreement between measured and corresponding theoretical values of the gain of the ZPTF.
INTRODUCTION
The transfer function is a basic characteristic of every nuclear reactor. It describes how a perturbation at a given place and time influences the reactor's state variables [1]. The concept of a transfer function is reliably applicable as long as the system's behavior can be approximated by a linear time-invariant dynamical system. The knowledge of the transfer function may allow determining the type of the perturbation from the measurements of the induced fluctuations of the neutron flux or, in case the perturbation is known, the determination of characteristic reactor parameters [2]. The determination of the reactor transfer functions has been performed since the early days of reactor research [3]. Nevertheless, in current safety research, the inversion of the transfer function as a method of incident detection is of particular interest [4]. In this context, the gain of the transfer function of the AKR-2 reactor at TU Dresden is experimentally determined by use of a vibrating absorber and an absorber of variable strength. Since thermal feedback effects in the AKR-2 core can be neglected during normal operation, the AKR-2 can be assumed to be a zero-power reactor. Moreover, the dynamical behavior of the AKR-2 can be described by the point-kinetic approximation. In such conditions, the transfer function is determined by the zero power transfer function (ZPTF), with the applied perturbation converted into its reactivity effect. For the experiments, a high-precision linear motor axis is used to insert an oscillating perturbation acting at frequencies smaller than the lower bound of the plateau region of the ZPTF. For higher frequencies, a rotating absorber is used. This device emulates an absorber of variable strength. The reactor response is detected with a He-3 counter placed inside the reactor. Using this setup, the gain of the ZPTF of the AKR-2 is measured through the analysis of the input signals (movement of the absorber) and the output signals (induced fluctuations in neutron flux). For an assessment of the measured ZPTF, it is compared with the theoretical shape of the ZPTF derived from kinetic parameters that were simulated with an MCNP model of the reactor. Based on the knowledge of the ZPTF, the investigation of neutron flux fluctuations using the same experimental setup will be possible in the future. Conceivable research fields could be the generation of high-precision data with a high level of fidelity for the validation of computer codes, inducing one or more perturbations. Additionally, a direct evaluation of unknown cross sections of different materials with the pile-oscillator method would be possible.
THEORY OF ZERO POWER TRANSFER FUNCTION
The transfer function G(s) describes the transfer behavior of a system that reacts to an input signal with a corresponding output signal in the frequency domain. For a nuclear reactor, the input signal is a reactivity deviation that leads to a local or global deviation of the reactor's state variables. For a zero power reactor, thermal and other feedback effects are neglected, and input and output are coupled only by the ZPTF [5]. The input signal is a deviation of the reactivity δρ(s), which leads to a change in the neutron flux Φ; this transfer is described by the transfer function G(s). The mathematical description of the ZPTF is derived from the point kinetic approximation; the derivation is widely described in many publications [1,2]. In the frequency domain the transfer function reads
G(s) = n_0 / ( s · [ l + Σ_i β_i/(s + λ_i) ] ) (1)
where β_i is the fraction of delayed neutrons and λ_i the decay constant of the precursors, both for the i-th group of precursors, and l is the generation time of the prompt neutrons. The initial neutron number n_0 is commonly used as normalization factor. The gain of the ZPTF is represented by the absolute value of G(s). For a sinusoidal perturbation, s = jω.
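A sketch of the gain of the ZPTF in the standard point-kinetics form of Equation (1). The six-group delayed-neutron constants below are commonly tabulated approximate U-235 values, not the MCNP-derived AKR-2 parameters of Table I, and the generation time is the value quoted later in the paper.

```python
import math

def zptf_gain(omega, betas, lambdas, gen_time, n0=1.0):
    """|G(j*omega)| of the zero power transfer function, Eq. (1):
    G = n0 / ( j*omega * ( l + sum_i beta_i / (j*omega + lambda_i) ) ).
    """
    jw = 1j * omega
    denom = jw * (gen_time + sum(b / (jw + lam) for b, lam in zip(betas, lambdas)))
    return abs(n0 / denom)

# Approximate six-group U-235 data (illustrative, not the AKR-2 values of Table I)
betas = [0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273]
lambdas = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]          # 1/s
gen_time = 57.2956e-6                                          # s, value quoted below
for f_hz in (0.05, 1.0, 12.0):                                 # experiment frequencies
    print(f_hz, zptf_gain(2.0 * math.pi * f_hz, betas, lambdas, gen_time))
```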
DESCRIPTION OF THE EXPERIMENTAL SETUP
The experiments to determine the ZPTF were carried out at the AKR-2 reactor with experimental equipment that was developed in the context of the European project CORTEX [4] for the validation of simulation tools. The low frequency range (0.05 Hz to 1 Hz) was examined with a vibrating absorber and the higher frequencies (0.5 Hz to 12 Hz) with an absorber of variable strength. In the following, the reactor is briefly introduced, as well as the two different experimental setups.
The AKR-2 reactor
The AKR-2 reactor [6], located at TU Dresden, is a thermal, zero-power reactor with an allowed maximum thermal power of 2 W. It is used mainly for training and teaching purposes, and additionally for research. The core has a cylindrical shape with a diameter of 250 mm and a height of 275 mm. The disk-shaped fuel elements consist of a homogeneous dispersion of polyethylene moderator and uranium oxide, which is enriched to 19.8 %. For safety reasons the core is separable, with the lower half being movable. For the basic start-up procedure, the core halves are brought together. The operation is controlled via three control and safety rods installed next to the fuel zone, consisting of cadmium sheets on polyethylene blocks. A graphite reflector with approx. 32 cm thickness surrounds the core. The biological shield consists of two cylindrical walls of 15 cm and 58 cm thickness, made of paraffin and heavy concrete. The reactor is accessible through seven horizontal and two vertical experimental channels. The setup of the AKR-2 is illustrated in Figure 1.
Vibrating absorber
The vibrating absorber triggers a repetitive perturbation in the reactor. For the AKR-2, this is realized with the help of a linear motor axis that moves a shaft containing a cadmium absorber within the experimental channel 1-2. The absorber is a cadmium cylinder with a diameter of 12.7 mm and a height of 1.016 mm. It consists of 99.9656 % natural cadmium with minor impurities of other metals. The setup can be seen in Figure 2 a). The linear motor axis can execute different motion profiles with a maximum amplitude of 15.5 cm, a positioning accuracy of 10 μm and a repositioning accuracy of 10 μm. Because of safety limitations of the driver software of the axis, a repeatable motion profile can only be guaranteed up to a frequency of 1 Hz. Frequencies lower than those realized in the shown experiments are not possible because the reactor response would exceed the reactor's safety (power) margins. For the experiments, a rectangular motion shape was chosen with an amplitude of 2 cm (between 7 cm and 11 cm from the core center). This ensures the movement of the absorber inside a zone with a linear decrease of the neutron flux, resulting in a linear transfer of the reactivity perturbation. The induced reactivity over the absorber position is shown in Figure 2 b). The reactivity difference between the maximum and the minimum position of the absorber is 0.055 $. The reactivity influence over the movement path was determined via the compensation method with the help of one of the control and safety rods. The movement of the axis is controlled and recorded by the programmable logic controller of the linear motor axis via an absolute position encoder.
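A sketch of how the input signal can be constructed from the described motion: a rectangular profile between 7 cm and 11 cm from the core centre and the stated linear reactivity transfer of 0.055 $ between the end positions. The linear mapping and the idealised square-wave drive are assumptions consistent with the description, not the measured calibration curve of Figure 2 b).

```python
import numpy as np

POS_IN_CM, POS_OUT_CM = 7.0, 11.0     # absorber travel limits (distance from core centre)
RHO_SPAN_DOLLAR = 0.055               # stated reactivity difference between end positions

def position_to_reactivity(pos_cm):
    """Relative reactivity ($) of the absorber, assuming the linear transfer
    described for the 7-11 cm zone (outer position taken as reference zero)."""
    pos = np.asarray(pos_cm, dtype=float)
    return RHO_SPAN_DOLLAR * (POS_OUT_CM - pos) / (POS_OUT_CM - POS_IN_CM)

def rectangular_profile(t_s, freq_hz):
    """Idealised rectangular motion between the two end positions."""
    return np.where(np.sin(2.0 * np.pi * freq_hz * np.asarray(t_s)) >= 0.0,
                    POS_OUT_CM, POS_IN_CM)

t = np.arange(0.0, 60.0, 0.01)                                   # 60 s at 100 Hz sampling
rho_in = position_to_reactivity(rectangular_profile(t, 0.05))    # 0.05 Hz input signal
print(rho_in.min(), rho_in.max())                                # spans 0.0 ... 0.055 $
```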
Absorber of variable strength
The absorber of variable strength inserts a local reactivity perturbation. For the AKR-2, this is realized with the help of an absorber rotating in the experimental channel 3-4 and driven by a stepper motor. The absorber is a bent rectangular cadmium sheet with a size of 25 cm x 2 cm x 0.02 cm and a bending and rotation radius of 2.98 cm. The largest dimension is parallel to the experimental channel. The material of the absorber is natural cadmium with unknown impurities. The stepper motor induces a motion profile with a constant angular velocity. Frequencies range from 1 Hz to 12 Hz. The perturbation is started in the position closest to ground level (as can be seen in Figure 3 a); this position is referenced as angle 0°. The movement starts towards the reactor core. The difference between the minimum and maximum reactivity over a whole revolution of the absorber is 0.011 $. The reactivity influence was determined in the same way as for the other absorber type. The reactivity over the angle is illustrated in Figure 3 b). In the present case, it was not possible to track the movement of the stepper motor. This led to the assumption of a continuous movement with constant angular velocity, which was checked by a time control for the lower frequencies and a sound noise analysis for the higher frequencies.
Measurement setup with detector position
The detector used was a He-3 counter operated in pulse mode. The volume of the effective, helium-filled zone was determined via computed tomography: V = 12.441 cm³ (+0.638 cm³ / -0.181 cm³). The signal was amplified and then recorded with an ORTEC multi-channel scaler. The position of the detector and of the perturbations during the experiments is shown in Figure 4.
MEASUREMENTS AND DATA ANALYSIS
The measurements were performed on four different days with a given displayed power for each measurement. The output signals are the recorded neutron counts. With limited time for each measurement, it was difficult to always obtain a perfectly critical state before starting the recording. To verify whether the critical state is reached, one has to wait for the decay of the majority of the precursors after a movement of the control and safety rods; in case of a decrease or an increase of the reactor power, the reactor state has to be adjusted recursively, which would have exceeded the available measurement time. To overcome this challenge, for each measurement the baseline of the recorded output data was fitted with a polynomial of second order and the fit curve was subsequently subtracted from the signal. The fits represent the increase or decrease of the power due to the influence of the precursors with the longer half-lives; the concentration of these precursors is not influenced by the relatively high frequencies of the current experiment. The applicability of this method to the decay of precursors was shown for pile-oscillator experiments [7]. A Fast Fourier Transform (FFT) transferred both the input and the output data separately to the frequency domain. The FFT was performed using a periodic Hamming window [8]. The main peaks of every frequency-domain data set were identified and fitted with a Gaussian peak function in an automated manner. The peak heights in the frequency domain are an equivalent representation of the amplitudes of the input and output signals, respectively. This assumption is reliable, as the peak shapes do not overlap. The described method is illustrated in Figure 5, showing an example measurement analysis of the vibrating absorber at a perturbation frequency of 0.05 Hz. The ratio of the peak height of the output signal to the peak height of the input signal in the frequency domain is the gain of the ZPTF for the given frequency. The errors of each measurement point are evaluated only with the standard deviations of the fit functions. Errors of the data acquisition system, of the absorber movement, and of the determination of the reactivity influences via the compensation method are neglected, because these errors are an order of magnitude lower than the errors of the fit functions. An assessment of this assumption is shown in [9]. The reactor kinetic parameters were determined with an MCNP 6.0 [10] model of the AKR-2 and the ENDF/B-VIII.0 data library [11] for six groups of precursors, where l = 57.2956 × 10^-6 s with a standard deviation of 8.50 × 10^-8 s. The other parameters can be seen in Table I. The measurements, including the error bars, and the theoretical shape of the ZPTF derived from the simulation can be seen in Figure 6. The larger errors for the absorber of variable strength in comparison with the vibrating absorber result from the comparably small reactivity effect of the absorber of variable strength.
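A sketch of the described evaluation chain using numpy: second-order polynomial baseline removal of the output, a Hamming window, FFT of input and output, and the ratio of the spectral magnitudes at the perturbation frequency. The Gaussian fit of the spectral peaks is replaced by reading the nearest FFT bin, and numpy's symmetric Hamming window stands in for the periodic variant; both are simplifications, and the synthetic example data are illustrative, not measurements.

```python
import numpy as np

def gain_at_frequency(t, input_signal, output_counts, f_pert):
    """Estimate the ZPTF gain at the perturbation frequency f_pert (Hz),
    following the described chain: polynomial baseline removal of the output,
    windowing, FFT of input and output, and the ratio of the spectral peak
    magnitudes (taken here from the FFT bin closest to f_pert)."""
    # Remove the slow power drift caused by long-lived precursors
    baseline = np.polyval(np.polyfit(t, output_counts, 2), t)
    detrended = output_counts - baseline

    n = len(t)
    window = np.hamming(n)            # symmetric window used as a stand-in
    freqs = np.fft.rfftfreq(n, d=t[1] - t[0])
    in_spec = np.abs(np.fft.rfft(input_signal * window))
    out_spec = np.abs(np.fft.rfft(detrended * window))

    k = np.argmin(np.abs(freqs - f_pert))
    return out_spec[k] / in_spec[k]

# Illustrative synthetic data: 0.05 Hz reactivity input and a drifting, noisy response
rng = np.random.default_rng(0)
t = np.arange(0.0, 600.0, 0.1)
rho_in = 0.0275 * np.sign(np.sin(2.0 * np.pi * 0.05 * t))          # $, square wave
counts = 1000.0 + 0.2 * t + 40.0 * np.sin(2.0 * np.pi * 0.05 * t) + rng.normal(0.0, 5.0, t.size)
print(gain_at_frequency(t, rho_in, counts, 0.05))
```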
In future measurements, using larger output signal lengths can reduce the errors. Overall, a good agreement between the theoretical shape of the ZPTF and the measurement data can be stated. For a final proof of the used methods, it would be useful to extend the frequency range up to 100 Hz, to observe the reactor's behavior at frequencies higher than the upper bound of the plateau region of the ZPTF. It is also necessary to install a position encoder at the drive of the absorber of variable strength to reduce the error of the corresponding measurement points. This will also allow the investigation of the phase of the ZPTF, assuming time-synchronised data acquisition of the input and output signals.
CONCLUSIONS
The gain of the ZPTF of the AKR-2 was experimentally determined. Two absorbers of different kinds were used in overlapping frequency ranges. The measured data was analyzed with the help of discrete FFT. The obtained data was compared with the theoretical shape of the ZPTF, derived with the help of MCNP generated kinetic reactor parameters. The applicability of the used experimental setups could be shown within some limitations. For future experiments, a broadening of the used frequency range should be applied. A synchronized recording of the input and output signals would enable the evaluation of the phase of the ZPTF. | 3,379.6 | 2020-01-01T00:00:00.000 | [
"Physics"
] |
The distributed co-evolution of an on-board simulator and controller for swarm robot behaviours
We investigate the reality gap, specifically the environmental correspondence of an on-board simulator. We describe a novel distributed co-evolutionary approach to improve the transference of controllers that co-evolve with an on-board simulator. A novelty of our approach is the potential to improve transference between simulation and reality without an explicit measurement between the two domains. We hypothesise that a variation of on-board simulator environment models across many robots can be competitively exploited by comparison of the real controller fitness of many robots. We hypothesise that the real controller fitness values across many robots can be taken as indicative of the varied fitness in environmental correspondence of the on-board simulators, and used to inform the distributed evolution of an on-board simulator environment model without explicit measurement of the real environment. Our results demonstrate that our approach creates an adaptive relationship between the on-board simulator environment model, the real world behaviour of the robots, and the state of the real environment. The results indicate that our approach is sensitive to whether the real behavioural performance of the robot is informative of the state of the real environment.
Introduction
Swarm robotics is regarded as being a difficult class of robotic system to design. Multiple autonomous robots are expected to produce useful group behaviour as an emergent consequence of their interactions. From a designer's point of view, only a single robotic agent is defined and the result of complex interactions must be extrapolated outwards. Through decentralisation and self-organisation, such robotic systems are cited as being robust, flexible, and scalable, although this is not without caveats [1].
Evolutionary computation is an appealing design approach to swarm robotics. The design outcome can be defined as a group behaviour, and an evolutionary algorithm addresses the hard problem of a solution for the individual robot. Often a simulation is used and provides convenient access to group-level evaluative metrics [6,14,16,17,23].
The use of simulation in evolutionary robotics has been heavily debated. To avoid a prohibitively slow simulation, it must be designed to balance the accuracy of the representation against the time of computation, inherently encapsulating errors [13]. Inaccuracies in a simulation can be exploited by the evolutionary process, producing robotic solutions with a discrepancy between simulated and actual performance. This issue of discrepancy is referred to as the reality gap [7], and is discussed in terms of the transferability of solutions [9].
The alternative to utilising a simulation is to evaluate evolved solutions directly on a robot, termed embodied evolution by Watson et al. [24]. Eiben et al. [3] elaborate on embodied evolution and discuss three binary features to clarify where, when and how an embodied evolutionary algorithm can be implemented:
Online/offline: whether the evolutionary algorithm operates as part of the robots' 'real' operation, or as a prior design phase before actual deployment.
On-board/off-board: whether the algorithm executes on the actual robot hardware, or is computed external to the robot with only the resultant solution evaluated on the robot hardware.
Encapsulated/distributed: whether a robot operates the evolutionary algorithm independently on its own hardware, or whether the evolutionary algorithm is designed to operate across a group of robots.
There have been several recent investigations into online, on-board, distributed evolutionary robotics motivated by the vision of a multi-robot system capable of continuous unsupervised evolutionary adaptation [4,5,8,10,19,20]. Whilst the online on-board distributed approach is suitable for swarm robotics, three problematic issues are highlighted, and form part of the underlying motivation to develop our work:
Spatial: Referred to as the boot-strapping problem. The spatial mobility of robots is determined by the solutions developed by the evolutionary algorithm. Early explorative evolutionary development often creates incorrect sensory-motor mappings, causing robots to collide and spatially interfere with each other. Therefore each successive evaluation occurs in a new non-deterministic environment, which can disrupt the reliable evaluation of newly evolved solutions [10].
Temporal: Online evolution is proposed as a mechanism to produce functional behaviour to solve a task, as opposed to a study of evolution in and of itself. This applies pressure to generate solutions at a rate comparable to the dynamic change within the task environment [8].
Selection: The migration of solutions across the group of robots is non-deterministic since the robots are mobile. Furthermore, because of the noisy evaluation circumstances, the evaluative metric is not reliable between robots [20].
The benefits and shortfalls of the simulated and embodied approaches appear to be leading to a converged methodology. Koos et al. [9] define a category of evolutionary robotics as robot-in-the-loop simulation-based optimisation, encompassing a body of work that investigates the use of simulated evaluations with periods of evaluation in reality to correct for transference problems.
Evolving robot controllers, Koos et al. [9] develop a 'simulation to reality disparity measure' of transference between an offline off-board simulator and periods of evaluation in reality, used to bias the evolutionary selection mechanism towards controller solutions with better transference. Evolving walking gait behaviours, Bongard et al. [2] develop the 'estimation/exploration' algorithm, which utilises evaluations in reality to capture limb-joint sensor data to adapt an offline off-board simulation of the robot morphology. Zagal et al. [25] develop the 'Back To Reality' algorithm, which co-evolves an offline off-board simulation of a quadruped robot and a walking gait controller, by using a single measure of discrepancy between the achieved walking gait in simulation versus reality.
This work concerns advancing an online on-board distributed approach suitable for application in swarm robotics that maintains the vision of an unsupervised evolutionary system. Motivated by the design context of swarm robotics and the previously isolated problems with online on-board evolutionary approaches, we propose a distributed robot-in-the-loop simulation-based methodology. This work presents novelty in extending previous online on-board distributed approaches with an on-board simulator for each robot, allowing controller evaluation to be encapsulated virtually per robot, and selectively transferring a controller onto the same robot for use in reality.
Zagal et al. [25] describe the potential utility of an on-board simulator in terms of an incorporated aspect of an embodied robot controller, drawing analogy to the faculty of dreaming in cognitive neuroscience. This work proposes a different utility; an on-board simulator may aid the aforementioned problems with an online on-board distributed evolutionary approach. Spatial problems could be minimised by conducting the majority of evaluations within an on-board simulation; temporal attributes could be accelerated by allowing evaluations to happen within an on-board simulation; selection could be improved by allowing a communicated solution from one robot to be re-evaluated by the recipient robot's on-board simulator.
This work addresses the primary issue of the reality gap associated with an on-board simulator. Zagal et al. [25] address the reality gap of an off-board simulator with a co-evolutionary approach encapsulated on a single robot. Their co-evolutionary approach uses the difference in fitness of a robot controller between simulation and reality (a measure of transference) to steer the evolution of the simulator. Importantly, their approach evaluates a population of controller solutions in reality, and then the same controller population is evaluated within an evolving population of simulators to create an explicit measure of transference. This paper also proposes a co-evolutionary approach to the reality gap, but has novelty in distributing the on-board simulator evolution across a swarm of robots. Therefore each robot owns only one on-board simulator at any time, and the number of robots represents the total evolutionary population of simulator genotypes. This removes the need to correlate which controller is the product of which simulator. Furthermore, our approach does not utilise an explicit measure of transference between the two. We propose that the on-board simulator can gain improving transference through competitive distributed co-evolution between many robots, by taking the success of a robot's evolved real behaviour as an implicit indicator of the fitness of the associated on-board simulator. We are interested in investigating this distributed and implicit selection mechanism of on-board simulators to avoid the need to evaluate multiple on-board simulators per robot, and to leverage the variety of evaluations across many robots against the possibility of uninformative circumstances of a single robot.
We are able to make a distinction in our approach by the aspect of the reality gap we wish to address. We propose that the reality gap can be decomposed into three elements of correspondence between reality and simulation. Robot-robot correspondence refers to physical robot aspects, such as differences in morphology; the work of Bongard et al. [2] is a primary example of a robot that is able to adapt a self-model of morphology. Robot-environment correspondence refers to differences in the dynamic interactions between a robot and the environment, both sensory and through actuation; Bongard et al. [2] demonstrate how the relationship between morphology and a known state of the environment can be usefully exploited, and Zagal et al. [25] co-evolve the physical dynamics of a simulator coupled to walking gait evolution. Environment-environment correspondence relates to the representation of salient features of the environment; notably, such a representation is not intended as a navigational map, but rather should capture characteristics of the environment that can alter behaviours over time, such as spatial density.
To date we have found no examples that specifically adapt a simulator for environment-environment correspondence. The environment is of special significance for swarm robotics as it is often used as the cue, memory or coordinating aspect of a system comprised of self-organising agents [21]. This work documents an experimental investigation on the environmental correspondence of the reality gap using a swarm of physically simplistic robots.
In this work a swarm of ten real e-puck robots is used to investigate the distributed co-evolution of an on-board simulator to adapt to a changing task environment through the coupled evolution of controller solutions. The correspondence between simulation and reality has a consequence on the transferability of controller solutions. If the on-board simulator environment model can be appropriately evolved, we can expect to observe changes in the resultant behaviour from co-evolved robot controllers to complete a task. A novelty of the approach is the potential to improve transference between simulation and reality without an explicit measurement between the two domains. We hypothesise that the variation of on-board simulator environment models across many robots can be competitively exploited by comparison of the real controller fitness of many robots, and that the real controller fitness values across many robots can be taken as indicative of the varied fitness in environmental correspondence of on-board simulators, and used to inform the distributed evolution of an on-board simulator environment model without explicit measurement of the real environment. To test this hypothesis, the foraging problem is selected, where a swarm of robots must discover and deposit food items to a designated nest site, and have the potential to use a moving light source as an environmental aid.
The remainder of this article is structured as follows: Section 2 provides a brief overview of our distributed co-evolutionary approach to the evolution of on-board simulator and controller. Section 3 describes the hardware used to conduct the experiments. Section 4 details the specifics of the co-evolutionary algorithm and the settings used for the experiments. Section 5 presents the results and ends with a discussion. Section 6 draws conclusions from the presented work and gives projections for future work.
Distributed co-evolution of an on-board simulator and controller
This section provides an overview; specific implementation details of these algorithms are given in the following sections. The proposed co-evolutionary method has two evolutionary components. One genetic algorithm is encapsulated on each robot and evolves a population of controller genotypes within a robot's on-board simulator. A second genetic algorithm is distributed across the physical swarm, where each robot owns a single instance of an on-board simulator genotype, and the swarm of robots constitutes an evolving population of on-board simulators. These algorithms execute concurrently with each other and with the operation of the mobile robot. Figure 1 illustrates the co-evolutionary algorithm in overview. Similar to Zagal et al. [25], we utilise a fitness metric of the evolved controller behaviour within both evolutionary components. The encapsulated controller evolution is informed by evaluations within the on-board simulator. After each generation of encapsulated simulated controller evolution, a controller is instantiated on the real robot and a real fitness measure of the controller is generated for use with the distributed simulator evolution. Using a controller fitness to assess the on-board simulator contrasts with an explicit measurement of correspondence between the on-board simulator environment model and reality, such as the extensive set of sensor recordings used for the estimation-exploration algorithm developed by Bongard et al. [2]. We also do not explicitly compare the controller fitness between the on-board simulator and the real performance of a robot. Instead we create a competitive system based on the variation of on-board simulators and real evaluations across many robots, in an attempt to remove the need for explicit correlation.
Dissimilar to Zagal et al. [25], we distribute the simulator evolution. Therefore each robot owns and instantiates only one on-board simulator genotype at any time, and the number of robots represents the total evolutionary population of simulator genotypes. This implementation detail removes the need to correlate which controller is the product of which simulator, and we make no explicit measure of transference. We hypothesise that the inherent variation in on-board simulators and the real behavioural performance between many robots can be used to competitively co-evolve towards improving simulator transference. From the encapsulated controller evolution, we choose to use the controller genotype with the highest fitness within the on-board simulator to instantiate on the real robot, resulting in a single instance of real activity of a robot as the sole indicator of the fitness of the on-board simulator. These implementation choices are for an approach that maximises the consistency of a robot's real behaviour by minimising the interleaving between simulator and controller evaluations and the correlation between the two.
Each robot evaluates a population of controller genotypes within its on-board simulator. Within this same time-frame the robot is operating in reality and constructs a real fitness measure. The real fitness measure is broadcast with its current on-board simulator genotype as part of the distributed evolution of on-board simulators. Therefore the swarm constitutes many real fitness assessments (representative of the simulator) occurring in parallel, which is sampled by communication encounters between mobile robots. An encounter is defined by the communication range between robots (25 cm), which is necessarily short range for a decentralised self-organising system. Each robot constructs a temporary population of encountered simulator genotypes and their associated real-world controller fitness. The on-board simulator is subjected to its own evolution once the current generation of controller evaluations within the on-board simulator has elapsed. Therefore the population of controller genotypes is evaluated within the on-board simulator within a single real-world evaluation of a controller, and the computation of evolution for a single generation of both the controller genotypes and the on-board simulator genotype is a momentary synchronisation event in the operation of the robot.
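The per-robot cycle described above can be summarised in a schematic Python sketch; the `robot` object, its methods, and the two `evolve_*` helpers are hypothetical names (the helpers are sketched in the sections that follow), not the authors' C implementation.

```python
# Schematic sketch (not the authors' code) of one per-robot generation.

def robot_generation(robot, controllers, sim_gene):
    # 1. Evaluate every controller genotype inside the on-board simulator
    #    instantiated from the robot's single simulator genotype (S0).
    simulator = robot.build_simulator(sim_gene)
    sim_fitness = [simulator.evaluate(c, seconds=60) for c in controllers]

    # 2. Transfer the best simulated controller to the real robot and measure
    #    its real foraging fitness over 60 real-time seconds.
    best = controllers[sim_fitness.index(max(sim_fitness))]
    real_fitness = robot.run_in_reality(best, seconds=60)

    # 3. While operating, broadcast (S0, F_R) and collect pairs from robots
    #    encountered within communication range (about 25 cm).
    robot.broadcast(sim_gene, real_fitness)
    encountered = robot.collect_encounters()

    # 4. Momentary synchronisation: evolve the local controller population and
    #    the single on-board simulator genotype, then start the next generation.
    controllers = evolve_controllers(controllers, sim_fitness)
    sim_gene = evolve_simulator(sim_gene, real_fitness, encountered)
    return controllers, sim_gene
```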
Experiment method
We use ten e-puck mobile robots (documented by Mondada et al. [15]), each equipped with a Linux extension board for parallel computation and Wi-Fi connectivity (documented by Liu and Winfield [11]). The Linux extension board is used to operate a noise-based [7] minimal simulation written in C (see prior work [18]), and for all evolutionary computation. We use the e-puck infra-red proximity sensors for obstacle avoidance, determining ambient light levels, and for short range communication between robots. The short range infra-red communication is used to initiate further communication between robots over a Wi-Fi network. The Wi-Fi communication provides superior bandwidth but remains decentralised through the locality of the infra-red communication. A Vicon tracking system monitors the position of e-pucks and is used in conjunction with Wi-Fi to facilitate a virtual sensor by informing a robot if it is spatially located within virtually superimposed food items or the designated nest site.
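The virtual sensor can be pictured as a simple point-in-region test performed off-board and reported to the robot; the sketch below is illustrative only, with made-up coordinates, radii (the food item size, for instance, is not stated here), and function names.

```python
# Illustrative virtual sensor: is the tracked robot inside a superimposed
# circular food item or the nest site? All values are placeholder assumptions.
import math

NEST = ((0.0, -60.0), 20.0)                        # (centre in cm, radius in cm)
FOOD_ITEMS = [((25.0, 10.0), 5.0), ((-30.0, 35.0), 5.0)]

def inside(position, region):
    (cx, cy), radius = region
    return math.hypot(position[0] - cx, position[1] - cy) <= radius

def virtual_sensor(position):
    return {
        "in_nest": inside(position, NEST),
        "on_food": any(inside(position, item) for item in FOOD_ITEMS),
    }

print(virtual_sensor((24.0, 12.0)))   # e.g. {'in_nest': False, 'on_food': True}
```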
Experiments
We investigate the distributed evolution of an on-board simulator environment model against a dynamic task environment through the co-evolution of controller solutions. If the on-board simulator environment model can be appropriately adapted, we can expect to observe changes in the resultant behaviour from co-evolved robot controllers to complete a task. The proposed method does not rely on an explicit measure of transference between simulation and real operation. Rather, it is proposed that the inherent variation in on-board simulators and the real performance between many robots can be used to competitively co-evolve on-board simulators with improving controller transference. To test this hypothesis, the foraging problem [12] is selected, where robots must discover and deposit food items to a designated nest site, and have the potential to use a moving light source as an environmental aid.
Experimental setup
Around the foraging problem, three basic environment scenarios are applied (Fig. 2): a light source over the nest site (a), no light source (b), or the light source opposite the nest site (c). The presence of a light source should act as a navigational aid, improving the foraging efficiency of a robot through phototaxis behaviour. The three basic environment scenarios are combined into five experiment cases, plus a sixth control case of fixed random-movement obstacle-avoidance behaviour without the co-evolutionary approach:
1. No light source
2. Light fixed over nest
3. Light fixed opposite nest
4. Light over nest → light opposite nest
5. Light opposite nest → light over nest
6. Random movement (control)
In the first five experiment cases, the hypothesised outcome is that the distributed on-board simulator evolution should adapt relative to the light stimulus available in the real environment, and the encapsulated controller evolution should exploit the on-board simulator model to evolve behaviours with improving foraging efficiency in the real environment.
Encapsulated evolution of robot controller
The encapsulated evolution of controllers occurs only within the on-board simulator of each robot. For each robot controller genotype to evaluate, one robot is simulated to forage for 60 virtual seconds. Each robot operates a steady-state genetic algorithm to adapt a genotype mapping of sensory input to behavioural output, with the following parameters:
• Genotype length: 2 (G0, G1)
• Gene values: in range [0.00, 0.99]
• Population size: 10
• Mutation rate: 20%
• Mutation: Gaussian noise, mean = 0, SD = 2
• Cross-over: none
• Selection: rank-based elitist, top 4 seed lower 6
An internal Food state signifies whether a robot is in possession of a food item. G0 corresponds to the state Food = True; G1 corresponds to the state Food = False. The values of G0 and G1 are mapped to select a behaviour, as per Table 1. These values were chosen for an equal distribution between the possible behaviours.
Selection for reproduction is rank-based and elitist: the top 40% of the population is used to overwrite the lower-ranking 60%. Each gene of a child genotype is subjected to a 20% chance of a random mutation drawn from a Gaussian distribution (mean = 0, SD = 2). Mutation is the only mechanism to introduce variation. We take these operator parameters from prior related work [18]. The fitness of each genotype is determined by evaluating the performance of the controller phenotype as a single simulated robot in the on-board simulator, as a summation over deposited food items of a function of time (Eq. 1), where F is the derived fitness metric, D is a deposited food item, T_Max is the evaluation time limit of 60 s, and T_D is the recorded time to successfully deposit a food item. Time is used rather than quantity of food for stronger differentiation between efficiency in solutions. When all ten genotypes have been evaluated in the on-board simulator, the genotype with the highest simulated fitness value is immediately instantiated for use on the real robot.
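A minimal Python sketch of the encapsulated controller evolution just described. Because Eq. 1 itself is not reproduced in this extract, the fitness function below assumes a time-weighted sum of (T_Max − T_D) over deposited food items, which is consistent with the stated variable definitions; the function names and the clamping of mutated genes are our assumptions, not the authors' implementation.

```python
import random

T_MAX = 60.0  # simulated evaluation time limit (seconds)

def fitness(deposit_times):
    # Assumed form of Eq. 1: earlier deposits score higher, so time rather
    # than raw quantity differentiates between efficient solutions.
    return sum(T_MAX - t for t in deposit_times if t <= T_MAX)

def evolve_controllers(population, fitnesses, elite=4, mut_rate=0.2, mut_sd=2.0):
    # Rank-based elitist selection: the top `elite` genotypes (top 4 of 10)
    # are kept and used to seed replacements for the lower-ranking rest.
    ranked = [g for _, g in sorted(zip(fitnesses, population),
                                   key=lambda pair: pair[0], reverse=True)]
    children = []
    for i in range(len(population) - elite):
        parent = ranked[i % elite]
        # Each gene has a 20% chance of Gaussian mutation (mean 0, SD 2);
        # clamping to [0.0, 0.99] is an assumption about out-of-range values.
        child = [min(0.99, max(0.0, g + random.gauss(0.0, mut_sd)))
                 if random.random() < mut_rate else g
                 for g in parent]
        children.append(child)
    return ranked[:elite] + children

# Example generation: ten two-gene genotypes (G0, G1) drawn from [0.00, 0.99].
population = [[random.uniform(0.0, 0.99) for _ in range(2)] for _ in range(10)]
fitnesses = [fitness([random.uniform(0.0, 60.0)
                      for _ in range(random.randint(0, 3))])
             for _ in population]
population = evolve_controllers(population, fitnesses)
```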
Distributed evolution of on-board simulator
The distributed evolution of on-board simulators operates across the swarm of mobile robots, driven by a simplistic genetic algorithm. The environmental model of the on-board simulator is determined by the single gene value mapping of S0 (see Table 2). The mapping values of S0 to the environment scenarios are chosen for an equal distribution. Each robot maintains the value of S0 for the duration of a complete generation of controller evaluations within the on-board simulator, after which it is subjected to the distributed evolution operators, and the on-board simulator is subsequently re-instantiated with the new mapping. The real robot operates and is evaluated for 60 real-time seconds, which also serves as the time period to encounter other robots and accumulate foreign S0:F_R pairs. Concurrently, an average of 34 real-time seconds is taken to conduct the necessary ten instances of 60-simulated-second evaluations of controller genotypes within the on-board simulator.
As the robot operates in the real world it broadcasts its current S0 and current real-world fitness value F_R, and receives the S0 and F_R values of encountered robots, over a maximum distance of 25 cm. F_R is determined as the robot operates, by the same equation used in the encapsulated simulated evaluation (see Eq. 1). A temporary population of 10 S0:F_R pairs is stored and updated by each robot, representing the variation and fitness of environment models across the swarm. The population size of 10 has been selected as a conveniently matched proportion to the number of robots used in our investigation, and has not been empirically evaluated. Selection from the S0:F_R pairs is rank-based elitist, and the result is always subjected to a random mutation drawn from a Gaussian distribution (mean = 0, SD = 2).
An individual robot compares its own S0:F_R pair against the S0:F_R values encountered from other robots. Therefore, with fewer than two robots there is no selective pressure to drive the distributed evolution of S0. A robot's accumulated population of foreign S0:F_R pairs and its own controller F_R value are cleared at the update transition of the controller and on-board simulator environment model.
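A minimal sketch of one distributed-evolution step for the simulator gene S0, under the assumptions stated in the comments: the exact rank-based selection over the S0:F_R pairs and the handling of out-of-range gene values are not fully specified in the text, so this is one plausible reading rather than the authors' implementation.

```python
import random

def evolve_simulator(own_s0, own_fr, encountered_pairs, mut_sd=2.0):
    # Pool the robot's own (S0, F_R) pair with those received from
    # encountered robots (a temporary population of up to 10 pairs).
    pool = encountered_pairs + [(own_s0, own_fr)]
    # Rank-based elitist selection read here as "keep the S0 with the best
    # real fitness"; the original selection scheme may differ.
    best_s0, _ = max(pool, key=lambda pair: pair[1])
    # The selected value is always mutated with Gaussian noise (mean 0, SD 2);
    # clamping to the assumed gene range [0.0, 0.99] is our assumption.
    mutated = best_s0 + random.gauss(0.0, mut_sd)
    return min(0.99, max(0.0, mutated))

# Example: a robot with S0 = 0.30 and real fitness 12.0 that met three robots.
print(evolve_simulator(0.30, 12.0, [(0.70, 25.0), (0.10, 3.0), (0.50, 18.0)]))
```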
Robot controller
A set of discrete behaviours is pre-defined: obstacle avoidance, random search, positive phototaxis and negative phototaxis. The modular behaviours are arranged in a hierarchy of priority within the subsumption architecture illustrated in Fig. 3. A behaviour-based approach is used to reduce the number of variables in the experiment and maintain a focus on the adaptation of controller solutions with respect to the simulator environment model. A summary of the controller illustrated in Fig. 3 is as follows. Obstacle avoidance is activated with the highest priority when triggered by a robot's proximity sensors. Negative phototaxis and positive phototaxis can be activated depending on the Food state and the controller genotype mapping. The random search is always active, but can be overridden by any of the previous behaviours. The same controller mechanism is used for both the simulated robot within the on-board simulator and the real robot. The controller can be adapted by changing the genotype mapping of the Food state to enable the negative phototaxis or positive phototaxis behaviours.
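A small sketch of the subsumption-style priority and the genotype-to-behaviour mapping. Table 1 is not reproduced in this extract, so the equal-thirds split and the ordering of behaviours below are assumptions based only on the statement that the gene values were chosen for an equal distribution between the possible behaviours.

```python
def gene_to_behaviour(g):
    # Map a gene value in [0.00, 0.99] to a phototaxis preference.
    # The thresholds and ordering are assumptions, not Table 1 itself.
    if g < 0.33:
        return "random_search"
    elif g < 0.66:
        return "positive_phototaxis"
    return "negative_phototaxis"

def select_behaviour(proximity_triggered, has_food, g0, g1):
    # Subsumption-style priority described in the text: obstacle avoidance
    # overrides everything; otherwise the Food state selects which gene's
    # phototaxis preference (if any) overrides the always-active random search.
    if proximity_triggered:
        return "obstacle_avoidance"
    return gene_to_behaviour(g0 if has_food else g1)

# Example: a robot carrying food, no obstacle, genes favouring positive phototaxis.
print(select_behaviour(False, True, g0=0.5, g1=0.1))
```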
Experiment settings
The five experiment cases outlined are each run 10 times for a duration of 50 min. If the light source is moved, this occurs at the 25 min mark. The light source is placed either directly behind the nest site or exactly opposite on the other side of the arena. Experiments are conducted within an enclosed circular arena measuring 120 cm in diameter. The arena is free from obstructions. A single circular nest site with a radius of 20 cm is superimposed to intersect the arena boundary and maintains the same coordinates through all experiment runs. Seven food items are randomly placed within the arena; these food items always appear outside the nest area. A total of 10 e-puck robots are used, randomly positioned and orientated at the beginning of an experiment. All e-pucks are activated by an on-board switch. A photograph of this setup is shown in Fig. 4.
Results and discussion
Figure 5 plots the mean foraging rate (food deposited per 250 s interval) for each experiment case. Using the control case Random Movement, which does not use the co-evolutionary approach, the Student's t test (sample size 50, taking mean foraging efficiency at 60 s intervals) indicates that the case No Light had no significant difference from random movement (p > 0.5), whilst the other experiment cases differ significantly from Random Movement (p < 0.005). This suggests that the co-evolutionary approach is able to make beneficial adaptations to the on-board simulator when a light source is present, improving the transference of controllers. However, there is a stark contrast in foraging efficiency depending on the location of the light source: the light source over the nest appears to double the effective foraging efficiency.
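As a hedged illustration, the significance test described here corresponds to a standard two-sample Student's t test, for example via SciPy; the arrays below are placeholder samples, not the experimental data.

```python
# Illustrative re-creation of the significance test (placeholder data).
from scipy import stats
import numpy as np

rng = np.random.default_rng(0)
random_movement = rng.normal(0.5, 0.2, 50)   # 50 samples at 60 s intervals
light_over_nest = rng.normal(1.1, 0.3, 50)

t, p = stats.ttest_ind(light_over_nest, random_movement)
print(f"t = {t:.2f}, p = {p:.4f}")
```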
The following sections investigate each experiment case.
No light source
Figure 6 shows that the mean value of S0 maps to no light source within the on-board simulator consistently throughout the experiment. Another simulator environment mapping would likely lead to the co-evolution of controllers utilising phototaxis within simulation and a poor transference. In this case the on-board simulator has been co-evolved with a strong correlation to the real environment. The plots for G0 and G1 show a wide distribution centred on random search behaviours both with and without food. A wide distribution in the G0 and G1 controller mapping is representative of a poor consensus on which behaviours lead to efficient searching without a light source.
Light source fixed over nest
Figure 7 shows that the evolved value of S0 averages around the boundary mapping value of 0.66, with a distribution that indicates a co-evolved simulator model with a light source over the nest or no light source. G0 shows a clear trend towards the use of positive phototaxis when with food, and G1 trends toward negative phototaxis to search for food.
The narrow distribution of the G0 and G1 controller mapping indicates that these behaviours provided a consistent means to inform the distributed evolution of S0, and that S0 gives a strong controller transference. In this experiment case, the co-evolutionary approach appears to converge on and exploit the environment circumstance. The evolutionary development in Fig. 7 is consistent with the superior foraging efficiency shown in Fig. 5.
Light source fixed opposite nest
Figure 8 shows the mean value of S0 mapping to an on-board simulator environment model with no light source for the duration of the experiment, which does not correspond to the actual position of the light source in this experiment case. Using the no-light simulator model, G0 and G1 evolve controllers that are on average in random search behaviour, but with a wide distribution. Despite generally evolving random search behaviour, Fig. 5 showed a statistically significant difference in foraging efficiency for this experiment case against the Random Movement control. Importantly, there is a light source in this scenario, and it is the wide distribution of evolved controller behaviour mappings that is able to stochastically utilise the light source. In that case, the extra foraging efficiency shown in Fig. 5 can be explained through the explorative behaviour of the controller genotype evolution, rather than a strong controller transference from the on-board simulator. The success of a stochastic deviation in controller genotype evolution would not be an exploitation of the on-board simulator, and would not correlate to and inform the distributed evolution of the on-board simulator genotype. This may indicate that there is a problem of precedence between the co-evolution of an on-board simulator and controller, and whether one can provide a reliable fitness indication of the other through our distributed co-evolutionary approach.

Light source over nest to opposite nest

In this experiment case the light source is initially located over the nest site, and then moved to opposite the nest halfway through the experiment. Figure 9 shows the mean value of S0 correctly evolving the on-board simulator to the Light Over Nest scenario for the first half of the experiment, and the mean values of G0 and G1 co-evolve appropriately. This relates to the strong initial foraging efficiency shown in Fig. 5, and also the strong foraging efficiency for the Light Fixed Over Nest experiment case. Figure 9 shows a slow adaptation of S0 after the environment transition point in time, which would cause the evolution of poorly transferring controller solutions and would relate to the sharp drop in foraging efficiency shown in Fig. 5. Whilst the S0 mapping of the light scenario does not successfully converge to the corresponding state of the environment, it does alter in value beyond the time of the environmental change. This is as opposed to the co-evolutionary exploitation shown in the results for the light fixed over nest experiment case. Therefore, we can infer that the exploitation in light fixed over nest was related to the stability of the environment, and that this transitional light over nest to light opposite nest experiment case provokes explorative behaviour from the distributed co-evolutionary approach (Fig. 9).
Light source opposite nest to over nest
In this experiment case the light source is initially located opposite the nest site, and then moved to over the nest halfway through the experiment. In Fig. 10, before the environment transition, the mean value of S0 moves towards the boundary value of the mapping between a simulator environment model with no light source and a light source opposite the nest site. The exact reason for the adaptation towards the correct simulator environment scenario in this instance, and not in the experiment case light fixed opposite nest (Fig. 8), is not known, and may relate to the potential problem of precedence between the evolution of an on-board simulator and the subsequent evolution of controllers, noted earlier. This suggests a larger number of experiment iterations is required to isolate the anomaly in future work. However, despite the apparent convergence of S0 toward an appropriate environment correspondence, G0 and G1 evolve a wide distribution of controller behaviour mappings. This indicates that the controller evolution did not provide a clear behavioural advantage between random search behaviour and negative phototaxis to inform the simulator evolution.
Discussion
The correspondence between simulation and reality has a consequence on the transferability of controller solutions. We hypothesise that the variation of on-board simulators across many robots can be competitively exploited via the associated real controller fitness of each robot to inform the evolution of an on-board simulator environment model without explicit measurement of the real environment. Our principal result on foraging efficiency across the varying experiment cases (Fig. 5) suggests that our distributed co-evolutionary approach is able to adapt an on-board simulator environment model to the presence of a light source, and consequently improves the evolution of controller solutions tasked with foraging. On closer inspection the results are mixed.
In support of our hypothesis, despite the no light experiment drawing no significant difference in foraging efficiency to the random movement control, the on-board simulators evolve with a convergence on the correct environment correspondence. If the on-board simulator was entirely disassociated from reality, we would expect to observe a wide distribution of simulator models. The foraging efficiency appears similar to the control due to the inefficient common mode of random movement behaviour in the absence of a light source. However, the real controller performance does inform the on-board simulator evolution.
Furthermore, the experiment cases light fixed over nest and light over nest to light opposite nest show a convergence of on-board simulators to the relevant environment model scenario and a higher foraging efficiency. In the case of the light source relocating, the on-board simulator does not successfully re-converge to the relevant environment model scenario, but there is a visible response in evolutionary development. These two experiment cases, having the same initial environment condition, help to demonstrate that the distributed co-evolutionary approach is able to exploit a stable environment circumstance or respond to a changing environment. This supports our hypothesis.
Compromising our hypothesis, despite a significant improvement in foraging efficiency relative to the control, the light opposite nest experiment case failed to evolve an on-board simulator with the relevant environment model scenario. In actuality, the on-board simulator evolved with a convergence to the no light scenario, and evolved a wide controller mapping distribution comparable to the no light experiment case. In this case, the approach was unable to identify and utilise the light source through the real behaviour of the robots. The statistical difference in foraging efficiency from the control was likely gained through the explorative behaviour of the controller evolution making use of a light source regardless of the on-board simulator.
Furthermore, whilst the light opposite nest to light over nest experiment case appears to initially evolve the relevant environment model scenario, the controller mapping evolves with a wide distribution, indicating that there is an ambiguity as to which behaviours transfer well to the real environment when the light is opposite the nest. There is a change in evolutionary development related to the light source relocation, but not enough to reach the much higher foraging efficiency otherwise apparent when the experiments start with the light source over the nest.
Our results indicate that it is possible to couple the distributed evolution of an on-board simulator with the encapsulated evolution of controllers, providing that the environment gives a strong enough stimulus to draw a meaningful real-world fitness assessment. When this is not true, the evolutionary development reflects the ambiguity. In our investigation this weakness appears when the light is opposite the nest. We hypothesise that when the light is above the nest it acts as a strong attractor, but opposite the nest site the light disperses in all directions, providing only a weak repulsive navigational aid.
Conclusions and future work
In this work a background motivation toward an online on-board distributed co-evolutionary approach for swarm robotics is described. We propose that on-board simulation and evolutionary computation is an appealing design approach for swarm robotics, and that an on-board simulator may help address the currently documented issues facing online on-board distributed evolutionary robotics. We investigate the reality gap, specifically the environmental correspondence of an on-board simulator, through a novel distributed co-evolutionary approach to improve the transference of controllers evolved within an on-board simulator. A novelty of our approach is the potential to improve transference between simulation and reality without an explicit measurement between the two domains. We are interested in a distributed and implicit selection mechanism of on-board simulators to avoid the need to evaluate multiple on-board simulators per robot, and to leverage the variety of evaluations across many robots against the possibility of uninformative circumstances of a single robot. We hypothesise that the variation of on-board simulator environment models across many robots can be competitively exploited by comparison of the real controller fitness of many robots, and that these real controller fitness values can be taken as indicative of the varied fitness in environmental correspondence of on-board simulators and used to inform the distributed evolution of an on-board simulator environment model without explicit measurement of the real environment.
Our results demonstrate that our online on-board distributed co-evolutionary approach creates an adaptive relationship between the on-board simulator environment model, the real-world behaviour of the robots, and the state of the real environment. The results indicate that our approach is sensitive to whether the real behavioural performance of the robot is able to inform on the state of the real environment. Our results demonstrate a good co-evolutionary convergence of controllers and on-board simulators when a light source can be used as a navigational attractor to the nest site (light fixed over nest, and initially in light over nest to light opposite nest). However, if the light source is used as a repulsive navigational aid (light fixed opposite nest, and initially in light opposite nest to light over nest), a wide distribution of controller genotype mappings evolved, indicating an ambiguity in useful controller behaviours; this may cause a problem of precedence between the co-evolution of an on-board simulator and controller, which will be investigated in the future. The anomaly in our results, where a different evolutionary convergence of the on-board simulator occurs for the same initial environment scenario between the light fixed opposite nest and light opposite nest to light over nest experiment cases, requires further investigation.
The dependence of our approach on the informative quality of the environment through robot behaviours may be similar to the boot-strapping problem highlighted by Konig et al. [10], which links the distributed evolutionary development of robot behaviours to their spatial mobility. In future work we would like to vary the number of robots, as the number of robots constitutes the evolutionary population of on-board simulators, to investigate any gains of parallelism in evaluations towards evolutionary convergence. Logically the number of robots has a relationship to the available space of operation, creating a further variable of spatial density of robots. In our decentralised approach, which necessitates short-range communication, we hypothesise that the spatial density and mobility of robots will impact the connectivity of the distributed evolutionary algorithm. In this context, our approach with an on-board simulator bears resemblance to the Island Model spatially structured evolutionary algorithm [22]. Future work would specifically investigate spatial aspects relating to connectivity in distributed evolution on mobile robots as a parallel to the field of spatially structured evolutionary algorithms, and the utility of an on-board simulator to improve the mechanism of evolutionary selection through virtual evaluations.
Fig. 1
Fig. 1 An illustration of the co-evolutionary implementation. Addressing numbered points: 1 A genetic algorithm evolves a local population of controller genotypes through the on-board simulator. 2 The best controller genotype from simulation is transferred to the real robot. 3 A controller fitness in reality, in this work foraging efficiency, is used to indicate the fitness of the associated on-board simulator. 4 A robot transmits and receives on-board simulator genotypes and real fitness values. 5 Synchronised with the end of virtual controller evaluation, the on-board simulator is evolved against the robot's own perceived fitness and any encountered robots' fitness values
Fig. 2
Fig. 2 An illustration of the three environment scenarios. Large circular outlines represent the arena enclosure. Small green circles represent food. The blue semi-circle represents the nest area. Yellow triangles represent a light source location (when present) (colour figure online)
Fig. 3
Fig. 3 An illustration of the robot controller as an implementation of the subsumption architecture
Fig. 4, Fig. 6
Fig. 4 A photograph of the real e-pucks within the arena, and the light source box located in the top left of the picture.The blocks around the arena enclosure are lead-acid batteries used to keep the arena in place
Fig. 7 Fig. 8
Fig. 7 Light fixed over nest: three graphs plotting the mean value of the genes S0, G0 and G1 over time. The error bars are the standard deviation of the results. The green horizontal bands mark the mapping of the gene value to the controller behaviour or simulator model (colour figure online)
Fig. 9 Fig. 10
Fig. 9 Light over nest to light opposite nest: three graphs plotting the mean value of the genes S0, G0 and G1 over time. The error bars are the standard deviation of the results. The green horizontal bands mark the mappings of the gene value to the controller behaviour or simulator model. The vertical blue line represents the point of light source relocation (colour figure online)
Table 2 Gene S0 mapping to the embedded simulator scenario
| 9,060.8 | 2014-08-01T00:00:00.000 | ["Computer Science", "Engineering"] |
Dairy farming systems driven by the market and low-cost intensification in West Africa: the case of Burkina Faso
The increase in demand for dairy products in Burkina Faso is encouraging livestock producers to develop milk production. Three types of dairy systems (pastoralists, agropastoralists and market-oriented dairy farms) have been characterised based on a sample of 60 producers operating in the West and centre of the country. Pastoralists' dairy operations consist mainly of zebus, rely on pasture for feed, store little fodder, and recover little manure. Milk yields are low (1.4 l/tropical livestock unit (TLU)/day) and milk sales are limited, but mostly benefit women. Agropastoralists' dairy operations also consist mainly of zebus, but store more fodder for feed, use more concentrate and recover manure better. Milk yields are higher (3.1 l/TLU/day) and milk sales are threefold those of pastoralists, but less of the money generated by milk sales goes to women. Market-oriented dairy farmers' operations are mainly made up of crossbreds, reared indoors and fed on fodder and feeds; they store much more fodder and recover manure even better. They generate the highest milk yields (7.3 l/TLU/day), and milk sales are 2.5-fold those of agropastoralists. However, money earned from milk sales mainly benefits men. The study shows that the improvement in dairy systems' technical and economic performance, which mostly rests on genetics and cow feed, but also on better recycling of agricultural by-products, is driven by low-cost intensification and market opportunity (rising demand from processors).
Introduction
While still low (~14 l/capita/year in 2018 according to the FAO), milk consumption in West Africa is increasing sharply. Milk production rose substantially between 2000 and 2018 (~+87% according to the FAO) and is estimated at around 6 million tonnes per year, 60% of which comes from cows (Corniaux et al. 2014). As a result, more and more livestock producers in Burkina Faso are increasing milk production to supply the domestic market (Sib et al. 2017; Vidal et al. 2020).
This raises the issue of productivity and sustainability of those emerging dairy operations.
For a long time, livestock producers in Burkina Faso seemed to have steered clear of the milk market, as shown by the dairy farmer typologies of Hamadou et al. (2003) and Ouédraogo (1995), established on the outskirts of Bobo-Dioulasso and Ouagadougou, respectively. These typologies revealed two groups of livestock producers based on the degree of production intensification and integration into the market. According to these studies, the first and overwhelmingly largest group (98.5%) consisted of pastoralists and agropastoralists who saw milk production as a secondary economic activity after cattle trading. These holdings were characterised by the breeding of zebu cows, with pasture as the main feed resource, a very low milk yield per heifer, and limited marketing of the milk produced. The second group, much smaller (1.5%), consisted of dairy farmers who were in the process of specialisation and were market-oriented. For them, milk production was the main objective. Consequently, in order to secure their activity and increase production, they were pursuing a land acquisition strategy and developing more intensive dairy farming practices (i.e. use of feed concentrates, exotic breeds and sometimes artificial insemination). Recent studies by Sib et al. (2017) and Vidal et al. (2020) in the Bobo-Dioulasso area, and by Gnanda et al. (2016) on the outskirts of Ouagadougou, have shown that the dairy production landscape in Burkina Faso is still dominated by the same categories of producers, but with an increasing proportion of market-oriented dairy farmers. The primary objective of this study is to characterise the different dairy systems more accurately than in previous studies by providing the most precise quantitative elements possible. The ultimate objective is to compare dairy farmers' practices at dairy system level, assess the performance of dairy systems and provide information on the drivers and sustainability of ongoing developments within dairy systems.
Materials and methods
The survey was based on a sample of 60 milk-producing farms representative of the range of types highlighted by previous studies (Hamadou et al. 2003; Gnanda et al. 2016; Sib et al. 2017; Vidal et al. 2020). Data were collected through a one-pass survey questionnaire, applied to each farm manager and covering 1 year of production (from February 2018 to January 2019), focusing on the dairy operation during the hot dry season (February to May 2018), the rainy season (June to October 2018) and the cold dry season (November 2018 to January 2019). A principal component analysis (PCA) was applied to twenty recorded and calculated variables, followed by an ascending hierarchical classification (AHC) in order to distinguish uniform classes. The twenty variables included:
- Structure-related variables: the cattle herd, estimated from the count of heads carried out during the survey and expressed in tropical livestock units (TLU), where 1 TLU = an adult weighing 250 kg, counting 1.5 TLU for an adult male zebu or a crossbred dairy cow, 1 TLU for an adult dairy zebu cow, 0.8 TLU for a young bull or a heifer and 0.4 TLU for a calf; crops (ha, including fodder crops); number of milking cows (in TLU, i.e. Sudanese Fulani zebu plus crossbred milking cows); and percentage of crossbred milking cows (%; crossbreds issued from artificial insemination (AI): zebu x Holstein, zebu x Montbéliarde, zebu x Brune des Alpes, or zebu x Tarentaise)
- Operating variables: pasture ingested, fodder ingested and feed ingested per milking cow (in kg of dry matter (DM)/TLU/day, counting 6.25 kgDM/TLU/day for a full 14 h day of grazing); fodder stored (kg/TLU/year); manure lost on pasture (kg/TLU/year); and manure recovered for farm needs (kg/TLU/year)
- Performance variables: average milk yield (l/TLU/day); milk sold (l/TLU/year); milk production costs (CFA F/TLU/year; 1 euro = 655.957 CFA F) for wages, feeds and healthcare; income from milk and profit margin (CFA F/TLU/year); and whether the milk income is managed by women of the household (yes or no)
We carried out an ANOVA test completed by a Newman-Keuls test to see whether differences between classes were significant at the 5% threshold.
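The statistical workflow described above (PCA on the twenty variables, ascending hierarchical classification, then ANOVA with a Newman-Keuls post hoc test) can be reproduced with standard scientific-Python tools. The sketch below is a minimal illustration on placeholder data; choices such as Ward linkage and five retained components are assumptions rather than the authors' settings, and the Newman-Keuls step is only indicated.

```python
# Minimal sketch of the PCA + AHC + ANOVA workflow on placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster
from scipy import stats

X = np.random.rand(60, 20)            # 60 farms x 20 structure/operating/performance variables
X_std = StandardScaler().fit_transform(X)

scores = PCA(n_components=5).fit_transform(X_std)      # principal component scores
tree = linkage(scores, method="ward")                   # ascending hierarchical classification
classes = fcluster(tree, t=3, criterion="maxclust")     # cut the dendrogram into 3 classes

# One-way ANOVA on, e.g., milk yield (column 0) across the 3 classes;
# a Newman-Keuls post hoc test would follow for pairwise comparisons.
groups = [X[classes == k, 0] for k in (1, 2, 3)]
print(stats.f_oneway(*groups))
```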
Results
The PCA and the AHC brought out three classes of dairy systems (Table 1; Fig. 1).
Pastoralists' dairy operations feature low milk yields (1.4 l/TLU/day) and low volumes of milk sold (349 l/TLU/year). Pastoralists have large cattle herds (49 TLU). Their dairy operation is largely dominated by Sudanese Fulani zebus (16 zebus and only two crossbred milking cows: zebu x Holstein, zebu x Montbéliarde, or zebu x Brune des Alpes, depending on the farm). On average, they milk 10 milking cows per day. They make extensive use of grazing (pasture ingested on native grasslands estimated at 4.3 kgDM/TLU/day). Pastoralists provide very little fodder and feed rations (2.3 and 0.8 kgDM/TLU/day). They store very few crop residues for fodder (855 kgDM/TLU/year). They cultivate small areas (3.7 ha/farm: mainly maize and sorghum) and grow little fodder (6% of the cultivated area; cowpea or velvet beans). Faeces recycling is low. The amount of manure recovered for farm needs is estimated at 331 kg/TLU/year, while the quantity of manure lost on pasture is 401 kg/TLU/year. They have the lowest production costs (34,446 CFA F/TLU/year for wages + feeds + healthcare; 115 CFA F/l). Costs can be broken down as follows: feed 70%, healthcare 19% and wages 11%. Income from milk sales and profit margin on milk are the lowest among the three classes, at 63,602 CFA F/TLU/year and 29,156 CFA F/TLU/year, respectively. For pastoralists, milk production remains a minor economic activity for the household as a whole but primarily benefits women in 60% of cases.
Agropastoralists' dairy operations are characterised by higher milk yields (3.1 l/TLU/day) and higher volumes of milk sold (937 l/TLU/year) than those of pastoralists.
Agropastoralists have smaller cattle herds (17 TLU). On average, they milk 7 cows per day, mostly Sudanese Fulani zebus (7 zebus and 1 crossbred: zebu x Holstein, zebu x Montbéliarde, or zebu x Brune des Alpes). They use pasture to feed their milking cows, but less intensely than pastoralists (pasture ingested on native grasslands was estimated at 3.5 kgDM/TLU/day). However, they provide larger fodder and feed rations, at 7.2 and 3.9 kgDM/TLU/day, respectively. They store more crop residues than pastoralists for fodder (2621 kg/TLU/year). Their cultivated area is smaller (1.2 ha/farm: mainly maize and sorghum), and they grow more fodder (18% of the cultivated area; cowpea or velvet beans). They recycle faeces more efficiently than pastoralists (405 kg/TLU/year of manure recovered for farm needs and 320 kg/TLU/year of manure lost on pasture). Given their higher level of production intensification, agropastoralists have higher costs than pastoralists (255,101 CFA F/TLU/year for wages + feeds + healthcare; 293 CFA F/l of milk). Their highest cost item is feed (77%), followed by wages (22%), with healthcare costs only amounting to 1%. Income from milk sales and profit margin on milk are significantly higher than for pastoralists, at 305,500 CFA F/TLU/year and 503,980 CFA F/TLU/year, respectively. Within the household, milk plays a greater economic role than among pastoralists, but the proportion of women benefiting directly from the milk income falls to 40%.
Market-oriented dairy farmers' operations feature the highest milk yields (7.3 l/TLU/day) and a significantly higher volume of milk sold than pastoralists and agropastoralists (2140 l/TLU/year). They deliver most of their milk to a dairy. Their cattle herds are of medium size (21 TLU). They milk an average of 17 milking cows per day. Their dairy operation consists mainly of Sudanese Fulani zebu crossed by artificial insemination with exotic dairy breeds (Holstein, Brune des Alpes, Montbéliarde or Tarentaise; 14 crossbred cows and 3 zebus). They are less reliant on grazing (pasture ingested on native grasslands is estimated at 1.3 kg/TLU/day). They provide fodder rations similar to those of agropastoralists, but larger than those of pastoralists (fodder ingested estimated at 5.6 kgDM/TLU/day). They make greater use of feed concentrates (cottonseed meal or corn bran) than the other classes (feed ingested estimated at 5.7 kgDM/TLU/day); some farmers provide excessive amounts of concentrates. Like agropastoralists, they store large quantities of crop residues for fodder (2039 kg/TLU/year). Their cultivated area is similar to that of pastoralists (3.0 ha/holding: mainly sorghum and secondarily maize), but they grow more fodder (59% of the cultivated area; mainly forage sorghum, which is often processed into silage, and secondarily cowpea). Faeces recycling is more efficient than among pastoralists and agropastoralists (491 kg/TLU/year of manure recovered for farm needs and only 108 kg/TLU/year of manure lost on pasture). Their cost structure is dominated by feed, followed by wages (12%) and healthcare (1%). They also boast the highest levels of income from milk sales and the highest profit margins on milk, at 798,142 CFA F/TLU/year and 508,423 CFA F/TLU/year, respectively. For these producers, milk production is often a major economic activity, the income from which is mostly managed by men (82% of cases).
Fig. 1 Dairy systems of the three classes of producers
Discussion
The study shows that the improvement in dairy systems' technical and economic performance mainly rests on cow genetics, their feed and a better recycling loop for agricultural byproducts. It also shows that women are sadly being excluded from the income generated from milk as dairy activity becomes more important in the household. These four points serve as a framework for our discussion aimed at analysing this ongoing transition from an efficiency and sustainability perspective.
Changes in breeding and reproduction practices
Production levels per milking cow in the study area (500 to 2500 l/lactation) are similar to those recorded in the savannah and Sahel regions (Morin et al. 2007; Gaye et al. 2020) but remain below those achieved in tropical Highland areas (Bebe et al. 2003) and in areas with more intensive livestock farming (Ageeb and Hayes 2000; Kahi et al. 2000). This is because dairy systems mostly involve local zebus (Sudanese Fulani zebus) not selected for their dairy output. In the Highlands of Kenya, 78% of small dairy producers rear exotic dairy cattle for their high milk yield, often in stalls with little access to pasture, and only 22% rear local zebus (Bebe et al. 2003). In West Africa and especially in Burkina Faso (Morin et al. 2007; Gaye et al. 2020), interest in exotic breeds and artificial insemination (AI) is growing but remains limited. These practices can have limitations in a farming environment when they are not properly managed (risk of inbreeding). They also raise serious animal welfare issues when it comes to breeds ill-suited to tropical heat (Ageeb and Hayes 2000; Kahi et al. 2000), and AI protocols are subject to debate owing to the conditions under which hormones are produced (Grimard et al. 2003). The AI of local zebus (Sudanese Fulani) with exotic dairy breeds (Montbéliarde, Holstein, Brune des Alpes, Tarentaise) quickly increases milk production, which is appreciated by dairy farmers. However, to stabilise this progression over time, it is essential to supplement AI with programmes for the selection of promising heifers and young bulls among the offspring of crossed dairy mothers, in order to sustainably raise the milk yield of the local cows of tomorrow (to around 10 l/day, an improvement conveyed by the exotic dairy breeds) while maintaining their adaptation to the hot climate of the savannahs (a trait provided by the hardiness of the local zebu).
Changes in feeding practices
The role of pasture in feed rations declines as milk becomes more economically important to the holding, unlike fodder and feed concentrates (Table 1). For instance, among market-oriented dairy farmers, grazing is significantly reduced in comparison to pastoralists and agropastoralists but nevertheless remains an important part of the ration. In both western and eastern Africa, reduced grazing coupled with a stronger commitment to milk production and trade is a general trend resulting in the adoption of stalling (Bebe et al. 2003;Morin et al. 2007;Gaye et al. 2020). In both regions, fodder mostly consists of crop residues collected from the field and stored. Unlike East Africa, silage and fodder crops are still in their infancy in West Africa (Njarui et al. 2011), except in some peri-urban areas such as the outskirts of Ouagadougou. After genetics, feed concentrates are very widely used by producers as a way of increasing production. In the semi-arid zones of Kenya, Njarui et al. (2011) found that 88 to 92% of dairy farmers provided feed at a rate of 2 kgDM/cow/day on average. In Burkina Faso, market-oriented dairy farmers tend to provide feed in abundance to the milking cows (5.7 kgDM/TLU/day) as they are inexpensive and seen as a simple and reassuring way of increasing milk yields. However, such excessive use is neither efficient nor without risks (acidosis).
Changes in the practice of combining crop and livestock farming
The study shows that the increase in cultivated fodder (mainly cowpea and velvet bean hay or silage of forage sorghum in this case study and possibly grass forages such as Brachiaria sp. or shrub fodder such as Leucaena sp. or Albizia sp.; Sib et al. 2017) used in rations goes hand-in-hand with better recycling of cultivated crop residues (maize and sorghum straw) and animal faeces (Table 1). This leads to greater self-sufficiency in fodder and manure for dairy farmers, more efficient use of resources and preservation of soil fertility, i.e. improved sustainability of the holding. This can partly be explained by strong land pressure in peri-urban areas, where dairy farmers who deliver to dairies are established. In the Highlands of Kenya, Udo et al. (2011) showed that, due to increased land pressure, more than three-quarters of dairy farmers intensified their farming practices by gradually moving from free-range grazing to indoor stabling. Furthermore, in Sudan, Ahmed and Fawi (2018) showed that 56% of dairy farms produced 2 to 6 t of manure/month and that a large share of that manure was sold.
Social consequences
Women are found to be excluded from handling milk-related money when this activity becomes economically important in the household (Table 1). Among the Fulani, income from milk goes exclusively to women. However, the situation seems to change once the decision is made to sell the milk to a dairy. This trend is not specific to Burkina Faso and West Africa. In East Africa, Herego (2017) and Umuzigambeho (2017) showed that in milk value chains, women tended to be more focused on home-based production and processing. With the intensification and marketing of dairy products, women's workloads tend to increase, leading to their being sidelined and to men taking over in the marketing chain. Beyond production, men predominate in the milk value chain as milk dealers, animal healthcare agents, AI service providers and extension staff. Policies introduced in these countries to promote women's inclusion in value chains have been slow to produce results. Those policies seek, in particular, to strengthen women's involvement in the management of dairy cooperatives and to improve their access to credit and training.
Heading for the intensification of milk production
The emergence of market-oriented dairy farmers and agropastoralists suggests that the path towards production intensification followed by those dairy systems, which is based on a stronger crop-livestock combination, improved cattle housing, parsimonious use of local feed concentrates and the introduction of exotic dairy breeds mainly through AI and cross-breeding with local zebus, is broadly consistent with that described by Chagunda et al. (2015) among small-scale dairy farmers in East Africa. We have characterised this path as low-cost intensification driven by market opportunity. None of the three dairy system classes described in our study fully meets the five criteria for a sustainable livestock system proposed by Dumont et al. (2013). According to those criteria, market-oriented farmers' dairy systems are those showing the most optimised metabolic operation of the livestock system (principle 3), the lowest healthcare expenditure per head (principle 1) and domestic feed and fodder production efforts that reduce dependency on external inputs (principle 2); but they also make greater use of feeds purchased on the market, tend to specialise production and simplify management methods, which goes against principles 2, 4 and 5 in the grid.
In further work, it would be interesting to look in greater depth at dairy producers' ecological footprints and agroecological profiles, taking into account, as suggested by Funes-Monzote et al. (2009), indicators that do not refer solely to the production unit (in this case the cow being milked), but also, for example, the yield per asset or per unit area, along with biodiversity indicators and energy balances.
Finally, this study shows that the improvement in dairy systems' technical and economic performance in Burkina Faso is mainly based on the genetics of the cows making up the operation and on the feed provided to the animals, but also on a better recycling loop for the holding's agricultural by-products.
The study highlights issues of concern regarding current genetic breeding practices (based on exotic breeds), excessive use of feed concentrates on the more intensively farmed holdings and the sidelining of women from the household milk economy once that activity becomes important.
The low-cost intensification market-oriented trend described in this study must draw the attention of researchers and developers alike to support dairy producers in building sustainable pathways.
Code availability Not applicable
Author contribution All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Eric Vall, Ollo Sib, Jethro Delma and Arielle Vidal. The first draft of the manuscript was written by Eric Vall, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Funding The research leading to these results received funding from Stradiv Project (System approach for the TRAnsition to bio-DIVersified agroecosystems; 2015-2019; project number 1504-003 funded by Agropolis Fondation, France), and from Africa-Milk (Promote ecological intensification and inclusive value chains for sustainable African milk sourcing; 2018-2022; project funded by LEAP-agri project which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 727715).
Data availability The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval The manuscript does not contain clinical studies or patient data.
Consent to participate Verbal informed consent was obtained prior to the interview of the dairy farmers.
Consent for publication
The interviewed dairy farmers have consented to the submission of the results of this study to the journal.
Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
| 4,880.4 | 2021-04-26T00:00:00.000 | ["Agricultural and Food Sciences", "Economics"] |
Protein structure similarity from principle component correlation analysis
Background Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square-deviation (RMSD) in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. Conclusion The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum eigenvalues can be highly effective in clustering structurally or topologically similar proteins. We believe that the PCC analysis of interaction matrix is highly flexible in adopting various structural parameters for protein structure comparison.
Background
Conformational resemblance between proteins, whether remote or close, is often used to infer functional properties of proteins and to reveal distant evolutionary relationships between two proteins exhibiting no similarity in their amino acid sequences. Traditionally, high-resolution structure determination succeeds the biological and biochemical studies of proteins to further provide mechanistic details of the function of proteins. The biological function of these proteins have usually been suggested prior to their structural studies by in vitro binding assays, in vivo gene knock-out experiments, and sequence homology with proteins of known function. However, with the completion of the sequencing of the genomes of human and other organisms, major structural biology resources have been harnessed to solve structures of large numbers of proteins encoded by the genomes in a high throughput but less specific fashion, under the name 'structural genomics' [1]. Subsequently, large sets of protein structures are accumulated in the public domain databases for which we know little about their biological roles. This shortfall calls for the development of cost-effective computational methods to predict protein function based on three-dimensional structures, with the aim of providing preliminary information to guide biological experiments later.
In the post-genomic era, large amounts of new protein sequences are available for statistics-based recognition of their biological properties. It has been shown in many cases that with the help of elegant computational algorithms, amino acid sequence information alone can be used to successfully predict a protein's structural class [2][3][4], sub-cellular location [5,6], and even enzymatic activities [7][8][9][10]. These approaches, however, are often limited by sequence noise arose from natural mutations throughout the evolutionary path, in which proteins are structurally and functionally conserved, but divergent in amino acid sequences. It is a recurring theme in structural biology that proteins with completely different sequences can adopt very similar global fold. Hence, incorporating structural information into functional genomics would potentially upgrade predictions to the next level of accuracy. Owing to the rapid technical advances in X-ray crystallography and liquid-state NMR spectroscopy, protein structure determination becomes more routine than before. It is reasonable to predict that full-scale structure determination can be the first step towards characterizing the biological role and mechanism of a newly sequenced protein.
In the roughly 13,000-entry protein structure database (PDB), only approximately 4,000 different folds are represented, a fold/structure ratio of approximately 1/5 [11]. Therefore, given a new protein structure determined experimentally, chances are high that its topological arrangement of secondary fragments already exists in the PDB, either as an individual protein or as a domain within a larger protein.
Figure 1. (a) Ribbon representation of 1IP9, showing two α helices and four β strands, and (b) the corresponding symmetric interaction matrix (defined in eq. 2), where h 3 and h 5 are the two α helices, and h 1 , h 2 , h 4 and h 6 are the four β strands. The gray-level values denote the distance between any two C α atoms, with white corresponding to the shortest distance, i.e., 0.
Structure comparison is traditionally based on coordinate RMSD [12,13]. While the RMSD approach is effective in comparing two close topologic structures with similar chain length, it fails when proteins are of different shapes or lengths. One outstanding example is Calmodulin, a ubiquitous Ca 2+ binding protein that plays a key role in numerous cellular Ca 2+ -dependent signaling pathways [14]. The backbone RMSD between the Ca 2+ -bound and apo states of an individual calmodulin domain (~64 residues) is as large as 4Å, despite the fact that they are the same molecule with the same topology. When using the Ca 2+ -bound structure as a starting model, a homology-based NMR residual dipolar coupling (RDC) refinement scheme, which relies heavily on the model having the correct topology, is able to converge the model to an accurate apo structure using RDCs measured for the apo state [15]. There are numerous proteins with similar secondary element arrangements in 3D space that nevertheless acquire different overall shapes. Clearly, for these proteins, algorithms other than RMSD must be used to reveal their topological similarities. Another well-known program, MAMMOTH (Matching Molecular Models Obtained from Theory), is a sequence-independent protein structural alignment method [16]. It compares an experimental protein structure with an arbitrary low-resolution protein tertiary model. The distance defined in MAMMOTH is quite different from our approach. There are also many other methods of protein structure comparison, such as [17][18][19][20][21]. Note that all of the aforementioned methods use sequence-based comparison. In contrast, our method adopts secondary structure based comparison and focuses on extracting invariant topological features.
In our study, we measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In this method, referred to here as the principle component correlation (PCC) analysis, the symmetric matrix for an individual protein is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. It is first demonstrated that the maximum eigenvalues of these interaction matrices can be effectively used to group structurally or topologically homologous proteins. Then, by taking into account both the maximum eigenvalues and their corresponding eigenvectors, a more refined pair-wise structure comparison is performed, which is able to differentiate structures of similar shape but different topological backbone traces. It is also shown that the results of the PCC analysis are highly comparable to those given by the scaled Gauss metric (SGM) calculations [22] for the data sets studied. We believe the PCC method is flexible in adopting various structural parameters for pair-wise structure comparison.
Figure 2. (a) The plot of scaled λ 2 (the second largest eigenvalue) versus λ 1 (the maximum eigenvalue), calculated using the PD matrix, for all proteins in the four data sets, and (b) the plot of λ 1 of the PID matrix versus that of the PD matrix.
Materials
A total of fifty-six protein structures, grouped into 6 different sets according to CATH [23,24] are used to test our algorithms. Proteins in structure set I belong to the "mainly alpha" class, including mostly apoptosis regulators in the BCL-x L super family as well as others with remote conformational resemblance; all have the "Orthogonal Bundle" architecture. The atomic coordinates were retrieved from PDB with accession codes 1A4F, 1A6G, 1COL
Clustering of structurally similar proteins by SMEC method
One of the goals of this study is to compare and identify structurally or topologically similar proteins. In other words, given a new experimentally determined protein structure, the proposed method is expected to rapidly place the structure into a group of structurally or topologically similar proteins in the database, thereby aiding in correlating topological similarity with functional similarity. To illustrate the application of the SMEC approach, we compute the scaled eigenvalues of PD and PID interaction matrices (Section Methods). Figure 2a shows the plot of scaled λ 2 versus λ 1 , calculated using the PD matrix, for all proteins in the four data sets. Figure 2b shows the plot of λ 1 of PID matrix versus that of PD matrices. The different symbols represent different structural groups. These plots were used to resolve clusters of structurally similar structures.
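As a concrete illustration of the SMEC computation described above, the sketch below (a minimal numpy example; the `matrices` mapping and the helper names are hypothetical, not from the paper) extracts the two largest eigenvalues of each protein's symmetric interaction matrix and scales them by the matrix size, which are the quantities plotted in Figure 2a.

```python
import numpy as np

def scaled_top_eigenvalues(F, k=2):
    """Return the k largest eigenvalues of a symmetric interaction
    matrix F, scaled by the matrix size N (the SMEC quantities)."""
    F = np.asarray(F, dtype=float)
    N = F.shape[0]
    eigvals = np.linalg.eigvalsh(F)   # ascending order for symmetric matrices
    return eigvals[::-1][:k] / N      # largest first, scaled by N

def smec_points(matrices):
    """Map each PDB code to its scaled (lambda_1, lambda_2) pair, ready for
    the kind of scatter plot shown in Figure 2a.  'matrices' is assumed to
    map a PDB code to a precomputed PD (or PID) interaction matrix."""
    return {pdb: tuple(scaled_top_eigenvalues(F)) for pdb, F in matrices.items()}
```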
Pair-Wise structural comparison by PCC method
In addition to correlating the maximum eigenvalues, the PCC method described in Section Methods, which compares both eigenvalues and eigenvectors, was tested for the four selected data sets. Using the pair-wise distance matrix defined in Section Methods, the difference metric R defined in Eq. 5 was calculated between all pairs of protein structures in the four data sets and is shown in Tables 1-6. Additionally, for the same data sets, writhing numbers computed using the SGM method are presented in the corresponding tables. The R values between a few selected proteins from different groups are also shown to provide a negative control (Table 2).
Discussion
The concept of principle component analysis (PCA) is widely used in mathematics and pattern recognition to simplify a data set. In mathematical terms, it is a transform that chooses a new coordinate system for the data set, such that the greatest variance by any projection of the data set comes to lie on the first axis (then called the first principle component), the second greatest variance on the second axis, and so on. Because of the large amount of information stored along the first axis, the maximum eigenvalue itself can be characteristic enough to represent structural features of a protein. It is therefore expected that smaller components of interaction matrices are not effective for this purpose. Similarly, when using the first number computed with the SGM algorithm, the four structure sets can be resolved (see Fig. 3).
In addition to the PD matrix, PID matrix defined above was used to provide further separation between clusters of eigenvalues. This was demonstrated in Fig. 2b, in which the plot of λ 1 of PID matrices versus that of PD matrices achieves a much better grouping of the four structural sets in the vertical dimension as compared to the plot in Fig. 2a. This further emphasizes the importance of the maximum eigenvalues and variations in the definition of the interaction matrix that provides independent structural information. It does not escape our notice that even better resolution can be achieved by correlating λ 1 with three or more different types of interaction matrices in a multidimensional plot. The caveat, however, is that definitions of invariant relation constructing the matrices should not be redundant as there are a limited number of independent invariants in a protein structure. Nevertheless, the results here show that the PCA method using secondary interaction matrix is highly flexible in adopting various structural parameters as a means of structure comparison. We also investigate how much the first eigenvalue captures the eigenvalue spectrum in the BCL-x L family. We found that the first eigenvalue captures 45.78% of the sum of the 105 eigenvalues. That indicates that more eigenvalues could be helpful in protein structure classification in our future work.
A more elaborate method built on PCA is explored in this study to utilize the directional information contained in the eigenvector corresponding to λ 1 , named here the PCC analysis, as described in Section Methods. This method is particularly suited for pair-wise structural comparison. Using the simple PD matrix definition (Section Methods), the pair-wise difference metrics, R, are all small (< 0.4) within each of the four known structural sets (Table 1 and Figure 5(a)-(f)). The SGM score in Figure 5 is defined as the absolute difference between the SGM values of two proteins. The symbol 'o' denotes that the R score is smaller than the SGM score, and '*' denotes that the R score is bigger than the SGM score. Furthermore, as a negative control, R values between structures from different sets are much larger, typically greater than 2.0 (Figure 5(e)). Based on the R values in Table 1 and Figure 5(a)-(f), we found empirically that by setting the cutoff R value to 0.4, the PCC method can faithfully place all structures in their designated groups.
To provide a more in-depth view of the PCC method, the analysis of data set I is described here in detail. This set consists of mainly α-helical structures having the "Orthogonal Bundle" architecture. Proteins 2BID, 1F16, 1G5M, 1GJH, 1MAZ, and 1DDB are apoptosis regulators of cell-death pathways associated with the mitochondrion. Since mitochondria originated from prokaryotes, these proteins are believed to have evolved from the same ancient design. Although they differ substantially in amino acid sequence as well as in shape, the overall scaffold and topology are similar. As expected, the R values among them are all less than 0.4 (Table 1). Other proteins in this set, including bacterial toxins that are capable of forming membrane pores (1MDT and 1COL) and myoglobin (1A6G), have remote conformational resemblance with the BCL-x L proteins. The R values between these structures and the apoptosis regulators are also less than 0.3 and are comparable to those found within the BCL-x L family. It is interesting to note that although 1MDT and 1COL are not related to the BCL-x L proteins in terms of physiological roles, they do share a similarity with the BCL-x L members other than topology; that is, they are all able to form large pores when inserted into the cellular membrane.
Figure 5. The plot of R score versus the SGM score: (a)-(f) are plotted for data sets I to VI, respectively. The SGM score is defined as the absolute difference between the SGM values of two proteins. The symbol '*' denotes that the R score is smaller than the SGM score, and 'o' denotes that the R score is bigger than the SGM score.
Summing up the results of Table 1 and Figure 5(a)-(f), the R values within individual sets are on average very small, with a mean of 0.1102 and a standard deviation of 0.1269. This is expected because the structures have been manually examined and pre-grouped into topologically similar sets. The comparison results from PCC analyses are generally comparable to those of SGM for the data sets under study (see Table 1 and Figure 5(a)-(f)). However, in a few isolated cases, the difference in the scaled writhing numbers within the same structure set can exceed the threshold of 0.4 that governs similarity (for example, protein pairs (1MAZ, 2BID) and (1F16, 1DDB) in Table 1, and protein pairs (1C78, 1FM0), (1C78, 1NDD), and (1C78, 1IBQ) in Figure 5(b)). This is because the PCC analysis using the PD matrix places more emphasis on the spatial separation and orientation of secondary segments. It must be mentioned that the PD matrix alone is not expected to detect pure topological similarities. The results for structure sets with predominantly β strands and mixed α/β proteins show similar R values (Figure 5(c) and 5(d)), indicating the generality of this method in protein structure comparison. We also tested these six data sets using MAMMOTH, which can also separate the six classes well.
Another variation of the PD matrix definition is to take into account the N–C terminal sense, in an attempt to further emphasize protein topological features. A good example is the comparison between structures 1COL and 1DDB in data set I. A visual examination of the two structures reveals that they share a similar shape, but are considerably different in the topological arrangement of helices 1 and 3. In protein 1COL, the first and third helices are antiparallel, whereas they are parallel in 1DDB (see Figure 4). This is not identified by the PCC analysis using the PD matrix, as R = 0.029; the great similarity in shape prevails in the comparison. However, by applying the PDS matrix defined in Section Methods, the R value increases considerably to 1.707, clearly highlighting the difference in backbone topological traces. Finally, we would also like to point out that the definition of R could be improved by introducing more eigenvalues.
Figure 4. Ribbon representation of protein structures of (a) 1COL and (b) 1DDB. The two proteins have similar shape, but different topological arrangements of helices 1 and 3.
Conclusion
PCC analysis of secondary interaction matrix is a conceptually simple method that yields results highly comparable to the SGM method. Both are able to distinguish protein conformations based on the more subtle topological features. While the SGM method compares structures in a more topological sense, the outcome of PCC analysis is more dependent on the definition of the interaction matrix. With the PD matrix, the PCC analysis puts more weight on the detailed structure and shape, while it is also capable, to a certain extent, of distinguishing different topological traces. In certain cases of pair-wise comparison, such as that between 1COL and 1DDB, protein shapes can overwhelm their topological features in the analysis; yet the PCC analysis of the PDS matrix is able to completely differentiate between 1COL and 1DDB.
Owing to the flexibility offered by the new method, a more effective definition of interaction matrix can be explored to provide a more efficient structure comparison. There exist many invariants in each protein. Some invariants are important for protein classification, but some are not. Hence, our future work will further explore feature selection, automated classification of PDB, modeling and statistical learning, as well as protein domain matching.
Principle component analysis of secondary interaction matrix
Assume a protein has n secondary fragments denoted by h 1 , h 2 ,..., h n , with the number of residues in each secondary structure denoted by l 1 , l 2 ,..., l n , respectively, so that the total number of residues belonging to secondary structures is N = l 1 + l 2 + ... + l n . The principle components of the N × N interaction matrix are then obtained by orthogonal decomposition of the matrix into EΛE −1 , with Λ = diag(λ 1 , λ 2 ,..., λ N ), where λ 1 ≥ λ 2 ≥ ... ≥ λ N are the sorted eigenvalues, the corresponding eigenvectors are e 1 , e 2 ,..., e N , and E = [e 1 , e 2 ,..., e N ] is an invertible matrix. Generally, the maximum eigenvalue, λ 1 , and its corresponding eigenvector in N-dimensional space encode the most dominant features in the structure and therefore can be effectively used to directly compare structures, as well as to identify the less obvious topological features common to the proteins.
Since the eigenvalues depend largely on the dimension of the interaction matrix, they are divided by the matrix size N, a treatment similar to the scaling of writhing numbers in the SGM method [22]. In a relatively crude analysis, λ 1 can be directly compared to infer structural similarity. This method is referred to here as the Scaled Maximum Eigenvalue Comparison (SMEC).
In addition to the maximum eigenvalues, their corresponding eigenvectors can also be used to correlate similar structures. Particularly for pair-wise structure comparison, degree of similarity can be more accurately measured by comparing both eigenvalue and eigenvector. Since proteins are generally not of the same length, their eigenvectors cannot be directly correlated due to different dimensionality. Therefore, a "sliding window" approach is employed to correlate the smaller protein to all matching segments (length-wise) in the larger protein.
For a protein A with N secondary structure residues compared against a larger protein B with M ≥ N such residues, (λ B 1 , e B 1 ) are computed from secondary structure residues 1 ... N of B, (λ B 2 , e B 2 ) are from secondary structure residues 2 ... N+1, and so on. To quantify structural similarity, we define a difference metric, R j , between the principle components of protein A and those of the jth matching segment of protein B (Eq. 5). Obviously, a smaller R j indicates better correlation or a higher degree of structural similarity. The overall difference between the two proteins is then defined as R = min(R 1 , R 2 , ..., R M-N+1 ); the minimum is used here because it potentially allows mapping a smaller structure onto a homologous domain within a larger protein. This method is called the Principle Component Correlation (PCC) analysis.
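A sketch of the sliding-window bookkeeping is given below. The per-window score used here (absolute difference of the scaled maximum eigenvalues plus one minus the absolute eigenvector overlap) is only an illustrative stand-in; it is not claimed to be the paper's Eq. 5. The window is also taken as a simple diagonal sub-block of B's interaction matrix, which assumes residues are ordered consecutively along the matrix.

```python
import numpy as np

def principal_component(F):
    """Scaled maximum eigenvalue and its unit eigenvector for a
    symmetric interaction matrix F."""
    F = np.asarray(F, dtype=float)
    w, v = np.linalg.eigh(F)          # eigenvalues in ascending order
    return w[-1] / F.shape[0], v[:, -1]

def pcc_difference(F_a, F_b):
    """Sliding-window comparison of protein A (N residues) against a
    larger protein B (M >= N residues); returns R = min_j R_j.
    The exact form of R_j here is an illustrative assumption."""
    F_a, F_b = np.asarray(F_a, dtype=float), np.asarray(F_b, dtype=float)
    lam_a, e_a = principal_component(F_a)
    N, M = F_a.shape[0], F_b.shape[0]
    scores = []
    for j in range(M - N + 1):
        lam_b, e_b = principal_component(F_b[j:j + N, j:j + N])
        scores.append(abs(lam_a - lam_b) + 1.0 - abs(np.dot(e_a, e_b)))
    return min(scores)
```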
Defining the matrix elements
The definition of the block matrix elements, d(·, ·), depends on the desired structural features to be extracted.
In the current study, we focus structural comparison on protein backbone conformation. Clearly, the simplest invariant describing the backbone conformation is the Euclidean distance between a pair of C α atoms from two different secondary segments. Formally, the elements are defined as d(r u i , r v j ) = ||r u i − r v j ||, where r u i and r v j are the coordinates of the two C α atoms of residues u of h i and v of h j , respectively. For conciseness, we name the interaction matrix so defined the Pair-wise Distance (PD) matrix. For illustration purposes, the interaction matrix for the structure of the PB1 domain of Bem1p (PDB accession code 1IP9) is shown in Fig. 1. This structure, consisting of two α helices and four β strands (Fig. 1a), is used here to provide distances between all pairs of C α atoms in the six secondary elements (Fig. 1b).
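A minimal construction of such a PD matrix might look like the following; the secondary elements are assumed to be supplied simply as arrays of C α coordinates (one array per helix or strand), which glosses over how the elements are actually parsed from the PDB file.

```python
import numpy as np

def pd_matrix(elements):
    """Pair-wise Distance (PD) interaction matrix.
    'elements' is a list of (l_i x 3) arrays of C-alpha coordinates,
    one per secondary element h_i.  The result is the symmetric N x N
    matrix of Euclidean distances between all residue pairs, with
    N = l_1 + l_2 + ... + l_n."""
    coords = np.vstack(elements)                    # residues stacked in element order
    diff = coords[:, None, :] - coords[None, :, :]  # pairwise difference vectors
    return np.linalg.norm(diff, axis=-1)            # N x N distance matrix
```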
Furthermore, two variations of the PD matrix definition are explored in an attempt to provide better resolution in structural comparison and classification. Since the physical energy of interaction between a pair of atoms typically increases monotonically with the inverse of their separation, the inverse of the distance is used to mimic physical interactions between secondary elements. Here the elements of F(h i , h j ) are defined as d(r u i , r v j ) = 1 / max(||r u i − r v j ||, u 0 ), where u 0 represents a hard-sphere boundary below which the interaction is held constant. In this study, we arbitrarily set u 0 to 3 Å. This definition is referred to as the Pair-wise Inverse Distance (PID) matrix.
Another variation of the PD matrix definition takes into account the N–C terminal sense, in an attempt to further emphasize protein topological features. For a secondary element h i , its direction vector v i is defined by two points in Cartesian space: the center of mass of the five consecutive N-terminal C α atoms and the center of mass of the five consecutive C-terminal C α atoms. Given a pair of secondary elements h i and h j , the new matrix elements are defined as d(r u i , r v j ) = sgn(v i · v j ) ||r u i − r v j ||, where sgn(x) is the sign function, which is 1 when x ≥ 0 and -1 when x < 0. This variation is referred to as the Pair-wise Distance with Sense (PDS) matrix in this study.
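Both variations can be written as element-wise transformations of the same pairwise geometry. The sketch below follows the reconstructed formulas above (inverse distance clipped at the hard-sphere boundary u 0 , and distance signed by the relative orientation of the parent elements' direction vectors); it is a sketch of the idea rather than the paper's exact definitions.

```python
import numpy as np

def pid_matrix(pd, u0=3.0):
    """Pair-wise Inverse Distance (PID) matrix: inverse of the PD
    entries, held constant (at 1/u0) below the hard-sphere boundary u0."""
    return 1.0 / np.maximum(pd, u0)

def direction_vector(element, k=5):
    """Direction of a secondary element: from the centre of mass of its
    first k C-alpha atoms to that of its last k."""
    element = np.asarray(element, dtype=float)
    return element[-k:].mean(axis=0) - element[:k].mean(axis=0)

def pds_matrix(pd, elements):
    """Pair-wise Distance with Sense (PDS) matrix: the PD entries signed
    by sgn(v_i . v_j) of the two parent secondary elements."""
    dirs = [direction_vector(e) for e in elements]
    lengths = [len(e) for e in elements]
    elem_sign = np.sign([[np.dot(a, b) for b in dirs] for a in dirs])
    elem_sign[elem_sign == 0] = 1.0          # treat exactly orthogonal elements as parallel
    sign = np.repeat(np.repeat(elem_sign, lengths, axis=0), lengths, axis=1)
    return sign * np.asarray(pd, dtype=float)
```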
Linking/Writhing numbers
To evaluate the ability of the PCC analysis to extract pure topological features, the linking and writhing numbers, which are good measures of global topology, are also calculated for the four sets of structures for comparison. The linking number of two curves is defined by the Călugăreanu-Fuller-White formula [25][26][27]: Lk = Wr + Tw, where the linking number Lk counts the sum of signed crossings between the ribbon's two boundary curves, the writhing number Wr counts the sum of signed self-crossings of the curve, averaged over all projection directions [28], and Tw is the twist number. Lk is invariant under any smooth deformation that avoids self-intersections [29], and it is also independent of projection direction. Wr and Tw are invariant under some transformations, such as rigid body motions. Here we compute the writhing numbers using the Scaled Gauss Metric (SGM) approach previously described by Rogen and Fain [22].
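Before turning to the polygonal-curve formula below, a numerical sketch of the writhe computation may be helpful. It uses a crude midpoint-rule discretization of the Gauss double integral over a C α trace; it is not the exact segment-pair formula of Rogen and Fain's SGM, only a rough stand-in for it, with the SGM-style normalization by the number of C α atoms shown at the end.

```python
import numpy as np

def approximate_writhe(ca_coords):
    """Crude midpoint-rule approximation of the Gauss double integral
    for the writhe of an open polygonal curve (e.g. a C-alpha trace)."""
    r = np.asarray(ca_coords, dtype=float)
    seg = r[1:] - r[:-1]              # segment vectors
    mid = 0.5 * (r[1:] + r[:-1])      # segment midpoints
    n = len(seg)
    wr = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = mid[j] - mid[i]
            d = np.linalg.norm(rij)
            if d < 1e-9:
                continue
            wr += np.dot(np.cross(seg[i], seg[j]), rij) / d ** 3
    return wr / (2.0 * np.pi)

def scaled_writhe(ca_coords):
    """SGM-style normalization: writhe divided by the number of C-alpha atoms."""
    return approximate_writhe(ca_coords) / len(ca_coords)
```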
Let c 1 and c 2 be two closed non-intersecting curves in 3-dimensional space, and define e(s, t) = (c 2 (t) - c 1 (s))/||c 2 (t) - c 1 (s)||, where ||·|| denotes the Euclidean norm. For two closed curves, the vector field e(s, t) is doubly periodic. Such mappings have an integer-valued degree that is invariant under topological deformations. The writhing number is not invariant under general smooth deformations, but it is invariant under translations, rotations, re-parameterizations, and dilations (Murasugi, 1996). Since the backbone of a protein is a polygonal curve, the writhing number of c 1 (t) can be calculated as Wr = Σ 1≤i1<i2≤N W(i 1 , i 2 ), where W(i 1 , i 2 ) is the writhing number between the i 1 th and the i 2 th segment; s and t denote two different C α atoms, and N is the total number of C α atoms. The SGM method is defined as the normalized writhing number, namely, Wr divided by N [22]. The absolute difference between the writhing numbers of two proteins is used to infer topological similarity. | 5,884.2 | 2006-01-25T00:00:00.000 | [
"Biology",
"Computer Science"
] |
TAO1, a representative of the molybdenum cofactor containing hydroxylases from tomato.
Aldehyde oxidase and xanthine dehydrogenase are a group of ubiquitous hydroxylases, containing a molybdenum cofactor (MoCo) and two iron-sulfur groups. Plant aldehyde oxidase and xanthine dehydrogenase activities are involved in nitrogen metabolism and hormone biosynthesis, and their corresponding genes have not yet been isolated. Here we describe a new gene from tomato, which shows the characteristics of a MoCo containing hydroxylase. It shares sequence homology with xanthine dehydrogenases and aldehyde oxidases from various organisms, and similarly contains binding sites for two iron-sulfur centers and a molybdenum-binding region. However, it does not contain the xanthine dehydrogenase conserved sequences thought to be involved in NAD binding and in substrate specificity, and is likely to encode an aldehyde oxidase-type activity. This gene was designated tomato aldehyde oxidase 1 (TAO1). TAO1 belongs to a multigene family, whose members are shown to map to clusters on chromosomes 1 and 11. MoCo hydroxylase activity is shown to be recognized by antibodies raised against recombinant TAO1 polypeptides. Immunoblots reveal that TAO1 cross-reacting material is ubiquitously expressed in various organisms, and in plants it is mostly abundant in fruits and rapidly dividing tissues.
Molybdenum cofactor (MoCo) 1 binding is common to a group of ubiquitous enzymes, including nitrate reductase, xanthine dehydrogenase, sulfite oxidase, and aldehyde oxidase (1,2). These enzymes are involved in various types of oxidative metabolism, and show broad and occasionally overlapping specificities (1). A subset of this enzyme class, MoCo hydroxylases, includes the structurally similar enzymes xanthine dehydrogenase (XD) and aldehyde oxidase (AO). These enzymes contain, in addition to the molybdenum cofactor, FAD and two types of iron-sulfur centers (3). They have been shown to be related to the AO from Desulfovibrio gigas (MOP) for which a crystal structure has recently been described (4).
AO (aldehyde-oxygen oxidoreductases) are widely distributed among various organisms. They are characterized as dimers of two 150-kDa subunits and catalyze the oxidation of N-heterocyclic compounds in the presence of O 2 , but appear to display a broad range of substrate specificities (5). In humans, AO has been implicated in familial amyotrophic lateral sclerosis (6) and hepatotoxicity of alcohol (7). In plants, AO activities are implicated in the biosynthesis of two plant hormones, abscisic acid (ABA) and indole acetic acid (IAA). The plant hormone ABA is involved in various processes, including the reaction of plants to environmental stresses such as wounding, water stress, seed development, and plant development (8). In the ABA biosynthesis pathway, AO is thought to catalyze the conversion of ABA aldehyde to ABA, which is considered to be the last step in the biosynthesis of ABA. Indeed, MoCo defective tobacco and barley mutants were found to be deficient in ABA synthesis (9,10). IAA is involved in many aspects of plant growth and development, however, its biosynthetic pathway remains to be elucidated. Recently, an AO activity from maize coleoptiles was shown to efficiently oxidize indole-3-acetaldehyde, a putative precursor of IAA (11). It is unknown whether the broad substrate specificities attributed to AO originate from a single enzyme or from a family of closely related enzymes.
XD is very similar to aldehyde oxidase in cofactor content, molecular weight, and sequence (1,3,12). It is a ubiquitous enzyme which is involved in purine metabolism. In plants, XD plays a central role in nitrogen assimilation, and was found to be highly enriched in nitrogen-fixing nodules of legumes of the ureide class (13). Genes encoding XD have been identified in mammals, chicken, flies, and Aspergillus (14 -20).
The distribution and substrate specificities of human AO and XD activities were shown to overlap but to be distinct (5,21). Comparison of the sequences of recently described AO genes with those of various genes reveals high homology between eukaryotic AO and XD genes, particularly in the sequences involved in the binding of the different cofactors. However, several sequences thought to be involved in NAD binding and substrate specificity are absent from AO, and can be used to differentiate between the two enzymes. The absence of the NAD-binding site is in agreement with the fact that AO does not require NAD for its action, and the lack of substrate binding sequences corresponds to the different specificity range of the two enzymes. Strikingly, a high level of homology was found between eukaryotic XD and AO genes and the bacterial MOP in the domains which participate in the iron-sulfur centers and MoCo binding (4).
The genes which encode either XD or AO have not been described in plants. Their isolation will shed light on the origin of the different attributed activities, and will aid in the elucidation of the biosynthetic pathways in which these enzymes are involved. Here we describe a novel gene family from tomato, tomato aldehyde oxidase (TAO), which is highly homologous to XD and AO genes. Detailed sequence comparisons in the different functional domains suggest that this gene family belongs to the AO rather than the XD type of MoCo containing hydroxylases. We show that this gene family, as well as AO activity, is highly expressed in fruits of various plant species. We genetically map the genes from the family to two gene clusters on two different chromosomes.
* This work was supported in part by grants from the Ministry of Science and the Arts, the Forchheimer Center for Molecular Genetics, and the European Commission Grant for Biotechnology BI04-CT96-0101. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
EXPERIMENTAL PROCEDURES
Genetic and Physical Mapping-The Lycopersicon pennellii introgression lines (ILs) population, described by Eshed and Zamir (22), was used to genetically map the different copies of TAO1. RFLP analysis of Southern blots was performed as described previously (23). The physical distance of TAOa from TG105 was established by partial digestion of YAC 340-63. The digests were fractionated on contour-clamped homogeneous electric field gels (Bio-Rad), blotted, and hybridized with probes. The maximal distance between a pair of markers was estimated according to the smallest partial band that contained both markers.
Clones and Sequence Analysis-YAC 340-63, which contains the RFLP marker TG105A, was generated from the tomato line Rio Grande-PtoR, and cloned in the vector pYAC 4 (24). The sequence of TAO1 was obtained from four overlapping clones, as shown in Fig. 1. Clones TAO1-1 and TAO1-10 were isolated by using YAC 340-63 for screening of a cDNA library from roots of Lycopersicon esculentum c.v. Mogeor. Since these cDNA clones were not complete, the 5′ end of TAO1-1 was used to rescreen the same cDNA library, and a longer but still partial cDNA clone, TAO1-5, was isolated. The 5′ end of the sequence was obtained from a genomic clone, TAO1-G7. Overlapping sequence confirmed that the different clones originated from the same gene. Sequence analysis was performed using the sequence analysis software package of the University of Wisconsin Genetics Computer Group (25). The insert of cDNA clone TAO1-10 was used as a probe for Southern blot analysis.
Production of Anti-TAO1 Antibodies and Immunoblotting-Two segments of TAO1 were overexpressed in bacteria, using two different types of expression vectors (Fig. 1). For antibody 1, a 500-base pair long EcoRI fragment from the partial cDNA clone TAO1-10 (containing one internal EcoRI (E) site and one EcoRI site from the vector (E′)) was fused to the glutathione S-transferase protein from Schistosoma japonicum, using the pGEX expression vector (Pharmacia). The resulting fusion protein contained amino acids 950-1128 of the TAO1 protein (Fig. 1A, ab 1). For antibody 15, the construct contained the flanking 800-base pair EcoRI segment from the same cDNA clone (again containing one internal EcoRI site and one site from the vector), which was fused to a stretch of histidine residues, using the pRSET expression vector (Invitrogen). The resulting fusion protein contained amino acids 1129-1315 of the TAO1 protein (Fig. 1A, ab 15). The polypeptides produced from these clones were purified by glutathione-agarose for the glutathione S-transferase fusion protein and by bound nickel columns for the histidine-fusion proteins. The purified peptides were injected into guinea pigs to obtain antibodies ab 1 and ab 15. Both antibodies recognized on immunoblots polypeptides of an apparent 78 kDa (Fig. 1B), which were not detected by control antibodies (not shown).
Protein Gels, Immunoblotting, and Activity Assays-SDS-PAGE and immunoblotting were performed as described by Raz and Fluhr (26), using the ECL system (Amersham). Young green fruits were ground in 20 ml of extraction buffer containing 50 mM Tris-HCl (pH 2.5), 1 mM EDTA, 1 M sodium molybdate, 2 mM 2-mercaptoethanol, 5 μg/ml leupeptin, 5 μg/ml aprotinin, 1 mM phenylmethylsulfonyl fluoride, 10 μM FAD, and 0.5 g of polyvinylpyrrolidone, at 4°C. The extracts were filtered through 4 layers of gauze, and centrifuged at 10,000 × g for 20 min. The supernatant was applied to a 2-ml Q-Sepharose column (Pharmacia). The column was washed with 3 volumes of extraction buffer without polyvinylpyrrolidone, and proteins were eluted with 1 volume of extraction buffer containing 0.5 M NaCl. The eluate was either dialyzed overnight or desalted on a P6 (Bio-Rad) minidesalting column. Samples (50 μg) were fractionated on a native PAGE and activity was assayed as described (10).
RESULTS
Sequence Analysis of the TAO Gene Family-During a high resolution mapping study of the I2 Fusarium wilt resistance gene from tomato, we used a 350-kilobase YAC clone, YAC 340-63, to screen a root cDNA library. One of the partial cDNAs isolated was sequenced and found to be closely homologous to the mammalian AO, not yet described in plants. The gene has been designated tomato aldehyde oxidase 1 (TAO1). The sequence of TAO1 was obtained by a combination of overlapping sequence fragments from 3 different clones (see "Experimental Procedures"). The deduced amino acid sequence, of 1315 amino acids, is compared in Fig. 2 with the deduced amino acid sequences of a representative aldehyde oxidase, the human xdh2 (20), and a representative xanthine dehydrogenase, the rat xdh (14), which are the most similar to TAO1. The human xdh2 was originally defined as a xanthine dehydrogenase-type (15), but was shown to actually encode an aldehyde oxidase activity (19,27). Regions that contain identical amino acids are indicated with boxes. The homology resides mainly in the NH 2 terminus of the protein, which contains the iron-sulfur centers, and in the COOH-terminal half that contains the MoCo-binding domain (see below). The region between these two parts is considered to bear the FAD-binding region and displays less homology. We wished to more specifically relate TAO1 to either the XD or AO groups of MoCo hydroxylases. In Fig. 3, we compare conserved functional regions of TAO1 to the consensus sequence of XD-type genes, and to 3 AO-type genes. Xanthine dehydrogenases contain two iron-sulfur centers, an FAD-binding domain, an NAD-binding site, a MoCo complexing domain, and a substrate binding domain (Fig. 3, top; Ref. 19). The locations of the two iron-sulfur domains have now been pinpointed by the recently described crystal structure of MOP (4), and shown to be highly conserved among XD and AO. The consensus sequence of both iron-sulfur domains is also conserved in TAO1 (Fig. 3, iron sulfur 1 and iron sulfur 2), except for the substitution of the first cysteine for methionine in the first iron-sulfur center of TAO1.
FIG. 1. Source of clones, probes, and fusion polypeptides. A, the TAO1 gene is shown schematically on top, and below are shown the overlapping clones from which the full-length sequence of TAO1 was obtained, and the fragments used for production of fusion polypeptides. The portion of each clone that was sequenced is outlined with a solid black line. TAO1-10, TAO1-1, and TAO1-5 are cDNA clones in the λgt10 vector. TAO1-G7 is a genomic subclone, originating from YAC 340-63. The insert of the cDNA clone TAO1-10 was used as a probe for Southern blot hybridizations. The fragments ab 1 (amino acids 950-1128) and ab 15 (amino acids 1129-1315), from the partial cDNA clone TAO1-10, were used for the preparation of antibodies 1 and 15, respectively, as described under "Experimental Procedures." Symbols are: E, EcoRI site originating from the TAO1 open reading frame; E′, EcoRI sites originating from the vector sequence. B, an immunoblot of total protein extract from tomato ovaries. Protein (35 μg) was fractionated by SDS-PAGE and immunoblotted. Identical blots were incubated with antibody 1 (ab1) and antibody 15 (ab15), as indicated (see "Experimental Procedures" for description of the antibodies), and developed using the ECL method (Amersham). Left panel, 20-s and 1-min exposure of ab 1 and ab 15, respectively; right panel, 20-min overexposure.
The cysteines are considered crucial for the iron-sulfur domain and we are not aware of their substitution in other cases. As the 5′ sequence of the TAO1 gene was obtained from a genomic clone (see "Experimental Procedures"), the deduced sequence at that point may reflect an intron junction rather than the actual first methionine residue. However, sequencing of an additional 1020-base pair upstream region showed no evidence for the presence of an additional cysteine-containing exon, nor do the relevant surrounding nucleotides give evidence for a clear splicing consensus sequence.
TAO1, like MOP, BAO, and HXD2, does not contain the sequence Gly 394 -Tyr-Arg (underlined in Fig. 2), which in the XD-type enzymes is thought to be involved in NAD cofactor interaction (28). The precise domain involved in flavin adenine dinucleotide (FAD) binding is not fully established, and it cannot be deduced from the crystal structure of MOP as this region is completely absent from MOP. Indeed, in this region a relatively low level of homology exists among the different genes (Fig. 2).
FIG. 2 legend (in part). Humanxdh2 (20) represents an aldehyde oxidase (19,27) and Ratxdh (14) represents a xanthine dehydrogenase. Boxes show identical residues. The alignment was obtained with the PILEUP function of the Wisconsin sequence analysis software package (Genetics Computer Group, Madison, WI). Identical amino acids were grouped by the ShadyBox software (Camson Huynh, Australian National Genomic Information Service). The putative NAD binding site, and a sequence suggested to be involved in substrate specificity of XD, are underlined in the Ratxdh sequence. Degrees of similarity and identity with TAO1 are, respectively, 55 and 31% for Humanxdh2, and 56 and 35% for Ratxdh.
The three-dimensional structure of MOP suggests a funnellike structure which leads from the surface of the protein to a substrate binding pocket, in close proximity to the site that binds MoCo (4). The domains which participate in MoCo complexing and substrate binding are highly conserved among XDs, AO, and TAO1 as shown in Fig. 3, regions 1-5 (4). However, in several positions amino acid substitutions differentiate TAO1 from XDs, and in those positions TAO1 is more similar to MOP, BAO, and HXD2. Thus, glutamate 807 of rat XD, which is completely conserved among all XDs, is substituted with hydrophobic amino acids in BAO, HXD2, MOP, and TAO1, respectively (Fig. 3, region 1). The equivalent position in MOP is occupied by Phe 425 , which is situated in the substrate binding pocket, and may participate in determining the substrate specificity that differentiates XD and AO activities within the MoCo binding site. Region 4 of the MoCo-binding domain, which was shown by mutational analysis to play a role in substrate specificity of XD, 2 is highly conserved among XDs, but is not conserved between XDs, AOs, and TAO1. In addition, the consensus sequence ERXXXH (underlined in Fig. 2) conserved between all XD-type genes, is completely absent from TAO1, as well as from BAO, HXD2/AO, and MOP. This sequence was suggested by mutational analysis to also be involved in determining substrate specificity of XD (19). Thus, the main apparent differences between TAO1 and XD-type enzymes lie in the proposed NAD and substrate-binding domains. The similarity of TAO1 to MOP, BAO, and HXD2 in these regions suggests that TAO1 encodes a MoCo hydroxylase of the AO-type.
TAO1 Cross-reacting Material Is Highly Abundant in Tomato Fruits-In order to examine the expression pattern of TAO1, two non-overlapping segments from its 3′ end were expressed as fusion proteins in bacteria, and antibodies were raised against the recombinant fusion proteins (see "Experimental Procedures"). In immunoblots of tomato tissue a 78-kDa protein band was detected (Figs. 1B and 4). Occasionally two protein bands of similar size can be resolved, which may represent several related but distinct proteins of the TAO1 family (see below), or additional successive degradation products. The 78-kDa size is considerably less than the calculated 144.5-kDa molecular mass for TAO1. However, as both antibodies, but not a control antibody, reacted on immunoblots with polypeptides of the same apparent 78-kDa molecular mass, the possibility that the polypeptide detected is irrelevant is unlikely (Fig. 1B). Thus, the observed size is probably a physiological or extraction-based degradation product. Similarly, degradation of XD proteins to 80-kDa products was detected in SDS-PAGE analysis of Drosophila proteins (29), and an aldehyde oxidase from maize was shown to degrade upon SDS-PAGE, giving a product of 85 kDa (11).
We wished to examine the expression pattern and tissue distribution of proteins from the TAO family in tomatoes. An immunoblot of various tomato organs is shown in Fig. 4. TAO1 cross-reacting material (CRM) is present in all tomato organs. It is most abundant in the ovary, in developing fruits, and in dividing tissues, such as the shoot tips, which contain apical meristematic tissue. TAO1 CRM was found to be present at high levels in all fruit parts examined (Fig. 4), and in all stages of fruit development (not shown). An additional, lower molecular weight band is apparent in some lanes. This is probably an additional degradation product and is occasionally observed.
2 A. Glatigny and C. Scazzocchio, personal communication.
FIG. 3 legend (in part). (19); HXD2/AO, humanxdh2, an AO-type gene (19,20,27); BAO, bovine aldehyde oxidase (12); and MOP, D. gigas aldehyde oxido-reductase (4). Sequence comparisons of the iron-sulfur centers and the MoCo and substrate binding sites are shown below (4). CON, consensus sequence of the iron-sulfur domains; XDCON, consensus sequences of the motifs found in 5 XD-type genes from: Drosophila melanogaster (16,17); rat (14); human (15); chicken (18); and A. nidulans (19). Residue numbers of the consensus XD-type domains refer to Drosophila melanogaster XD. The arrow indicates an amino acid in the substrate binding pocket highly conserved among the XD-type genes, but different in AO-type genes (4).
Tomato Fruits Are Enriched in Several Aldehyde Oxidase Activities Which Cross-react with Anti-TAO1 Antibodies-The relatively high expression of TAO1 CRM in tomato fruits prompted us to directly examine aldehyde oxidase and xanthine dehydrogenase activities in these tissues by a native activity gel assay (Fig. 5). The upper band in Fig. 5A (asterisk) was substrate independent. In leaf tissue, a band of xanthine dehydrogenase activity was detected when hypoxanthine was used as the substrate (Fig. 5, arrow). In fruits, overlapping bands could be detected, resulting in a continuous region of stain (bar in Fig. 5A, panel HX). When the xanthine dehydrogenase-specific inhibitor allopurinol was added to the reaction, the single XD activity band from leaves disappeared, as well as the most rapidly migrating band from the fruits. Thus, most activity bands in the fruits were not inhibited by allopurinol (Fig. 5A, panel HX+AP); moreover, allopurinol itself served as a substrate for these activities (not shown). The allopurinol-insensitive fruit activities were also detected when the AO-specific substrate 6-methylpurine was used, indicating that this tissue is rich in AO activity (5). This correlates with the high abundance of TAO1 CRM in fruit tissue.
We wished to establish a correlation between the activities observed and the TAO1 CRM, detected in denaturing conditions. To this end, leaf and fruit proteins comigrating with the activity bands shown in Fig. 5A were excised from lanes adjacent to those stained for activity. The proteins were eluted electrophoretically and fractionated by a SDS-PAGE denaturing gel. As shown in Fig. 5B, anti-TAO1 antibodies detected a major band, of apparent molecular mass of 78 kDa. Minor bands are probably degradation products of the 78-kDa band. This result suggests that the smaller than expected 78-kDa CRM originates from enzymatically active proteins. Specific cleavage may be a result of denaturation during SDS-PAGE analysis. Alternatively, the native protein may be processed in vivo but still retain intactness and activity in native gels, and be separated to smaller fragments upon SDS-PAGE, as has been documented for XD during ischemia (14).
TAO1 Cross-reactive Material Is Ubiquitous in Plants and Animals-TAO1 antibodies were derived from highly conserved regions in the MoCo binding area (see Figs. 1-3). This prompted us to examine the antibodies for cross-species reactivity in a "zoo-garden-type" immunoblot (Fig. 6). Different levels of TAO1 cross-reactive material of apparent 78-kDa migration size could be detected in young fruits of all plants assayed, dicots and monocots. In petunia seed pods a considerable amount of a fragment of apparent 140-kDa molecular mass was also detected. Cross-reacting material was also detected in liver cells and in Drosophila. In Drosophila, mutants which do not express XD are well known (30). We examined one of these mutants, ry 506 , null for XD, for the presence of TAO1 cross-reacting material (Fig. 6, compare lanes DROS wt and DROS 506). A significant level of cross-reacting material could be detected in the mutant. The findings are consistent with the observation that the antibodies recognize a broad family of MoCo hydroxylases, in which TAO1 is related to the AO type.
Genetic Mapping of TAO1-Southern blot analysis revealed that TAO1 was part of a multigene family (Fig. 7). To position the members of the TAO family on the tomato genetic map we used a near isogenic lines (IL) mapping population, in which single chromosome segments from L. pennellii were introgressed in 50 ordered lines into a L. esculentum background (31). The absence of an L. esculentum-type polymorphic fragment or the presence of an L. pennellii-type polymorphic fragment in a Southern blot of a specific IL indicates that the origin of the particular fragment is from the introgressed chromosomal segment. The ILs which proved relevant for the mapping of the TAO family are illustrated in Fig. 7. A high level of polymorphism was detected between the original parental lines on a TaqI digest Southern blot, probed with a segment of the TAO1 gene (Fig. 7, right panel, compare L. pennellii and L. esculentum). Nearly all the ILs, as exemplified by IL 8-1, displayed the L. esculentum-type fragment pattern (Fig. 7, right panel). However, specific ILs lacked a subset of L. esculentum-type bands, indicating that the introgressed region contained a TAO copy. A TAO gene cluster was localized in this way to the region of overlap between lines IL 11-3 and IL 11-4, as several L. esculentum-type fragments were absent from both of these ILs (fragments are indicated with arrows and designated "a" in the right panel of Fig. 7). These fragments, as well as other nonpolymorphic fragments, were also present in YAC 340-63, generated from L. esculentum, which was previously mapped to this region of chromosome 11 (32). This result confirms the mapping of the "a" fragments to chromosome 11, and also delineates the additional non-polymorphic bands, mutual to L. pennellii and L. esculentum, which map to the same region of chromosome 11. Physical pulsed field electrophoretic mapping of YAC 340-63 revealed that TAOa is distal to marker TG105, with a distance of approximately 50 kilobases between TG105 and TAOa (data not shown). Other TAO copies were similarly mapped to a second locus, TAOb, on chromosome 1, in the region of overlap between IL 1-1 and IL 1-2 (Fig. 7, left panel). Although screening the rest of the 50 introgression lines did not reveal additional mapping loci, some TAO fragments were not polymorphic between the parental lines. Many of these could be assigned to YAC 340-63, positioned at the TAOa locus. However, the possibility exists that the few remaining nonpolymorphic TAO genes map to loci other than TAOa and TAOb. TAO1, the representative TAO gene sequenced above, was assigned unambiguously to the TAOa locus based on direct sequencing of YAC 340-63 subclones.
FIG. 6. Immunoblot analysis of TAO1 cross-reacting proteins from different organisms. Proteins (40 μg) were fractionated by denaturing SDS-PAGE and immunoblots were developed with anti-TAO1 antibodies (ab 15). Plant protein extracts are from young developing fruits of tomato, pepper, petunia, and from a developing ear of wheat, as indicated. Animal protein extracts are from highly differentiated human hepatoma cells (HUMHEPG2 and HUMHEPG3) and from wild-type (wt) and the XD null mutant ry 506 (506) of Drosophila (DROS). The blot was exposed for 5 min.
DISCUSSION
We describe here the isolation and characterization of a new gene from tomato, TAO1. Highest homology to TAO1 was found among xanthine dehydrogenase (XD) and aldehyde oxidase (AO) genes from several organisms (12, 14-20).
It contains the consensus sequences of the two iron-sulfur domains and the MoCo-binding domains of XD and AO. We classify TAO1 as an AO-type structure, as it lacks sequences indicative of the NADbinding domain and sequences suggested to be involved in XD substrate specificity. The lack of these particular features is reminiscent of the aldehyde oxidoreductase structure from D. gigas, (4), the bovine AO (12), and HXD2, a human MoCo containing hydroxylase gene (20) which was recently suggested to encode AO-type activity rather than XD-type (19,27). Isolation of the cognate TAO1 polypeptide and direct examination of its activity will be necessary to verify these observations. TAO1 belongs to a multigene family. The family members detected may code for other XD-or AO-type activities with variable substrate specificities and expression patterns. This may suggest that broad substrate range of AO activities originates from different, closely related, enzymes, rather than a single enzyme. The TAO family members display an unusually high level of genetic polymorphism detected by RFLP. This is reminiscent of the high level of polymorphism, detected in both DNA and protein, of XD enzymes from Drosophila ecotypes (33,34). The polymorphism enabled facile mapping of TAO members to two gene clusters, on chromosomes 1 and 11. The clustering of closely related genes from the same gene family is akin to the clustering found in the recently isolated plant resistance genes (35)(36)(37). This phenomenon was also observed by us for the Fusarium wilt disease resistance gene candidate I2C, 3 which maps in close genetic and physical proximity to TAO1. Resistance genes are also characterized by their highly polymorphic nature which are thought to be part of the plants adaptability to the changing pathogenic environment. Interestingly, AO activities in the liver carry out biochemical detoxification (12). If a similar detoxification function is played by the plant AO activities, then environmental adaptability may dictate pressure toward maintaining the observed genetic polymorphism in plant AO genes.
Two types of MoCo containing hydroxylase activities have been detected in plant extracts. One activity is inhibited by the XD-specific inhibitor allopurinol and is thus classified as an XD-type activity. The second type of activity is highly enriched in tomato fruits, is allopurinol insensitive, and can utilize both typical AO substrates, such as 6-methylpurine and hypoxanthine, and the XD inhibitor allopurinol. Anti-TAO1 antibodies detect polypeptides in leaf and fruit. Due to the polyclonal nature of the anti-TAO1 antibodies we cannot rule out cross-reaction between AO- and XD-type polypeptides; however, the antibody reactivity appears well correlated with the prominent AO activity detected in fruits. In addition, we have noted that spectral stress induces accumulation of AO activity in leaves, which was also correlated with an increase in cross-reacting polypeptide accumulation as detected by TAO1 antibodies. 4 TAO1 CRM indicated immunoreactive polypeptides of approximately 78 kDa, which were detected by 2 different antibodies prepared from fusion peptides originating from sequential nonoverlapping carboxyl-terminal regions. In the cases of Drosophila and fruit pods from petunia plants an additional polypeptide of approximately 140 kDa was detected, consistent with the predicted molecular mass of AO. In tomato, the 78-kDa molecular mass was detected when enzymatically active proteins were eluted from native gels and refractionated on SDS-PAGE. Based on the deduced amino acid sequence, the terminal 78-kDa part would begin roughly at amino acid 590, and thus contains the MoCo complexing domain but probably not the region containing the iron-sulfur centers. Interestingly, XD is known to undergo a physiologically well-known and important irreversible proteolytic conversion from the XD to the xanthine oxidase form. This proteolytic and irreversible conversion was shown to result in a "nicked" but active xanthine oxidase, which upon fractionation in denaturing gels yielded fragments of 20, 40, and 85 kDa, of which the latter is the carboxyl-terminal product (14). Thus the possibility exists that the 78-kDa TAO1 CRM is a result of similar proteolysis of the enzyme, which results in a nicked but enzymatically active protein. Such proteolysis may occur either as a physiological process in the plant, or during the extraction process. In either case, the fact that many of the extracts analyzed in the cross-species immunoblot analysis yielded similar size polypeptides indicates that such proteolytic processes are ubiquitous.
FIG. 7. Genetic mapping of TAO multigene family clusters. TaqI-digested DNA of the parental lines L. esculentum and L. pennellii and of the relevant ILs was hybridized to the 3′ end of TAO1 (see Fig. 1). Each panel shows a linkage map of the chromosome and the introgressed regions of the ILs that contain TAO copies (adapted from Eshed and Zamir (22)). Fragments in the blot are designated a or b according to their genomic origin from chromosomes 11 and 1, respectively. Left panel, relevant ILs for mapping to chromosome 1. Right panel, relevant ILs for mapping to chromosome 11.
Here we show that TAO1 immunoreactive polypeptides are abundant in ovaries and fruits of tomato and other plant species. The presence of immunoreactive polypeptide in tomato fruits correlates with the high level of AO-type activity detected in tomato fruits. A biological role for the presence of TAO in apical meristematic tissue and fruit is not known, and it may be directly related to the biosynthetic capacities required for typical plant metabolic "sink" tissues. Alternatively, the expression of AO in those tissues may fulfill the specific needs for the final steps in biosynthesis of two plant hormones, ABA and IAA. Recently zeaxanthin epoxidase activity was shown to be involved in the first step of ABA biosynthesis (38). An AO-type activity has been implicated by mutational analysis to be essential for the last step of ABA biosynthesis, which is the conversion of ABA aldehyde to the active carboxylic form of the plant hormone ABA (9, 10). Interestingly, sitiens, one of the tomato ABA-deficient mutants putatively lacking AO activity, maps to the same chromosomal region as TAOb (39). With the help of a high resolution mapping population of that region we are pursuing the possibility that a member of TAOb is involved. 5 In maize an AO activity, enriched in the coleoptile apical region, was shown to oxidize indole-3-acetaldehyde into IAA (11, 40). The abundant tissue-specific expression detected by TAO1 antibody in fruit may reflect the role ABA plays in seed maturation and dormancy, while the elevated expression detected in apical meristems is consistent with the role this tissue plays as a known source of auxin biosynthesis. The isolation of a member of the novel TAO gene family from tomato, highly homologous to the AO group of MoCo containing hydroxylases, offers a useful starting point for the analysis of this pivotal gene family in plants.
| 7,278.6 | 1997-01-10T00:00:00.000 | ["Biology"] |
Flight test results for microgravity active vibration isolation system on-board Chinese Space Station
The Fluid Physics Research Rack (FPR) is a research platform employed on-board the Chinese Space Station for conducting microgravity fluid physics experiments. The research platform includes the Microgravity Active Vibration Isolation System (MAVIS) for isolating the FPR from disturbances arising from the space station itself. The MAVIS is a structural platform consisting of a stator and floater that are monitored and controlled with non-contact electromagnetic actuators, high-precision accelerometers, and displacement transducers. The stator is fixed to the FPR, while the floater serves as a vibration isolation platform supporting payloads, and is connected with the stator only with umbilicals that mainly comprise power and data cables. The controller was designed with a correction for the umbilical stiffness to minimize the effect of the umbilicals on the vibration isolation performance of the MAVIS. In-orbit test results of the FPR demonstrate that the MAVIS was able to achieve a microgravity level of 1–30 μg0 (where g0 = 9.80665 m ∙ s−2) in the frequency range of 0.01–125 Hz under the microgravity mode, and disturbances with a frequency greater than 2 Hz are attenuated by more than 10-fold. Under the vibration excitation mode, the MAVIS generated a minimum vibration acceleration of 0.4091 μg0 at a frequency of 0.00995 Hz and a maximum acceleration of 6253 μg0 at a frequency of 9.999 Hz. Therefore, the MAVIS provides a highly stable environment for conducting microgravity experiments, and promotes the development of microgravity fluid physics.
order of 0.01-10 Hz. Similarly, the dynamics of hard spheres have been investigated at around 1000 μg0 and at acceleration frequencies on the order of 10-1000 Hz, while the rheology of non-Newtonian fluids requires microgravity conditions on the order of 10,000 μg0.
In addition, special research platforms have been designed and installed in manned space stations for conducting microgravity fluid physics experiments 6,7. Typically, these research platforms also include vibration isolation systems to isolate the experimental platform from disturbances arising from the space station itself. Examples of these platforms and the corresponding vibration isolation systems are listed in Table 1. As can be seen, the ISS includes a number of research platforms, including the Fluid Integrated Rack with its corresponding Active Rack Isolation System employed in the Destiny Laboratory Module of the US 8, the Fluid Science Laboratory with its corresponding Microgravity Vibration Isolation Subsystem employed in the Columbus Laboratory of the European Space Agency 9, and the RYUTAI Rack integrated with the Hope Experiment Module of the Japan Aerospace Exploration Agency. In addition, the Fluid Physics Research Rack (FPR), which is equipped with ten macroscale fluid dynamics test systems supporting a range of microgravity fluid physics experiments, is on board the Mengtian laboratory cabin module of the Chinese Space Station (CSS) in conjunction with the Microgravity Active Vibration Isolation System (MAVIS). Microgravity experiments on the dynamic processes, diffusion processes, phase transitions, and self-organizing behaviors of different fluid systems will be conducted on MAVIS. In addition, MAVIS supports interdisciplinary scientific and technological experimental research related to fluid thermal and mass transport in space material preparation and space biological processes. The MAVIS has six main operating modes: a locked mode, central alignment mode, microgravity mode, vibration excitation mode, moving-to-locking-position mode, and fault mode.
The MAVIS employed in conjunction with the FPR is a structural platform consisting of a stator and a floater, which uses non-contact electromagnetic actuators, high-precision accelerometers, and displacement transducers for vibration isolation control. The stator is fixed to the FPR, while the floater serves as a vibration isolation platform supporting payloads and is connected to the stator only by a number of umbilicals. However, the umbilicals, which mainly comprise power and data cables, have some stiffness and inevitably provide pathways for the transfer of disturbances from the stator to the floater 10. This represents a challenging condition for the MAVIS when operating in both its microgravity operating mode, which provides an environment with a controllable acceleration on the order of 1 μg0, and its vibration excitation operating mode, which provides an environment with controllable vibration acceleration signals of specific amplitudes in the frequency range of 0.01-10 Hz. These challenges were addressed by designing the system controllers for the two operating modes of the MAVIS with a correction for the umbilical stiffness to minimize the effect of the umbilicals on the vibration isolation performance. The FPR was launched into orbit on October 31, 2022, installed on the CSS, and subjected to numerous tests to ascertain the performance of the designed control systems in both the microgravity and vibration excitation operating modes.
The present work presents the control system designs of the MAVIS in detail and reports on the results of in-orbit testing. First, the hardware architecture of the MAVIS is described, and the requirements for its six operating modes and control performances are defined. Next, the control strategies for the microgravity and vibration excitation modes are explained in detail. Finally, the in-orbit test results of the MAVIS are summarized.
MAVIS components and requirements
The FPR and its MAVIS component currently installed in the Mengtian laboratory cabin module of the CSS are illustrated in Fig. 1, where the MAVIS is in its locked operating mode, which fixedly connects the floater and stator by a locking mechanism. The length × width × height dimensions of the MAVIS are approximately 600 mm × 950 mm × 940 mm. The floater and payload together weigh approximately 132 kg.
The MAVIS senses vibrational accelerations on the experimental payload using accelerometers, and the motion of the floater/payload relative to the stator is measured using displacement transducers. Vibration isolation is achieved by transmitting the acceleration and relative motion information to the system controller, which uses a closed-loop control strategy to calculate the currents that must be applied to the electromagnetic actuators to generate the opposing forces required to attenuate the magnitude of disturbances while avoiding collision between the floater and the stator. Four two-dimensional (2D) position sensitive devices (2D-PSDs) and eight one-dimensional (1D) electromagnetic actuators are configured between the stator and floater. In addition, electromagnetic levitation control is obtained by mounting three 2-axis accelerometers on the floater and one 3-axis accelerometer on the stator, as described in a previous report 11. The displacement and attitude of the floater with respect to the stator measured by the 2D-PSDs are used for the single-loop displacement-based control (SDC), while the microgravity acceleration of the floater measured by the accelerometers, together with its displacement and attitude with respect to the stator measured by the 2D-PSDs, are used for the two-loop impulse-averaging acceleration-based and displacement-based control (TIADC). Details regarding the measurement sensors and actuators employed in the control system are listed in Table 2, where the response frequency of the sensors refers to their measurement frequency, while that of the actuators refers to their output frequency.
The functions and control strategies employed by the six operating modes of the MAVIS are summarized in Table 3, and are further described in detail as follows.
(1) As discussed above, the floater and stator are fixedly connected by a locking mechanism under the locked mode. The MAVIS engages in no control operations in this mode, and the electromagnetic actuators are not activated. The locked mode is mainly used for protecting the floater and payloads during launching, rendezvous, and other operations affecting the CSS.
(2) Under the central alignment mode, collision between the floater and stator is avoided by controlling the floater to remain fixed at the center of its available spatial range relative to the position of the stator. The available spatial range of the floater includes a vertical range of ±10 mm and a rotational range of ±2°. This is the default mode at any microgravity level when the locking mechanism is released. In addition, the central alignment mode is activated when a preset safety threshold of relative displacement (e.g., 90%) is exceeded in the microgravity operating mode.
(3) As discussed above, a high-level microgravity environment is created for experimental payloads in the microgravity operating mode through active vibration isolation control. The microgravity mode uses a two-loop impulse-averaging control strategy, where both the relative displacement/attitude and the microgravity acceleration of the floater are subject to feedback control. This mode can also apply a single-loop strategy for controlling just the displacement of the floater.
(4) As discussed above, the vibration excitation mode enables the application of vibrations of specific frequencies, magnitudes, and directions to experimental payloads. A single-loop control strategy based on displacement is applied for producing vibration signals in the frequency range of 0.01-1 Hz, while a two-loop control strategy based on both displacement and acceleration is applied for producing vibration signals in the frequency range of 0.1-10 Hz.
(5) The moving-to-locking-position mode is activated when a test is terminated or as necessary under special circumstances. Under this mode, the floater is controlled to move rapidly to the designated locking position and is then connected to the stator by the locking mechanism. The moving-to-locking-position mode can be activated at any microgravity level, but the action time and overshoot of the control system are subject to specific requirements. The floater is set to move to the locking position within two minutes and without overshoot, which means there is no strong collision with the stator.
(6) The fault mode is activated when a measurement transducer or electromagnetic actuator partially fails. Under this mode, the MAVIS either switches to a backup measurement system according to a failure response program or continues the closed-loop control according to the corresponding fault algorithm model. When the MAVIS experiences a severe failure, the floater is locked or the power is switched off via ground commands.
The microgravity and vibration excitation modes are the primary operating modes of the MAVIS. The performance requirements applied for the microgravity mode follow from the microgravity acceleration requirements defined for payloads on the ISS in the SSP 41000E specification 12. These requirements are summarized as follows. The microgravity acceleration at the center of the payload averaged over 100 s of operation in the microgravity operating mode must lie within the range of the root mean square (RMS) acceleration in the one-third octave indicated by the black line in Fig. 2. The microgravity acceleration performance required for the MAVIS is indicated by the red line in Fig. 2. The relationship between the microgravity acceleration a and the vibration frequency f can be approximated as follows: (1) a ≤ 5.4 μg0 when 0.01 Hz ≤ f ≤ 0.3 Hz; (2) a ≤ 18 μg0 when 0.3 Hz < f ≤ 100 Hz; (3) a ≤ 1800 μg0 when 100 Hz < f ≤ 300 Hz.
The performance requirements applied for the vibration excitation mode include (1) a vibration frequency range of 0.01-10 Hz; (2) vibration types employing sine, triangular, and other classical periodic functions and time functions resulting from the superposition of two or more sine signals; (3) a maximum vibration acceleration ≥ 1500 μg 0 in a given direction for a 100 kg payload.
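The piecewise microgravity-mode acceleration limit quoted above is simple enough to encode directly; the sketch below only restates those numbers (the function name and the choice of micro-g0 units are ours, not part of the MAVIS documentation).

```python
def mavis_accel_limit_ug0(f_hz: float) -> float:
    """MAVIS microgravity-mode acceleration limit (in micro-g0) at frequency f_hz,
    following the piecewise requirement stated in the text."""
    if 0.01 <= f_hz <= 0.3:
        return 5.4
    if 0.3 < f_hz <= 100.0:
        return 18.0
    if 100.0 < f_hz <= 300.0:
        return 1800.0
    raise ValueError("frequency outside the specified 0.01-300 Hz range")

for f in (0.05, 1.0, 150.0):
    print(f, "Hz ->", mavis_accel_limit_ug0(f), "ug0")
```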
MAVIS dynamics models
The primary dynamics of the MAVIS were modeled using multiple coordinate systems, including the geocentric equatorial inertial system ($N_{xyz}$), the center of mass (CoM) orbit coordinate system ($O_{xyz}$), the spacecraft-fixed coordinate system ($B_{xyz}$), the stator-fixed coordinate system ($S_{xyz}$), and the floater-fixed coordinate system ($F_{xyz}$). The following nonlinear model was established for defining the motion of the floater relative to the stator according to the Newton-Euler equations and the characteristics of rigid body composite motion.
Here, $m_F$ and $I_F$ are the mass and moment of inertia of the floater, respectively; $\mathbf{r}_{SF}$ is the displacement of the floater relative to the stator, and $\dot{\mathbf{r}}_{SF}|_S$ and $\ddot{\mathbf{r}}_{SF}|_S$ are the first-order and second-order differentials of $\mathbf{r}_{SF}$ with respect to $S_{xyz}$, respectively; ${}^{S}\boldsymbol{\theta}_F$, ${}^{S}\boldsymbol{\omega}_F$, and ${}^{S}\boldsymbol{\alpha}_F$ are the attitude, body angular velocity, and attitude angular acceleration of the floater relative to the stator, respectively; ${}^{N}\boldsymbol{\omega}_B$ and ${}^{N}\boldsymbol{\alpha}_B$ are the body angular velocity and attitude angular acceleration of the CSS with respect to $N_{xyz}$, respectively; $\mathbf{F}_{F}^{Gra}$, $\mathbf{F}_{F}^{Mag}$, $\mathbf{F}_{F}^{Umb}$, and $\mathbf{F}_{F}^{Oth}$ are the forces acting on the floater due to celestial gravitation, the electromagnetic actuators, the umbilicals, and other disturbances, respectively; $\mathbf{F}_{S}^{Gra}$ and $\mathbf{F}_{S}^{Mic}$ are the celestial gravitation and non-conservative forces acting on the stator, respectively; $\mathbf{M}_{F}^{Gra}$, $\mathbf{M}_{F}^{Mag}$, $\mathbf{M}_{F}^{Umb}$, and $\mathbf{M}_{F}^{Oth}$ are the torques acting on the floater due to celestial gravitation, the electromagnetic actuators, the umbilicals, and other disturbances, respectively.
The disturbances from the umbilicals can be represented using the following second-order damping system 13 .
Here, $K_{tt}$, $K_{tr}$, $K_{rt}$, and $K_{rr}$ are the stiffness matrices and $C_{tt}$, $C_{tr}$, $C_{rt}$, and $C_{rr}$ are the damping matrices of the umbilicals; $\mathbf{F}_{F0}^{Umb}$ and $\mathbf{M}_{F0}^{Umb}$ are the respective pretensioning force and torque of the umbilicals when the floater is located at the center of its available spatial range; and $\mathbf{r}_{FU}$ is the position vector from the CoM of the floater to the equivalent point of action of the disturbance force of the umbilicals.
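For intuition, the translational part of this pretensioned spring-damper model can be sketched as below. The matrix values, pretension vector, and sign convention are illustrative assumptions, since the actual umbilical stiffness and damping of the flight hardware are not given here.

```python
import numpy as np

def umbilical_force(r, r_dot, K_tt, C_tt, F_umb_0):
    """Translational disturbance force from the umbilicals modeled as a
    pretensioned spring-damper: F = F0 - K_tt @ r - C_tt @ r_dot.
    Matrix values and sign convention are illustrative assumptions."""
    return F_umb_0 - K_tt @ r - C_tt @ r_dot

# Placeholder numbers (not flight data): diagonal stiffness/damping, small pretension.
K_tt = np.diag([40.0, 40.0, 40.0])      # N/m
C_tt = np.diag([2.0, 2.0, 2.0])         # N*s/m
F_umb_0 = np.array([0.1, -0.05, 0.2])   # N
print(umbilical_force(np.array([0.002, 0.0, -0.001]), np.zeros(3), K_tt, C_tt, F_umb_0))
```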
The nonlinear model was linearized to simplify the design and analysis of the controller. In addition, the translation equation was expanded in the $S_{xyz}$ coordinate system, and the rotation equation was expanded in the $F_{xyz}$ coordinate system. This yielded the following linear model.
Here, $(X)\mathbf{U}$ designates the expansion of vector $\mathbf{U}$ in coordinate system $X$, with $X$ designating the $S_{xyz}$ (S) or $F_{xyz}$ (F) coordinate system. The other parameters are defined in Table 4, where $(O)\boldsymbol{\omega}_0$ is the array of the orbital angular velocity, which is related to the magnitude of the orbital angular velocity $\omega_0$, and ${}^{Y}Q_{X}$ is the coordinate transformation matrix from coordinate system X to coordinate system Y.
Table 4 | Formulas of the parameters in the MAVIS linearization model
MAVIS control strategies
As discussed above, the single-loop displacement-based control (SDC) and the two-loop impulse-averaging acceleration-based and displacement-based control (TIADC) strategies were employed to control the different operating modes of the MAVIS. The SDC strategy is illustrated in Fig. 3a, where the displacement and attitude of the floater with respect to the stator are controlled using a closed-loop feedback process. The TIADC strategy is illustrated in Fig. 3b, where the microgravity acceleration of the floater and its displacement and attitude with respect to the stator are controlled within a closed-loop feedback process.
The linearized dynamics model presented in Eq. (3) above was applied to establish the feedforward and feedback processes employed for the above-discussed single-loop and two-loop control strategies in terms of a proportional-integral-derivative (PID) controller. This yielded the following idealized control expressions.
Here, $K_{P}^{PC}$, $K_{D}^{PC}$, and $K_{I}^{PC}$ are the PID controller parameters for the SDC strategy, and $K_{Px}^{AC}$, $K_{Dx}^{AC}$, $K_{Ix}^{AC}$, $K_{Pa}^{AC}$, and $K_{Ia}^{AC}$ are the PID controller parameters for the TIADC strategy. However, the current design of the PID controller compensated only for the pretensioning force ($\mathbf{F}_{F0}^{Umb}$) and torque ($\mathbf{M}_{F0}^{Umb}$) of the umbilicals, because the stiffness and damping matrices of the umbilicals are difficult to measure due to their flexibility, high nonlinearity, and hysteresis.
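A minimal discrete-time sketch of a displacement-loop PID with a feedforward term canceling the umbilical pretension is given below. The gains, sampling time, sign convention, and signal names are assumptions for illustration only and do not reproduce the flight controller.

```python
import numpy as np

class DisplacementPID:
    """Discrete PID on the floater displacement error, plus a feedforward term
    that cancels an (assumed known) umbilical pretension force F_umb_0."""
    def __init__(self, kp, ki, kd, dt, f_umb_0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.f_umb_0 = np.asarray(f_umb_0, dtype=float)
        self.int_err = np.zeros(3)
        self.prev_err = np.zeros(3)

    def command(self, r_target, r_meas):
        err = np.asarray(r_target, dtype=float) - np.asarray(r_meas, dtype=float)
        self.int_err += err * self.dt
        d_err = (err - self.prev_err) / self.dt
        self.prev_err = err
        # Actuator force = PID feedback plus feedforward cancellation of the pretension.
        return self.kp * err + self.ki * self.int_err + self.kd * d_err - self.f_umb_0

pid = DisplacementPID(kp=50.0, ki=5.0, kd=20.0, dt=0.01, f_umb_0=[0.1, -0.05, 0.2])
print(pid.command(r_target=[0, 0, 0], r_meas=[0.002, 0.0, -0.001]))
```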
The PID controller parameter tuning problem was solved by first establishing the closed-loop transfer function from the stator's microgravity acceleration to the floater's microgravity acceleration, and then rewriting this function as a combination of typical frequency elements, as proposed previously 14. In the present work, the controller parameters were determined based on the control performance levels required for the different operating modes of the MAVIS. Accordingly, the PID controller parameters were calculated using the following equations.
Here, $\omega_{n1}$ is the natural frequency of the first-order inertia element; $\omega_{n2}$ and $\zeta_2$ are the natural frequency and damping ratio of the second-order oscillation element, respectively; $M_{diag}$ is a 6-dimensional diagonal matrix, with the first three terms representing the mass of the floater and the last three terms representing the moments of inertia about the floater's main axes; $A_p$ is an adjustable parameter with a value in the range of 0-1; and $\omega_{n3}$ is the natural frequency of the second-order differentiation element. The vibration attenuation performance is determined by the parameters $\omega_{n1}$, $\omega_{n2}$, $\omega_{n3}$, and $\zeta_2$.
PID controller parameter tuning
Microgravity mode. The stiffness of the umbilicals determines the lower limit of the control system bandwidth. As discussed above, the control expressions of the PID controller are given in Eq. (5), and the PID controller parameters are calculated using Eq. (7). The effects of the parameters of the typical elements on the vibration isolation performance were then determined based on analysis of the amplitude-frequency characteristics, which are listed in Table 5. Vibrations at frequencies less than the natural frequency $\omega_{n1}$ cannot be attenuated, and the vibration attenuation rates are approximately −20 dB per decade above the natural frequency $\omega_{n1}$, −60 dB per decade above the natural frequency $\omega_{n2}$, and −20 dB per decade above the natural frequency $\omega_{n3}$. The PID controller parameters were then tuned according to the tuning process illustrated by the flowchart in Fig. 4. The procedure is given in detail as follows.
(1) The microgravity acceleration target is set according to Fig. 2, and this target is combined with the estimated value for the actual microgravity acceleration of the space station 5 to calculate the vibration isolation performance target, which is the attenuation of the microgravity acceleration target with respect to the microgravity acceleration of the space station. The vibration isolation performance target can be approximately described as follows: (a) for vibrations at frequencies less than 0.1 Hz, vibration attenuation is not required, and the maximum vibration magnification at the resonant frequency should not exceed 3 dB; (b) for vibrations in the frequency band of 0.1-10 Hz, the vibration attenuation is from 0 to −40 dB; (c) for vibrations at frequencies greater than 10 Hz, the vibration attenuation is −40 dB (a small sketch of this target is given after this list).
(2) The vibration isolation performance target and the amplitude-frequency characteristics in Table 5 are combined to determine the parameters of the typical elements. A set of preliminary design results that meet the requirements of the vibration isolation performance target is listed in Table 6.
(3) The parameters obtained for the typical elements are applied in Eq. (7) to calculate the PID controller parameters.
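The isolation target of step (1) can be written as a simple frequency-dependent bound; the sketch below only restates those numbers, and the log-linear interpolation between 0.1 and 10 Hz is our own assumption about how the "0 to −40 dB" band is distributed.

```python
import numpy as np

def isolation_target_db(f_hz: float) -> float:
    """Target attenuation (dB) of the floater acceleration relative to the stator.
    Below 0.1 Hz no attenuation is required (up to +3 dB is tolerated at resonance);
    between 0.1 and 10 Hz the target rolls off from 0 to -40 dB (assumed log-linear);
    above 10 Hz the target is -40 dB."""
    if f_hz < 0.1:
        return 3.0
    if f_hz <= 10.0:
        return -40.0 * np.log10(f_hz / 0.1) / np.log10(10.0 / 0.1)
    return -40.0

for f in (0.05, 0.1, 1.0, 10.0, 50.0):
    print(f, "Hz ->", round(isolation_target_db(f), 1), "dB")
```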
Table 5 | Analysis of amplitude-frequency characteristics when evaluating the effects of PID controller parameters on the vibration isolation performance of the MAVIS based on the typical elements
Vibration excitation mode. The amplitude-frequency characteristics of the SDC and TIADC strategies can be summarized as follows.
(1) As discussed above, the SDC strategy is applied for generating vibrational acceleration signals in the relatively low frequency range of 0.01-1 Hz. This limitation is applied because the accuracy of the closed-loop control applied for providing the target displacement output is only sufficient at these relatively low frequencies.
(2) Similar to (1) above, the TIADC strategy is applied for generating vibrational acceleration signals in the relatively high frequency range of 0.1-10 Hz owing to the ability of this strategy to output acceleration signals accurately in this frequency range. The maximum amplitude of the floater's vibration acceleration at a frequency $f$ cannot exceed $(2\pi f)^2 L_{lim}/2$, where $L_{lim}$ is the length of the spatial constraints (this bound is sketched after this list).
(3) For both control strategies, the amplitude of the actual output signal becomes attenuated relative to the desired amplitude of the target output when the frequency of the target output is close to the controller bandwidth. Therefore, the output signal is subjected to appropriate amplification to compensate for the impact of attenuation.
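The displacement constraint on the excitation amplitude in item (2) is a one-line calculation; the units and the example stroke value are ours (the ±10 mm range quoted earlier is used as an illustrative $L_{lim}$).

```python
import math

def max_accel_amplitude(f_hz: float, l_lim_m: float = 0.010) -> float:
    """Upper bound (m/s^2) on the floater's vibration-acceleration amplitude at
    frequency f_hz given the spatial constraint length l_lim_m: (2*pi*f)^2 * L_lim / 2."""
    return (2.0 * math.pi * f_hz) ** 2 * l_lim_m / 2.0

# Example: at 1 Hz with a 10 mm constraint, the bound is about 0.197 m/s^2
# (roughly 2e4 micro-g0), so low-frequency excitation is displacement-limited.
print(max_accel_amplitude(1.0))
```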
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Results and discussion
After installing the FPR in the Mengtian laboratory cabin module of the CSS, the MAVIS was subjected to 13 days of in-orbit tests according to the procedures listed in Table 7. As can be seen, a full series of tests was performed using appropriate operation and control procedures, including self-check tests under the locked state, the testing of control algorithms, and the testing of the microgravity and vibration excitation modes. The tests resulted in the tuning and optimization of 11 controller parameters, involved the input of approximately 2000 commands, and produced approximately 15 GB of engineering data from the MAVIS.
Self-check tests under the locked state
The three self-check tests under the locked state were conducted to evaluate the operational status of the MAVIS, including the heat sink, current, and voltage of the control board, and the functioning of the control components to ensure its readiness for closed-loop control after being unlocked.The control component functionality tests included the following.
(1) Functioning of the four 2D-PSDs configured between the stator and floater, and switching between the four measurement modes involving at least three 2D-PSDs applied for measuring the displacement and attitude of the floater relative to the stator. Here, measurement data are obtained from all four 2D-PSDs in the PSD41 measurement mode, while data from any three 2D-PSDs are obtained under the measurement modes denoted PSD31, PSD32, and PSD33. The four measurement modes are used for mutual verification of accuracy and mutual backup, with PSD41 as the default mode. If one 2D-PSD is faulty and the others are normal, the MAVIS is switched to the measurement mode using the three normal 2D-PSDs. The in-orbit test results demonstrated that the displacements and attitudes of the floater relative to the stator measured using the four measurement modes were highly consistent. The average displacement and attitude of the floater relative to the stator over the four measurement modes, measured under the locked state in the X, Y, and Z directions, were [−1.1 ± 0.1 mm, 0.8 ± 0.05 mm, −9.2 ± 1 mm]^T and [−0.15° ± 0.02°, −0.06° ± 0.005°, 0.51° ± 0.08°]^T, respectively, and were consistent with the expected results. Accordingly, the 2D-PSDs were assumed to be functioning normally.
(2) Functioning of the eight 1D electromagnetic actuators configured between the stator and floater. A 0.5 A control current input command was sent to each of the 1D electromagnetic actuators, and the actual currents of their energized coils were measured by current sensors. The in-orbit test results demonstrated that the measured currents were consistent with the control commands, which indicated that the electromagnetic actuators were functioning normally.
(3) Functioning of the three 2-axis accelerometers on the floater and the one 3-axis accelerometer on the stator. The accelerations of the floater and stator must be equivalent under the locked state. Therefore, the accelerations measured by the accelerometers mounted on the floater were compared with those measured by the accelerometer mounted on the stator, and the results obtained in the X, Y, and Z directions during the in-orbit tests are presented in Fig. 5a. These results demonstrate that the measured accelerations of the floater were consistent with those of the stator. Accordingly, the accelerometers can be assumed to have been functioning normally.
In addition, the RMS microgravity accelerations measured for the floater and stator under the locked state during the specified period of the in-orbit tests (i.e., 16:40-18:12 GMT+8 [Beijing time]) are presented in Fig. 5b. As can be seen, the microgravity level of the floater under the locked state did not reach the level required for conducting microgravity experiments. Accordingly, active vibration isolation control was necessary.
Testing of control algorithms
Under the microgravity mode, both control strategies use a small control bandwidth to minimize the disruption of the microgravity state of the floater by the control force. In contrast, both control strategies employed in the vibration excitation mode use a large control bandwidth to improve the accuracy of control. The data obtained from the control experiments conducted during the in-orbit tests can be analyzed as follows.
(1) The time series variations of the control force and torque observed under displacement-based closed-loop control during the specified period of the in-orbit tests are presented in Fig. 6a, b. As can be seen from Eq. (4), the values of $\mathbf{F}_{F0}^{Umb}$ and $\mathbf{M}_{F0}^{Umb}$ are approximately equal and opposite to the control force and torque under steady-state displacement-based closed-loop control. Accordingly, the floater is subject to disturbances from both the umbilicals and the control effects of the electromagnetic actuators during closed-loop control. Therefore, the natural frequency of the MAVIS is in effect determined by the stiffness of the umbilicals and the PID controller parameters. In fact, if the PID controller parameters are all set to zero, the natural frequency of the MAVIS (equivalent to the control bandwidth) is completely determined by the stiffness of the umbilical cables.
Based on this analysis, we can conclude that the stiffness of the umbilicals has a significant impact on the control system bandwidth.
The RMS microgravity accelerations measured for the floater and stator under the unlocked state, but when the position of the floater was not actively controlled, are presented in Fig. 6c for the specified period of the in-orbit tests (i.e., 9:15-10:47). As can be seen, the floater adheres to the surface of the stator under these conditions, which can be attributed to the disturbance effects of the umbilicals. In addition, the microgravity acceleration of the floater was markedly greater than that of the stator in the frequency range of 0.2-0.3 Hz. This frequency range corresponds to the natural frequency of the floater under the disturbance effects of the umbilicals, and therefore reflects the stiffness of the umbilicals (a rough estimate based on this observation is sketched after this list).
(2) During closed-loop control of the MAVIS, both the translation and rotation of the floater are controlled based on the displacement and attitude of the stator. Moreover, the translational and rotational motions of the floater are coupled in the control process. In an effort to analyze the performance of closed-loop control, we first note from Eq. (2) that the disturbance force and torque of the umbilicals affect both the displacement and attitude of the floater relative to the stator. Second, floater control is conducted based on the force and torque output by the electromagnetic actuators. Therefore, a larger control torque can be expected to produce larger errors in the output torque and force. From this perspective, we investigated the natural frequency of the MAVIS at relatively large ($\omega_{n2} = 2\pi \times 0.3$) and small ($\omega_{n2} = 2\pi \times 0.1$) parameter values applied for attitude control, in terms of the power spectral density of the microgravity acceleration observed for the floater and stator during the in-orbit tests, shown in Fig. 6d, e, respectively. As can be seen, the natural frequency of the MAVIS was approximately 0.75 Hz with a peak magnitude of about 1.516 × 10−2 under the relatively large control parameter values (Fig. 6d), while it was approximately 0.66 Hz with a peak magnitude of about 7.532 × 10−4 under the relatively small control parameter values (Fig. 6e). Therefore, the attitude control loop should be designed with relatively small control parameter values to promote the application of relatively small control torques that ensure maximum control accuracy.
(3) Because of the large stiffness of the umbilicals, the parameter applied to the proportional term of the PID controller designed for the displacement control loop can be set to a suitable negative value, as long as the negative feedback of the umbilicals is greater than the positive feedback of the electromagnetic actuators. As a result, the MAVIS is subject to negative feedback control overall. Time series of the displacement and attitude of the floater measured in the X, Y, and Z directions with respect to (w.r.t.) that of the stator are presented in Fig. 7a, b, respectively, when applying a negative coefficient ($K_p = -0.05\,m_F$) to the proportional term of the PID controller. The results demonstrate that this proportional parameter achieved excellent closed-loop control stability that satisfied the constraints on the available spatial range of the floater (±10 mm and ±2°). The RMS microgravity acceleration spectra measured for the floater using different values ($K_p$) of the proportional term of the PID controller are presented in Fig. 7c. The results confirm that a negative value of $K_p$ improved the microgravity level of the floater.
(4) The RMS microgravity acceleration spectra measured for the floater under the SDC and TIADC strategies during in-orbit testing are presented in Fig. 7d. Here, the same PID controller parameter values were applied for the displacement controllers in both control strategies. As can be seen, the TIADC strategy outperformed the SDC strategy in terms of vibration isolation over the entire frequency range of 0.01-100 Hz, except near the natural frequency of the floater in the range of 0.6-0.9 Hz. Accordingly, the in-orbit test results confirm the effectiveness of the TIADC strategy in improving the vibration isolation of the floater.
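Taking the free-floating natural frequency of roughly 0.2-0.3 Hz together with the ~132 kg floater-plus-payload mass quoted earlier suggests a back-of-the-envelope stiffness estimate, sketched below. This is our own illustrative calculation under a single-axis mass-spring assumption, not a value reported by the authors.

```python
import math

def umbilical_stiffness_estimate(f_n_hz: float, mass_kg: float = 132.0) -> float:
    """Equivalent translational stiffness (N/m) implied by natural frequency f_n_hz
    for a simple mass-spring system: k = m * (2*pi*f_n)^2."""
    return mass_kg * (2.0 * math.pi * f_n_hz) ** 2

for f_n in (0.2, 0.3):
    print(f_n, "Hz ->", round(umbilical_stiffness_estimate(f_n), 1), "N/m")
```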
Tests of major operating modes
Microgravity mode. The time series of the microgravity accelerations measured for the floater and stator in the X, Y, and Z directions and their corresponding RMS microgravity acceleration spectra observed under the microgravity mode are presented in Fig. 8a, b, respectively. The excellent vibration isolation performance of the MAVIS is clearly demonstrated by the significantly smaller microgravity accelerations of the floater compared with those of the stator (Fig. 8a). Meanwhile, the RMS microgravity acceleration spectrum of the floater (Fig. 8b) exhibits a microgravity acceleration of 0.1-30 μg0 in the frequency range of 0.01-125 Hz, which is markedly better than the level required for many microgravity experiments 5.
Vibration excitation mode. The in-orbit test results obtained for the MAVIS under the vibration excitation mode are listed in Table 8 for sine waves, triangular waves, and mixed sine waves at frequencies ranging from 0.01 to 10 Hz and with different amplitudes and displacements. In particular, the MAVIS achieved a minimum microgravity amplitude of 0.41 μg0 at a frequency of 0.01 Hz, and maximum microgravity amplitudes of 4355 μg0 at a frequency of 9.6 Hz and 6253 μg0 at a frequency of 10 Hz when mixed with a sine wave at a frequency of 1.0 Hz. Time series of the displacement of the floater relative to that of the stator obtained during in-orbit testing under the vibration excitation mode are presented in Fig. 8c. Meanwhile, time series of the microgravity accelerations of the floater and stator obtained in the Z direction under the vibration excitation mode during in-orbit testing are presented in Fig. 8d. These results demonstrate the stability and effectiveness of the two control strategies in this operating mode. In addition, a Fourier transform of the microgravity accelerations of the floater measured in the X, Y, and Z directions under the vibration excitation mode is shown in Fig. 8e. As can be seen, the frequency response of the microgravity accelerations achieves maxima at 0.9999 Hz and 9.999 Hz, which demonstrates that the MAVIS can generate vibration signals at specified frequencies.
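The attenuation reported for disturbances above 2 Hz can be quantified by comparing band-wise RMS amplitudes of the floater and stator acceleration records. The sketch below uses synthetic data and a naive FFT-band RMS rather than the one-third-octave processing used for the flight data, so it only illustrates the idea.

```python
import numpy as np

def band_rms(signal, fs, f_lo, f_hi):
    """RMS amplitude of the signal restricted to the band [f_lo, f_hi] Hz,
    computed from its FFT (a simple approximation, not one-third-octave analysis)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    # Parseval-style band power converted to an RMS over the full record length.
    return np.sqrt(np.sum(np.abs(spec[mask]) ** 2) * 2.0 / len(signal) ** 2)

fs = 500.0
t = np.arange(0, 100, 1 / fs)
stator = 1e-4 * np.sin(2 * np.pi * 5.0 * t)              # synthetic 5 Hz disturbance
floater = 0.05 * stator + 1e-6 * np.random.randn(len(t))  # synthetic isolated response
print("attenuation factor near 5 Hz:",
      band_rms(stator, fs, 4, 6) / band_rms(floater, fs, 4, 6))
```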
Conclusion
The MAVIS is a component of the FPR research platform designed to isolate the payload of the FPR from disturbances arising from the space station itself in the microgravity operating mode, while providing an environment with controllable vibrational acceleration signals of specific amplitudes in the frequency range of 0.01-10 Hz in the vibration excitation operating mode. The design and in-orbit test results of the MAVIS were presented. The primary results and analyses can be summarized as follows.
(1) The controller design combining feedforward and feedback terms based on the applied dynamics models, and the method employed for calculating the PID controller parameters using the closed-loop transfer function, are effective. The pretensioning force and torque of the umbilicals are approximately equal to the output force and torque from the controller under steady-state in-orbit closed-loop control. The stiffness of the umbilicals can be estimated by comparing the microgravity accelerations of the stator and floater under the state of free levitation.
(2) Relatively low-frequency acceleration vibration signals can be achieved indirectly with the SDC strategy, while relatively high-frequency vibration signals can be achieved directly with the TIADC strategy.
(3) The umbilicals applied between the stator and floater are major factors affecting the vibration isolation performance of the MAVIS.
(4) The translational and rotational motions of the floater are coupled in the control process, and their interactions must be considered in the design of the control system.
(5) The large stiffness of the umbilicals enables the use of a negative coefficient for the proportional term of the PID controller to decrease the control bandwidth of the MAVIS effectively and improve the microgravity acceleration level of the floater.
(6) The MAVIS achieved a microgravity level of 1-30 μg0 in the frequency range of 0.01-125 Hz and attenuated the magnitude of disturbances at frequencies greater than 2 Hz by more than 10-fold in the microgravity operating mode. In the vibration excitation operating mode, the MAVIS generated a minimum vibration acceleration of 0.4091 μg0 at a frequency of 0.00995 Hz and a maximum vibration acceleration of 6253 μg0 at a frequency of 9.999 Hz.
Accordingly, these findings confirm that the MAVIS provides a highly stable environment for conducting microgravity experiments, and promotes the development of microgravity fluid physics.
Fig. 1 | FPR and its MAVIS component installed in the Mengtian laboratory cabin module of the CSS.
Fig. 2 | Microgravity acceleration requirements defined for payloads on the ISS in the SSP 41000E specification (black line) and for payloads subject to the MAVIS (red line) on the CSS.
Fig. 3 | Block diagram of the two control strategies employed by the MAVIS. a The SDC strategy. b The TIADC strategy.
Fig. 4 | Flowchart of the process employed for tuning the PID controller parameters of the MAVIS.
Fig. 5 | Accelerations measured by the accelerometers mounted on the floater and stator under the locked state during in-orbit tests. a Time series of the accelerations in the X, Y, and Z directions. b Root mean square (RMS) microgravity acceleration spectra.
Fig. 6 | The data obtained from the control experiments conducted during the in-orbit tests for the analysis of the umbilicals and attitude control. a Control force output under SDC. b Control torque output under SDC. c RMS microgravity acceleration spectra under the unlocked, uncontrolled state. d Power spectral density of the microgravity acceleration under relatively large controller parameters for attitude control. e Power spectral density of the microgravity acceleration under relatively small controller parameters for attitude control.
Fig. 7 | The data obtained from the control experiments conducted during the in-orbit tests for the analysis of the controller parameters and control strategies. a Displacement of the floater with respect to (w.r.t.) that of the stator when applying a negative coefficient to the proportional term of the PID controller. b Attitude of the floater w.r.t. that of the stator when applying a negative coefficient to the proportional term of the PID controller. c RMS microgravity acceleration spectra measured for the floater using different parameter values for the proportional term of the PID controller. d RMS microgravity acceleration spectra measured for the floater under different control strategies.
Fig. 8 | Flight test results under the microgravity mode and the vibration excitation mode. a Time series of the microgravity accelerations under the microgravity mode. b RMS microgravity acceleration spectra under the microgravity mode. c Time series of the displacement of the floater w.r.t. that of the stator under the vibration excitation mode. d Time series of the microgravity accelerations of the floater and stator under the vibration excitation mode. e Fourier transform of the microgravity accelerations of the floater under the vibration excitation mode.
Table 1 | Experimental fluid physics devices installed on the International Space Station (ISS) and Chinese Space Station (CSS)
Table 2 | Measurement sensors and actuators employed in the MAVIS
Table 3 | Summary of the six operating modes of the MAVIS
Table 6 | Analysis of the impact of frequency, damping, and the adjustable parameter on the amplitude characteristics of the PID controller
Table 7 | MAVIS in-orbit test procedures
Table 8 | In-orbit test results of the MAVIS under the vibration excitation mode
| 8,311.8 | 2024-02-19T00:00:00.000 | ["Engineering", "Physics", "Environmental Science"] |
Identifying experts in the field of visual arts using oculomotor signals
In this article, we aimed to present a system that enables identifying experts in the field of visual art based on oculographic data. The difference between the two classified groups of tested people concerns formal education. At first, regions of interest (ROI) were determined based on the positions of fixations on the viewed picture. For each ROI, a set of features (the number of fixations and their durations) was calculated that enabled distinguishing professionals from laymen. The developed system was tested on several dozen users. We used k-nearest neighbors (k-NN) and support vector machine (SVM) classifiers for the classification process. The classification results proved that it is possible to distinguish experts from non-experts.
Introduction
In the field of empirical esthetics, we pose questions about the differences between experts and non-experts in terms of their esthetic preferences and emotional, behavioral, or neurophysiological reactions. In the vast majority of these studies, we assume that the experts in the field of art (as opposed to the laymen) are continuing or have completed studies in art history, at an academy of fine arts, or at a conservatory. We assume that they are involved in some kind of art or actively participate in cultural life (e.g., they visit museums and exhibitions, paint, take photographs, sculpt, or read about art either professionally or as hobbyists). Furthermore, studies have shown the inhomogeneity of groups of experts and non-experts confronted with the evaluation of works of art. Therefore, there is a need to look for an objective method of measuring the level of expertise in the field of art. Oculography, as a method of measuring human visual activity, offers some possibilities in this field. One of the reliable indicators of interest in a specific fragment of an image is the density of fixations registered by an eye-tracker (Antes & Kristjanson, 1991). Regions of interest (ROI) are interpreted as places of especially high information value (Locher, 2006; Henderson & Hollingworth, 1999; Massaro et al., 2012; DeAngelus & Pelz, 2009). Generally, higher values of many oculomotor indicators (e.g., average fixation time, duration of fixations, or length of saccades preceding fixations) are recorded in areas of high information value (Antes, 1974; Plumhoff & Schirillo, 2009; Jain, 2010; Celeux & Soromenho, 1996; Fraley, 1998; Bishop, 2006). The results of eye-tracking research on experts and non-experts in the field of visual arts show some differences in the distribution of fixations on known and unknown pictures (Antes & Kristjanson, 1991). Practicing artists, unlike non-experts, often pay attention to those fragments of images that lie beyond the obvious centers of interest (e.g., the faces of people). It was also found that experts have a more global strategy for searching the image area than non-experts (Zangemeister, 1995). However, non-experts pay more attention to objects and people shown in pictures, whereas experts are more interested in the structural features of these images. Vogt and Magnussen (Vogt & Magnussen, 2007) found that non-experts fix their sight longer on earlier watched parts of images than experts. It was also found that non-experts, regardless of the type of task being performed (free viewing of photos or scanning them to find a specified object), fix their sight according to the salience-driven effect, which is in line with the bottom-up strategy of information processing (Fuchs et al., 2010). Hermens (Hermens et al., 2013) presented an extensive review of the literature concerning eye movements in surgery; on the basis of eye movements, some techniques to assess surgical skill were developed, and the role of eye movements in surgical training was examined. Sugano (Sugano et al., 2014) investigated the possibility of image preference estimation based on a person's eye movements. Stofer and Che (Stofer & Che, 2014) investigated expert and novice meaning-making from scaffolded data visualizations using clinical interviews. Boccignone (Boccignone et al., 2014) applied machine learning to detect expertise from the oculomotor behavior of novice and expert billiard players.
Viewing a picture proceeds in a fragmentary manner. While viewing a picture, people focus their eyes on different parts of it with different frequencies (Locher, 2006). If an image is watched by a dozen or so people, it is likely that they will pay attention to similar fragments of it. This tendency has been confirmed by numerous studies, starting from experiments conducted by the pioneers of oculography (Tinker, 1936; Yarbus, 1967; Mackworth & Morandi, 1967; Antes, 1974). Can we, based on the coordinates and durations of fixations, predict who is watching the image: an expert or a layman? In this article, we present a system that enables identifying experts in the field of art based on eye movements recorded while the assessed paintings were being watched. The difference between the classified groups of people concerns formal education and the correspondingly greater or lesser experience in dealing with works of art.
Participants and setup
In this study, we collected data from 44 people: 23 experts (including 11 women) and 21 non-experts (including 11 women), who were in the age group of 20-27 years (mean value = 23.4; standard deviation = 1.6). Eighty-five percent of the people in the group of experts were students of the fourth and fifth years of studies, and fifteen percent were students of the second and third year, mainly art history (90%) and painting and graphics (10%). In addition to formal education, all of them declared interest in visual arts and about half of them have been actively involved in some form of art (painting, graphics, sculpture, photography, design, etc.) for several years. Non-experts did not meet any of the above criteria. All persons had normal or corrected to normal vision and did not report any symptoms of neurological disorder. All people participating in the experiments received financial compensation.
We used digitized reproductions of five known paintings. The list of the images is presented below:
P1.
James J. One image was used in the instructions for users:
In this study, we used SMI RED 500 eye-tracker. The images were displayed on a color monitor with 1920x1200 pixel resolution. The person being examined was sitting in front of a monitor at a distance of approximately 65 cm. The program for stimuli presentation and recording the reactions of the respondents was written in E-Prime v.2.0. The subjects answered the question of esthetic evaluation using a keyboard with a variable arrangement of keys.
The task of the users was to watch a random sequence of five test pictures. Their eye movements were recorded while they were viewing the images, in the form of fixations and fixation durations. The recordings lasted approximately 20 min, including the time required for calibration of the eye-tracker and passing the instructions to the user. The experiment consisted of the following phases: 1. instruction on how to perform the tasks in the test phase, 2. eye-tracker calibration, 3. random selection of the image, 4. watching the image for 15 s, 5. esthetic evaluation of the image. It needs to be highlighted that our aim was not to classify experts and non-experts based on their esthetic preferences. The idea was to check whether we can distinguish experts from non-experts by the way they view the images.
Methods
We assumed that for each image there are individual ROIs that attract the attention of experts and non-experts in different ways. Therefore, we specified sets of ROIs for each image separately. For each ROI, we calculated the following features: the number of fixations and the average duration of fixation, which could enable distinguishing an expert from a non-expert. We did not use the diameter of the eye pupil as a feature, because it is significantly linked with the brightness of the observed portion of the image (Hand et al., 2012; Jiaxi, 2010). We deliberately limited ourselves to static features related to the specified clusters. We did not consider transitions between clusters, which might be useful (Coutrot et al., 2017). We are aware that in this way we could limit the classification accuracy, but the purpose of the article was to examine static features only. In the first step, the calculated features were used to train the classifier. Then, the system was tested using a cross-validation (CV) test. A block diagram of the proposed system is given in Fig. 1.
Specification of ROI
We considered several methods to specify ROIs. The simplest of them involved an arbitrary division of an image into separate areas (e.g., rectangles). However, in this case, both the selection of the size and of the number of ROIs was a significant problem. Consequently, we decided that such a simple division is unnatural and ineffective. Therefore, we used the registered fixations to identify ROIs.
To specify ROIs based on the registered fixations, many clustering methods could be applied. The basis of most of them is the similarity between elements, expressed by some metric. Hierarchical methods, K-means, and fuzzy cluster analysis are frequently used for this purpose (Jain, 2010). It turns out that, depending on the nature of the observations, the type of method used plays an important role. Not without significance is the number of clusters into which we want to divide the observations. In a number of known methods, the researcher must decide on the number of clusters. This makes the analysis more difficult and requires the researcher's participation in working out the results.
ROI calculation
We decided to use the expectation-maximization (EM) clustering algorithm (Massaro et al., 2012). The Bayesian information criterion (BIC) (Celeux & Soromenho, 1996) was implemented to automatically determine the number of meaningful clusters. In the EM algorithm, we approximated the distribution of the observations (x, y) with a mixture of normal probability density functions (Fraley, 1998). Suppose that the probability density function of observations $x$ for $K$ clusters is defined as (Bishop, 2006)
$$f(x) = \sum_{k=1}^{K} \pi_k\, f_k(x; \theta_k),$$
where $f_k(x; \theta_k)$ is the probability density function of the $k$-th cluster with parameter $\theta_k$ and $\pi_k$ is a mixture parameter. In the case where $f_k(x; \theta_k)$ is a normal distribution function, $\theta_k = (\mu_k, \Sigma_k)$, where $\mu_k$ is the vector of expected values of the observations and $\Sigma_k$ is the covariance matrix. We can use the EM algorithm to determine the expected value vectors and the covariance matrices of the probability density functions of the clusters. Let us define $\Theta = \{(\pi_k, \theta_k): k = 1, \dots, K\}$ as the set of parameters of the mixture of normal distributions. Then, the probability $p_{ik}$ that observation $x_i$ belongs to the $k$-th cluster can be expressed as (Celeux & Soromenho, 1996)
$$p_{ik} = \frac{\pi_k\, f_k(x_i; \theta_k)}{\sum_{j=1}^{K} \pi_j\, f_j(x_i; \theta_j)}.$$
This is the basic step of the EM method, denoted as E. In the following step (called M), we can estimate the parameters of $f(x)$ (Hand et al., 2012), for example
$$\pi_k = \frac{1}{N}\sum_{i=1}^{N} p_{ik},$$
and analogously $\mu_k$ and $\Sigma_k$ as $p_{ik}$-weighted averages, where $N$ is the number of fixations. Using this procedure in an iterative mode, starting from initial values of the normal distributions and repeating steps E and M, we can guarantee that the logarithmic likelihood of the observed data does not decrease (Hand et al., 2012). This means that the parameters $\theta_k$ converge to at least a local maximum of the logarithmic likelihood function. It should be noted that an observation is assigned to the $k$-th cluster for which the value $p_{ik}$ is the largest. Clusters were determined for all registered fixations (for experts and non-experts together), as the large number of fixations ensured that the calculated clusters can be interpreted as representative ROIs.
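A minimal sketch of this ROI extraction using scikit-learn's Gaussian-mixture EM, with BIC used to pick the number of clusters, is given below; the candidate range of K, the variable names, and the synthetic example data are our assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_rois(fixations_xy, k_range=range(2, 16), random_state=0):
    """Fit Gaussian mixtures with each candidate K to the fixation coordinates and
    return the model with the lowest BIC together with hard cluster assignments."""
    best_model, best_bic = None, np.inf
    for k in k_range:
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              n_init=5, random_state=random_state).fit(fixations_xy)
        bic = gmm.bic(fixations_xy)
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model, best_model.predict(fixations_xy)

# Synthetic example: fixations scattered around three ROIs on a 1920x1200 screen.
rng = np.random.default_rng(0)
xy = np.vstack([rng.normal(c, 40, size=(60, 2))
                for c in [(500, 400), (1200, 300), (900, 900)]])
model, labels = fit_rois(xy)
print("chosen number of ROIs:", model.n_components)
```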
Feature extraction and selection
A fixation is described by its location on the screen (x, y) and/or its duration. Therefore, for each person and each cluster (ROI), we determined the following features associated with the fixations: • the number of fixations in the cluster, • the average fixation duration in the cluster.
Consequently, we calculated $2K$ features (two features for each of the $K$ clusters). Features were determined without data normalization (method labeled $Z_0$) and with three normalization methods (labeled $Z_1$-$Z_3$), i.e., four data-standardization variants in total. The standardized values were calculated according to the general rule $z = (x - \mu_j)/\sigma_j$, where $x$ is the number of fixations or the fixation duration, $\mu_j$ is the mean value, $\sigma_j$ is the standard deviation, and $j = 1, 2, 3$ denotes the $Z_1$, $Z_2$, or $Z_3$ normalization method. In the case of $Z_1$, $\mu_1$ and $\sigma_1$ refer to all data together. In the case of $Z_2$, $\mu_2$ and $\sigma_2$ refer to individual users. In the case of $Z_3$, $\mu_3$ and $\sigma_3$ refer to the individual users and viewed images. Thus, $Z_3$ takes into account individual differences between the people examined, separately for each image.
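A sketch of the per-ROI feature computation and a per-user z-score normalization (one possible reading of the Z2 variant) is given below; the column names, data layout, and the exact normalization grouping are our assumptions.

```python
import pandas as pd

def roi_features(df, n_clusters):
    """df has columns ['user', 'cluster', 'duration'] with one row per fixation.
    Returns, per user, the fixation count and mean fixation duration for every ROI."""
    feats = {}
    for user, g in df.groupby("user"):
        row = []
        for k in range(n_clusters):
            ck = g[g["cluster"] == k]["duration"]
            row += [len(ck), ck.mean() if len(ck) else 0.0]
        feats[user] = row
    return pd.DataFrame.from_dict(feats, orient="index")

def normalize_z2(features):
    """Z2-style normalization (assumed interpretation): z-score each user's feature
    vector with that user's own mean and standard deviation."""
    mu = features.mean(axis=1)
    sd = features.std(axis=1).replace(0, 1.0)
    return features.sub(mu, axis=0).div(sd, axis=0)

demo = pd.DataFrame({"user": [0, 0, 0, 1, 1],
                     "cluster": [0, 0, 1, 0, 1],
                     "duration": [210.0, 190.0, 250.0, 230.0, 205.0]})
print(normalize_z2(roi_features(demo, n_clusters=2)))
```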
After feature extraction, the resulting features were assigned to specific ROIs. Not all features were equally useful for classification; therefore, it was sensible to perform feature selection. We decided to use two known feature selection methods: the t-statistic (Jiaxi, 2010) and sequential forward selection (SFS) (Ververidis & Kotropoulos, 2005). The first, a ranking method, allows determining the best features for the purpose of distinguishing the two classes. Having the observations for experts and non-experts, we were able to compare the feature distributions for each ROI. In this method, only the statistical distribution of the features is used; the results of classification are not taken into consideration. Unfortunately, as a result of this method, we often obtained correlated features. In the second feature selection method, SFS, the classification accuracy calculated for the tested features is used as the criterion. Consequently, such selection generates features that are more independent.
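Only the t-statistic ranking step is sketched below (SFS would instead wrap the classifier in a greedy search); the choice of Welch's test and the number of features kept are our assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

def rank_features_by_t(X, y, n_keep=5):
    """Rank feature columns of X by the absolute Welch t-statistic between the two
    classes in y (0 = non-expert, 1 = expert) and return the indices of the top n_keep."""
    t_vals = np.array([abs(ttest_ind(X[y == 1, j], X[y == 0, j], equal_var=False).statistic)
                       for j in range(X.shape[1])])
    return np.argsort(t_vals)[::-1][:n_keep]

# Example with random data: 10 users x 8 features.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(10, 8)), np.array([0, 1] * 5)
print(rank_features_by_t(X, y, n_keep=3))
```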
Classification
We used the k-nearest neighbors (k-NN) classifier and the support vector machine (SVM) with different types of kernel functions. The k-NN classifier compares the values of the explanatory variables from the test set with the variables from the training set. The k nearest observations from the training set are chosen, and the classification is made on the basis of this choice. The definition of the "nearest observation" boils down to minimizing a metric measuring the distance between two observation vectors. We applied the Euclidean metric. The k-NN classifier is useful especially when the relationship between the explanatory and training variables is complex or unusual.
The essence of the SVM method is the separation of a set of samples from different classes by a hyperplane. SVM enables classification of data of any structure, not just linearly separable data. There are many possibilities for determining the hyperplane by using different kernel functions, but the quality of the divisions is not always the same. Application of a proper kernel function increases the chances of improving the separability of the data and the efficiency of classification. In our experiments, we used the linear kernel, the sigmoid (MLP) kernel, and the RBF kernel (Bishop, 2006).
Training and Testing
We decided not to use the same data at the training and testing stages, so we implemented a leave-one-out test (Bishop, 2006). It is a modified cross-validation (CV) test in which all N examples are divided into N subsets containing one element each. In our case, the data from only one user was taken for testing, whereas the data registered for all the other users was used for training the classifier. This procedure was repeated consecutively for all users, and then the classification accuracies were averaged. This approach ensures that the classifier was tested and trained on separate data sets, and the subsequent averaging provided a correct result.
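The leave-one-out evaluation can be reproduced schematically with scikit-learn; the classifier settings (k, kernel) and the random feature matrix are illustrative, not the tuned values or data from the study.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def loo_accuracy(X, y, clf):
    """Average leave-one-out classification accuracy: each user is held out once."""
    return cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

rng = np.random.default_rng(2)
X = rng.normal(size=(44, 5))          # 44 users, 5 selected features (placeholder data)
y = np.array([1] * 23 + [0] * 21)     # 23 experts, 21 non-experts, as in the study
print("k-NN :", loo_accuracy(X, y, KNeighborsClassifier(n_neighbors=3)))
print("SVM  :", loo_accuracy(X, y, SVC(kernel="rbf")))
```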
Results
The results comprise the classification accuracies for two classes: experts and non-experts. Classification accuracy was defined as the sum of true positives and true negatives divided by the number of all examples. Tables 1-7 include the classification accuracies for the respective images (P1-P5) for the various methods of data standardization ($Z_0$, $Z_1$-$Z_3$). All details, such as the type of classifier, number of features, and feature selection method, are given in the table headers. We used a variable number of features (10, 5, and 3) selected using the t-statistic or SFS for classification. The classification results presented in this study show that it is possible to distinguish an expert from a non-expert using oculographic signals. We obtained the highest average classification accuracy for the SVM-MLP method with the five best features and the SFS selection method (Table 7). For this case, the average classification accuracy for all images was 0.74. For the considered combination of algorithms (SVM-MLP classifier, five best features, SFS selection method), we obtained classification accuracies of 0.84 for image P1, 0.69 for P2, 0.70 for P3, 0.71 for P4, and 0.77 for P5. The classification accuracies averaged over all tested methods were 0.74 for image P1, 0.64 for P2, 0.64 for P3, 0.66 for P4, and 0.72 for P5.
Discussion
In Fig. 2 the result of clustering with the EM method is presented; the chosen number of clusters is eight. Each fixation belonging to a cluster is located near its center of gravity. The EM algorithm creates clusters from their statistical distributions, so the omission of several fixations does not affect the determination of the clusters, since the lack of some fixations does not disrupt the calculation of the statistical parameters (Bishop, 2006). The method of specifying the clusters has a significant influence on the further steps of the quantitative description of a cluster. For example, cluster #1 can easily be interpreted as being associated with a natural concentration of attention on the woman's face; similarly, cluster #7 (brown) can be interpreted as associated with a concentration of attention on the man's hand. The EM method takes into account statistical dependencies in the distribution of fixations and, to a large extent, allows the specification of clusters that can be interpreted semantically. Very good results of fitting mixtures of normal distributions can be obtained for clusters of elliptical shape. It was also found that the result of grouping with the EM algorithm is sensitive to the initial parameter values; for this reason, the algorithm can be repeated many times for different initial parameters, and the best solution according to the BIC can then be chosen.
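As an illustration of this clustering step, the following sketch fits Gaussian mixtures to fixation coordinates with scikit-learn's GaussianMixture (a stand-in for the EM implementation used in the study), restarts EM from several initializations, and selects the number of clusters by the smallest BIC; the parameter values are assumptions.

```python
from sklearn.mixture import GaussianMixture

def cluster_fixations(points, k_range=range(2, 20), n_restarts=5, seed=0):
    """Fit Gaussian mixtures to (x, y) fixation points and pick K by the BIC.
    Several random restarts are used because EM is sensitive to its initialization."""
    best = None
    for k in k_range:
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              n_init=n_restarts, random_state=seed)
        gmm.fit(points)
        bic = gmm.bic(points)
        if best is None or bic < best[0]:
            best = (bic, k, gmm)
    bic, k, gmm = best
    return k, gmm.predict(points), gmm   # selected K, cluster labels, fitted model
```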
We assumed that the method of data normalization could significantly affect the classification accuracy; however, we did not find such a relationship. Average accuracies for the tested classifiers and the different data normalization methods are presented in Table 8. At the classification stage, we used two kinds of features: the number of fixations in a cluster and the average fixation duration for a cluster. It was worth determining which of them better distinguishes an expert from a non-expert. For this purpose, we calculated the sum of t-values over all clusters of the individual pictures (Table 9). It appeared that the better feature for distinguishing experts from non-experts was the average fixation duration.
The average of the sum of t-values over the individual images was 7.72 for the number of fixations and 13.12 for the average fixation duration. The p-values and t-values were calculated for the data divided into two sets: experts versus non-experts. The p-values showed that the features calculated for certain clusters make it possible to distinguish experts from non-experts. For the average fixation duration of the best cluster, there was no significant difference only for image P5 (p>0.05). The average p-value calculated for the best clusters of all images was 0.03 for the average fixation duration, whereas it was 0.26 for the number of fixations. This confirms that the average fixation duration is a better feature for distinguishing experts from non-experts than the number of fixations.
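The per-image sums of t-values mentioned above can be computed, for example, as below; Welch's t-test from SciPy is used, and summing absolute t-values over clusters is an assumption about how the per-cluster values were aggregated.

```python
from scipy.stats import ttest_ind

def sum_abs_t_over_clusters(expert_values_per_cluster, novice_values_per_cluster):
    """Sum of |t| over all clusters of one image for a given feature
    (e.g. number of fixations or average fixation duration).
    Each argument is a list with one array of per-subject values per cluster."""
    total = 0.0
    for e_vals, n_vals in zip(expert_values_per_cluster, novice_values_per_cluster):
        t, _ = ttest_ind(e_vals, n_vals, equal_var=False)  # experts vs. non-experts
        total += abs(t)
    return total
```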
An important element of the developed EM algorithm was the assignment of fixations to specific clusters and the determination of the appropriate number of clusters. The list of optimal numbers of clusters for each image, calculated using the BIC, is presented in Table 9. Proper cluster determination was significantly affected by the number of registered fixations; too small a number could be insufficient to calculate representative clusters covering all ROIs. The dependence of the BIC value on the number of clusters for picture P2 is illustrated in Fig. 3; in this case, the smallest BIC value (3.59×10-5) was obtained for 14 clusters. The divisions of fixations into clusters for different assumed numbers of clusters are presented in Figs. 4-6. For the case presented in Fig. 4, three clusters of fixations were created; it can easily be observed that this is not an optimal division, and intuitively more clusters should be selected in this case. Although cluster #2 represented fixations on one face, there were still not enough clusters to represent the other faces. For K=6 (Fig. 5) the situation improves, but the number of clusters is still too small. Only for K=14 (Fig. 6) can the clusters be interpreted as representative ROIs: clusters #1, #2, and #3 can be interpreted as the ROIs associated with the faces of the individual characters, cluster #4 is associated with a sword, and so on. For greater clarity, Fig. 7 contains only the ellipses of the 14 clusters for image P2. Table 10 presents the average feature values (number of fixations) for experts and non-experts and the t-values calculated for each cluster of image P2. There are two clusters for which the distribution of features suggests significant differences (p<0.1) between the groups of experts and non-experts (cluster #1 and cluster #10). For cluster #1, the average number of fixations was 13.6 for experts and 18.3 for non-experts (p=0.07). The greatest statistical significance (p=0.04) was obtained for cluster #10, with the average fixation duration as the feature: for experts, the average fixation duration was 119.3 ms, and for non-experts 128.1 ms. This is consistent with results obtained by other research groups: Krupinski (Krupinski, 1996) and Manning (Manning et al., 2006) showed that, in comparison to non-experts, experts typically perform tasks with fewer fixations, and in (Crundall & Underwood, 1998; Underwood et al., 2003) it was shown that experts had longer fixation durations than novices when driving a car. The distributions of the number of fixations (cluster #1) and the average fixation duration (cluster #1) for the groups of experts and non-experts are shown in Fig. 7.
Clusters calculated for all P1-P5 images using EM methods and BIC are given in Supplementary Materials (available online).
Conclusions
The proposed algorithm allows us to automatically classify a person watching a painting into the group of experts or non-experts in the field of art. A key role in the proposed algorithm is played by the EM clustering method, which makes it possible to determine the ROIs in the image. With features calculated for the ROIs, such as the number of fixations and the average fixation duration, automatic classification of an image viewer is possible. The algorithm was tested in such a way as to come close to the actual operating conditions of an expert system.
Ethics and Conflict of Interest
The authors declare that the contents of the article are in agreement with the ethics described in http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html and that there is no conflict of interest regarding the publication of this paper.
Federal Synergy Computing Model Based on Network Interconnection
To address the shortage of computing power available from a single machine or a small cluster in scientific research, we offer users a collaborative computing system with massive computing capability. The system introduces a scalable hybrid collaborative computing model: it connects heterogeneous computing equipment over the Internet and applies a task decomposition model, so that research and development problems caused by insufficient computing capacity can be solved. A subtask decomposition example is used to test the model. The analysis of the example shows that the shortest computation time is obtained when the number of computing nodes exceeds the number of subtasks, and that the maximum computing efficiency is achieved when the number of computing nodes is close to the number of subtasks. Through joint collaborative computing, the extensible hybrid collaborative computing model can effectively solve mass computing problems on systems with heterogeneous hardware and software. This paper thus provides a reference for systems that deliver large-scale computing power through the Internet and for research problems constrained by a lack of computing capacity.
The System Network Topology
Figure 1. Network topology diagram of the system.
LC1~n are the local computing nodes; their number depends on the size of the network IP address allocation pool. Different types of networks therefore provide different numbers of node accesses and, as a result, constrain the computing capacity of the computing system built on them. DBS provides the data storage for the system; the whole system shares this data source, and through this node the system completes data sharing and publishing. MS is designated as the unique management server: all computing nodes and access nodes must register with this server. As the central core server connected to both the intranet and the extranet, MS is responsible for task establishment, task assignment, and task scheduling, and it builds the connection channel between RC and LC. Through RC1~n, the remote random-access clients, users can connect to the MS and request computing tasks; they can also download the COM component from the MS server to join the computation and accept MS scheduling. MS, DBS, and LC1~n are connected through a switch SW, which assigns network addresses to them in turn. In this way, LC and RC do not need the same physical structure or software system; the differences in system structure are shielded, and heterogeneous collaborative computing is provided through the upper software layer.
Task Processing Flow
The remote device RC requests the MS to initiate a task, and the task is submitted in the form of a task description. Following the design of the task description in Reference [9] and introducing the concept of multitasking job processing [10], we design the MS task flow shown in Figure 2. Remote users access the MS via the Internet and submit an application to it. When the RC logs in successfully, MS records the RC identity on the server. After the application is approved by the server, the RC that accesses the MS becomes a remote computing node of MS, and MS issues a general computing task to it. If the RC submits an application for a computing task, MS reviews the application; if the application is accepted, the RC computing task is established successfully, otherwise the establishment of the task fails. If the RC does not yet have a task description, the job description is written and then loaded, and the MS schedules the task according to that description. If the number of access nodes of the MS is 0 or the number of idle nodes is 0, the system cannot complete the task and the task waits to be executed; otherwise the task is assigned for execution. After the computation ends, the RCs report to the system that the task is completed and release the system resources.
According to the needs of the application, the system is built on a hybrid system architecture, and the MapReduce computing model is introduced in the process of assigning tasks to each computing node [11]. The system task specification describes a set of subtasks that split a large problem into a number of small problems, which are then executed on the nodes of the cluster; this is the Map process. During the Map process, each node in the cluster compiles, executes, and solves its tasks according to the task specification. After the tasks are completed, a Reduce process gathers all the output results of the decomposed subtasks and sends them to MS and DBS. Whether in the Reduce process that merges the results after completion or in the Map process executed at initialization, the subtask execution nodes need to obtain the necessary task description from the distribution server; the task specification describes the data sources, the remote storage of intermediate key/value results, and how to submit the execution results, and the task distribution server must provide an entrance to this service. It is therefore necessary to provide a large-data query and analysis model for the data nodes and a remote data access API to capture the data of the system design. To avoid the accumulation and loss of data, a new method is needed that stores newly generated data on the server when computing nodes need to save it within a certain time window; this preserves the computing performance of nodes bound to different servers, regardless of whether the data or the operation is moved. To solve the problem of large-data query and analysis, a small in-memory computing cluster is configured within the computing cluster; introducing the in-memory computing model improves the performance of the various computing models when processing large data, and mixing these models with the in-memory model enables highly real-time data query and analysis.
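The Map/Reduce flow described in this section can be summarized by the following sketch, in which a local thread pool merely stands in for the distributed LC/RC nodes; split, execute, and merge are placeholder callables supplied by the caller.

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(task, split, execute, merge, workers=8):
    """Map/Reduce skeleton of the flow described above: the job is split into
    subtasks (Map), each subtask is executed on a computing node (here a thread
    pool stands in for LC/RC nodes), and the partial results are merged (Reduce)
    before being reported back to MS/DBS."""
    subtasks = split(task)                          # Map: decompose the job
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial_results = list(pool.map(execute, subtasks))
    return merge(partial_results)                   # Reduce: gather the outputs
```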
Subtask Decomposition Model and Task Description
Following Reference [12], the subtask decomposition model of the system is designed as follows.
Given a computing task T, when the complexity O(T) of the task is greater than a given threshold, T is further decomposed into subtasks Ti; each Ti can be described by the XML-based task tree view description language (TTVDL). A task list is created from this task representation, computing-task requests are received from the computing nodes N, and a thread is established for each computing node. The leaf nodes (i) are opened by traversing the tree with a depth-first algorithm.
The task decomposition and scheduling algorithm divides the computing task into a two-layer m-ary tree and assigns it to the computing units. If a subtask is still too large, it can be decomposed further. The decomposition can be performed statically or dynamically, and it is necessary to determine the granularity of decomposition, the coefficient of convergence, and the convergence boundary of the decomposition.
Task Decomposition Algorithm
A computing task can be described by a task system (T, M, S, L, P). The task decomposition model is shown in Figure 3: the system uses a two-layer nested DAG, where sub_DAG is the collection of subtasks DAGi decomposed from the DAG, E is the collection of communication edges ei, and T is the collection of communication costs Ti.
Because task DAGi is a two-layer m-ary tree, the predecessor and successor relationships between its tasks are explicit; therefore, there is no need to search for the relationships between the subtasks.
Define Subtask Convergence Boundary
In order to reduce the transmission of the original data, reduce the traffic, and improve the network throughput, a copy of the original data is saved in the access unit mi; mi can be either a single computer or an independent computing network unit composed of several computers. The first layer of tasks can extract the original data from the local computing unit's copy, so no data transmission is required. The original data and the final results are stored in si of the data center DBS.
Let M be the collection of computing units mi in the system, S the collection of data centers si, L the collection of computing unit capacities li, and P the collection of computing powers pi. When the system task is decomposed into an m-ary tree with a hierarchical structure containing a total of N subtasks, the complexity measure O is introduced, which reduces the complexity compared with [13]. The stopping condition can thus be written as O(Ti)/pi ≤ k·li, where k is the coefficient of convergence. Given the value of k, when the ratio of the complexity of a decomposed subtask to the computing power it is matched with is less than or equal to the convergence boundary k·li, the decomposition stops and the decomposition tree is sent to the computing unit mi.
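The convergence boundary can be expressed as a recursive decomposition sketch such as the one below; complexity() and split_m_ways() are assumed helper callables, a single representative computing unit is assumed for simplicity, and the stopping test implements the condition O(Ti)/pi ≤ k·li stated above.

```python
def decompose(task, complexity, split_m_ways, unit_capacity, unit_power, k, m=2):
    """Recursive subtask decomposition sketch: the task is split into an m-ary tree,
    and splitting stops once complexity(task) / power <= k * capacity of the
    computing unit the subtask is matched with (the convergence boundary)."""
    if complexity(task) / unit_power <= k * unit_capacity:
        return task                                # leaf: ship this subtask to unit m_i
    return [decompose(sub, complexity, split_m_ways, unit_capacity, unit_power, k, m)
            for sub in split_m_ways(task, m)]      # internal node of the decomposition tree
```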
Task Decomposition Description
For the computing tasks proposed by RC, the system uses task descriptions to describe task decomposition, task allocation, task recovery, and so on; each task corresponds to one task description. The nodes involved in the computation need to obtain the task description from the server and compile it locally. When a computing node LC is ready, it sends a ready signal to the management server and waits for system scheduling. The management server MS maintains a task description for each computing task; a task computing dictionary is generated in the MS, and the MS implements scheduling by polling this dictionary and querying the status of each computing node. Reference [9] used XML as the task description method, and we likewise use an XML task tree view to describe the task. The basic nodes of the task specification are as follows (closing tags and the child nodes described below are included for completeness):
<?xml version="1.0" encoding="utf-8" ?>
<TaskDescription>
  <TaskDividedTree> </TaskDividedTree>
  <SubTaskMapping>
    <ComputingNode treeID="">
      <ImportData></ImportData>
      <ExportData></ExportData>
      <NodeDependence></NodeDependence>
      <ComputingCode></ComputingCode>
    </ComputingNode>
  </SubTaskMapping>
</TaskDescription>
The <TaskDividedTree /> node is the static description of the whole task decomposition tree.
Each node contained in <TaskDividedTree /> carries a strict description of the communication edge ei, the communication cost ti, and other attributes of the subtask DAGi. The hierarchical relationship of the child nodes reflects the predecessor/successor relationship between subtasks, and this node is the basis of task scheduling.
The <SubTaskMapping /> node holds the input, output, static description, and computation method dependencies of each sub-node. However many sub-nodes are described in <TaskDividedTree />, <SubTaskMapping /> contains descriptions for at most that number of tasks; the treeID attribute of the computing node <ComputingNode /> is the association key between <TaskDividedTree /> and <SubTaskMapping />. The <ImportData /> node contains the input requirements of the computing task, and the <ExportData /> node contains the final results; the results computed by the node are uploaded to the storage server after the computation completes. The node is recovered before rescheduling once its task is finished, but the results of its last task are still kept on the node before recovery, which allows P2P data access between nodes and reduces the data-transfer pressure on the server. <NodeDependence /> is the collection of dependencies between nodes; by accessing these, the nodes can be set to wait, sleep, wake up, and so on. <ComputingCode /> is the algorithm description for the computing node. The computing node compiles this algorithm dynamically and runs it locally; the algorithm is compiled only on the first load and is not compiled again on subsequent runs, which differs from interpreted execution, so the performance loss can be ignored.
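For illustration, a computing node could read such a task description with a sketch like the following; the element names follow the fragment above, while the attribute handling and the returned structure are assumptions.

```python
import xml.etree.ElementTree as ET

def load_task_description(path):
    """Parse a task description of the shape sketched above and return, per
    computing node, its input/output locations, dependencies, and algorithm code."""
    root = ET.parse(path).getroot()                 # <TaskDescription>
    nodes = {}
    for node in root.find("SubTaskMapping").findall("ComputingNode"):
        nodes[node.get("treeID")] = {
            "import":       node.findtext("ImportData"),
            "export":       node.findtext("ExportData"),
            "dependencies": [d.text for d in node.findall("NodeDependence")],
            "code":         node.findtext("ComputingCode"),
        }
    return nodes
```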
Task Scheduling Algorithm
The system improves on the original task scheduling algorithm in Reference [10]. The improved model uses a hybrid strategy, and its algorithm is described as follows.
In order to improve the efficiency and throughput of the cluster, tasks must be allocated reasonably when a group of tasks is scheduled, so that the computing resources of each computing node are fully utilized. To prevent some computing tasks from never being executed, the balance of computing resources across each task group must be considered as early as the initial system design.
A task goes through seven states from submission to the end of execution: submitted, wait, map, ready, execute, reduce, and complete. When a computing task is successfully created, it must be submitted to the system; the system first checks the completeness of the task description and then audits and verifies the task instructions item by item. Once the instructions pass inspection, each LC is queried according to the subtask description tree in the task description. If the idle LCs cannot satisfy the computing task, it is necessary to wait for the non-idle computing nodes to finish their other tasks, and the submitted task enters the wait state. If the system has enough idle LCs to satisfy the computing task, each subtask is mapped to a local computing node LC according to the task description tree, and the submitted task enters the ready state. Next, the management server instructs each node in turn to start computing according to the dependencies specified in the task, and the system enters the execute state. When every task node has performed all its tasks in turn, it reports a Complete signal to the management server and transmits its results to the storage center, and the system enters the reduce state. The system enters the complete state when all tasks have finished and all results have been returned. The management server then sends a GC command to every node that joined the computation to perform garbage collection and release resources, and the nodes wait for the next scheduling.
The task scheduling algorithm adopts a hybrid of the priority algorithm and first-come-first-served (FCFS) scheduling, and also incorporates the basic idea of round robin. A Dictionary<int, Queue<Task>> is maintained in the MS server, where Key is the priority of the task queue, Queue<Task> is the task queue, and Task is a single computing task. The principle of the algorithm is shown in Figure 4. When a computing task is created, the system statically assigns it a priority value K, with K between 1 and n. The task enters the corresponding priority queue according to its K value.
Within a queue, tasks are ordered according to the first-come-first-served (FCFS) scheduling algorithm because they share the same priority. On the face of it, the algorithm is fair in the general sense: each task is served according to how long it has waited in the queue. However, tasks with a short execution time will wait a long time if they arrive behind a long-running task. For this reason, the system uses round robin and sets a time slice for each task. When a task uses up its time slice, its execution is aborted and its priority is decremented to K-1. If K-1 is among the dictionary keys, that is, Dictionary.ContainsKey(K-1) is true, the task is removed from the head of its queue and appended to the end of the Dictionary[K-1] queue; otherwise it is appended to the end of the Dictionary[K] queue. The choice of the time slice length directly affects the system overhead and response time. If the time slice is too short, the number of times tasks are preempted increases, which raises the system overhead. If the time slice is too long, then in the extreme case one time slice covers the execution time of the longest task in the queue, the round robin is lost, and the algorithm degenerates into pure FCFS. The time slice length can be chosen from the required system response time R and the maximum allowed number of tasks in the queue Nmax as q = R/Nmax. For a constant q, the response time R would appear to be greatly reduced when the number of tasks in the queue is far smaller than Nmax, but the system overhead does not change, because with a fixed q the timing of task switching stays the same. For simplicity, the system uses a fixed time slice.
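A minimal Python sketch of this hybrid policy is given below (the system itself uses a C# Dictionary<int, Queue<Task>>); task.run(q) is an assumed interface that runs the task for one time slice and reports whether it finished, and all names are illustrative.

```python
from collections import deque

class HybridScheduler:
    """Sketch of the policy described above: one FCFS queue per static priority K,
    a fixed time slice q = R / Nmax, and demotion to queue K-1 when a task uses up
    its slice without finishing (round-robin element)."""
    def __init__(self, response_time_r, n_max):
        self.q = response_time_r / n_max            # fixed time slice
        self.queues = {}                            # K -> deque of tasks (FCFS order)

    def submit(self, task, k):
        self.queues.setdefault(k, deque()).append(task)

    def run_once(self):
        for k in sorted(self.queues, reverse=True): # higher K served first
            if self.queues[k]:
                task = self.queues[k].popleft()     # remove from the head of the queue
                finished = task.run(self.q)         # run for one time slice (assumed API)
                if not finished:                    # demote to K-1 if that queue exists
                    target = k - 1 if (k - 1) in self.queues else k
                    self.queues[target].append(task)
                return task
        return None                                 # nothing to schedule
```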
The performance of task scheduling can be measured by parameters such as task turnaround time, response time, throughput, and the utilization ratio of the computing nodes. Here we focus on the task turnaround time. The turnaround time of task i is defined as Ti = Tie - Tis, where Tis is the start time of the task and Tie is the time at which the task completes. For n (n >= 1) tasks, the average turnaround time is T_avg = (1/n)·ΣTi. A task submitted to the system is not executed immediately but only after it has been mapped, so the task is likely to enter the wait state. Let Tiw be the waiting time of the task from submission to Map; the corrected turnaround time is then Ti = Tir + Tiw, where Tir is the execution time.
Furthermore, the weighted turnaround time can be used to measure the scheduling performance. The weighted turnaround time is defined as the ratio of the task turnaround time to the task execution time: Wi = Ti/Tir. For the n tasks contained in the task flow, the average weighted turnaround time is W_avg = (1/n)·ΣWi.
Model Evaluation
By revising and improving the scheduling algorithm in Reference [12], the evaluation model of the system is obtained as follows. Assuming that the execution time is linearly related to the size of the task, the execution time Ti is Ti = ai·xi + bi, where bi is the system initialization time, ai is the linear growth factor of the task granularity, and xi is the size of the task.
Assuming that the data transmission time is also linearly related to the size of the transferred data, we have Data_Tij = Data_aij·xij + Data_bij, where Data_Tij is the time required to transfer data from task i to task j, Data_bij is the time required to transmit the initialization data, Data_aij is the linear factor, and xij is the amount of data transferred.
Formulas (6) and (7) can be evaluated using the TCP traffic model; for the model, see References [14,15]. In a high-speed local area network with 100 M/1000 M adaptation, the ratio of the data transfer time to the computation time is small throughout the whole simulation, because the transmission rate between computers is very high, Data_bij and Data_aij are relatively small, and a copy of the original data has been saved in the computing unit before the calculation starts.
For the two-layer m-ary tree DAGi, the subtasks are divided into M parts in total, but the granularities of the M subtasks differ; the relationship between the subtask scales is given by Equation (8). According to the diversity of the subtasks, the primary roles of the root task are to transmit data from the root node to the leaf nodes, to compute and collect the results from the leaf nodes back to the root node, and to transform the root task's computing result into the DAG map of the lower-level subtasks.
Compute Node Assignment
MS loads the subtasks into a task list List<T> by reading the task description. The priority of each subtask is then determined from the dependency set List<R> recorded for each task in its description. During compute-node allocation, each subtask in List<T> is first distributed into a Dictionary<int, Queue<T>> according to its level, where int is the task queue level and Queue<T> is the queue of tasks at the same level. Tasks within a queue are scheduled according to the FIFO strategy, and the higher-level subtask queues are given priority for node assignment; the FIFO strategy is also used for node allocation between tasks. When the time slice of a task T in the queue is used up, the task releases its computing node and returns to the end of the queue to wait for rescheduling. When interdependence between tasks leads to competition for resources, the task's level is reduced and it is sent to a lower-level queue, which resolves the deadlock problem caused by task preemption. When a task completes, its computing node is released and the system task is finished; the node then requests a new task assignment, and the state of the task is updated in the MS. The MS notifies the subtasks waiting on that dependency to continue execution through an event mechanism.
Computing Data Communication Model
According to the computing model designed above, master-slave mode and P2P mode are adopted for the communication and data exchange between nodes. The startup flow of a computing node LCn is shown in Figure 5. When the node is started, it runs the joint computing program mapped onto it. After the program starts, it first initializes the parameter information of the node. The service address of the management server MS is stored in each LC computing node; this is done through the system configuration when the LC is deployed remotely. Using this parameter, the LC can sense the presence of the server and try to connect to the management server. If the connection fails, the hardware link has failed and the node cannot access the collaborative computing system, so it remains a standalone computing node outside the system. If the node can connect to the server, it registers itself on the MS; the registration information contains the basic information of the node, its computing power, and so on. After successful registration, the LC can apply to the MS server to run programs as a computing task node. If the MS server has no task at that moment, that is, the federal computing system is idle, the node sets itself to idle and waits for scheduling. When the MS server receives a task application from an RC, it scans the status of the local computing node clients LC after it finishes initializing the task. If the number of idle computing nodes LC found by the MS is greater than 0, resource allocation and task scheduling are performed; if the number of idle nodes is 0, the task is placed in the scheduling queue and waits to be scheduled because of the lack of resources. An idle LC downloads the task specification and loads it when it receives the MS schedule. The LC compiles the subtask execution code in the task specification through a dynamic compilation system and, once compilation succeeds, applies to the MS for the issuance of subtasks. Under normal circumstances, the subtask execution code in the task specification compiles successfully; if it cannot be compiled, this only shows that the computing power of the node cannot meet the requirements of the task description. When the LC receives a subtask from the MS, it loads the task and analyzes whether it depends on other subtasks. If there is a dependency, the output parameters of the subtasks it depends on are obtained first. If the LC can obtain the data, the dependent subtask has already terminated and its output can be used as the input parameters of this task; otherwise, the available output does not satisfy the input of this task and the dependent output must be recomputed in accordance with this task's requirements. When the outputs of all dependent subtasks satisfy the inputs of the task, the task is executed, and the results of the calculation are uploaded to the data sharing area for the other subtasks. An LC that has finished its task computing can be reinitialized and request another subtask from the MS; if no subtasks are available, the LC is set to idle and waits for the MS scheduler.
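The node life cycle described above can be condensed into the following sketch; ms stands for a client proxy of the management server and compile_spec for the dynamic compilation step, and every method name on these objects is an assumption rather than the system's real API.

```python
import time

def lc_node_loop(ms, node_info, compile_spec):
    """Simplified life cycle of an LC computing node as described above."""
    if not ms.register(node_info):                  # connection or registration failed
        return                                      # node stays outside the collaborative system
    while True:
        spec = ms.download_task_specification(node_info)
        if spec is None:                            # federal system currently idle
            time.sleep(1.0)                         # wait for scheduling
            continue
        run = compile_spec(spec)                    # dynamic compilation of the subtask code
        subtask = ms.request_subtask(node_info)
        inputs = [ms.fetch_shared_output(dep) for dep in subtask.dependencies]
        ms.upload_result(subtask, run(inputs))      # share result, then ask for more work
```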
Three methods are used to realize the communication and data exchange between nodes. Data that has been computed by a node and merged to the server can be used by other nodes that apply to the data management server; once the identity of the data consumer is audited, the applying node can consume the data provided by the producing node. If the management server cannot satisfy a node's data request, the reason for the failure is checked. If another computing node is still computing the requested data, the applying node enters the wait state and registers a waiting resource application with the MS server. When the computation is finished and the result has been reported to the data server, the MS server finds the waiting nodes in the resource application list and informs them to load the data. If the data cannot be found on the management server and no computing node in the current network is computing it, the current computing node is pushed onto a stack and computes the dependent data set itself.
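The wait/notify bookkeeping for shared data can be sketched as below; the class and method names are illustrative, and node.notify() is an assumed callback.

```python
from collections import defaultdict

class DataRegistry:
    """Sketch of the data-sharing protocol described above: consumers that ask for
    data still being computed are parked on a waiting list and notified once the
    producing node reports its result."""
    def __init__(self):
        self.results = {}                           # data_id -> result stored on DBS
        self.waiting = defaultdict(list)            # data_id -> nodes waiting for it

    def request(self, data_id, node):
        if data_id in self.results:                 # data already merged to the server
            return self.results[data_id]
        self.waiting[data_id].append(node)          # register the waiting resource application
        return None                                 # caller enters the wait state

    def report(self, data_id, result):
        self.results[data_id] = result
        for node in self.waiting.pop(data_id, []):  # inform waiting nodes to load the data
            node.notify(data_id)
```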
Port sharing is enabled for the service so that the port can be shared among multiple user processes. Data exchange uses an XML-based object transfer protocol, which makes it possible to exchange structured and serialized information between heterogeneous computing nodes. In addition, to ensure secure data access between nodes, the system uses a security algorithm based on the elliptic curve algorithm and federal verification [16]. Because Silverlight does not support the WCF Security model, the Security mode must be set to None if this service is to be called from Silverlight; the default Security Mode is Transport, so this section must not be omitted and must be configured explicitly.
When configuring the service information, two endpoints need to be added because two protocols are adopted. There are two kinds of endpoints under the <services> node: one is called by the client, and the other publishes metadata for generating the service information. The node <endpoint contract="IMetadataExchange" binding="mexTcpBinding" address="mex" /> publishes the metadata. The node <endpoint address="ForWinform" contract="NetTcpDuplexCommunication.Server.IService1" binding="netTcpBinding" bindingConfiguration="tcpConfig" /> configures the client Net.TCP calls. The node <endpoint address="ForSilverLight" binding="pollingDuplexHttpBinding" bindingConfiguration="pollingDuplexHttpBinding1" contract="EndoscopeIMS.Server.IServiceForEndoscopeCDS" /> configures the client pollingDuplexHttpBinding calls. When a specified channel no longer responds to callbacks, the channel is removed from the _clients dictionary and the computing node is declared dead; the node will no longer be assigned subtasks or scheduled.
The server then reclaims the tasks that were assigned to the dead node and re-performs the Map on other idle nodes. Given a job J, J can be decomposed into a subtask set Ti and a subtask dependency set Ri, so J can be described as J = {Ti, Ri}.
Given the subtask set Ti and the subtask dependency set Ri of the test case J, it can be described as Ti={TA,TB,TC,TD,TE,TF,TG,TH,TI,TJ,TK,TL,TM,TN,TO,TP, For a decomposed subtask Ti, its execution time can be described by a four-tuple (Tin, Tout, Tinstructions, Tcommcapacity), where Tin is the time required before the task can execute, which depends on the functional dependencies in the dependency set Ri and the input data size; Tout is the time to output the result of the task to the data center, which is mainly affected by the output data scale and the network communication capability; Tinstructions is the time required for the computing node Ni to execute the subtask Ti, whose length is determined by the computing power of node Ni (the total number of instructions executed per second) and the total size of the subtask; and Tcommcapacity mainly expresses the communication capacity of the node: the greater the communication throughput of node Ni, the shorter the time of each communication. The data of the task simulation test case are shown in Table 1. The federal synergy computing model provided by this system, with its heterogeneous and dynamic characteristics, can be applied to large-scale networks and supports dynamic check-in and check-out. Using computer networks to connect heterogeneous computer devices so as to provide high-performance computing capability is currently a common method of super-large-scale computing.
Building on previous research results, this paper proposes a compact, scalable hybrid federal computing model based on the literature [17][18][19][20][21][22]. Compared with current mainstream network computing models, the proposed method shields the software and hardware differences between computing nodes through the design of the application-layer network protocol. Any computing device can access the system at any time to participate in the computation. This greatly reduces the cost of computing equipment, allows an inexpensive computing network to be formed, and provides an alternative solution for rapidly building a large-scale computing network. Compared with the method provided in [18], the system has high expansibility and feasibility. The task decomposition algorithm in this paper is a further extension of the method in [12] and broadens the application environment of that method. However, the task decomposition in this system cannot yet be performed entirely by the MS; future work will focus on enhancing the automation and intelligence of the program and improving the task decomposition algorithm for diverse tasks. The computing model proposed in this paper is, to a certain extent, advanced and of reference value for solving this kind of problem, and it has practical significance for engineering guidance.
Interstellar Benzene Formation Mechanisms via Acetylene Cyclotrimerization Catalyzed by Fe+ Attached to Water Ice Clusters: Quantum Chemistry Calculation Study
Benzene is the simplest building block of polycyclic aromatic hydrocarbons and has previously been found in the interstellar medium. Several barrierless reaction mechanisms for interstellar benzene formation that may operate under low-temperature and low-pressure conditions in the gas phase have been proposed. In this work, we studied different mechanisms for interstellar benzene formation based on acetylene cyclotrimerization catalyzed by Fe+ bound to solid water clusters through quantum chemistry calculations. We found that benzene is formed via a single-step process with one transition state from the three acetylene molecules on the Fe+(H2O)n (n = 1, 8, 10, 12 and 18) cluster surface. Moreover, the obtained mechanisms differed from those of single-atom catalysis, in which benzene is sequentially formed via multiple steps.
Introduction
Since the first spectroscopic detection of interstellar benzene (C 6 H 6 ) [1], which is the simplest building block of polycyclic aromatic hydrocarbons in the interstellar medium [2][3][4], numerous theoretical and experimental studies have been performed to understand the benzene formation mechanisms under low-pressure and low-temperature interstellar conditions [5][6][7][8][9][10][11]. Many of these studies have proposed that benzene can be formed via barrierless processes, including ion-molecule, neutral radical-radical, and radical-molecule reactions in the gas phase. For example, Kaiser et al. [8] demonstrated that benzene molecules could be formed under single-collision conditions through the gas-phase barrierless reaction of an ethynyl radical (C 2 H) with trans-1,3-butadiene (C 4 H 6 ) using a combined study of crossed molecular beam experiments and high-level quantum chemistry calculations, including statistical rate coefficient analyses. Recently, Habershon et al. [10] developed an automated reaction mechanism search algorithm from a given set of reactant and product inputs and applied it to the exploration of efficient interstellar benzene formation schemes. They confirmed that the reaction of C 2 H with trans-1,3-butadiene, initially proposed by the Kaiser group [8], is the most likely barrierless mechanism without human intuition and thus can be chosen as a good candidate for the interstellar benzene case. In addition, they identified that the reaction of C 2 H with two acetylene (C 2 H 2 ) molecules is another favorable reaction for the formation of a C 6 H 5 radical, which can be a precursor molecule of benzene [10]. It is worth mentioning that the gas-phase benzene formation process generally consists of multiple reaction pathways via several intermediates on the multidimensional potential energy surface.
In this work, we discuss the different formation mechanisms of interstellar benzene, which can be directly produced from three acetylene molecules catalyzed by atomic iron cations attached to water ice clusters, using quantum chemistry calculations. The direct synthesis of benzene from three acetylene molecules is generally called [2+2+2] acetylene cyclotrimerization [12][13][14][15]. Catalytic cyclotrimerization synthesis using various transitionmetal-containing compounds and transition-metal clusters has been extensively studied in the field of organic chemistry [12][13][14][15][16] because non-catalytic cyclotrimerization is extremely slow even at high temperatures, owing to the large energy barrier associated with π-bond breaking [17][18][19]. Previous experimental [20][21][22][23][24] and theoretical [22][23][24][25][26][27][28] studies have shown that a single transition-metal atomic cation (or single neutral transition-metal atom) is sufficient to catalyze the acetylene cyclotrimerization reaction. For example, Shuman et al. [23] performed ion-molecule reaction experiments using the selected-ion flow tube technique and found that atomic Fe + cations can efficiently catalyze the gas-phase acetylene cyclotrimerization reaction. Quantum chemistry calculations were also performed to understand the reaction mechanism and show that the overall reaction pathway consists of fewer steps than the gas-phase benzene formation mechanism [23,28]. Motivated by these studies, we theoretically investigated the acetylene cyclotrimerization reaction mechanism catalyzed by Fe + (H 2 O) n clusters, which may provide a possible model for the interstellar benzene formation on the water ice surface augmented by Fe + cations.
Although iron is the sixth most abundant element relative to hydrogen, the most abundant element in astrophysical environments [29], and is a key element in life science [30], the role of iron and its compounds in the formation of interstellar molecules is not entirely understood in the astrochemical field. So far, only two iron-containing molecules, FeO [31,32] and FeCN [33], have been spectroscopically detected in the interstellar medium; iron therefore appears to be a highly depleted element in interstellar and circumstellar environments. The current status of astrochemical observations has led to the speculation that iron must exist in other forms, such as iron nanoparticles and iron-containing dust particles [34], making an additional search for iron-containing molecules in the interstellar medium highly necessary. We believe that quantum chemistry calculations are useful for understanding the catalytic role of iron, the most abundant transition metal, in the formation of various interstellar molecules. Recent quantum chemistry calculations related to the present work are also available in the literature [35][36][37][38][39].
Computational Details
All quantum chemistry calculations presented in this work were performed using the Gaussian09 software package [40] at the unrestricted B3LYP density functional theory (DFT) level. Our previous study showed that the B3LYP functional provides a reasonable spin-state order for Fe-containing chemical systems [28,41-43]. Since the self-consistent field (SCF) calculation frequently converges to a solution with internal instability, we had to employ the "stable = opt" option implemented in the Gaussian09 code. Using this option, the solution is forced to converge to the most stable SCF solution without internal instability; it is very difficult to obtain smooth potential energy surfaces without it, especially for transition-metal systems with open shells. It is also important to include long-range dispersion interactions to discuss the reaction mechanisms quantitatively [44]; the GD3BJ option implemented in the Gaussian09 program was used to account for dispersion interactions [45]. Most of the calculations were performed using the def2-SVPP basis set. However, the def2-TZVP basis set was also used for small cluster systems to understand the basis set effect.
Results and Discussion
Figure 1 shows the cyclotrimerization pathway (solid black line) from the Fe + (C 2 H 2 ) 3 reactant complex to the Fe + C 6 H 6 product complex calculated at the B3LYP(D3BJ)/def2-SVPP level as a function of the reaction path length, which corresponds to the distance calculated using mass-weighted coordinates. Note that this process occurs entirely on the potential energy surface with a quartet spin state [28]. In this case, the overall cyclotrimerization pathway consists of two intrinsic reaction coordinate (IRC) calculation results. These two IRC potential curves are merged into a single curve in this plot, where the horizontal coordinate origin is set to the transition state structure (denoted as TS 1 ) of the first IRC path, and the energy is measured from the energy level of the optimized Fe + (C 2 H 2 ) 3 reactant complex. In this reactant complex, it was found that three acetylene molecules were equivalently bound to the Fe + cation through a charge-π interaction (and a small contribution of the d-π bonding interaction) with D 3 symmetry. Thus, the π-bond in each acetylene molecule is weakened, as indicated by the non-linear structures of the three acetylene molecules shown at the top left of Figure 1. The first step from the reactant complex to the intermediate (INT 1-2 ) occurs through the transition state (TS 1 ), which corresponds to the dimerization reaction (the first CC σ-bond formation) between two acetylene molecules. The activation energy for TS 1 originates from the breaking of the acetylene π-bond. Interestingly, one acetylene molecule mostly acts as a spectator in this first reaction step, and the orientation of this nonreactive acetylene was found to be nearly perpendicular to the C-C-C-C plane of the dimerization product, INT 1-2 . The transition state structure (TS 2 ) of the second IRC path, corresponding to the benzene production process, shows the expected structural feature, where all six carbon atoms are located approximately on a single plane with the two newly formed CC σ-bonds at distances of 2.7-2.8 Å. Additionally, the energy barrier of the TS 2 structure measured from the INT 1-2 intermediate should be partially associated with the rotational barrier of the third acetylene molecule, which must rotate to adopt a nearly planar structure. Thus, the corresponding energy barrier measured from INT 1-2 is large (>20 kcal/mol). Note that the reaction process has a large exothermicity associated with the formation of stable benzene.
Figure 1 also shows the results for the acetylene cyclotrimerization process catalyzed by the complex of Fe + and a solid water cluster consisting of eight water molecules. Here, we chose to employ the most stable water cluster structure with D 2d symmetry [46] as the initial optimization structure to avoid structural changes during the acetylene cyclotrimerization reaction. The reaction pathway was calculated for the quartet spin state because this spin state was the most stable (see below). In this case, the Fe + cation was preferentially bound to the O atom of a water molecule that does not have a dangling OH bond. The (C 2 H 2 ) 3 -Fe + (H 2 O) 8 reactant complex also has a charge-π interaction similar to the single-atom Fe + case, including the contribution of d-π bonding. The character of the bonding interaction is illustrated by the Kohn-Sham orbitals shown in Figure S1 in the supplemental material. Interestingly, the three acetylene molecules bound to Fe + form an approximately planar structure in the optimized (C 2 H 2 ) 3 -Fe + (H 2 O) 8 reactant complex. This is in contrast with the case of single-atom Fe + catalysis. More interestingly, we also found that the acetylene cyclotrimerization reaction from the reactant complex to the benzene complex occurs via a single IRC pathway, including only one transition state structure (TS W8 ). The intermediate structure observed in the single-atom Fe + case is completely missing for the Fe + (H 2 O) 8 catalysis case. In this case, TS W8 has a nearly planar (C 2 H 2 )-(C 4 H 4 )-Fe + (H 2 O) 8 structure, whereas the remaining C 2 H 2 is perpendicular to the C 4 H 4 in TS 1 of the single-atom Fe + case. Interestingly, we found that the C-C σ-bonding character between the remaining C 2 H 2 and C 4 H 4 is already seen at TS W8 ; this should be the main reason for the missing barrier. The C-C σ-bonding orbital character is shown in Figure S2 in the supplemental material. In addition, the energy barrier measured from the reactant energy level was reduced to 12.9 kcal/mol, which is much lower than that of the single-atom Fe + case (19.7 kcal/mol). This indicates that the Fe + cation bound to water acts as a more efficient catalyst than the bare atomic Fe + cation. Notice that the potential energy curve with the fixed (H 2 O) 8 structure along the IRC pathway also describes the single-step cyclotrimerization (see Figure S3).
To further understand the structural changes in the acetylene cyclotrimerization reaction catalyzed by the Fe + (H 2 O) 8 cluster, selected atomic distances were plotted and are shown in Figure 2 as a function of the same reaction path length defined in Figure 1. Figure 2a shows the four Fe-C i (i = 1, 2, 3, and 6) distances, while three CC bond distances are plotted in Figure 2b. The three CC distances gradually decrease up to s ≈ -4 from s ≈ -10, where s denotes the reaction path length. After this point, the C 4 -C 5 distance decreases further, indicating the formation of a CC σ-bond. Once the C 4 -C 5 σ-bond is formed, singly occupied p-orbitals are isolated at the C 3 and C 6 sites. These two p-orbitals subsequently attract the π-bond of the third acetylene molecule (see Figure S2). In fact, after the transition state (at s = 0), the C 1 -C 6 and C 2 -C 3 distances decrease gradually, leading to the formation of two CC σ-bonds. Over s = 10, three CC bond distances barely change with a gradual energy stabilization to the benzene complex depicted in Figure 1. During this cyclotrimerization process, a change in the Fe-C i distance can be seen from the results presented in Figure 2a, indicating a change in the coordination structure during the cyclotrimerization process. From s ≈ -10 to 10, four Fe-C coordination distances shrink slightly (1.9-2.0 Å) to assist to form benzene. It could be found that the benzene complex is more stable at the long Fe-C coordination bond, where the distances are 2.3-2.4 Å over s = 10, shown in Figures 1 and 2.
As mentioned above, the quartet state is the lowest spin state in this reaction. To confirm this, we investigated the effect of the other spin states, namely the doublet and sextet, because these states are energetically close to the quartet spin state [23,24,28]. Figure 3 shows the potential energy profiles for the sextet and doublet spin states along the IRC structures calculated for the quartet spin state. No crossing points can be observed for these three potential energy curves, indicating that acetylene cyclotrimerization catalyzed by the Fe + (H 2 O) 8 cluster occurs exclusively on the quartet potential energy surface. Next, we studied the effect of the number of water molecules on the catalytic properties for the acetylene cyclotrimerization process. Figure 4 shows the potential energy profiles calculated at the B3LYP/def2-SVPP level for the reactions catalyzed by the Fe + (H 2 O), Fe + (H 2 O) 8 , and Fe + (H 2 O) 10 complexes. The stationary-point structures on the potential energy surfaces are shown in Figure 5, and their Cartesian coordinates and zero-point energies are presented in the supplemental material. For the Fe + (H 2 O) 10 complex, we chose to employ the most stable pentagonal structure as the initial optimization structure for the (H 2 O) 10 moiety [47]. In all cases, the relative energy levels of the (C 2 H 2 ) 3 -Fe + (H 2 O) n (n = 1, 8, and 10) complexes are defined as zero in Figure 4; therefore, the results are consistent with those in Figure 1. It is interesting to note that single-step cyclotrimerization occurs even for n = 1, indicating that the Fe + -H 2 O complex acts as an efficient catalyst for acetylene cyclotrimerization. The barrier height measured for the (C 2 H 2 ) 3 -Fe + (H 2 O) n (n = 1, 8, and 10) complexes decreased slightly with increasing n. For the Fe + -H 2 O complex, we also calculated the IRC path using the larger def2-TZVP basis set and found that the result was similar to that of the def2-SVPP case, as shown in Figure S4 in the supplemental material. We have also performed similar calculations for Fe + (H 2 O) 12 (with a hexagonal structure) and Fe + (H 2 O) 18 (consisting of multiple cubic structures); the results are presented in Figure S5 in the supplemental material. These results indicate that the barrier height depends strongly neither on the water cluster size for n ≥ 8 nor on the detailed cluster structure. To understand the astrochemical implications of the present study, we also show the energy levels of the sum of the (C 2 H 2 ) 2 -Fe + (H 2 O) n complex and free C 2 H 2 energies. It should be noted that in the optimized (C 2 H 2 ) 2 -Fe + (H 2 O) n complex, two acetylene molecules are bound to Fe + through charge-π and d-π interactions. The results in Figure 4 indicate that the energy levels of the asymptotic C 2 H 2 + (C 2 H 2 ) 2 -Fe + (H 2 O) n complex are higher than those of TS Wn because of the strong, attractive interaction between C 2 H 2 and (C 2 H 2 ) 2 -Fe + (H 2 O) n . Therefore, the acetylene cyclotrimerization depicted in Figure 4 is a barrierless exothermic reaction, at least for n = 1, 8, and 10, within the most stable water cluster structure framework. Assuming that the attractive potential energy between C 2 H 2 and (C 2 H 2 ) 2 -Fe + (H 2 O) n is employed to surmount the TS Wn barrier, benzene can be efficiently formed.
However, if this entrance potential energy is fully dissipated into other vibrational modes, especially in the water cluster during the formation of the optimized (C 2 H 2 ) 3 -Fe + (H 2 O) n complex, the benzene formation process would be very slow because the barrier height measured from the complex is still greater than 10 kcal/mol, which is much higher than the thermal energy.
Finally, we discuss the dynamical aspects of the acetylene cyclotrimerization reaction catalyzed by the Fe + (H 2 O) n complex. The dynamical aspect should be carefully examined because non-IRC dynamics frequently occur in complicated reaction systems [48,49]. In this case, the reaction product cannot be identified only from the static IRC calculations. It is possible that a large reaction exothermicity can provide additional kinetic energy to the atoms in the reaction system, leading to a deviation from the IRC pathway. In addition, it is important to understand the energy transfer dynamics because the acetylene cyclotrimerization process has a large exothermicity associated with stable benzene formation. Motivated by this, we performed molecular dynamics calculations starting from the transition state structure with a Born-Oppenheimer molecular dynamics (BOMD) keyword implemented in the Gaussian09 program [40], where energy gradients were employed to solve the equations of motion of the nucleus. The BOMD calculations were only performed for the Fe + (H 2 O) 8 cluster. We added a total kinetic energy of 1 kcal/mol for the atoms in the Fe + (C 2 H 2 ) 3 moiety. If the trajectory moved toward the (C 2 H 2 ) 3 -Fe + (H 2 O) 8 reactant complex region, the calculation was terminated, whereas if the trajectory moved toward the benzene formation direction, the calculation was performed up to t = 1 ps. The time increment was set at ∆t = 0.5 fs. Since BOMD calculations generally require a large computational time, even at the B3LYP/def2-SVPP level, we only calculated six trajectories showing benzene formation in this study. Here, it is worth noting that the dissociation of benzene from the Fe + (H 2 O) 8 cluster was not observed in all the six trajectories (see below). A typical example is presented in Figure 6, where the potential energy is plotted as a function of simulation time in Figure 6a, and the kinetic energies for the C 6 H 6 , Fe + , and (H 2 O) 8 moieties are plotted in Figure 6b. In this trajectory, a benzene molecule is formed around t ≈ 0.2 ps. Subsequently, a large amount of exothermic energy was distributed in the vibrational energy of benzene. We can also observe the energy transfer from the benzene vibration to the water cluster. At t = 1 ps, approximately 20 kcal/mol of energy was transferred to the atomic kinetic energy (vibrational energy) of the (H 2 O) 8 cluster, leading to the breakage of a few hydrogen bonds. As mentioned previously, we did not observe the dissociation of benzene from the cluster within 1 ps of simulation time, presumably due to the large binding energy of~50 kcal/mol between C 6 H 6 and Fe + (H 2 O) 8 (see Figure 4). The results for the other five trajectories were found to be very similar, as shown in Figure S6 of the supplementary material.
Conclusions and Future Directions
In this study, we performed DFT-level quantum chemistry calculations for the acetylene cyclotrimerization reaction catalyzed by the Fe+(H2O)n cluster to understand possible benzene formation mechanisms in the interstellar medium. Interestingly, we found that benzene is formed concertedly in a single reaction step, involving only one transition state structure, from three acetylene molecules. We confirmed that the obtained reaction path robustly leads to benzene formation by calculating classical trajectories starting from the transition state structure. This single-step scheme is in sharp contrast to the bare Fe+ catalysis case, in which benzene is formed sequentially via two steps, consisting of a single C–C σ-bond formation followed by the formation of the two remaining C–C σ-bonds. It should be emphasized that the benzene formation barrier height for the Fe+(H2O)n cluster catalysis is reduced compared to that for the single-atom Fe+ case. Nevertheless, once three acetylene molecules are attached to the Fe+(H2O)n cluster and their structure is fully relaxed, the reaction pathway still has to surmount a substantial barrier to form benzene, and this barrier height was found to be significantly larger than the thermal energy. We plan to extend the present quantum chemistry study to the C2 and C2H radicals, which are known to exist in the interstellar medium. In that case, we expect that benzene precursor molecules, such as C6Hn (n = 0–5), can be formed via cyclization reaction mechanisms with much lower barriers than the acetylene cyclotrimerization reactions.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27227767/s1. The following are available online, Figure S1: three Kohn-Sham orbitals for the (C2H2)3-Fe+(H2O)8 reactant complex; Figure S2: two Kohn-Sham orbitals at the TS structure for the Fe+(H2O)8 cluster case; Figure S3: comparison of the potential energy curve for the fully relaxed ice structure and the fixed ice structure; Figure S4: comparison of the potential energy profiles for the acetylene cyclotrimerization reactions catalyzed by the Fe+(H2O) complex calculated using the def2-SVPP and def2-TZVP basis sets; Figure S5: comparison of the potential energy profiles for the acetylene cyclotrimerization reactions catalyzed by the Fe+(H2O)12 and Fe+(H2O)18 complexes calculated at the B3LYP(D)/def2-SVPP level; Figure S6: potential energy and atomic kinetic energies obtained from the five BOMD trajectories starting from the transition state structure with an excess energy of 1 kcal/mol, plotted as a function of simulation time. | 5,227.6 | 2022-11-01T00:00:00.000 | [
"Chemistry",
"Environmental Science",
"Physics"
] |
Gradient-based Gradual Pruning for Language-Specific Multilingual Neural Machine Translation
Introduction
In recent years, neural machine translation (NMT) based on the transformer architecture has achieved great success and become the dominant paradigm for machine translation (Sutskever et al., 2014;Vaswani et al., 2017).Multilingual neural machine translation (MNMT), which learns a unified model to translate between multiple languages, has attracted growing attention in the NMT area (Ha et al., 2016;Johnson et al., 2017).The reasons are: 1) From a practical perspective, it significantly reduces the training and inference cost and simplifies deployment in production; 2) Utilizing data from multiple language pairs simultaneously can potentially help improve the translation quality of low-resource or even zero-resource language pairs by transferring knowledge across languages.Despite these benefits, MNMT remains challenging as its performance degrades compared to bilingual counterparts in high-resource languages (Arivazhagan et al., 2019).Previous studies (Wang et al., 2020;Shaham et al., 2023) attribute this degradation to parameter interference: each language is unique and therefore has distinct requirements from the model parameters to capture language-specific information, and fully sharing parameters causes negative interactions across language pairs as languages compete for model capacity.Although naively increasing model size may alleviate parameter interference, large models often suffer from parameter inefficiency and overfitting (Zhang et al., 2020;Arivazhagan et al., 2019).
To address the parameter interference issue more effectively, researchers have explored various methods to allocate language-specific parameters to capture unique linguistic information for each language pair.One line of work focused on introducing extra parameters to the model.For instance, adapter-based approaches (Bapna and Firat, 2019;Philip et al., 2020;Zhu et al., 2021;Baziotis et al., 2022) inject lightweight language-specific modules into the shared model.Another well-known method (Dong et al., 2015) utilizes a shared encoder but a different decoder for each target language.Despite being effective, these methods can become parameter-inefficient as the number of languages in the multilingual model grows.
Another line of work focused on extracting a separate sub-network within the multilingual model for each language pair. In these approaches, each sub-network preserves exclusive parameters to capture the language-specific features and has some parameters shared with other languages (Xie et al., 2021; Lin et al., 2021; Wang and Zhang, 2022; Pham et al., 2022). A recent work (Lin et al., 2021) shows promising results by first training a multilingual model that covers all language pairs, then fine-tuning the trained model on each pair, and finally extracting the sub-networks via magnitude pruning in one step. Although this paradigm is intuitive and straightforward, it can be suboptimal due to the following limitations: (1) magnitude pruning has been shown to be ineffective in a transfer learning context (Sanh et al., 2020), potentially affecting NMT systems in the multilingual scenario; (2) pruning in one single step after fine-tuning can lead to the removal of crucial weights, resulting in lower performance.
This work aims to mitigate the parameter interference issue in MNMT without adding extra parameters and to overcome the aforementioned two limitations of Lin et al. (2021). We propose gradient-based gradual pruning for language-specific MNMT. More specifically, a multilingual base model is first trained, and subsequently the model is simultaneously fine-tuned and pruned on each language pair. Conducting fine-tuning and pruning concurrently allows the model to adapt and optimize the parameters while reducing the model size. Instead of the widely used magnitude scores, we opt for gradient-based scores as the pruning criterion. The percentage of pruned weights is gradually increased from zero to the target level throughout the pruning process. By optimizing the pruning criterion and schedule, we strive to identify optimal sub-networks and limit the parameter interference. Lastly, the resulting sub-networks are integrated into the MNMT model through a final training phase in a language-aware manner.
A large set of experiments is conducted on the IWSLT and WMT datasets, showing that our approach leads to a substantial performance gain of 2.06 BLEU on IWSLT and 1.41 BLEU on WMT. Our contributions can be summarized as follows:
• Our method leads to a significant boost in medium- and high-resource languages and also a reasonable improvement in low-resource languages, suggesting its effectiveness in alleviating parameter interference.
• We provide a comprehensive study of various pruning criteria and schedules in searching optimal sub-networks in the multilingual translation scenario.
• We provide additional analyses of our method by studying the contribution of various sublayers to the overall performance and exploring the relationship between language-specific sub-networks and language families.
Related Work
Standard multilingual neural machine translation systems translate between multiple languages with a unified model.The model can be jointly trained on multiple language pairs by prepending a special token to the source sentence, informing the model about the desired target language (Johnson et al., 2017).Although fully sharing parameters across languages and joint training can enhance knowledge transfer, MNMT suffers from parameter interference and the lack of language-specific parameters for capturing language-specific information, resulting in performance degradation, especially in high-resource language pairs.Various previous works have explored the idea of partially sharing parameters across languages while allowing for language-specific parameters.Sachan and Neubig (2018) investigates different parameter sharing strategies.Blackwood et al. (2018) compares several methods of designing language-specific attention module.Zhang et al. (2021) studies when and where language-specific capacity matters.Additionally, another widely recognized technique, adapter-based, has attracted substantial interest in recent years.Adapter-based approaches (Bapna and Firat, 2019;Zhu et al., 2021;Baziotis et al., 2022) inject additional lightweight languagespecific modules for different language pairs to capture language-specific information.Although effective, these methods increase the parameters for each language pair and thus result in an inference speed decrease.On the contrary, our work introduces no extra parameters to the model and thus has negligible to no impact on the inference speed.For more details, see Appendix E.
To avoid the inference speed decrease, several works (Dong et al., 2015;Purason and Tättar, 2022;Pfeiffer et al., 2022;Pires et al., 2023) explore designing language-aware modules inside the MNMT model.This way, the model can capture languagespecific information without sacrificing the inference speed.For instance, Dong et al. (2015) employs a single shared encoder across all languages but a unique decoder for each target language.However, the model can suffer from the parameter exploration issue as the number of languages in the multilingual model increases.In contrast, our approach maintains a consistent number of total parameters, irrespective of the number of languages.
Recent works allocate language-specific parameters by extracting a unique sub-network for each language pair within the multilingual model, which avoids introducing additional parameters to the model.Different approaches are explored for subnetwork extraction, such as Taylor expansion (Xie et al., 2021), parameter differentiation (Wang and Zhang, 2022), and model pruning (Lin et al., 2021).In this paper, we focus on model pruning because it showed to be more effective and results in promising performance.
In machine learning, model pruning is widely used to remove redundant weights from a neural network while preserving important ones to maintain accuracy (Han et al., 2015, 2016; Frankle and Carbin, 2019; Liu et al., 2019; Sun et al., 2020). Two key components of model pruning are the pruning criterion, which determines the relative importance of weights and which weights to prune, and the pruning schedule, which defines the strategy by which the target pruning ratio is achieved throughout the pruning process. In terms of pruning criteria, magnitude pruning (Han et al., 2015, 2016), which removes weights with low absolute values, is the most widely used method for weight pruning. A recent work on large language models (Sanh et al., 2020) proposed movement pruning, which scores weights by using the accumulated product of weight and gradient and removes those weights shrinking toward zero. We refer to this approach as gradient-based pruning in this work. Regarding pruning schedules, two commonly considered options are one-shot pruning (Frankle and Carbin, 2019) and gradual pruning (Zhu and Gupta, 2017). While one-shot pruning removes the desired percentage of weights in a single step after the completion of fine-tuning, gradual pruning incrementally removes weights, starting from an initial pruning ratio (often 0) and gradually progressing towards the target pruning ratio.
In MNMT, the technique most similar to our approach for extracting sub-networks through model pruning has been proposed by Lin et al. (2021). In their work, sub-networks are searched through magnitude one-shot pruning, which means the model is pruned based on the magnitude of the weights in a single step after the completion of fine-tuning. Different from their method, we opt for the gradient-based pruning criterion, which has proved to be more effective than magnitude pruning in a transfer learning context. Besides, in contrast to pruning in one single step (Lin et al., 2021), we gradually increase the ratio from zero to the target value during fine-tuning, allowing the model to self-correct and recover from previous choices. To the best of our knowledge, this is the first study to explore the effectiveness of gradient-based gradual pruning in the multilingual translation context.
Methodology
Our approach includes three main phases. In the first one (Phase 1 in Algorithm 1), a multilingual model is initially trained using the log-likelihood loss (see Sec. 3.1). Subsequently, the multilingual base model is simultaneously fine-tuned and pruned on each language pair (Phase 2 in Algorithm 1). In this phase, we adopt the gradient-based pruning criterion as the scoring function for each weight and the gradual pruning schedule to identify sub-network masks for the language pairs (see Sec. 3.2). The extracted masks are then jointly used during the final training (Phase 3 in Algorithm 1) as described in Sec. 3.3.
Multilingual Neural Machine Translation
In this work, we adopt the multilingual Transformer (Vaswani et al., 2017) as the backbone of our approach. Following Lin et al. (2021), we use a unified model for multilingual NMT by adding two special language tokens to indicate the source and target languages. Given a set of N bilingual corpora $D_{all} = (D_{s_1 \to t_1}, D_{s_2 \to t_2}, \ldots, D_{s_N \to t_N})$, the multilingual model is jointly trained over the set of all N parallel training corpora. The objective is to maximize the log-likelihood of the target sentence given the source sentence over all corpora, i.e., to minimize the negative log-likelihood. The training loss takes the standard form:

$$L(\theta) = -\sum_{i=1}^{N} \sum_{(X_i, Y_i) \in D_{s_i \to t_i}} \sum_{j=1}^{J} \log P\left(y_{i,j} \mid y_{i,<j}, X_i; \theta\right) \quad (1)$$

where $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,I})$ and $Y_i = (y_{i,1}, y_{i,2}, \ldots, y_{i,J})$ represent the source and target sentences of one sentence pair in the parallel corpus $D_{s_i \to t_i}$, respectively, with source sentence length I and target sentence length J, and with special tokens omitted. The index of the current target word is denoted by j, which ranges from 1 to the sentence length J, and θ represents the model parameters.
Identify Sub-networks Via Pruning
Once the MNMT model is trained, sub-networks are identified by applying our pruning approach. Gradient-based pruning criterion. Inspired by Sanh et al. (2020), we first learn an importance score for each weight in the weight matrices targeted for pruning, and then prune the model based on these importance scores during the simultaneous fine-tuning and pruning process. The importance scores can be represented as follows:

$$S^{(T)}_{i,j} = -\sum_{t=1}^{T} \left(\frac{\partial L}{\partial W_{i,j}}\right)^{(t)} W^{(t)}_{i,j} \quad (2)$$

where $\partial L / \partial W_{i,j}$ is the gradient of the loss $L$ with respect to $W_{i,j}$ in a generic weight matrix $W \in \mathbb{R}^{M \times N}$ of the model, $T$ denotes the number of performed gradient updates, and $S^{(T)}_{i,j}$ denotes the importance score of weight $W_{i,j}$ after $T$ updates. (For detailed information, please refer to Appendix G.)
After scoring each weight using Eq. (2) and ranking the score values, we prune the weights whose importance scores are among the lowest v% (the pruning ratio), regardless of the absolute score values. To this end, a binary mask matrix $\mathbf{M} \in \{0,1\}^{M \times N}$ is derived from the importance scores as:

$$M_{i,j} = \begin{cases} 1, & \text{if } S^{(T)}_{i,j} \text{ is not among the lowest } v\% \text{ of scores} \\ 0, & \text{otherwise} \end{cases} \quad (3)$$
Weights with scores among the lowest v% are assigned a value of 0 in the binary mask matrix and pruned, while the other weights are assigned a value of 1 in the mask and retained.
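A minimal NumPy sketch of the scoring and masking described above: the importance scores accumulate the negative product of gradient and weight over the updates (the learning-rate factor is dropped, as noted in Appendix G), and the mask zeroes out the v% lowest-scored weights. The gradient values below are random stand-ins for the real back-propagated gradients.

import numpy as np

def update_scores(scores, grad, weights):
    # Accumulate the gradient-based importance score:
    # S <- S - (dL/dW) * W   (learning rate omitted; it does not change the ranking)
    return scores - grad * weights

def top_v_mask(scores, pruning_ratio):
    # Binary mask: 0 for the weights whose scores are among the lowest
    # `pruning_ratio` fraction, 1 otherwise.
    threshold = np.quantile(scores, pruning_ratio)
    return (scores > threshold).astype(scores.dtype)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))
S = np.zeros_like(W)
for _ in range(100):                      # stand-in for T gradient updates
    grad = rng.standard_normal(W.shape)   # placeholder for dL/dW
    S = update_scores(S, grad, W)

M = top_v_mask(S, pruning_ratio=0.6)
print("kept fraction:", M.mean())         # roughly 0.4 of the weights survive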
We generate a mask for each matrix that is targeted for pruning in the model and extract a unique sub-network for each language pair: $\theta_{s_i \to t_i} = \theta_0 \odot \mathbf{M}_{s_i \to t_i}$, where $\odot$ denotes the Hadamard product, $s_i \to t_i$ is the language pair, $\mathbf{M}_{s_i \to t_i} \in \{0,1\}^{|\theta|}$ represents the mask of the entire model for pair $s_i \to t_i$, $\theta_0$ denotes the initial model, and $\theta_{s_i \to t_i}$ represents the corresponding sub-network for the pair $s_i \to t_i$.
Gradual pruning schedule. In this work, the pruning ratio ($v\%$ in Eq. (3)) is gradually increased from 0 to the target value $R_p$ through a three-stage process, similar to Zhu and Gupta (2017). In the first stage, spanning $T_1$ training steps, the model remains unpruned with a pruning ratio of 0. In the second stage, which lasts for $T_2$ training steps, the pruning ratio gradually increases from 0 to the predefined threshold $R_p$. In the third stage, the pruning ratio remains constant at the target pruning ratio $R_p$. This strategy is formalized as:

$$R_t = \begin{cases} 0, & t \le T_1 \\ \text{monotonically increasing from } 0 \text{ to } R_p, & T_1 < t \le T_1 + T_2 \\ R_p, & \text{otherwise} \end{cases} \quad (4)$$

where $t$ represents the current training step, $R_t$ the pruning ratio at step $t$, $R_p$ the preset target pruning ratio, and $T_1$ and $T_2$ the total steps of stages 1 and 2, respectively.
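The exact functional form of the ramp in Eq. (4) is not reproduced here; the sketch below assumes the cubic sparsity schedule of Zhu and Gupta (2017), which the paper cites as the basis of its three-stage schedule, and uses the IWSLT-like settings from Appendix C.1 only as example values.

def pruning_ratio(t, T1, T2, target_ratio):
    """Pruning ratio R_t at training step t.

    Stage 1 (t <= T1): no pruning.
    Stage 2 (T1 < t <= T1 + T2): ratio ramps from 0 to target_ratio
        (here with the cubic ramp of Zhu & Gupta, 2017 -- an assumption).
    Stage 3: ratio stays at target_ratio.
    """
    if t <= T1:
        return 0.0
    if t <= T1 + T2:
        progress = (t - T1) / T2
        return target_ratio * (1.0 - (1.0 - progress) ** 3)
    return target_ratio

# Example values: T1 = 4k, T2 = 36k, target ratio 0.6 (cf. Appendix C.1)
for step in (0, 4_000, 13_000, 22_000, 40_000, 80_000):
    print(step, round(pruning_ratio(step, 4_000, 36_000, 0.6), 3))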
Joint Training
Once the sub-networks $\theta_{s_i \to t_i}$, $i = 1, \ldots, N$, for all language pairs are obtained, the multilingual base model $\theta_0$ is further trained with language-aware data batching and structure-aware model updating. For this purpose, batches of each language pair are randomly grouped from the language-specific bilingual corpus (Lin et al., 2021): given a pair $s_i \to t_i$, batches $B_{s_i \to t_i}$ for this pair are randomly grouped from the bilingual corpus $D_{s_i \to t_i}$. Importantly, each batch contains samples from a single language pair, which differs from standard multilingual training where each batch can contain fully random sentence pairs from all language pairs. The model is iteratively trained on batches of all language pairs until convergence. During back-propagation, only the parameters in the sub-network of the corresponding language pair are updated. During inference, the sub-network corresponding to the specific language pair is utilized to generate predictions. Notably, the sub-networks of all language pairs are accommodated in a single model, without introducing any additional parameters.
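A small PyTorch-style sketch of the structure-aware update described above: each batch comes from a single language pair, the forward pass uses that pair's masked weights, and therefore only the sub-network parameters receive non-zero gradients. The single linear layer, the random data, and the masks are toy placeholders, and SGD is used so that zero gradients really mean no update.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
weight = torch.nn.Parameter(torch.randn(8, 8))   # stand-in for one prunable matrix
optimizer = torch.optim.SGD([weight], lr=0.1)

masks = {                                         # toy per-pair binary masks
    "en-de": (torch.rand(8, 8) > 0.6).float(),
    "en-it": (torch.rand(8, 8) > 0.6).float(),
}

def train_step(pair, x, y):
    optimizer.zero_grad()
    # Forward with the pair's sub-network; gradients of masked-out weights are zero,
    # so only the sub-network parameters are updated.
    pred = F.linear(x, weight * masks[pair])
    loss = F.mse_loss(pred, y)
    loss.backward()
    optimizer.step()
    return loss.item()

for pair in ("en-de", "en-it"):                   # language-aware batching: one pair per batch
    x, y = torch.randn(16, 8), torch.randn(16, 8)
    train_step(pair, x, y)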
4 Experiment settings
Data
We run our experiments using two collections of datasets (IWSLT and WMT) having different language coverage and training data sizes. Due to its limited dimensions, the IWSLT data is used to run a deeper analysis of our method, while WMT is used to further simulate real-world imbalanced dataset scenarios. IWSLT. We collect 8 English-centric language pairs, for a total of 9 languages, from IWSLT2014 (Cettolo et al., 2014), with corpus sizes ranging from 89K to 169K, as shown in Appendix A.1. We first tokenize the data with the Moses scripts (Koehn et al., 2007) and further learn shared byte pair encoding (BPE) (Sennrich et al., 2016), with a vocabulary size of 30K, to preprocess the data into sub-word units. To balance the training data distribution, low-resource languages are oversampled using a temperature of T=2 (Arivazhagan et al., 2019). WMT. We collect another dataset comprising 19 languages in total, with 18 languages to-and-from English, from previous years' WMT (Barrault et al., 2020). The corpus sizes range from very low-resource (Gu, 9K) to high-resource (Fr, 37M). More detailed information about the train, dev, and test datasets is listed in Appendix A.2. We apply the shared byte pair encoding (BPE) algorithm using SentencePiece (Kudo and Richardson, 2018) to preprocess the multilingual sentences, with a vocabulary size of 64K. Since the data is extremely imbalanced, we apply oversampling with a larger temperature of T=5. We categorize the language pairs into three groups based on their corpus sizes: low-resource (<1M), medium-resource (≥1M and <10M), and high-resource (≥10M).
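The temperature-based oversampling mentioned above is usually written as p_i ∝ (D_i / Σ_j D_j)^(1/T), where D_i is the corpus size of pair i, which is the standard formulation of Arivazhagan et al. (2019); the exact implementation details in this paper are not given, so the snippet below is only a sketch of that standard scheme with toy corpus sizes.

import numpy as np

def sampling_probs(corpus_sizes, temperature):
    """Temperature-based sampling probabilities, p_i ∝ (D_i / sum_j D_j)^(1/T).

    T = 1 reproduces the data distribution; larger T flattens it,
    i.e. oversamples low-resource pairs (Arivazhagan et al., 2019).
    """
    sizes = np.asarray(corpus_sizes, dtype=float)
    p = sizes / sizes.sum()
    p = p ** (1.0 / temperature)
    return p / p.sum()

# Toy corpus sizes (sentences): one low-, one medium-, one high-resource pair
sizes = [9_000, 1_000_000, 37_000_000]
print(sampling_probs(sizes, temperature=1))   # proportional to the data
print(sampling_probs(sizes, temperature=5))   # WMT setting: much flatter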
Model Settings
In our experiments, we adopt models of different sizes to adjust for the variation in dataset sizes.Given the smaller size of the IWSLT benchmark, we opt for Transformer-small following Wu et al. (2019).For the WMT experiment, we choose Transformer-base.For more training details please refer to Appendix B. To have a fair comparison with Lin et al. (2021), our method is applied to two linear sub-layers: attention and feed-forward.3
Terms of Comparison
We compare our method with two well-known and adopted technologies: the adapter-based (Bapna et al., 2019) and the method utilizing a shared encoder but separate decoders (Dong et al., 2015).
• Adapter.128,Adapter.256, and Adapter.512 -Inject a unique lightweight adapter to the shared model for each language pair with varying bottleneck dimensions 128, 256, and 512, respectively (Bapna et al., 2019).Following Pires et al. ( 2023), we train the entire model, including all adapters, from scratch for training stability.
• SepaDec -Use a shared encoder for all language pairs and a separate decoder for each target language (Dong et al., 2015).
As described in Section 3, our GradientGradual approach employs a gradient-based pruning criterion, and pruning occurs during finetuning, with the pruning ratio gradually increasing.To investigate the effectiveness of our pruning strategy for sub-network extraction in multilingual translation, we further compare our method with three pruningbased variants by combining different pruning criteria and schedules.
• MagnitudeOneshot (LaSS)4 -Extract subnetworks through pruning: the pruning criterion is the magnitude values of the weights; the pruning is performed in one step after the completion of finetuning (Lin et al., 2021).
• MagnitudeGradual -Extract sub-networks through pruning: the pruning criterion is the magnitude values of weights; the pruning ratio increases gradually during finetuning.
• GradientOneshot -Extract sub-networks through pruning: the pruning criterion is gradient-based; pruning is performed in one step after the completion of finetuning.
Evaluation
We report the tokenized BLEU (Papineni et al., 2002) on individual languages on the IWSLT dataset, and average tokenized BLEU of low-, medium-, and high-resource groups on the WMT dataset using the SacreBLEU tool (Post, 2018).We provide detailed results on the WMT dataset in Appendix D.1.In addition, we show the results of win ratio based on tokenized BLEU in Appendix D.2, and the results of COMET (Rei et al., 2020) and chrF (Popović, 2015) scores in Appendix D.3.
Experiment Results on IWSLT
This section shows the results of our approach on the IWSLT dataset, along with a comparative analysis against the MNMT baseline and prior research works.We also provide a comprehensive analysis of our method from various perspectives. 5
Main Results
In Table 1, the approaches that introduce language-specific capacity outperform the multilingual baseline in most languages. The consistent improvements in average scores confirm the necessity of language-specific parameters in the MNMT model. Subsequently, we report the results of the pruning-based approach MagnitudeOneshot, which extracts sub-networks for language pairs through magnitude pruning in one step. MagnitudeOneshot obtains a 0.50 BLEU score gain, demonstrating the potential of designing language-specific parameters and mitigating parameter interference without increasing the parameter count. Furthermore, our proposed GradientGradual approach, which leverages gradient-based information as the pruning criterion and gradually increases the pruning ratio from 0 to the target value, delivers a substantial improvement of 2.06 BLEU scores over the baseline model and achieves the best performance among all approaches, suggesting the effectiveness of our method in mitigating parameter interference in the MNMT model.
Pruning criteria and schedules
In the last two rows of Table 1 we further explore the individual impact of the two key factors that contribute to the performance improvement, namely the gradient-based pruning criterion and the gradual pruning schedule. MagnitudeGradual leads to a 1.68 BLEU score improvement over the baseline model and a 1.18 BLEU score improvement over MagnitudeOneshot. GradientOneshot leads to a 1.73 BLEU score gain over the baseline and a 1.23 BLEU score gain over MagnitudeOneshot. The results demonstrate that both the gradient-based pruning criterion and the gradual pruning schedule have the potential to significantly improve multilingual translation performance when implemented separately and to surpass the performance of MagnitudeOneshot. Nevertheless, the maximum average improvement is obtained when the gradient-based pruning criterion and the gradual pruning schedule are combined, as demonstrated in our proposed GradientGradual approach.
Robustness with respect to pruning ratios
Results presented in Table 1 demonstrate that all 4 methods with different pruning criteria and schedules lead to varying levels of performance improvement. To gain a more comprehensive understanding of the relationship between performance and pruning ratios, we further visualize the average performance of all language pairs across the 4 methods with different pruning ratios. Results in Figure 1 show that the optimal performance of these 4 methods is obtained at slightly different pruning ratios. In addition, although MagnitudeGradual and GradientOneshot yield a considerably higher performance gain (around 1.7 BLEU) than MagnitudeOneshot (0.5 BLEU) at their optimal pruning ratios, they both show instability across pruning ratios. Specifically, GradientOneshot suffers from a significant performance drop from middle to high pruning ratios, and MagnitudeGradual experiences an unexpected performance drop at the specific pruning ratio of 0.6. In contrast, GradientGradual demonstrates a more robust behavior across a wide range of pruning ratios, consistently outperforming the other methods except for MagnitudeGradual at a very high pruning ratio of 0.9. These results further verify the effectiveness and robustness of our GradientGradual approach.
Which sub-layer matters?
In this work, our approach is applied to the attention and feed-forward sub-layers. To better understand where the parameter interference is more severe and where language-specific parameters are essential, we perform ablation experiments by applying our approach to the attention and feed-forward sub-layers separately. The results in Table 2 show that applying our approach to feed-forward sub-layers yields a limited average performance gain (+0.49 BLEU), while applying it to attention sub-layers leads to a notable gain (+1.22 BLEU), suggesting that parameters in attention sub-layers are more language-specific. This finding aligns with previous work (Clark et al., 2019), which shows that specific attention heads specialize in distinct aspects of linguistic syntax. Given the unique syntax patterns in different languages, parameters in attention sub-layers are possibly more language-specific and suffer more severe parameter interference. Consequently, applying our approach to attention sub-layers yields a notable gain. However, the largest average performance gain (+2.06 BLEU) is achieved when applying our approach to both the attention and feed-forward sub-layers. Our results suggest that parameter interference exists in both sub-layers, but is more severe in attention sub-layers.
Similarity Scores and Phylogenetic Tree
To gain deeper insights into the effectiveness and interpretability of our method and to evaluate its capability to extract high-quality language-specific sub-networks, we compute the similarity of masks obtained with our approach, and we use these similarities to reconstruct the phylogenetic trees of languages.We present the results of En→X language pairs in this section and the results of X→En, which exhibit similar patterns as En→X in Appendix F.
Table 2: Sub-layer Ablation Results. Baseline denotes the multilingual Transformer model. Attn denotes applying our approach to attention sub-layers only. Ff denotes the approach applied to the feed-forward sub-layers only. AttnFf represents applying our approach to both the attention and feed-forward sub-layers.

Similarity scores are determined by the proportion of shared parameters between two language pairs (Lin et al., 2021), which can be obtained by dividing the number of shared "1" values in the two masks by the number of "1" values in the first mask, as illustrated in the equation below:

$$Sim(\mathbf{M}_1, \mathbf{M}_2) = \frac{\|\mathbf{M}_1 \odot \mathbf{M}_2\|_0}{\|\mathbf{M}_1\|_0}$$

where $\|\cdot\|_0$ is the $L_0$ norm, $\mathbf{M}_1$ and $\mathbf{M}_2$ represent the binary masks of two language pairs, and $\|\mathbf{M}_1 \odot \mathbf{M}_2\|_0$ represents the number of shared 1s, i.e., the number of shared parameters, in these two language pairs. Intuitively, languages within the same family share a higher linguistic similarity, implying an increased likelihood of shared parameters and higher similarity scores. Conversely, languages that are linguistically distant from one another tend to possess more distinct language-specific characteristics, which implies lower similarity scores.
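A minimal sketch of the similarity score just described: the number of parameters kept by both masks, divided by the number kept by the first mask. The random masks below are toy stand-ins; note that the measure is symmetric in practice here because the same pruning ratio is used for every pair, so both masks keep the same number of parameters.

import numpy as np

def similarity(mask1, mask2):
    # Sim(M1, M2) = ||M1 ⊙ M2||_0 / ||M1||_0:
    # shared kept parameters over parameters kept by the first mask.
    shared = np.count_nonzero(mask1 * mask2)
    return shared / np.count_nonzero(mask1)

rng = np.random.default_rng(0)
m_en_it = (rng.random(10_000) > 0.6).astype(int)   # toy masks, 60% pruned
m_en_es = (rng.random(10_000) > 0.6).astype(int)
print(round(similarity(m_en_it, m_en_es), 3))       # ~0.4 for unrelated random masks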
Figure 2 shows the similarity scores of En→X language pairs.Italian (It) and Spanish (Es) obtained the highest score, followed by German (De) and Dutch (Nl).The third highest score is obtained by Arabic (Ar) and Farsi (Fa).In addition, we noticed that within the European or Afro-Asiatic language groups, similarity scores between languages are relatively high, while lower scores are often observed when comparing a language from the European group to one from the Afro-Asiatic group.The results demonstrate that similarity scores obtained with our method are highly positively correlated with language family clustering (see Table 14 in Appendix F).This implies the capability of our GradientGradual approach to generate high-quality language-specific sub-network.
Furthermore, to quantify the proportion of language-specific parameters and construct a phylogenetic tree of languages, we compute language distance scores as: Dis(M1, M2) = 1 − Sim(M1, M2).
With these scores calculated between every two language pairs, the phylogenetic tree is constructed according to a weighted least-squares criterion (Makarenkov and Leclerc, 1999) using the T-REX tool (Boc et al., 2012) 6 .Figure 3 demonstrates the constructed phylogenetic tree is highly aligned with the language families shown in Table 14 in Appendix F, confirming the capability of our approach to extract high-quality language-specific sub-networks.
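The paper builds the tree with the T-REX tool under a weighted least-squares criterion; purely to illustrate how a tree can be derived from the pairwise distance scores Dis = 1 − Sim, the sketch below applies generic average-linkage hierarchical clustering with SciPy to a made-up distance matrix (the values are not the paper's).

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

langs = ["It", "Es", "De", "Nl", "Ar", "Fa"]
# Made-up symmetric distance matrix (1 - similarity); smaller = more shared parameters.
D = np.array([
    [0.00, 0.30, 0.45, 0.46, 0.55, 0.56],
    [0.30, 0.00, 0.44, 0.47, 0.54, 0.55],
    [0.45, 0.44, 0.00, 0.32, 0.57, 0.58],
    [0.46, 0.47, 0.32, 0.00, 0.58, 0.59],
    [0.55, 0.54, 0.57, 0.58, 0.00, 0.35],
    [0.56, 0.55, 0.58, 0.59, 0.35, 0.00],
])

Z = linkage(squareform(D), method="average")    # generic agglomerative clustering
tree = dendrogram(Z, labels=langs, no_plot=True)
print(tree["ivl"])                               # leaf order groups related languages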
Experiment Results on WMT
On the highly imbalanced WMT dataset with the Transformer-base model, we compare our method with the multilingual baseline and the prior work MagnitudeOneshot, which shares the same philosophy as our method: both our approach and MagnitudeOneshot introduce no additional parameters to the model and have no negative impact on the inference speed.The results in Table 3 show that our method outperforms both the baseline and the MagnitudeOneshot across low-, medium-, and high-resource language pairs, reconfirming the effectiveness of our approach.More specifically, we observe an average improvement of 0.91 BLEU on low-resource, 1.56 BLEU on medium-resource, and 1.67 BLEU on high-resource language pairs over the baseline.These results shed light on three aspects.Firstly, the performance improvement becomes larger when the number of training resources increases, which validates findings in previous research works, suggesting that high-resource language pairs tend to be more negatively affected by the parameter interference issue in a unified multilingual model.As a result, mitigating the interference issue can result in more notable enhancements for high-resource languages.Secondly, while previous works suggest that low-resource languages often benefit from knowledge transfer, our results suggest that parameter interference also harms the performance of low-resource languages.By mitigating the interference issue, the performance of low-resource languages can also be improved.Finally, the consistent improvement of our method compared to MagnitudeOneshot across language groups of all resource sizes suggests that our gradient-based gradual pruning approach is more effective in identifying optimal sub-networks and mitigating the parameter interference issue in the multilingual translation scenario.
Conclusion
In a standard MNMT model, the parameters are shared across all language pairs, resulting in the parameter interference issue and compromised performance.In this paper, we propose gradient-based gradual pruning for multilingual translation to identify optimal sub-networks and mitigate the interference issue.Extensive experiments on the IWSLT and WMT datasets show that our method results in large gains over the normal MNMT system and yields better performance and stability than other approaches.Additionally, we observe that the interference issue can be more severe in attention sublayers and it is possible to reconstruct a reliable phylogenetic tree of languages using the languagespecific sub-networks generated by our approach.All the experiments confirm the effectiveness of our approach and the need for better training strategies to improve MNMT performance.
Limitations Modelling
In this work, we explore our approach in the English-centric multilingual machine translation setting.However, we believe that the effectiveness of our method extends to the real-world non-English-centric scenario.We will explore this direction in future work.
Regarding model capacity, we adopt Transformer-small and Transformer-base for IWSLT and WMT experiments, respectively, to reduce the training cost.Given the extensive coverage of languages in the WMT dataset, using an even larger model like Transformer-big (Vaswani et al., 2017) may further boost the overall translation performance, especially in high-resource directions, but we do not expect any difference in the relative quality of the tested methods (GradientGradual > MagnitudeOneshot > Baseline).This has been confirmed when moving from Transformer-small to Transformer-base in our experiments.
Another limitation of our work is that the current version of our algorithm applies the same pruning ratio to all language pairs and, hence, prunes the same percentage of parameters for each language.However, language pairs with different sizes of available training data may have different optimal pruning ratios.For example, high-resource language pairs may need smaller pruning ratios and preserve sufficient parameters to process and capture more complex information in the abundant data.To potentially improve the gains and have a method able to differently behave with different data conditions, future work could explore a method to automatically identify appropriate pruning ratios for different languages.
Training
In this work, we aim to search for a sub-network for each language pair within the multilingual model to mitigate the parameter interference issue and improve the performance.While our approach avoids introducing additional parameters, the 3-phase training process (training a base model, searching for sub-networks, and joint training) introduces additional complexity to the training pipeline compared to the standard end-to-end multilingual model training, and demands computational resources for the sub-network searching phase.
A.2 WMT dataset
Table 5 provides detailed information about the WMT data.
B Model details
In this section, we provide detailed model information in our experiments.IWSLT.Given the small scale of the data in the IWSLT dataset, we adopt the Transformer-small architecture with 4 attention heads: L = 6, d = 512, n head = 4 and d ff = 1024.WMT.For the WMT experiment, we adopt a Transformer-base architecture with 8 attention heads: L = 6, d = 512, n head = 8 and d ff = 2048.
C Training details
As shown in section 3, our approach includes three phases: training a multilingual base model (Phase 1), identifying sub-networks through pruning (Phase 2), and joint training (Phase 3).In this section, we provide the details of the hyperparameters of these 3 phases in our experiments.
C.1 IWSLT
In Phase 1, we train the multilingual base model with the same set of hyper-parameters as in Lin et al. (2021).More specifically, we optimize parameters with Adam (Kingma and Ba, 2015) (β1 = 0.9, β2 = 0.98), a learning rate schedule of (5e-4,4k), dropout of 0.1 and label smoothing of 0.1.The max tokens per batch is set to 262144.The maximum update number is set to 160K with a checkpoint saved every 500 updates, and the patience for early stop training is set to 30.In Phase 2, we set the max tokens to 16384, and dropout to 0.3.The training steps of 3 stages are set to 4K, 36K, and 40K.The best performance of the final model is achieved with a pruning ratio of 0.6 in this phase.The other settings are as same as in Phase 1.In Phase 3, we keep the same settings as Phase 1, except we apply masks on the model.
C.2 WMT
In Phase 1, the parameters are as same as in Lin et al. (2021).We train the multilingual base model with Adam (Kingma and Ba, 2015) (β1 = 0.9, β2 = 0.98), a learning rate schedule of (5e-4,4k), dropout of 0.1 and label smoothing of 0.1.The max tokens per batch is set to 524288.The maximum update number is set to 600K with a checkpoint saved every 1K updates, and the patience for early stop training is set to 30.In Phase 2, the max tokens per batch are set to 20K, 40K, 80K, and 160K for languages with training data sizes >10K, >100K, >1M, and >10M.The training steps of 3 stages are set to 4K, 16K, and 20K.The best performance of the final model is achieved with a pruning ratio of 0.2 in this phase.In Phase 3, we keep the same settings as Phase 1, except we apply masks on the model.
D.1 Detailed BLEU scores on WMT dataset
Table 9: Average win ratios of low- (<1M), medium- (≥1M and <10M), and high-resource (≥10M) language pairs of our GradientGradual approach over the MagnitudeOneshot approach.

Considering the relatively extensive number of languages involved in the WMT dataset, we present average scores for the low-, medium-, and high-resource groups to offer an overview of performance in different data situations in Table 3. For the detailed results, we provide BLEU scores of the languages in the low-, medium-, and high-resource groups in Table 6, Table 7, and Table 8, respectively.
D.2 Win ratio results
To gain further insights into the performance of our approach, we present the results of win ratio (WR) based on tokenized BLEU, which denotes the percentage of languages where our approach outperforms the other.We choose the MagnitudeOneshot (LaSS) as a strong baseline to compare with.Table 1 shows that our GradientGradual approach outperforms MagnitudeOneshot across all languages on the IWSLT dataset, i.e., the win ratio is 100%.Table 9 presents the win ratio (WR) results on the more imbalanced WMT dataset.The results show that our approach achieves win ratios of 60%, 86% and 83% on low-, medium-, and high-resource languages, further verifying the superiority of our approach across different data sizes and especially on medium-, and high-resource languages.
D.3 COMET and chrF scores
We present the averaged COMET and chrF scores of all language pairs on the IWSLT dataset in Table 10 and on the WMT dataset in Table 11.
E Additional metrics
In this section, we compare our method with various approaches from the perspectives of total model parameter counts (|θ all |), parameter counts of individual languages (|θ l |) during inference, disk storage, inference speed, and the average scores of all language pairs as performance.We present the primary results in Table 12 and detailed inference speed results of our method in Table 13.
The results are obtained with the Transformer-small architecture on the IWSLT dataset.
Parameter count. As shown in Table 12, our approach introduces no additional parameters compared to the multilingual baseline. Inference speed. In our approach, the sub-network mask matrices are represented as binary matrices; they are obtained during training and directly applied to the attention and feed-forward sub-layers during inference. Compared to the compute-intensive multiplication between dense weight matrices, multiplying a binary mask matrix with a weight matrix in some sub-layers introduces minimal overhead and, therefore, has a negligible impact on inference speed. In addition, zero elements in the masked weight matrix could result in faster inference due to the possibility of avoiding unnecessary arithmetic operations. We report tokens/second on the IWSLT De→En test set in Table 13. The batch size is always set to 1 and the result is averaged over 5 runs. Speed GPU is measured using a single NVIDIA A100 GPU, and Speed CPU is measured using a single-threaded Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz.
While the best performance of our approach is achieved with the pruning ratio set to 0.6, as shown in Figure 1, we provide inference speed results with the pruning ratio ranging from 0.1 to 0.9 to offer a more comprehensive analysis of inference speed. Although Speed GPU and Speed CPU differ in absolute value, they show a similar pattern across different pruning ratios, and we take Speed GPU as an example for the analysis. The results indicate an improvement in the inference speed of our approach when compared with the multilingual NMT baseline within the pruning ratio range of 0.6 to 0.9, and the highest inference speed is obtained when the pruning ratio is set to 0.9. We attribute this speed improvement to the avoidance of unnecessary operations with element 0. Additionally, for pruning ratios smaller than 0.6, the inference speed is slightly slower than the baseline, but not statistically significantly different. Performance. Our approach outperforms SepaDec by 1.53 BLEU scores and surpasses the Adapter-based approaches with 128, 256, and 512 bottleneck dimensions by 0.72, 0.73, and 0.51 BLEU scores, respectively. Additionally, compared to MagnitudeOneshot, the work most similar to ours, our approach outperforms it by 1.53 BLEU scores.
Based on the comprehensive analysis above, our approach is particularly advantageous when prioritizing performance and inference speed is crucial, and some additional disk requirements are considered acceptable.
F Similarity Scores and Phylogenetic Tree of X→En language pairs
Table 14 reports detailed language family information for the languages in the IWSLT dataset.
In particular, it includes the cluster, branch, and script of each language.Languages belonging to the same cluster and branch are expected to be linguistically closer to each other and have relatively high similarity scores.
Figure 4 shows the similarity scores of X→En language pairs.The results demonstrate that the similarity scores of languages belonging to the same cluster are relatively high.Italian (It) and Spanish (Es) obtained the highest score, followed by German (De) and Dutch (Nl).The third highest score is obtained by Arabic (Ar) and Hebrew (He).Besides, the similarity scores of languages between two distinct language clusters are relatively low, with the lowest score obtained by Dutch (Nl), a European language, and Farsi (fa), an Afro-Asiatic language.Figure 5 shows the corresponding phylogenetic tree obtained with distance scores.
G Gradient-based pruning criterion
In this paper, we apply gradient-based pruning to MNMT.This approach uses gradient-based information to identify the language-specific subnetworks.More specifically, to identify which weights to prune in a given weight matrix W, a scoring matrix S and a binary mask matrix M are introduced in association with the weight matrix.
Each parameter in the score matrix is intended to capture and learn the importance of the corresponding weight, and each element in the binary mask is assigned a value of either 0 or 1 according to whether the corresponding weight is pruned or retained.Weights with relatively low scores in the score matrix are considered less important and assigned a value of 0 in the binary mask matrix.Score parameters are learned and updated iteratively during the training process.The scores of all the weights, both pruned and retained weights, are updated.The updating of scores can change the relative importance of different weights and affect their score distribution.This process enables the model to self-correct by allowing pruned weights to come back.
In the following demonstration, we show how the learned scores are based on gradient information, as depicted in Eq. (2). In the forward pass of the training process, the masking step, where the output is 1 if the input (in this context, the score) is above a threshold and 0 otherwise, is performed after the linear operation. The output of the linear operation and masking can be calculated as $a_i = \sum_{k=1}^{N} W_{i,k} M_{i,k} x_k$. During back-propagation, the gradients of the learnable parameters are computed to update these parameters and facilitate the learning process. However, the masking step introduces a non-differentiable behavior at the threshold point. Besides, the constant output of 1 or 0 results in a gradient of 0 everywhere it is defined. This can lead to the so-called "vanishing gradient" issue, which arises when the gradients become very small or vanish at some point during back-propagation. As a result, the flow of useful gradient information is hindered, making it difficult to train the model effectively. Thanks to Bengio et al. (2013), we mitigate this issue by employing the straight-through estimator. More specifically, during back-propagation, the masking step is ignored and the gradient after the masking step flows "straight through" to the step before the masking step. As a result, the gradients of the loss L with respect to $S_{i,j}$ and $W_{i,j}$ can be calculated as in Eq. (5) and Eq. (6):

$$\frac{\partial L}{\partial S_{i,j}} = \frac{\partial L}{\partial a_i} W_{i,j} x_j \quad (5)$$

$$\frac{\partial L}{\partial W_{i,j}} = \frac{\partial L}{\partial a_i} M_{i,j} x_j \quad (6)$$

From Eq. (6), we derive $\frac{\partial L}{\partial a_i} = \frac{\partial L}{\partial W_{i,j}} \frac{1}{M_{i,j}} \frac{1}{x_j}$. By omitting the binary mask term $M_{i,j}$ as in Sanh et al. (2020), we obtain $\frac{\partial L}{\partial a_i} = \frac{\partial L}{\partial W_{i,j}} \frac{1}{x_j}$. Inserting this result into Eq. (5) yields $\frac{\partial L}{\partial S_{i,j}} = \frac{\partial L}{\partial W_{i,j}} \frac{1}{x_j} W_{i,j} x_j$. Therefore, the gradient of L with respect to $S_{i,j}$ can be represented as $\frac{\partial L}{\partial S_{i,j}} = \frac{\partial L}{\partial W_{i,j}} W_{i,j}$. The importance score $S_{i,j}$ after T gradient updates can then be represented as:

$$S^{(T)}_{i,j} = -\alpha_i \sum_{t=1}^{T} \left(\frac{\partial L}{\partial W_{i,j}}\right)^{(t)} W^{(t)}_{i,j}$$

where T denotes the number of gradient updates and $\alpha_i$ is the learning rate used for the scores during the training process. In our method, a specific percentage of weights is pruned based on the distribution of the importance score values, regardless of the absolute score values. The learning rate $\alpha_i$, which remains constant across all score parameters, does not impact this distribution and can be disregarded for simplicity without affecting the pruning outcome, as shown in Eq. (2).
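A minimal PyTorch sketch of the masked linear operation with a straight-through estimator, as described above: the forward pass applies a hard mask derived from the scores, while the backward pass treats the masking as the identity so that gradients reach the scores, giving exactly ∂L/∂S_{i,j} = (∂L/∂a_i) W_{i,j} x_j. Shapes, data, the random score initialization, and the thresholding helper are toy placeholders, not the paper's implementation.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
W = torch.randn(8, 16, requires_grad=True)     # weight matrix targeted for pruning
S = torch.randn(8, 16, requires_grad=True)     # learned importance scores (toy init)
x = torch.randn(4, 16)                          # toy input batch
target = torch.randn(4, 8)

def hard_mask(scores, pruning_ratio):
    # 0 for the lowest `pruning_ratio` fraction of scores, 1 otherwise.
    k = int(scores.numel() * pruning_ratio)
    threshold = scores.flatten().kthvalue(k).values if k > 0 else scores.min() - 1
    return (scores > threshold).float()

mask = hard_mask(S, pruning_ratio=0.6)
# Straight-through estimator: forward uses the hard mask, backward behaves as the
# identity with respect to the scores (Bengio et al., 2013).
mask_ste = mask.detach() + S - S.detach()

out = F.linear(x, W * mask_ste)
loss = F.mse_loss(out, target)
loss.backward()

# S.grad now holds (dL/da) * W * x summed over the batch -- the quantity whose
# running (negative) sum defines the importance score in Eq. (2).
print(S.grad.abs().mean().item(), W.grad.abs().mean().item())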
Figure 3 :
Figure 3: The built tree positively correlates with language families.It and Es are in the Romance branch; De and Nl are in the Germanic branch.Fa, Ar, and He are similar Afro-Asiatic languages; with Fa and Ar written in Arabic script.
Figure 4 :
Figure 4: Similarity scores (%) of X→En language pairs.The scores are represented on both the x-axis and y-axis.
Figure 5 :
Figure 5: Phylogenetic tree built with X→En distance scores obtained with our method. The built tree shows a strong positive correlation with language families. It, Es, De, Nl, and Pl are European languages written in Latin script, with It and Es in the Romance branch and De and Nl in the Germanic branch. Fa, Ar, and He are similar Afro-Asiatic languages, with Fa and Ar written in Arabic script.

[Fragment of Algorithm 1, Phase 3 (joint training): given the corpora D_all = (D_{s_1→t_1}, ..., D_{s_N→t_N}) and the sub-networks θ_All = (θ_{s_1→t_1}, ..., θ_{s_N→t_N}), while θ_All has not converged, for each pair s_i→t_i, further train θ_{s_i→t_i} on D_{s_i→t_i}.]
Table 1 :
Average En ↔ X BLEU score gain of the sub-network extraction methods on the IWLST dataset.
Table 4 :
Statistics of IWSLT data with similar languages organized into groups.
Table 5 :
Statistics of WMT data with similar languages organized into groups.
Table 6 :
BLEU score over baseline of each pair in the low-resource group (< 1M) on the WMT dataset.
Table 7 :
BLEU score over baseline of each pair in the medium-resource group (1M-10M) on the WMT dataset.
Table 8 :
BLEU score over baseline of each pair in the high-resource group (>10M) on the WMT dataset.
Table 10 :
Average COMET and chrF scores across all language pairs of various approaches on the IWSLT dataset.
Table 12 :
Comparison of performance, parameter count, and disk storage among different approaches.
Table 13 :
Tokens/second comparison of our approach against the baseline on the IWSLT De→En test set.
Our approach requires some additional disk storage for storing indices. However, the most significant disk storage requirement is from SepaDec due to the largest number of total parameters. | 9,730 | 2023-01-01T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Research on Crowdsourcing Price Game Model in Crowd Sensing
Abstract: Crowd sensing is an innovative data acquisition method that combines the sensing capability of mobile devices with the idea of crowdsourcing; it is a new application mode arising from the development of the Internet of Things. The sensory data that mobile users can provide is limited, and multiple crowdsourcing parties must share this limited supply. At the same time, the budget each crowdsourcing party can spend is limited, yet enough mobile users are needed to complete the sensing tasks so that the wisdom of the crowd can actually be exploited. In this process there is inevitably a game between the crowdsourcing parties and the mobile users. Most existing research considers a single crowd sensing system in which a group of mobile users directly share or compete for the task opportunities and rewards offered by one crowdsourcing party; the behavior of multiple crowdsourcing parties and their bilateral interaction with mobile users has not been studied clearly or in a targeted way. This paper models and analyzes the dynamic evolution process of crowdsourcing in crowd sensing. Based on the characteristics of the non-cooperative game among crowdsourcing parties and the corresponding Nash equilibrium, we derive a sensing plan for mobile users and use the stability analysis of an iterative algorithm to explore how to better match the capabilities of mobile users with the demands of the crowdsourcing parties. Our theoretical analysis and simulation results verify the dynamic evolution model of crowdsourcing in crowd sensing and suggest a method to improve the efficiency of crowdsourcing.
The prosperity of crowd sensing computing will inevitably lead to a prosperous sensing market in which multiple crowdsourcing parties coexist. Each crowdsourcing party has different functions and publishes different tasks, and users select crowdsourcing parties according to their own preferences. Since the sensing resources of mobile users are not unlimited, the amount of sensory data a user can contribute is fixed and will be shared among multiple crowdsourcing parties. Each crowdsourcing party, in turn, arranges its demand for sensing resources reasonably according to its own budget and completes its task by attracting enough mobile users, so that the collective intelligence contained in the sensing resources can be fully exploited. In this crowd sensing process there is a game between the crowdsourcing parties and the mobile users. Studying the relationship among multiple crowdsourcing parties, as well as the relationship between the crowdsourcing parties and the mobile users, makes it possible to better match the capabilities of mobile users with the demands of the crowdsourcing parties. This paper models and analyzes the dynamic evolution behavior of multiple crowdsourcing parties in crowd sensing networks [5].
Crowd Sensing Network System
The system architecture consists of three parts: the server platform, the data consumers, and the data providers, and it includes three layers: the perception layer, the network layer, and the application layer. After the server accepts a service request from a data consumer, the sensing task is assigned to mobile users. The collected data is then returned over the network; the cloud server in this crowd sensing network system gathers and processes the acquired data and carries out further activities based on the processed results [6][7][8][9].
The crowdsourcing party distributes the sensing task to the mobile users (the sensing participants); after completing the task, the participants report their data to the server, the cloud service center collects and processes the sensory data, and, after processing, provides the data to the requesting data consumer. Through the entire crowd sensing system, data sensing, collection, and service provision are achieved; crowd sensing is thus a large-scale integrated service with distributed computing properties, and the process is shown in Fig. 1. The basic workflow is as follows. The crowdsourcing party divides a sensing task (for example, collecting air quality data) into several sensing subtasks (for example, collecting air quality data for 5 min) and publishes the tasks and the corresponding rewards to mobile users through open calls. Mobile users participating in crowd sensing tasks need to be paid to compensate for their resource consumption, such as their time, energy, mobile device battery, CPU, and other resources.
After a mobile user learns of a sensing task, the user decides, according to its own situation, whether to participate in the sensing activity and, when multiple crowdsourcing parties are present in the market, which crowdsourcing parties to serve and with what workload. Participating mobile users collect data and report it to the crowdsourcing party under privacy protection. The crowdsourcing party evaluates the sensory data and pays the mobile users.
The crowdsourcing party processes and analyzes all sensory data and builds various crowd sensing applications such as environmental monitoring, health monitoring, urban management, public safety, and intelligent transportation. The above process may take multiple rounds, and both the crowdsourcing parties and the mobile users are likely to adjust their behavior according to the market's reaction in order to meet their respective goals.
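A toy sketch of one round of the workflow just described (not the paper's model): a crowdsourcing party publishes a per-unit price, each mobile user decides to participate only if the price covers an assumed private sensing cost, data is collected, and payments are made until the budget runs out. All quantities and the decision rule are placeholders used purely for illustration.

import random

random.seed(0)

def run_round(price_per_unit, user_costs, budget):
    """One round: publish a price, let users decide, collect data, pay out."""
    collected, spent = 0, 0.0
    for cost in user_costs:              # each user's private cost per unit of sensing
        if price_per_unit >= cost and spent + price_per_unit <= budget:
            collected += 1               # one unit of sensory data reported
            spent += price_per_unit      # payment to the participating user
    return collected, spent

user_costs = [random.uniform(0.5, 2.0) for _ in range(100)]
for price in (0.6, 1.0, 1.8):
    print(price, run_round(price, user_costs, budget=60.0))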
Crowd Sensing Perception Incentive Mechanism
Incentive mechanism design is an important issue in crowd sensing networks. The crowdsourcing party relies on contributions from a large number of mobile users to fully exploit the advantages of collective intelligence. Mobile users, however, consume their own resources when participating in sensing, such as storage space, computing power, and communication bandwidth, and they may face a risk of privacy leakage. Therefore, the crowdsourcing party needs to compensate mobile users for their resource consumption. Only by designing an incentive mechanism that satisfies mobile users can the crowdsourcing party attract enough users to provide information, guarantee the goal of data collection, and make the sensing function work [10][11][12].
The main goal of a crowd sensing incentive mechanism is to encourage more participants to provide information. Under suitable incentives, the server platform can obtain more sensory data, participants take part in the sensing task actively, and the data fed back is of high quality and trustworthy. The incentive model can be summarized by the following pattern: an incentive mechanism (I) uses a specific mechanism (M) to maximize the utility (U) of both the server platform (S) and the participants (P).
The Problem of the Evolution of Crowd Sensing
An insufficient number of participants in crowd sensing leads to insufficient sensory data. Participants expect some compensation for the data they feed back rather than providing data voluntarily and free of charge, because sensing and providing data impose costs on the participants, such as the battery power of smart devices, computing resources, and data bandwidth; individuals who provide data also expend a certain amount of energy in the process. Without compensation, it is difficult for mobile users to remain motivated to provide sensory data over the long term.
The information provided by crowd sensing participants may also involve private information. It can include various types of content such as text, pictures, and videos; much of this data is sensitive and private, and it may be tied to a specific time and location. Because of the possible disclosure of privacy, participants may have reservations about providing crowd sensing data and may choose not to take part in the crowd sensing process.
Problem Description and Modeling
With the continuous development of crowd sensing networks, multi-functional crowd sensing applications are constantly emerging and have a huge impact on all aspects of life. In a crowd sensing network, all crowdsourcing parties purchase sensing services from the group of mobile users. Mobile user resources are limited, so the sensing services they can provide are limited, and competition therefore arises among the crowdsourcing parties. Each crowdsourcing party needs to adjust the compensation (that is, the price) it pays to mobile users; a reasonable price can attract more users to contribute data and help complete the sensing task.
Because there are multiple crowdsourcing parties in the market, the prices they offer differ, and the different quotes jointly affect the participants' choice of crowdsourcing parties. The crowd sensing network with dynamic interaction between multiple crowdsourcing parties and mobile users needs further research in order to gain a deeper understanding of such networks [13][14][15][16].
There are several challenges in studying the dynamic evolution behavior of multiple crowdsourcing parties. First, mobile users may switch the crowdsourcing party they serve, and this behavior is not easy to characterize. In addition, the demands of all participants in the crowd sensing market should be met: the crowdsourcing parties are selfish and pursue the maximization of their individual interests, while mobile users are fully rational and choose the crowdsourcing parties that are most beneficial to them. No existing work considers meeting the demands of both the task publishers and the mobile users; only when the price competition among the crowdsourcing parties ends in a steady state will all participants in the crowd sensing market be satisfied. Second, market information is not completely revealed, which makes it even more difficult to satisfy the above requirements.
The group-aware market consists of multiple crowdsourcing parties and a group of participants.
Let the finite set of crowdsourcing parties in the group-aware market be M = {1, 2, 3, . . . , m}, with m ≥ 2. Assume that each crowdsourcing party needs a group-aware sensing function, such as collecting vehicle traffic peak maps or labeling images. Mobile users are free to choose, among the multiple task publishers, the crowdsourcing parties to which they want to provide sensing services. We give mobile users a strict meaning, namely mobile device holders who provide sensing services to crowdsourcing parties; we do not consider mobile device holders who only enjoy the resulting group-aware capabilities. We view the mobile user community as a continuum of service providers of the same type, also known as a representative service provider.
The interaction process is as follows: the crowdsourcing parties issue tasks to the participants, who complete the tasks and receive the rewards the crowdsourcing parties provide. A participant receives the reward p i per unit time spent on the task of crowdsourcing party i.
Participants are free to choose whether to provide sensory data and how much data to provide to each task publisher; this process is called the provision of sensory data. We measure the amount of sensing service that a crowdsourcing party attracts in units of time: for example, mobile users are willing to contribute a i units of time of sensory data to crowdsourcing party i. Mobile users join the crowdsourcing party that gives them the highest satisfaction.
Description of parameter a i : we assume that a single mobile user can serve only one crowdsourcing party at a time. If a time dimension is added to the group-aware market and a single mobile user contributes only one unit of data per time slot, then a i can also be read as the number of mobile users joining crowdsourcing party i in that time slot.
Each task publisher pursues its goal at the lowest possible cost and strategically adjusts its price to achieve the desired benefits.
Here we define the sensing contribution of the mobile user to a crowdsourcing party as the "service supply." In this paper, we use "service supply substitutability" to express the willingness of mobile users to switch the crowdsourcing party they serve; this is inspired by the concept of a "substitute good" in economics. A substitute good has a positive cross elasticity of demand, which means that the demand for good A rises as the price of good B rises. In the group-aware market the roles differ: instead of goods or services being sold, the crowdsourcing parties purchase sensing services, and the willingness of mobile users to join crowdsourcing party A will increase as the price offered by crowdsourcing party B rises [17].
Mobile users respond to the pricing strategies of all crowdsourcing parties and develop a perception plan that maximizes their payoff. The perception plan in turn affects the price adjustments of the crowdsourcing parties. We focus on how the supply of services is divided among the crowdsourcing parties and on their price adjustment process.
A mobile user's utility is affected by the rewards for providing sensing services and by the cost of performing the sensing tasks. The utility of mobile users does not increase without bound: a sensing task that yields a higher reward usually also consumes more cost, so the utility of mobile users saturates after reaching a certain level.
The utility function of the mobile user is as follows: here a i indicates that the representative mobile user is willing to contribute a i units of time of sensory data to crowdsourcing party i; a = {a 1 , . . . , a i , . . . , a M } is the collection of service supplies to all crowdsourcing parties; p i is the price per unit time of sensory data that crowdsourcing party i pays to the mobile user; c i is the cost to the mobile user of contributing a unit time of sensory data to crowdsourcing party i; and ν ∈ [0.0, 1.0] is the "service supply substitutability": when ν = 0.0, mobile users are reluctant to change the crowdsourcing party they serve, and when ν = 1.0, they change it very frequently. Each crowdsourcing party adjusts its own price to maximize its profit, which is its revenue minus its cost. The revenue of a crowdsourcing party comes from the sensing service provided by the mobile users, and the cost is what it pays to the mobile users. The profit function of crowdsourcing party i is: among them, R i (a i ) = σ a i and C i (a i ) = p i a i , where σ > 1 is a system parameter and 0 < p i < σ, so the profit of crowdsourcing party i is (σ − p i ) a i .
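To make the quantities above concrete, the following sketch implements one plausible instantiation in Python. The quadratic form of the user utility is an assumption introduced here purely for illustration (the paper's exact expression for π(a) is not reproduced above); only the roles of p i , c i , ν and σ follow the text, and the linear revenue R i (a i ) = σ a i is taken as written.

```python
import numpy as np

def user_utility(a, p, c, nu):
    """Assumed (hypothetical) quadratic utility of the representative mobile user.

    a  : service supply (unit times) offered to each crowdsourcing party
    p  : price per unit time paid by each crowdsourcing party
    c  : user's cost per unit time for each crowdsourcing party
    nu : service supply substitutability in [0, 1]
    The linear net reward follows the text; the quadratic effort term with
    nu-weighted cross terms is an assumption made for illustration.
    """
    a, p, c = (np.asarray(x, dtype=float) for x in (a, p, c))
    net_reward = np.sum((p - c) * a)
    effort = 0.5 * (np.sum(a ** 2) + nu * (np.sum(a) ** 2 - np.sum(a ** 2)))
    return net_reward - effort

def crowdsourcer_profit(i, a, p, sigma):
    """Profit of crowdsourcing party i: revenue R_i = sigma * a_i minus cost C_i = p_i * a_i,
    taking the linear revenue form written in the text."""
    return (sigma - p[i]) * a[i]
```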
The impact of feedback to participants on crowdsourcing revenue can be divided into three modes: the benefits of the crowdsourcing party under a no-information feedback strategy, under a full information feedback strategy, and under a partial information feedback strategy [9][10][11].
If the crowdsourcing party chooses not to feed back the participants' scores, each participant knows only his or her own effort in the first stage, and the effort levels of the other participants remain private information. The equilibrium effort levels of contestants A and B are then the same in both phases; that is, the no-information feedback contest is equivalent to a single-stage, single-prize contest. Participant A chooses its effort in the first and second stages to maximize its utility, and participant B does the same. In a crowdsourcing competition, if the crowdsourcing party adopts a full information feedback strategy, a participant learns the output of the other participants from the scores fed back by the crowdsourcing party, derives the difference between the others' first-stage output and his own, and uses backward induction to determine the level of effort in the second stage. Full information feedback is the same as partial information feedback in the first stage; the difference is that, under the full information feedback strategy, the crowdsourcing party reveals to the participants the specific value of their score difference at the end of the first stage of the competition. When the crowdsourcing party adopts a partial feedback strategy, it discloses the first-stage score difference to the participants through a score difference function, which allows participants to predict their competitors' effort levels and thus determine their own effort levels [12][13][14].
The Mobile User Specifies the Perception Plan
Given the strategy set of all crowdsourcing parties, mobile users make a perception plan, that is, they decide how much data to contribute to each crowdsourcing party so as to obtain the highest utility.
We can obtain the perception plan from the utility function of mobile users in Eq. (2): we take the derivative of π(a) with respect to a and set it to 0, which determines the value of a that maximizes the utility π(a).
By solving the M simultaneous equations in Eq. (7), we can obtain every element of a = {a 1 , . . . , a i , . . . , a M }, which is the perception plan of the mobile users. Here we write W i (p) in place of a, as shown below. For simplicity, we rewrite the mobile user's perception plan in Eq. (6) as follows: among them, D 1 (p −i ) and D 2 are constants given all p j (j ≠ i); their expressions are as follows. Note that the perception plan of the mobile users is in fact the service supply share of each crowdsourcing party.
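Under the quadratic utility assumed in the previous sketch, setting the derivative of π(a) to zero gives a linear system whose solution plays the role of the service supply W i (p). The closed form below is therefore a hypothetical stand-in for Eqs. (6)-(8), not the published expressions.

```python
import numpy as np

def perception_plan(p, c, nu):
    """Solve the first-order conditions of the assumed quadratic utility:
    (p_i - c_i) - a_i - nu * sum_{j != i} a_j = 0 for every i.
    Returns the service supply a = W(p) for all crowdsourcing parties."""
    p, c = np.asarray(p, dtype=float), np.asarray(c, dtype=float)
    m = len(p)
    A = nu * np.ones((m, m)) + (1.0 - nu) * np.eye(m)  # coefficient matrix of the linear system
    a = np.linalg.solve(A, p - c)
    return np.clip(a, 0.0, None)  # a supply cannot be negative

# Example with two crowdsourcing parties
print(perception_plan(p=[2.0, 2.5], c=[0.1, 0.5], nu=0.4))
```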
Uniqueness of Nash Equilibrium for Crowdsourcing Party
We will prove that the optimal response of each crowdsourcing party is unique; therefore, the Nash equilibrium of the crowdsourcing price game is also unique.
The optimal response BR i (p −i ) of crowdsourcing party i is unique.
Proof. Given the service supply W i (p), the profit of crowdsourcing party i is: To find the optimal response of crowdsourcing party i, we calculate the derivative of P i with respect to p i : Because the second derivative of P i with respect to p i is negative, the profit function P i of crowdsourcing party i is strictly concave in p i . Therefore, given the strategy set of the other crowdsourcing parties, the optimal response strategy BR i (p −i ) of crowdsourcing party i is unique.
Next, we calculate the optimal response strategy of crowdsourcing party i. Setting the first derivative of P i with respect to p i to 0, we have: and solving for p i gives: Combining Eqs. (7), (8) and (13) simultaneously:
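Continuing the same hypothetical instantiation (and reusing perception_plan from the previous sketch), the best response of crowdsourcing party i can also be obtained numerically by scanning its own price while the other prices are held fixed; the analytic expressions of Eqs. (7), (8) and (13) are not reproduced here.

```python
import numpy as np

def best_response(i, p, c, nu, sigma, grid=2000):
    """Numerical best response BR_i(p_-i): scan p_i over (c_i, sigma) and
    return the price that maximizes the profit (sigma - p_i) * W_i(p)."""
    candidates = np.linspace(c[i] + 1e-3, sigma - 1e-3, grid)
    best_pi, best_profit = p[i], -np.inf
    for pi in candidates:
        trial = list(p)
        trial[i] = pi
        a = perception_plan(trial, c, nu)  # service supply from the previous sketch
        profit = (sigma - pi) * a[i]
        if profit > best_profit:
            best_pi, best_profit = pi, profit
    return best_pi
```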
Analysis of Experimental Results
We simulated a group-aware market with two crowdsourcing parties, called A and B, through a series of experiments. We pay attention to the following points: first, the effect of the service supply substitutability on the Nash equilibrium; second, the convergence of the price dynamics to the Nash equilibrium; and third, the stable region of the learning rate within which the price adjustment iterative algorithm converges when a crowdsourcing party cannot observe the strategies of the other crowdsourcing parties.
The experimental environment is Matlab R2018a running on a Windows 10 system. The default settings are as follows: the system parameter is σ = 4; the costs for a mobile user of doing tasks for crowdsourcing parties A and B are c 1 = 0.1 yuan and c 2 = 0.5 yuan, respectively; and the service supply substitutability varies between 0.2 and 0.8. These parameter settings give mobile users a reasonable return and are also related to the simulation environment.
The Optimal Response of the Crowdsourcing Party
The profit of crowdsourcing party A, expressed as a function of its price, is shown in Fig. 2. Below a certain point, the profit of crowdsourcing party A increases with its price, because a higher price means a higher reward for mobile users and therefore more mobile users contributing sensing services. Beyond that point, the profit of crowdsourcing party A decreases as the price increases, because the gain in attracted service no longer compensates for the higher payment. The optimal response is the price at which the crowdsourcing party obtains the highest profit. The optimal response of crowdsourcing party A increases with the price of crowdsourcing party B, which is well explained by the positive cross elasticity of demand.
Nash Equilibrium of the Static Non-Cooperative Game among Crowdsourcing Parties
As shown in Fig. 3, the optimal response functions of crowdsourcing parties A and B cross at a certain point; at this point both crowdsourcing parties choose their optimal response strategy, so it is the Nash equilibrium point. The Nash equilibrium price p 2 of crowdsourcing party B is higher than the Nash equilibrium price p 1 of crowdsourcing party A, because the cost for a mobile user of contributing data to crowdsourcing party B is higher than the cost of contributing data to crowdsourcing party A. When the service supply substitutability ν is higher, the Nash equilibrium prices are also higher: because mobile users switch the crowdsourcing party they serve at a higher frequency (usually to the more highly paying crowdsourcing party), each crowdsourcing party needs to raise its price to attract more mobile users.
The Impact of the Mobile User's Cost on the Price and Profit of the Crowdsourcing Parties Under Nash Equilibrium
Fig. 4 shows how the price and profit of the crowdsourcing parties under the Nash equilibrium change with the mobile user's cost (the cost of doing tasks for crowdsourcing party A). We found that the service supply substitutability ν affects crowdsourcing parties A and B differently: the price of crowdsourcing party A is only slightly affected, while the price of crowdsourcing party B decreases at a higher rate. We explain this phenomenon as follows: when the cost of serving crowdsourcing party B remains unchanged and the cost of serving crowdsourcing party A increases, the mobile user gains more from crowdsourcing party B than from crowdsourcing party A. When the service supply substitutability ν becomes larger, the mobile user shifts to crowdsourcing party B to provide sensing services at a higher frequency, so the share of the service supply obtained by crowdsourcing party B increases. Therefore, crowdsourcing party B can lower its price at a faster rate while still ensuring an increase in its profit.
A Stable Region of the Learning Rate that Causes the Price Adjustment Iterative Algorithm to Converge
When a crowdsourcing party cannot observe the strategies of the other crowdsourcing parties, the learning rate is crucial for the convergence of the iterative algorithm. In this section, we explore the stable region of the learning rate within which the price adjustment iterative algorithm converges. The experimental results are shown in Fig. 5. When the learning rates of the two crowdsourcing parties are taken from the stable region, the iterative algorithm converges to the Nash equilibrium. We also found that the convergence exhibits disturbances when the service supply becomes more substitutable.
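A minimal sketch of such a price adjustment iteration is given below, assuming a partial-adjustment rule in which each crowdsourcing party moves a fraction (its learning rate) of the way toward its current best response; the paper's exact update rule is not given above, so this particular rule is an illustrative choice built on the earlier sketches. With the default experimental settings (σ = 4, c 1 = 0.1, c 2 = 0.5), this illustrative model converges for moderate learning rates.

```python
def iterate_prices(c, nu, sigma, rates, p0=None, steps=200, tol=1e-6):
    """Iterative price adjustment toward the Nash equilibrium.

    rates : learning rate of each crowdsourcing party; convergence is only
            expected for rates inside a stable region, as discussed above.
    """
    p = list(p0) if p0 is not None else [ci + 0.5 for ci in c]
    for _ in range(steps):
        new_p = [p[i] + rates[i] * (best_response(i, p, c, nu, sigma) - p[i])
                 for i in range(len(p))]
        if max(abs(a - b) for a, b in zip(new_p, p)) < tol:
            return new_p
        p = new_p
    return p

# Example with the default settings of the experiments
print(iterate_prices(c=[0.1, 0.5], nu=0.4, sigma=4.0, rates=[0.5, 0.5]))
```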
Proposal and Analysis of the Problem
Because the sensing resources provided by participants are limited, there is competition among the multiple crowdsourcing parties. The study found that the Nash equilibrium does not allow the crowdsourcing parties to obtain the largest overall profit; the optimal overall profit is obtained when the crowdsourcing parties cooperate. However, the partnership between the crowdsourcing parties is not stable, because it is not based on the best response functions, which means that there are feasible strategies that increase the profit of one or more crowdsourcing parties [18][19][20]. This is especially true when the game is played only once, because the crowdsourcing parties then have no long-term profits to consider. If the game is repeated, a crowdsourcing party may consider cooperative behavior when weighing long-term profits. The group-aware network changes dynamically, and there will inevitably be repeated interactions between the crowdsourcing parties. Therefore, to complete the research system for the dynamic evolution mechanism of group-aware networks, cooperation among multiple crowdsourcing parties needs to be studied, along with the conditions under which such cooperation forms. We study a multi-crowdsourcing-party group perception model of the market [21][22][23].
The Best Price of the Crowdsourcing Party
The crowdsourcing party wants to get the best price for the highest profit. The optimal price is different from the price at which the Nash equilibrium is reached. Therefore, the crowdsourcing parties may cooperate to obtain the highest profit.
The optimal price of the crowdsourcing parties is obtained from the following equation: among them, ∑_{i=1}^{M} P i (p) is the sum of the profits of all the crowdsourcing parties. For the special case with only two crowdsourcing parties (i.e., M = 2, i = 1, j = 2), the optimal price is as follows: and the price that reaches the Nash equilibrium is as follows: It is obvious that the optimal price differs from the price at which the Nash equilibrium is reached. This special case can be extended to any number M > 2 of crowdsourcing parties.
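Within the same hypothetical model used in the earlier sketches, the cooperative (optimal) prices can be computed by maximizing the sum of profits over all prices jointly and then compared with the Nash equilibrium produced by the competitive dynamics; the sketch below does this by grid search for the two-party case.

```python
import numpy as np

def cooperative_prices(c, nu, sigma, grid=200):
    """Grid search for the pair of prices that maximizes the total profit of
    both crowdsourcing parties (the cooperative 'optimal price')."""
    p1s = np.linspace(c[0] + 1e-3, sigma - 1e-3, grid)
    p2s = np.linspace(c[1] + 1e-3, sigma - 1e-3, grid)
    best, best_total = None, -np.inf
    for p1 in p1s:
        for p2 in p2s:
            a = perception_plan([p1, p2], c, nu)  # service supply from the earlier sketch
            total = (sigma - p1) * a[0] + (sigma - p2) * a[1]
            if total > best_total:
                best, best_total = (p1, p2), total
    return best, best_total
```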
Cooperative Strategy Conditions
Whether a crowdsourcing party maintains long-term cooperation with the other crowdsourcing parties depends on the long-term benefits: if it can obtain a higher long-term profit from cooperation than from non-cooperation, it will agree to keep cooperating. A crowdsourcing party that deviates from the optimal price hopes to increase its profit by raising its price and attracting more sensing services. However, the other crowdsourcing parties will then adopt the Nash equilibrium price, which is higher than the cooperative price, and the market thereby punishes the deviating behavior. The deviating crowdsourcing party therefore earns the profit of the deviation strategy in the first stage and the profit of the Nash equilibrium strategy in the remaining stages. We use an example to explain this repeated game model. Suppose there are two crowdsourcing parties, A and B. In the first stage, they choose to cooperate with each other. In the second stage, crowdsourcing party A deviates from the cooperative strategy and raises its price to obtain more profit; crowdsourcing party B, which does not know A's choice in the current stage when deciding its own strategy, continues to cooperate and maintains the optimal price. In the third stage, crowdsourcing party B learns of A's deviation in the previous stage and punishes A by raising its price to the Nash equilibrium price. In response, the best choice for crowdsourcing party A is also to set its price to the Nash equilibrium price. At this point the cooperation breaks down and neither A nor B can benefit from it. In every subsequent stage the situation is the same as in the third stage.
The optimal price is the price that the crowdsourcing parties use when cooperating with each other to obtain the highest overall profit. The optimal response price is the price reached when the crowdsourcing parties compete with each other, and it leads to the Nash equilibrium [24][25][26].
The meanings of the related symbols are shown in Tab. 1; in particular, the profit that crowdsourcing party i obtains at a given stage when it chooses the Nash equilibrium strategy is one of them.
We use γ i to denote the discount factor, reflecting that, for a crowdsourcing party, the profit of the current stage is more valuable than the profit of a future stage. For crowdsourcing party i, P o i , P d i and P n i denote its per-stage profits under the cooperation strategy, the deviation from cooperation, and the punishment strategy, respectively. We now calculate the long-term profit of crowdsourcing party i under the different strategies.
The long-term profit of the crowdsourcing party when it uses the cooperative strategy at every stage is: The long-term profit of the crowdsourcing party when it uses the deviation strategy is: The condition for the crowdsourcing party to choose the cooperation strategy is that the long-term profit of cooperation is higher than the long-term profit of deviation, that is: From this we obtain a lower bound on the discount factor γ i . When the discount factor γ i is above this lower bound, the crowdsourcing parties will cooperate: they use the optimal price at each stage to obtain the highest stage profit, which in turn gives them the highest long-term profit.
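The comparison of long-term profits can be written out explicitly. The following is a sketch assuming the standard repeated-game calculation with permanent (grim-trigger) punishment, which matches the verbal description above but may differ in detail from the paper's own equations:

```latex
V_i^{\mathrm{coop}} = \sum_{t=0}^{\infty}\gamma_i^{\,t} P_i^{o} = \frac{P_i^{o}}{1-\gamma_i},
\qquad
V_i^{\mathrm{dev}} = P_i^{d} + \sum_{t=1}^{\infty}\gamma_i^{\,t} P_i^{n} = P_i^{d} + \frac{\gamma_i\, P_i^{n}}{1-\gamma_i},
\qquad
V_i^{\mathrm{coop}} \ge V_i^{\mathrm{dev}}
\;\Longleftrightarrow\;
\gamma_i \ge \frac{P_i^{d}-P_i^{o}}{P_i^{d}-P_i^{n}} .
```

The last inequality is the lower bound on the discount factor referred to in the text.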
We study the cooperative behavior among multiple crowdsourcing parties. The crowdsourcing parties do not obtain the optimal global profit by adopting the Nash equilibrium price strategy. When the interaction between crowdsourcing parties is repeated, they may cooperate in view of long-term profits. But the cooperative relationship is unstable, because there is always a crowdsourcing party that can obtain a higher profit by deviating from the cooperation. We model the cooperative behavior of the multiple crowdsourcing parties as a repeated game and analyze the conditions under which cooperation arises.
To support the preceding theoretical analysis, we simulated a group-aware market with two crowdsourcing parties, again called A and B. The simulation setup is the same as in the previous experiments. The experimental results are shown in Fig. 6. We found that crowdsourcing parties can indeed benefit from cooperative behavior: a crowdsourcing party maintains cooperation by choosing the optimal price, which is lower than the optimal response price. Without cooperation, all crowdsourcing parties adopt the optimal response price and gradually reach a Nash equilibrium. A higher optimal response price means that the crowdsourcing party pays more to mobile users, so the cost of competition among crowdsourcing parties also increases. The deviation price lies above the optimal price and below the optimal response price.
Conclusion
This paper conducts an in-depth study of the competition and cooperation behaviors of multiple crowdsourcing parties in the group-aware network and explores the dynamic evolution mechanism of the group-aware network. With the continuing adoption of group-aware applications, users are no longer limited to providing sensing services to a specific crowdsourcing party; with more options available, they are free to choose whom to serve. Crowdsourcing parties need to attract specific users, collect enough data from them, and exert collective capabilities to maximize their benefits. Multiple crowdsourcing parties will compete for the sensing services provided by mobile users. We model the price competition between multiple crowdsourcing parties as a dynamic non-cooperative game and design a distributed learning algorithm that converges to the Nash equilibrium. Thus, prices in the crowd sensing market settle into a balance after a period of adjustment rather than undergoing endless price changes.
Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study. | 7,092.4 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Genuine T, CP, CPT asymmetry parameters for the entangled Bd system
The precise connection between the theoretical T, CP, CPT asymmetries, in terms of transition probabilities between the filtered neutral meson Bd states, and the experimental asymmetries, in terms of the double decay rate intensities for Flavour-CP eigenstate decay products in a B-factory of entangled states, is established. This allows the identification of genuine Asymmetry Parameters in the time distribution of the asymmetries and their measurability by disentangling genuine and possible fake terms. We express the nine asymmetry parameters — three different observables for each one of the three symmetries — in terms of the ingredients of the Weisskopf-Wigner dynamical description of the entangled Bd-meson states and we obtain a global fit to their values from the BaBar collaboration experimental results. The possible fake terms are all compatible with zero and the information content of the nine asymmetry parameters is indeed different. The non-vanishing T- and CP-asymmetry parameters are impressive separate direct evidence of Time-Reversal violation and CP violation in these transitions and are compatible with Standard Model expectations. An intriguing 2σ effect appears for the Re(θ) parameter responsible for CPT violation which, interpreted as an upper limit, leads to |M_{B̄0B̄0} − M_{B0B0}| < 4.0 × 10⁻⁵ eV at 95% C.L. for the diagonal flavour terms of the mass matrix. It contributes to the CP-violating asymmetry parameter in an unorthodox manner — in its cos(ΔM t) time dependence — and it is accessible in facilities with non-entangled Bd's, such as the LHCb experiment.
Introduction
The BaBar collaboration has demonstrated [1] 14σ direct evidence of Time Reversal violation in the time evolution of the neutral B 0 d -B̄ 0 d meson system, independent of CP violation or CPT invariance. This result is independent of any particular dynamical framework for the dynamics of the neutral B 0 d -B̄ 0 d system and is established in terms of asymmetries of observable transition rates. Only the quantum mechanical properties of (i) entanglement of the B d pair before the first decay in a B-Factory, (ii) the decays as filtering measurements for the preparation and detection of the initial and final B meson states in the transition, as well as (iii) the time dependence of the double decay rate intensities, are used. The conceptual basis had previously been discussed in refs. [2,3] and the methodology for an actual experimental analysis was given in [4], bypassing the need for the T-reversal of the decay. As emphasized by Wolfenstein [5,6], the T-reverse of a decaying state is not a physical state.
The transitions of interest are between Flavour and CP eigenstate decay products, with the possibility of an interference of mixing with no-mixing amplitudes, without any need of absorptive parts, something impossible for transitions between flavour-specific states. In addition, a well defined orthogonality between the meson states filtered by the Flavour and CP eigenstate decay products, the so-called Flavour-Tag and CP-Tag [7], is needed. Among the goals of this work is the extraction of the values, or limits, of selected WWA parameters, in particular the one responsible for inducing CPT violation, from a global fit to the observables. Final values for the genuine Asymmetry Parameters characterizing the three T, CP, CPT symmetry transformations are given.
The paper is organized as follows: in section 2 we review some generalities of the B 0 d -B̄ 0 d effective Hamiltonian, the time evolution of an initial entangled state and the basic expressions for the double decay rate intensities. In section 3 we discuss in detail under which conditions the Flavour-CP eigenstate decay channels are truly appropriate for time reversal genuine asymmetries. Then, in section 4, we analyse the experimental asymmetries, normalized as in reference [1], focusing on the connection with the time evolution described in section 2. In section 5 we show how the complete genuine asymmetries can be reconstructed beyond ratios through the addition of a single piece of information on the B 0 d -B̄ 0 d mixing. In section 6 we address how deviations from the conditions discussed in section 3 can contaminate genuine T and CPT asymmetries, quantifying their effect. In section 7 we present the results of a global fit to the experimental results in terms of the basic WWA parameters introduced in section 2 and the final results for the genuine asymmetry parameters. Some conclusions are given in section 8.
The known Weisskopf-Wigner approach (WWA) [12] for the time evolution of a one-level decaying system concluded with the appearance of an absorptive part in the Hamiltonian of the Schrödinger equation governing its time evolution. The generalization to the two-level system gives rise to an effective 2 × 2 Hamiltonian matrix with an antihermitian part taking care of the decay channels [13]. These approximations can be obtained using time dependent perturbation theory and have a limited range of validity excluding very short and very long times [14,15].
The evolution Hamiltonian
The effective Hamiltonian of the two-meson system B 0 d -B̄ 0 d is H = M − iΓ/2, where M and Γ are 2 × 2 hermitian matrices: M is the hermitian part of H and −iΓ/2 its antihermitian part. We follow the notation of [16] for the eigenvalues and eigenvectors together with the complex parameters θ and q/p:
Here θ is a CP- and CPT-violating complex parameter, while δ violates CP and T. In terms of physical parameters, and except for the phase of q/p, which is convention dependent, the effective Hamiltonian can be written in the form of eq. (2.9), following [17].
The entangled system
In a B factory operating at the Υ(4S) peak, our initial two-meson state is Einstein-Podolsky-Rosen [18] entangled and maintains its antisymmetric entangled character in the H eigenstate basis. This implies the antisymmetric character of the two-meson state at all times and for any two independent linear combinations of B 0 d and B̄ 0 d . The corresponding evolution is therefore given in a simple way. The transition amplitude for the decay of the first state into |f⟩ at time t 0 , and then the second state into |g⟩ at time t + t 0 , is written in terms of the decay amplitudes of the eigenstates into the final state f, A H,L f ≡ ⟨f|T|B H,L ⟩. Squaring and integrating over t 0 , the double decay rate I(f, g; t) is obtained. This expression is very useful to realize the following expected symmetry property: up to the global exponential decay factor e −Γt , the combined transformations t → −t and f ↔ g should be the identity. Expanding the t dependence and taking the approximation ∆Γ = 0, valid for the neutral B 0 d states, one can write the expanded form, with Γ f defined below. Therefore, starting from an entangled state as in eq. (2.10), and using the quantum mechanical evolution in eq. (2.11), the symmetry properties of eq. (2.14) arise from the previous remark. (Although q/p is phase-convention dependent, in the CP or T invariant limits its phase is fixed relative to the convention adopted for the action of the CP operator on |B 0 d ⟩ and |B̄ 0 d ⟩. See references [19][20][21] for corrections to the entanglement assumption.)
They will play an important role in assessing the independent observables present in the double decay rate measurements. We define as usual the parameters associated to mixing times decay amplitudes. For flavour-specific channels f = ℓ ± + X (f = ℓ ± for short in the following), and assuming no wrong lepton charge sign decays, C ℓ± = ±1 and R ℓ± = S ℓ± = 0, and thus the intensities simplify, where N [ℓ±, g] = (1 ± δ)/(1 − δ C g ). To close this section, notice that the double decay rate or intensity I(f, g; t) has a trivial normalization by construction: summing over final states f and g, and integrating over t, we simply obtain the norm [16,22] of the initial state |Ψ 0 ⟩. For later use, it is convenient to introduce the reduced intensity Î(f, g; t) of eq. (2.20).
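For orientation, the time dependence referred to here can be written schematically as below. This is a hedged reconstruction based on the coefficients C h , C c and S c named later in the text and on the standard ΔΓ = 0 parameterisation; it is not a verbatim copy of eqs. (2.13)-(2.20).

```latex
\hat{I}(f,g;t) \;\propto\; e^{-\Gamma t}\Big[\, C_h[f,g] \;+\; C_c[f,g]\,\cos(\Delta M\, t) \;+\; S_c[f,g]\,\sin(\Delta M\, t) \,\Big].
```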
Condition to observe a genuine Motion Reversal asymmetry
The original proposal made in [2,3] to observe direct evidence of T violation independently of CP violation at B factories, following reference [4] and implemented in [1], contained three ingredients:
1. Analyse Time Reversal symmetry in the B 0 d -B̄ 0 d Hilbert space. Therefore, one first defines a reference transition P 1 → P 2 (t) among meson states and compares it with the reversed transition P 2 → P 1 (t). If the probability that an initially prepared state P 1 , evolved to P 1 (t), behaves like a P 2 is denoted P 12 (t), then the proposed T-violating asymmetry compares P 12 (t) with the reversed probability P 21 (t).
2. Going beyond the use of P 1 , P 2 = B 0 d , B̄ 0 d . If use is made of the transitions B 0 d ↔ B̄ 0 d , the corresponding asymmetry is not independent of CP: by construction it is both CP and T violating and very small, because it comes from the δ parameter. They introduced the new reference transition B 0 d → B + to be compared with B + → B 0 d . In a decay channel with well-defined CP = + where one can neglect CP violation, the reference transition can be measured by looking at decay events f 1 where one B meson decays to a self-tagging channel of B̄ 0 d and the other B meson decays later to a CP eigenstate f CP =+ where one can neglect CP violation. The main problem was how to measure the reverse transition.
3. Using the entangled character of the initial state was the crucial ingredient (i) to connect double decay rates with specific meson transition rates and (ii) to identify the reverse transition. If one assumes that by observing an f CP =− decay one filters on that side a B − , then, due to the entanglement, one is tagging the state orthogonal to B − on the opposite side; this state, in the approximation adopted, should be a B + . In general, from the entangled state (2.10) we can say that if at time t 1 we observe on one side the decay product f, the (still living) meson at time t 1 is tagged as the state that does not decay into f, |B f ⟩. The corresponding orthogonal state B ⊥ f , with ⟨B ⊥ f |B f ⟩ = 0, is the one filtered by a decay into f. What we call the filtering identity (easy to prove [9,23]) defines the precise meaning of the last statement: the previous quantity is exactly the reduced intensity Î(g, f; t) introduced in eq. (2.20). Therefore, here is the precise connection between meson transition probabilities and double decay rates. By measuring Î(f 1 , f 2 ; t) (from now on we will use the notation (f 1 , f 2 ) to refer to the first and the second decays considered) we are studying probabilities P 12 (t) for transitions between meson states (B 1 , B 2 ) which, as we have seen, are (B f 1 , B ⊥ f 2 ). In order to compare with P 21 (t), we need to study the reverse transition (B ⊥ f 2 , B f 1 ), but the filtering and tagging methods applied do not give us this transition. Two new decay channels f′ 1 and f′ 2 in the reduced double decay rate (f′ 2 , f′ 1 ) will give us the transition (B f′ 2 , B ⊥ f′ 1 ); therefore, provided these two new decay channels fulfill the appropriate identity, this new transition (f′ 2 , f′ 1 ) will give the reversed meson transition. For flavour-specific decay channels, assuming no wrong lepton charge sign decays, this is automatically satisfied. The other channel, a CP one, should also satisfy this last equation, which, combined with equations (3.3) and (3.4), gives the condition these channels should satisfy. The originally proposed decay channels f 2 = J/ψK + and f′ 2 = J/ψK − satisfy this condition, where K ± are the neutral kaon states filtered by the CP eigenstate decay channels. Consequently, the states B ∓ are well defined and given by equation (3.4) for each of the two decay channels. From now on, we use K S for K + and K L for K − , since this is an accurate approximation up to CP violation in the kaon system. To control potential deviations from condition (3.9), we will use the general parameterisation in terms of the real parameters {ρ, β, ρ′, β′}. Therefore, by properly comparing double decay rates corresponding to two channels, one from {ℓ + , ℓ − } and the other from {J/ψK S , J/ψK L } (K S and K L for short in the following), we will be able to measure genuine time-reverse processes provided the condition in eq. (3.9) is satisfied. Any deviation from this measurable relation produces some contamination in the time reversal asymmetries and should therefore be conveniently subtracted out. It is important to notice that eq. (3.9) is fulfilled even if ρ′ ≠ 1. It has to be pointed out that eq. (3.9) guarantees that the considered channels allow to truly compare the transition P 2 → P 1 (t) with the reversed transition P 1 → P 2 (t). Nevertheless, in order to ensure that this motion reversal asymmetry is truly a time reversal asymmetry, one has to use decay channels f such that, in the limit of T invariance, S f = 0 [9,23].
For CP eigenstates, T invariance implies S f = 0 provided there is no CPT violation in the corresponding decay amplitude, in accordance with the analysis in reference [8]. This is equivalent to no CP violation in the decay, in the T invariant limit, giving, in addition to eq. (3.11), the condition ρ′ = 1. We therefore conclude that we
should perform the data analysis with arbitrary parameters ρ, ρ′ and β, and that any deviation from ρ′ = 1, ρ = 0, β = 0 (eq. (3.12)) will be a source of fake T violation that should be subtracted out. Notice that in the absence of CP violation in the decays that filter the states B ± , these states would be orthogonal, implying eq. (3.12), and therefore the orthogonality condition in equation (3.9) would be automatically satisfied. Before ending this section it is convenient to clarify that, in the absence of wrong flavour decays (as in [24]), our parameterisation implies eq. (3.13), clearly showing full compatibility between the condition in eq. (3.9) and the absence of wrong flavour decays. Using more conventional notation (eq. (2.15)), no wrong flavour decays imply λ K S + λ K L = 0. If we impose in addition eq. (3.9), a further relation follows.
The BaBar normalization and the independent asymmetries
To avoid strong dependences on the detection efficiencies in the different channels, reference [1] normalized the coefficients C c [f, g] and S c [f, g] in eq. (2.13) or eq. (2.20) to the constant term, fixing its normalization and using the normalized decay intensity in such a way that two quantities, C[f, g] and S[f, g], are measured for each pair (f, g). Following eq. (2.14), they verify the relations in eq. (4.3). We are interested in the study of the genuine discrete asymmetries that can be constructed by combining one flavour-specific channel and one CP channel. Starting from one reference transition, we can generate another three by means of T, CP and CPT transformations. It turns out that, because of the relation (4.3), these four transitions saturate all the independent parameters that can be measured with one flavour-specific and one CP decay. In table 1 (double decay channels, the associated filtered meson states and their transformed transitions under the three discrete symmetries) we present the meson state transitions and the corresponding decay channels, and we see how, with one reference transition and its discrete symmetry transformed ones, all the independent parameters are saturated: the order below the g f,g (t) column makes clear that the parameters of these transitions are related to the ones appearing in the column g g,f (t). We conclude that only eight parameters are independent: they are the C[f, g] and S[f, g] corresponding to the decays (ℓ + , K S ), (K L , ℓ − ), (ℓ − , K S ) and (K L , ℓ + ). Of course, there are at least two independent ways of measuring the same parameter by means of the time-ordering of the two decays. This operation is not a symmetry transformation from the left to the right-hand side of table 1; in order to interpret the information, it is very important to know exactly the number of independent parameters in a general framework.
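In this normalization the measured quantities are the ratios to the constant term. A schematic form, consistent with the definitions C = C c /C h and S = S c /C h quoted later in the text but written here as a hedged reconstruction rather than the paper's eq. (4.2), is:

```latex
g_{f,g}(t) \;\propto\; e^{-\Gamma t}\Big[\, 1 \;+\; C[f,g]\,\cos(\Delta M\, t) \;+\; S[f,g]\,\sin(\Delta M\, t) \,\Big],
\qquad
C[f,g] = \frac{C_c[f,g]}{C_h[f,g]}, \quad S[f,g] = \frac{S_c[f,g]}{C_h[f,g]} .
```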
The authors in reference [4] proposed the construction of several CP, T or CPT asymmetries, as BaBar did, in order to present genuine and model-independent tests of these symmetries. By now, it should be clear that only six independent asymmetries can be constructed out of the eight independent parameters. The three time-dependent asymmetries, once explicitly expanded, give the six independent asymmetries that can be constructed (we use the same notation as reference [1] for easy comparison). To appreciate the difference among asymmetries that in
a CPT invariant world would be equivalent, we can write them expanding to linear order in Re (θ) and Im (θ). No matter whether CPT violation is expected to be small, conceptually it is very important to emphasize that ∆S + T ≠ ∆S + CP , for several reasons. We have seen that for ∆S + T to be a true T-violating asymmetry, eqs. (3.14) and (3.15) should be fulfilled. Therefore, the dominant term in equations (4.14) and (4.15) should be equal: S K S − S K L = 2S K S . But in general ∆S + T and ∆S + CP differ by terms that are CPT violating and CP invariant in ∆S + T , and by terms that are CPT violating and T invariant in ∆S + CP . Only the pieces that do not depend on θ are identical. Similarly, ∆C + T ≠ ∆C + CP : in order for ∆C + T to be a true T-violating asymmetry, we need C K S + C K L = 2C K S = 2δ, and thus ∆C + T and ∆C + CP are again equal up to CPT violation in the mixing: they differ by terms that are CPT violating and CP invariant in ∆C + T and by terms that are CPT violating and T invariant in ∆C + CP . It is a very important check to realize that both ∆S + CPT and ∆C + CPT only contain pieces proportional to CPT-violating parameters: they have terms proportional to the θ parameter controlling the amount of CPT violation in the mixing. ∆S + CPT also contains S K S + S K L , which should be equal to zero if ∆S + CPT is a true CPT asymmetry (the same condition of eq. (3.9) as for a true T asymmetry). Finally, ∆C + CPT contains C K S − C K L , which should vanish provided eq. (3.9) is fulfilled and there is no CPT violation in the decay.
Genuine asymmetry parameters
The time-dependent reduced intensity Î(f, g; t) involves three coefficients C h , C c and S c in eqs. (2.16), (2.17) and (2.18); nevertheless, as mentioned before, the analysis of reference [1] focused on the ratios C = C c /C h and S = S c /C h in eq. (4.2). Although from the experimental point of view those ratios might be more appropriate, from the theoretical point of view access to the three independent coefficients would be more desirable: for instance, while an asymmetry in the ratios does imply a symmetry violation, no asymmetry in the ratios may nevertheless come from asymmetries in both the numerator and the denominator. Obtaining the three independent coefficients C h , C c and S c for each pair of decay channels might be particularly interesting for asymmetries in the ratios with values that are, within uncertainties, compatible with zero, like e.g. CPT asymmetries. Is that programme possible? Fortunately, using input information for |q/p| or, equivalently, δ, it can be achieved. First, from eqs. (2.16) and (2.17), we obtain eq. (5.1), which is interpreted in the following way: while C[ℓ ± , K S,L ] and C K S,L will be constrained or extracted from the data, through the addition of δ we can also compute C h [ℓ ± , K S,L ], and thus C c [ℓ ± , K S,L ] and S c [ℓ ± , K S,L ] separately. It is then possible to build T, CP and CPT complete time-dependent asymmetries analogous to eqs. (4.4), (4.5) and (4.6), which can also be expanded in the same way. We refer to ∆C S h , ∆C S c and ∆S S c in these asymmetries as "genuine asymmetry parameters", since they are the ones which collect the full time-dependent difference of probabilities in transitions among meson states.
From these, the genuine asymmetry parameters in the coefficients C h , C c and S c follow, up to linear order in θ and δ. It is important to stress that eq. (5.1) has a straightforward physical interpretation: from eq. (2.20), the reduced intensity prior to any time evolution is Î(f, g; 0). Following the filtering identity in eq. (3.6), this is simply the overlap between |B ⊥ g ⟩ and |B f ⟩. Furthermore, it can be easily seen that, if the condition in eq. (3.9) for a genuine Motion Reversal measurement is verified, A T (0) = 0. This is consistent with the intuitive requirement that a genuine Motion Reversal asymmetry cannot already be present at t = 0, i.e. in the absence of time evolution. Concerning CPT, A CPT (0) = 0 on the same grounds as A T (0) = 0, once the CP properties of the decay states and the absence of CP violation in the decays are considered. One final comment is in order: attending to the previous results, the presence of δ in eq. (5.1), that is at t = 0, is a priori surprising, since it is solely related to the B 0 d -B̄ 0 d mixing; this is simply an artifact due to the use of the mixing times decay quantities in eq. (2.15), as illustrated by the absence of δ in eq. (5.19). In any case we should keep the normalization of eq. (5.1), since we also want to measure deviations from eq. (3.12).
Genuine T-reverse and fake asymmetries
In section 3 we have discussed how asymmetries like eqs. (4.4) and (4.6) are "contaminated", i.e. can receive contributions which are not truly T-violating; this also applies to the genuine asymmetry parameters introduced in section 5. It occurs when the conditions in eq. (3.12) are not fulfilled. The question is: how can we disentangle fake effects in T and CPT asymmetries due to deviations from the requirements of eq. (3.12)? We illustrate the reasoning using, for example, the asymmetry ∆S T c in eq. (5.15). First, we remind the reader that, in terms of all the parameters involved in the problem (δ, ρ, β, ρ′ and β′ in eq. (3.10), plus the complex θ parameter), ∆S T c is simply a function ∆S T c (ρ, β, ρ′, β′, δ, θ). ∆S T c would be a true T-violation asymmetry if ρ = β = 0 and ρ′ = 1 (eq. (3.12)). It is then possible to do the following separation, at each point in parameter space, when performing a fit to the observables: the term within square brackets has exactly the desired properties for the fake contribution: independently of β, δ and θ, it vanishes when the conditions in eqs. (3.9) and (3.12) are fulfilled. Then the last term is the truly T-violating contribution, the genuine T-reverse one. It is then possible to quantify the amounts of fake and genuine T-reverse contributions to T and CPT asymmetries like ∆S + T , ∆C + T , ∆S + CPT , ∆C + CPT , and also, of course, to the T and CPT genuine asymmetry parameters involving the individual C h , C c and S c coefficients. They are explicitly shown in the results of the fit in section 7. In terms of the parameters δ, ρ, β, ρ′ and β′ in eq. (3.10), the genuine T-reverse asymmetries are simply obtained for the values in eq. (3.12).
Results
Following the ideas developed in the previous sections, we now present results obtained from a global fit to the available experimental information. First, we discuss in section 7.1 the basics of the global fit and the main results, including in particular the new best determination of the real part of the CPT violating parameter θ. Then, in section 7.2, we illustrate and discuss several specific aspects of the results: the difference between CP and T asymmetries -as discussed in section 4 -, the separation of genuine T-reverse asymmetries and fake contributions, and finally the sensitivity of different asymmetries to Re (θ) and Im (θ).
Global fit
With the information on the single C[ℓ ± , K S,L ] and S[ℓ ± , K S,L ] coefficients provided by the BaBar collaboration in [1], including full covariance information and separate statistical and systematic uncertainties, supplemented with information on |q/p|, for which we use the value of [25] (obtained without assuming CPT invariance in the B 0 d -B̄ 0 d mixing), we perform a fit in terms of the set of parameters {Re (θ) , Im (θ) , δ, ρ, β, ρ′, β′} (see eq. (3.10)). Furthermore, we can also address a more restricted situation where no wrong flavour decays (i.e. ∆F = ∆Q) are allowed in B 0 d , B̄ 0 d → J/ΨK S,L , that is, imposing λ K S + λ K L = 0: in terms of the previous set of parameters, that means setting ρ′ = 1 and β′ = 0. All the results shown in the following are obtained from a standard frequentist likelihood analysis. An additional Bayesian analysis has also been performed with simple flat priors for the basic parameters, yielding almost identical results. Starting with the CPT-violating θ parameter, the results that follow from these fits improve significantly on the uncertainty of the real part quoted by the Particle Data Group (PDG) in [25], based on BaBar [26,27] and Belle [28] results (the PDG uses z for our parameter θ): Re (θ) PDG = ±(1.9 ± 3.7 ± 3.3) × 10 −2 , Im (θ) PDG = (−0.8 ± 0.4) × 10 −2 .
Figure 1. Im (θ) vs. Re (θ) sign(R K S ) in the full fit (blue regions, solid contours), and in the fit with λ K S + λ K L = 0 (red regions, dashed contours); dark to light regions correspond, respectively, to two-dimensional 68%, 95% and 99% C.L., here and in all the following plots.
Table 2 collects the results of the global fit to the data [1], while the results in table 3 correspond to the fit with the additional assumption λ K S + λ K L = 0. For completeness, the C K S,L , S K S,L and R K S,L coefficients (see eq. (2.15)) are also displayed. Besides the basic parameters, the BaBar asymmetries and the genuine asymmetry coefficients are also shown, including separate values of the genuine T-reverse and fake contributions.
It has to be stressed that we do not observe any significant deviation from eq. (3.9). This result confirms the good choice of channels for constraining the T and CPT asymmetries. We also observe compatibility with the assumption of no wrong flavour decays in the CP final decay channel (results in table 3).
Selected results
For the BaBar asymmetries we obtain, in the present analysis, ∆S + T = −1.317 ± 0.050 and ∆S + T = −1.326 ± 0.033 (assuming no wrong flavour decays). The remarkable improvement in precision comes from imposing the WWA evolution, which includes symmetries like eq. (4.3). The CP counterpart is the asymmetry ∆S + CP = −1.360 ± 0.038. We now discuss the difference between the genuine T-reverse and CP asymmetry parameters. To illustrate this point, figure 2 shows true T-reverse asymmetries versus CP asymmetries for ∆S and for the genuine asymmetry coefficients ∆S c and ∆C c . The dashed diagonal line would correspond to strict equality between the two observables.
In section 6 we have shown that the genuine T-reverse and fake contributions to T and CPT asymmetries can be separated quantitatively. This is particularly relevant for the ∆S + T asymmetry, since sizable fake contributions could have weakened the evidence for the observation of time reversal violation independent of CP. In figure 3 we show genuine T-reverse vs. fake contributions for ∆S + T and for the genuine asymmetry parameters ∆S T c , ∆C T h and ∆C T c .
Table 3. Global fit with λ K S + λ K L = 0, summary of results.
For ∆C T h , the fake contribution stays at a lower level while the genuine T-reverse one can reach the few percent level, and there is no evidence of time reversal violation in this parameter. For ∆C T c in figure 3(d), fake contributions might be as large as the genuine T-reverse ones and the same conclusion holds. It is to be noticed that, in all cases, there is no significant correlation between genuine T-reverse and fake contributions.
As shown in eq. (7.2), the present analysis improves on the uncertainty on Re (θ) quoted by the PDG; θ, introduced in eq. (2.8), is both CP and CPT violating. It is important to stress that θ can appear not only in CPT asymmetries, but also in T and CP asymmetries, together with, respectively, CP-invariant and T-invariant terms. It is then interesting to explore which observables could be sensitive to θ from the theoretical point of view, and how that could translate into interesting correlations among observables and θ. For Re (θ), we focus on the genuine asymmetry parameters ∆C h and ∆C c in equations (5.9) to (5.14) since, at leading order in θ, all ∆S c are insensitive to Re (θ). Attending to eq. (5.10), with δ ∼ −5 × 10 −4 , the ∆C CP parameters are suppressed by δ; furthermore, since Re (θ) enters ∆C T h with a factor R K S + R K L , the genuine T-reverse ∆C T h will be interesting for Im (θ), while the genuine T-reverse ∆C CPT h is proportional to −Re (θ) R K S + Im (θ) S K S . Similar comments apply to the genuine T-reverse ∆C T c and ∆C CPT c . For Im (θ), in addition to the previous comment concerning ∆C T h , the genuine T-reverse ∆S CPT c is, to a very good approximation, ∆S CPT c ≃ −Im (θ), following eq. (5.17). Notice that, although ∆S CP c has a clean dependence on Im (θ), the dominant term S K S ≃ −0.7 masks this potential sensitivity. In some cases, the correlations persist partially even in the presence of fake contributions to T and CPT asymmetries. Figures 4 and 5 illustrate and confirm the previous discussion. It is to be said that, on top of the theoretical expectations, the actual experimental input is central in shaping the sensitivity to θ, including in particular the fact that the decay channels including K L give larger uncertainties than their counterparts with K S , and thus CP asymmetries with the K S could be, a priori, better suited to uncover the presence of θ.
Conclusions
In this work we have looked for genuine T, CP and CPT Asymmetry Parameters in the time distribution of the intensities for the two decays in a B factory of entangled neutral B 0 d -meson states. By genuine we mean a set of observables, for each symmetry, in yes-no biunivocal correspondence with symmetry violation. In this paper such a goal has been accomplished, and their values have been obtained from the BaBar measurements of the Flavour-CP eigenstate decay channels.
In the course of this study several important results are worth mentioning, including both genuine and possible fake effects:
• The meson states B ± filtered by the observation of the CP eigenstate decay channels J/ΨK ∓ are indeed orthogonal, with extracted values for the non-orthogonality ρ = −0.023 ± 0.013, β = 0.013 ± 0.040. B ± are to be used as the meson states, together with B 0 d and B̄ 0 d , to obtain the transition probabilities for the asymmetry parameters.
• The condition allowing the use of the Motion Reversal Asymmetry as a genuine Time Reversal Asymmetry, not only with the exchange of initial and final meson states but using T-transformed states, is well satisfied, with a resulting value ρ′ = 1.021 ± 0.032 in eq. (3.10). Similarly for the CPT Reversal Asymmetry. In addition, there is consistency with no wrong flavour in the decays, as required by eq. (3.13).
• With any normalization in the time dependence of the intensities, a non-vanishing Asymmetry Parameter between the symmetry transformed transition probabilities is a proof of symmetry violation. However, a yes-no biunivocal correspondence is only valid with the precise connection between transition probabilities between meson states and experimental double decay rate intensities of table 1 (see equation (3.6)).
The results obtained for these genuine Asymmetry Parameters are shown in table 2. | 7,840 | 2016-06-01T00:00:00.000 | [
"Physics"
] |
Evidence linking oxidative stress, mitochondrial dysfunction, and inflammation in the brain of individuals with autism
Autism spectrum disorders (ASDs) are a heterogeneous group of neurodevelopmental disorders that are defined solely on the basis of behavioral observations. Therefore, ASD has traditionally been framed as a behavioral disorder. However, evidence is accumulating that ASD is characterized by certain physiological abnormalities, including oxidative stress, mitochondrial dysfunction and immune dysregulation/inflammation. While these abnormalities have been reported in studies that have examined peripheral biomarkers such as blood and urine, more recent studies have also reported these abnormalities in brain tissue derived from individuals diagnosed with ASD as compared to brain tissue derived from control individuals. A majority of these brain tissue studies have been published since 2010. The brain regions found to contain these physiological abnormalities in individuals with ASD are involved in speech and auditory processing, social behavior, memory, and sensory and motor coordination. This manuscript examines the evidence linking oxidative stress, mitochondrial dysfunction and immune dysregulation/inflammation in the brain of ASD individuals, suggesting that ASD has a clear biological basis with features of known medical disorders. This understanding may lead to new testing and treatment strategies in individuals with ASD.
INTRODUCTION
Autism spectrum disorders (ASD) are a group of neurodevelopmental disorders that are defined by behavioral observations including communication and social interaction problems and repetitive behaviors (APA, 1994). ASD affects an estimated 1 out of 88 individuals in the United States (U.S.) (Baio, 2012), with four times more males than females being affected (Rice, 2007). The etiology of ASD is unclear at this time. Although several genetic syndromes, including Fragile X and Rett syndrome, have been associated with ASD, genetic defects account for only a small percentage of ASD cases (Schaefer et al., 2013).
Although many of the cognitive and behavioral features of ASD are thought to arise from dysfunction of the brain, evidence from many fields of medicine has documented physiological abnormalities in organs besides the brain that are associated with ASD, suggesting that, in some individuals, ASD arises from systemic, rather than organ-specific abnormalities. Specifically, in recent decades, research and clinical studies have implicated physiological and metabolic systems that transcend specific organ dysfunction, such as immune dysregulation and inflammation, abnormalities in redox regulation and oxidative stress, and dysfunction of energy generation and mitochondrial systems (Ming et al., 2008;Rossignol and Frye, 2012a). In this context, ASD may arise from, or at least involve, systemic physiological abnormalities rather than being a purely central nervous system (CNS) disorder (Herbert, 2005), at least in a subset of individuals with ASD. However, because the CNS is affected in ASD, examining physiological abnormalities in the brain may reveal more about what is abnormal than inspecting abnormalities in blood or urine samples.
Multiple studies have also reported evidence of mitochondrial dysfunction in individuals with ASD (Rossignol and Bradstreet, 2008; Weissman et al., 2008; Giulivi et al., 2010; Guevara-Campos et al., 2010; Shoffner et al., 2010; Zhang et al., 2010; Dhillon et al., 2011; Frye and Rossignol, 2011; Chauhan et al., 2012b; Frye, 2012; Frye and Rossignol, 2012a; Rossignol and Frye, 2012a,b; Frye et al., 2013a,b; Frye and Rossignol, 2013). In some studies, biomarkers of mitochondrial dysfunction have been associated with autistic behaviors or autism severity (Minshew et al., 1993; Mostafa et al., 2005). One systematic review reported that over 30% of children with ASD have biomarkers of abnormal mitochondrial function, suggesting that a relatively high percentage of individuals with ASD might have some degree of mitochondrial dysfunction. Another study reported that up to 50% of children with ASD have biomarkers of mitochondrial dysfunction that are valid (that is, they correlate with other biomarkers of mitochondrial dysfunction) and are consistently abnormal (that is, they are repeatedly abnormal) (Frye, 2012). However, like the studies on oxidative stress and ASD, most of the published literature concerning mitochondrial dysfunction has examined blood and urine samples. A number of studies have recently reported evidence of mitochondrial dysfunction in ASD brain samples compared to controls (Palmieri et al., 2010; Chauhan et al., 2011; Anitha et al., 2012, 2013; Ginsberg et al., 2012; Rose et al., 2012b; Tang et al., 2013).
Finally, a number of studies have reported evidence of immune dysregulation and/or inflammation in individuals with ASD (Gupta et al., 2010; Onore et al., 2012; Rossignol and Frye, 2012a; Depino, 2013; Gesundheit et al., 2013; Goines and Ashwood, 2013), including gene changes pertaining to the immune system (Michel et al., 2012; Poultney et al., 2013). In some studies, biomarkers of inflammation or immune dysregulation have been correlated with ASD severity (Mostafa and Kitchener, 2009; Al-Ayadhi and Mostafa, 2011, 2013; Khakzad et al., 2012; Mostafa and Al-Ayadhi, 2012) and an elevation in TNF-alpha has been reported in ASD lymphocytes (Malik et al., 2011a) and in amniotic fluid in children who develop autism (Abdallah et al., 2013). Particular interest surrounds elevations found in autoantibodies to brain elements and other important molecular targets such as the folate receptor autoantibody (Connolly et al., 1999; Rossignol and Frye, 2012a; Frye et al., 2013c). Although there have been a large number of studies examining immune abnormalities in ASD, almost all of these studies have examined blood and urine samples. However, some studies have recently reported evidence of brain-related immune dysregulation or inflammation in ASD compared to controls (Vargas et al., 2005; Chez et al., 2007; Garbett et al., 2008; Li et al., 2009; Morgan et al., 2010; Wei et al., 2011; Young et al., 2011; Rose et al., 2012b; Suzuki et al., 2013).
Recently, an interrelationship between oxidative stress, mitochondrial dysfunction, and/or inflammation has been reported in some individuals with autism (James et al., 2009a; Mostafa et al., 2010; Zhang et al., 2010; Rose et al., 2012b; Frye et al., 2013a; Napoli et al., 2013; Theoharides et al., 2013). In this manuscript, we concentrate on studies that have documented these physiological abnormalities specifically in the CNS of individuals with ASD. Reviewing the evidence for these physiological abnormalities specifically in the CNS is important for several reasons. Firstly, the CNS is protected from the rest of the body by the blood-brain barrier. Although there is evidence that these physiological abnormalities are present in non-CNS tissue in individuals with ASD, it does not necessarily mean that they are present in the CNS. Demonstrating that these abnormalities also affect the brain would suggest that brain dysfunction in individuals with ASD is not necessarily only secondary to systemic abnormalities, but that the same abnormalities that influence peripheral organs also directly influence brain function. Secondly, there are particular patterns of abnormalities in the CNS that are associated with ASD. Indeed, abnormalities in ASD have been reported in the frontal and temporal cortices, the hippocampus and amygdala as well as the cerebellum. Determining whether these physiological abnormalities are also present in these brain areas would provide insight into whether they could be involved in the pathological mechanisms that result in ASD. Thus, this manuscript reviews the evidence for oxidative stress, mitochondrial dysfunction and immune dysregulation/inflammation in the brains of individuals with ASD compared to controls as well as the evidence linking these abnormalities.
STUDIES OF OXIDATIVE STRESS IN THE ASD BRAIN
A number of studies have reported evidence of oxidative stress in post-mortem brain samples obtained from individuals with ASD compared to controls (Table 1). These studies have demonstrated a decrease in GSH, the major cellular antioxidant, oxidative damage to proteins, lipids and deoxyribonucleic acid (DNA) as well as alterations in the activity of enzymes important in redox metabolism.
Several studies have reported GSH abnormalities in the brain tissue of individuals with ASD. In one study of 10 individuals with autism and 10 age-matched controls, GSH/GSSG and reduced GSH levels were both significantly lower in the cerebellum and temporal cortex in the autism group compared to controls (Chauhan et al., 2012a). Another study of 15 individuals with autism and 15 controls reported significantly lower GSH and GSH/GSSG levels in the cerebellum and Brodmann area 22 (BA22) in the autism group. Interestingly, these markers of GSH metabolism did not correlate with age, suggesting that the oxidative stress observed was a chronic condition (Rose et al., 2012b).
Enzymes important for redox metabolism have also been found to be altered in brain tissue derived from individuals with autism. In a study of temporal lobe brain samples (BA21) taken from 20 individuals with autism and 25 controls, Tang et al. observed a decrease in superoxide dismutase 2 activity in brain samples from the autism group (Tang et al., 2013). Activities of glutathione peroxidase, glutathione-S-transferase, and glutamate cysteine ligase were each significantly decreased in the cerebellum of the brains of 10 individuals with autism compared to 10 controls, as reported by Gu et al. (2013b). Finally, significantly lower methionine synthase mRNA along with lower levels of homocysteine and cystathionine were observed in the frontal cortex (BA 9, 22, 41, 42, or 46) in the brains of 10 individuals with autism (ages 4-30 years) compared to 10 age- and sex-matched controls, suggestive of adaptive responses to oxidative stress (Muratore et al., 2013).

Some studies have reported oxidative damage to brain lipids in ASD. One of the first studies reported a significant increase in lipofuscin-containing cells, a marker of oxidative stress, in 3 language areas of the brain (BA 22, 39, and 44) in 8 males with autism compared to 7 male controls (López-Hurtado and Prieto, 2008). Another study reported significant immunoreactivity to 3 markers of oxidative damage (carboxyethyl pyrrole (CEP) and iso[4]levuglandin (iso[4]LG)E2-protein adducts as well as heme oxygenase-1) in cerebellar, hippocampal, and BA39 brain samples from 5 subjects with autism but not in any of 5 controls (Evans et al., 2008). Finally, one study of 8 children with autism and 8 controls reported significantly higher levels of lipid hydroperoxides (an oxidative stress marker) in the temporal cortex and cerebellum in the autism group; mitochondrial dysfunction was also observed in this study in the same areas (Chauhan et al., 2011).
Other studies have documented significant oxidation of proteins in brain tissue by examining levels of 3-nitrotyrosine (3NT). One study which examined brain tissue from 9 individuals with autism and 10 controls reported that 3NT in the cerebellum was significantly elevated in the autism group and was significantly correlated (r = 0.80) with cerebellar mercury levels (Sajdel-Sulkowska et al., 2008). In a follow-up study, these investigators found that neurotrophin-3, a neurotrophin critical for normal brain growth and differentiation, was positively correlated (r = 0.83) with 3NT in the cerebellum samples from 8 individuals with ASD compared to 7 controls. The investigators suggested that increased neurotrophin-3 could affect brain development and growth and lead to cerebellar overgrowth, and that this increase in neurotrophin-3 could be due to increased levels of oxidative stress in the developing brain (Sajdel-Sulkowska et al., 2009). The same group examined 2 children with autism and 2 age-matched controls and reported that 3NT varied widely in the autism brain samples compared to a uniformly low level in the control brains; the elevated levels of 3NT in the two autism cases were found in areas of the brain responsible for sensory and motor coordination, speech processing, memory and social behavior (orbitofrontal cortex, Wernicke's area, cerebellar vermis and pons) (Sajdel-Sulkowska et al., 2011). A recent study of 15 individuals with autism and 15 controls reported a significant elevation of 3NT in the cerebellum and BA22 in the autism group (Rose et al., 2012b). Finally, one study of 6 individuals with non-syndromic autism and 6 controls examined brain tissue from the superior temporal gyrus (BA 41/42 or 22) and reported a 3-fold higher level of oxidative damage to mitochondrial proteins in the autism group compared to controls (Palmieri et al., 2010).
Other studies have documented increased oxidation of DNA in the brain tissue of individuals with ASD. In one study of 20 individuals with autism and 25 controls, 8-oxo-deoxyguanosine was elevated in the temporal lobe of the autism group (Tang et al., 2013). In another study of 8 individuals with autism and 7 controls, the levels of 8-hydroxydeoxyguanosine were 63% higher in the cerebellum of the autism group, although this difference did not reach statistical significance (p = 0.23) (Sajdel-Sulkowska et al., 2009). Another study of 15 individuals with autism and 15 controls reported significant elevations in 8-oxo-deoxyguanosine in the cerebellum and BA22 (superior temporal lobe) in the autism group. In this study, DNA damage was found to significantly correlate with the GSH/GSSG redox ratio (Rose et al., 2012b). However, a study of 10 individuals with autism (ages 4-30 years) and 10 age- and sex-matched controls reported similar levels of 8-hydroxyguanosine, a marker of oxidative damage to RNA, in the frontal cortex (BA 9, 22, 41, 42, or 46) (Muratore et al., 2013).
In addition, biomarkers indicative of increased reactive oxygen species have been reported. Abnormal levels of glutathione (low reduced glutathione, elevated oxidized glutathione and a depressed glutathione redox ratio) have been found in two brain areas in 2 studies: the temporal cortex (Chauhan et al., 2012a; Rose et al., 2012b) and cerebellum (Rose et al., 2012b), and in one study a response to oxidative stress (increased heme oxygenase-1) has been reported in the parietal and frontal lobes and the cerebellum (Evans et al., 2008).
Lastly, two studies have demonstrated that certain essential enzymes involved in controlling reactive oxygen species and producing glutathione are reduced in three brain areas, including the temporal lobe (Tang et al., 2013), cerebellum (Tang et al., 2013), and frontal cortex (Muratore et al., 2013).
Despite this converging evidence of oxidative stress in several brain areas in multiple studies, it is clear that there are several limitations to this evidence. First, because of the limited number of brain tissue samples generally available, the majority of studies have only examined a limited number of brain samples (one study only used 2 samples in each group). Thus, although most studies have included a brain sample from the temporal area or cerebellum, specific areas of the brain were not consistently analyzed across studies. However, because many of these studies demonstrate significant effects despite small sample sizes, the effects found are rather robust across ASD subjects and brain regions. This raises the possibility that these processes are pervasive in ASD. Since it is well accepted that the general ASD population is composed of heterogeneous phenotypes, larger sample sizes are needed to determine if subgroups of children with ASD exist who have significant differences in brain redox environments.
Still, there are larger questions that must be answered beyond confirming the notion that oxidative stress is present in the brain of children with ASD. For example, it is possible that the reduced transportation of folate into the brain as a consequence of the folate receptor alpha autoantibody or mitochondrial dysfunction could reduce the function of methylation and glutathione metabolism specifically within the brain, leading to some of the findings described above (Frye et al., 2013c). However, many of these same findings reported for the brain (oxidative damage to lipids, protein and DNA, glutathione abnormalities, reduced function of enzymes essential for regulating oxidative stress) have been found in the blood, immune cells and cell lines derived from individuals with ASD, thereby raising the question of whether these findings are specific for the brain or whether they represent a more general process (Frye et al., 2013c; Frye and James, 2014). Lastly, the etiology of these abnormalities is not clear, as increases in pro-oxidant influences and reductions in antioxidant defenses have both been associated with ASD (Chauhan and Chauhan, 2006) and it is clear that there are no simple genetic abnormalities that account for these findings (Frustaci et al., 2012; Frye and James, 2014).
STUDIES OF MITOCHONDRIAL DYSFUNCTION IN THE ASD BRAIN
Evidence of mitochondrial dysfunction in the brain of individuals with autism has been reported using magnetic resonance imaging techniques. Other studies have examined postmortem brain samples from individuals with ASD compared to controls. Such studies have demonstrated decreases in electron transport chain (ETC) complex and tricarboxylic acid (TCA) cycle enzyme activities, as well as differences in mitochondrial gene expression in the brain tissue of individuals with autism compared to controls (Table 2).
The first study to report evidence of mitochondrial dysfunction in the brain in individuals with ASD was a study of 11 ASD individuals and 11 typically developing (TD) controls which reported abnormal levels of brain markers of mitochondrial function in the dorsal prefrontal cortex measured by Phosphorus-31 magnetic resonance spectroscopy (MRS) (including phosphocreatine, α-ATP, α-adenosine diphosphate, dinucleotides and diphosphosugars) that significantly correlated with the severity of language and neuropsychological deficits in the ASD group but not in the control group (Minshew et al., 1993). Recent studies have demonstrated that Phosphorus-31 MRS is sensitive in detecting metabolic disturbances in children with mitochondrial dysfunction and ASD when muscle and/or brain is examined (Golomb et al., 2014).

Most MRS studies, however, have not used Phosphorus-31 MRS but rather proton (1H) MRS, which measures metabolites such as N-acetylaspartate (NAA). Indeed, studies have consistently demonstrated a decrease in NAA in global white and gray matter of the brain in children with ASD, and in the gray matter of the parietal cortex, the cerebellum, and the anterior cingulate cortex in both children and adults with ASD (Ipser et al., 2012). Other investigators have recently suggested that some of these age-specific changes in gray matter NAA may be different in ASD children compared to TD and developmentally delayed children (Corrigan et al., 2013). NAA is a particularly important brain metabolite as it is not only a marker of neuronal integrity, but a marker of mitochondrial dysfunction (Clark, 1998) since it is exclusively synthesized by the mitochondria of neurons. One study used MRS to measure NAA and lactate in 9 children with ASD and 5 control siblings in the frontal, temporal and cerebellar areas. An elevation in lactate was found in the frontal lobe of one of the 9 children (11%) with ASD, and NAA was reduced in the cerebellum of the ASD group as compared to the control group (Chugani et al., 1999). Another study of 45 children with ASD, 15 children with developmental delay and 13 TD children reported reduced NAA concentrations in the ASD group compared to the TD controls, but no significant difference in mean lactate levels (Friedman et al., 2003). A more recent study of 54 children with ASD, 22 children with developmental delay and 54 TD children found no significant difference in mean lactate levels between groups using 1H-MRS (Corrigan et al., 2012). Although elevations in lactate have been investigated using 1H-MRS in children with ASD, it is likely that the lack of findings in this population is due to the poor sensitivity of 1H-MRS for identifying lactate elevations in the brain (Rossignol and Frye, 2012c).

Several studies have examined ETC function in the brain of children with ASD. One study examined 8 children with autism and 8 controls (4-10 years of age) and reported significantly lower ETC complex activities in the cerebellum, frontal cortex and temporal cortex of the autism group (Chauhan et al., 2011). In another study of temporal lobe brain samples (BA 21) taken from 20 individuals with autism and 25 controls, there were decreased ETC complex I and IV activities and protein content in the ASD group (Tang et al., 2013). In one study of 15 individuals with ASD and 15 controls, the mean activity of the TCA cycle enzyme aconitase was significantly decreased in the cerebellum and temporal lobe (BA22) in the autism group (Rose et al., 2012b).
Finally, another study of 14 individuals with autism and 12 controls reported mean reductions in ETC complexes I (31%) and V (36%) activities as well as in pyruvate dehydrogenase (35%) in the frontal cortex in the autism group. This study also reported a higher mitochondrial DNA (mtDNA) copy number compared to nuclear DNA in 3 different mitochondrial genes in the autism group; these latter findings might be related to mitochondrial proliferation (Gu et al., 2013a).
Several studies have examined changes in the expression of mitochondrial genes in brain samples of individuals with autism. Decreased mitochondrial ETC complex gene expression was found in the cerebellum and BA19 (occipital) brain tissue from 9 individuals with autism compared to 9 controls (Ginsberg et al., 2012). In another study, reduced expression of mitochondrial ETC genes, including 11 genes of complex I, five genes of complex III, five genes of complex IV, and seven genes of complex V, was reported in the anterior cingulate gyrus, thalamus, and motor cortex derived from 8 patients with autism compared to 10 controls (Anitha et al., 2013). Other studies have examined non-ETC mitochondrial gene expression in brain tissue from individuals with autism. For example, decreased expression of mitochondrial genes, including metaxin 2, neurofilament light polypeptide (NEFL), and solute carrier family 25 member 27 (SLC25A27), was found in 8 patients with autism (in the anterior cingulate gyrus, motor cortex and thalamus) compared to 10 controls (Anitha et al., 2012).
One study examined changes in proteins that regulate mitochondrial dynamics. Higher levels of mitochondrial fission proteins and lower levels of mitochondrial fusion proteins were found in temporal lobe brain samples (BA 21) from 20 individuals with autism compared to 25 controls (Tang et al., 2013).
Some studies have implicated an association between mitochondrial dysfunction and oxidative stress in the brain tissue of children with autism by showing that these two problems may coexist in the same brain tissue samples. For example, in temporal lobe brain samples (BA 21), decreased mitochondrial function was found along with increased biomarkers of oxidative damage to DNA and decreased superoxide dismutase 2 activity (Tang et al., 2013). In another study looking at cerebellum, frontal cortex and temporal cortex in 4-10 year olds, brain tissue markers of oxidative stress were observed along with reduced ETC complex activities in the autism group (Chauhan et al., 2011). As previously mentioned, one study of 6 individuals with non-syndromic autism and 6 controls reported a higher level of oxidative damage to mitochondrial proteins in the superior temporal gyrus (BA 41/42 or 22) in the autism group compared to controls. In this study, cytochrome C oxidase (complex IV) activity was also higher in the individuals with autism (Palmieri et al., 2010). Lastly, one study correlated markers of oxidative stress with TCA enzyme function; aconitase activity was inversely correlated with GSH/GSSG in the cerebellum and the temporal lobe (BA 22) in 15 individuals with ASD compared to 15 controls (Rose et al., 2012b).
Overall these studies provide support for mitochondrial dysfunction in the brain of individuals with ASD. MRS studies using both Phosphorus-31 and 1H techniques have examined energy metabolites in the brain of individuals with ASD, although many more studies have used the latter technique. Phosphorus-31 MRS has found abnormal energy metabolites in the frontal cortex (Minshew et al., 1993; Golomb et al., 2014) while 1H-MRS has found a reduction in NAA in the global white and gray matter and the parietal, anterior cingulate and cerebellum areas (Ipser et al., 2012). ETC function has been reported to be depressed in frontal (Chauhan et al., 2011), temporal (Chauhan et al., 2011; Tang et al., 2013), and cerebellar (Chauhan et al., 2011) brain tissue derived from individuals with ASD, with ETC complex I the most commonly reported deficiency. Other studies noted decreases in the activity of non-ETC mitochondrial enzymes (aconitase, pyruvate dehydrogenase) in frontal (Gu et al., 2013a), temporal (Rose et al., 2012b), and cerebellar (Rose et al., 2012b) tissue derived from children with ASD. Depressed expression of ETC genes in the occipital and cerebellar areas and of ETC and non-ETC genes in the cingulate, thalamus and frontal areas has been reported (Anitha et al., 2013). In addition, changes in genes that control mitochondrial dynamics have been noted in the temporal lobe (Tang et al., 2013).
As with studies on oxidative stress, studies of mitochondrial function in the brain of individuals with ASD are mostly based on small numbers of samples, involve a wide variety of methods, and study various regions of the brain without consistency across studies. Despite these limitations, these studies demonstrate that mitochondrial dysfunction is consistently found in the brain of individuals with ASD. The studies on ETC function are consistent with studies performed on muscle tissue as both demonstrate ETC complex I deficiency as the most prevalent ETC complex abnormality. Several studies have provided powerful evidence for the correspondence between oxidative stress and mitochondrial dysfunction in the same brain samples (Chauhan et al., 2011; Rose et al., 2012b; Tang et al., 2013). This is one step forward to understanding the interaction between oxidative stress and mitochondrial dysfunction. Future studies could make such evidence more powerful by correlating these abnormalities with peripheral markers of mitochondrial dysfunction and oxidative stress as well as examining clinical characteristics.
STUDIES OF INFLAMMATION AND IMMUNE DYSREGULATION IN THE ASD BRAIN
Inflammation and immune dysregulation have been observed in both brain tissue and cerebrospinal fluid (CSF) samples from individuals with ASD. In brain tissue, increases in cytokines, expression of immune-related genes, microglial cell activation and other biomarkers of inflammation have also been reported (Table 3).
While some studies have reported microglial activation in individuals with autism compared to controls, others have studied differences in the spatial organization of microglial cells in individuals with autism. Microglia are immune cells in the CNS which are activated to eliminate damaged cells or infectious agents through the process of phagocytosis. However, when the microglia are chronically activated, they may increase inflammation through the release of proinflammatory cytokines and free radicals (Dheen et al., 2007). The first study to examine microglia in ASD reported significant activation of microglia and reactive astroglia in the middle frontal gyrus, anterior cingulate gyrus and cerebellum in 11 patients with autism compared to 6 controls (Vargas et al., 2005). The second study examined the dorsolateral prefrontal cortex in 13 males with autism and 9 controls and reported marked microglial activation in 5 out of the 13 autism cases (38.5%) and mild microglial activation in another 4 cases (30.8%) (Morgan et al., 2010). Finally, one study of 20 men with autism and 20 age- and IQ-matched controls reported evidence of microglial activation using positron emission tomography in multiple brain regions (cerebellum, brainstem, corpus callosum, fusiform gyri, superior temporal gyri, anterior cingulate, orbitofrontal, and parietal lobes) in the autism group (Suzuki et al., 2013).
Other studies have noted differences in microglial spatial organization on the microscopic and macroscopic levels in individuals with ASD. One study reported that microglia in the dorsolateral prefrontal cortex were frequently located closer to neurons in 13 individuals with autism compared to 9 controls (Morgan et al., 2012a), while another study reported that microglial cells were associated with reactive astrocytes in 11 patients with autism (Vargas et al., 2005). Another study examined microglia from the fronto-insular and visual cortex in 11 individuals with autism and 12 controls and reported significantly more microglia in the fronto-insular cortex in the autism group; however, this study did not examine microglial activation (Tetreault et al., 2012). Reactive gliosis has been reported in association with microglia activation in one study of 11 patients with autism (Vargas et al., 2005), while another study of 8 males with autism and 7 male controls suggested reactive gliosis along with a greater density of glial cells in 3 different brain areas associated with language (BA 22: speech recognition; BA 44: speech production; and BA 39: reading) in the autism group (López-Hurtado and Prieto, 2008).
Other studies have examined the expression of inflammatory genes in brain tissue of individuals with autism. One study reported increased transcription levels of several immune-related genes in the superior temporal gyrus of 6 individuals with autism compared to 6 controls, consistent with neuroimmune activation (Garbett et al., 2008). Two studies examined the NF-KappaB pathway in the brains of individuals with autism. One study reported that NF-KappaB expression in the orbitofrontal cortex was increased in 9 individuals with autism compared to 9 controls. Another study of 7 individuals with autism and 7 controls reported no significant difference in NF-KappaB expression in the cerebellum and frontal cortex (Malik et al., 2011b).
One study of 15 individuals with ASD and 15 controls reported elevated 3-chlorotyrosine levels, a biomarker of chronic inflammation, in the cerebellum and temporal cortex in the ASD group, along with increased markers of oxidative stress and mitochondrial dysfunction (Rose et al., 2012b).
Several studies examined cytokines and other inflammatory markers in the CSF of individuals with ASD. The first study to report findings related to inflammation in the CSF of patients with autism reported significantly elevated IFN-gamma, MCP-1, TGF-beta2, and IL-8 in 6 children with autism compared to 9 child and adult controls (Vargas et al., 2005). One uncontrolled study of 10 children with autism who had a history of regression in language and eye contact examined CSF for inflammatory changes. In this case series, the mean TNF-alpha concentration in CSF was 104.10 pg/mL compared to concurrent serum levels of 2.78 pg/mL (Chez et al., 2007), suggesting production of TNF-alpha in the CNS rather than from systemic inflammation or inflammation of a peripheral tissue. Lastly, one study did not find evidence of immune dysregulation in the CSF in individuals with ASD. In this study of 12 children with autism and 27 controls with other neurological disorders, CSF quinolinic acid and neopterin were significantly lower and biopterin was significantly elevated in the autism group.
The evidence reviewed above clearly supports the notion that there are alterations in the immune system upon examination of the brain in individuals with ASD. The strongest evidence for activation of the immune system comes from the studies which demonstrated histological evidence of microglial cell changes in the frontal (Vargas et al., 2005; Morgan et al., 2010, 2012a; Tetreault et al., 2012), cingulate (Vargas et al., 2005) and cerebellar (Vargas et al., 2005) regions. Neuroimaging supports these histological findings (Suzuki et al., 2013). Evidence for disruption in immune regulation is supported by elevations in proinflammatory cytokines in brain tissue from the frontal (Li et al., 2009), cingulate (Vargas et al., 2005), and cerebellar (Wei et al., 2011) regions and in CSF (Vargas et al., 2005; Chez et al., 2007) derived from individuals with ASD and elevations in the expression of genes regulating proinflammatory pathways in the temporal (Garbett et al., 2008) and frontal areas in individuals with ASD. Although some studies have reported some negative or inconsistent results (Vargas et al., 2005; Zimmerman et al., 2005; Malik et al., 2011b), the majority of studies point to an activation of the innate immune system in the brain of individuals with ASD and some of the findings, particularly the cytokine elevations, parallel abnormal elevations in cytokines reported in non-CNS tissue in children with ASD. Although these studies suffer from small sample sizes and inconsistency in brain areas examined, together they provide support for more comprehensive research into the role of inflammation and immune dysregulation in the brain of children with ASD.
DISCUSSION
Although ASD is defined by observations of behaviors and is thus classified as a psychiatric disorder, recent evidence has pointed to physiological abnormalities in ASD, suggesting that ASD has a clear biological basis with features of known medical disorders. A number of studies using peripheral biomarkers have linked oxidative stress, mitochondrial dysfunction and immune dysregulation in individuals with ASD (James et al., 2009a;Mostafa et al., 2010;Zhang et al., 2010;Napoli et al., 2013;Theoharides et al., 2013;Frye et al., 2013a). Recently, studies have examined possible interactions between these abnormalities in the brain of individuals with ASD. Indeed, many of the reviewed studies have been published since 2010, including studies examining oxidative stress (8/12 studies, 67%), mitochondrial dysfunction (11/13, 85%) and immune abnormalities (8/14, 57%).
Furthermore, four studies reported that oxidative stress and mitochondrial dysfunction were linked in the brain of individuals with ASD (Palmieri et al., 2010; Chauhan et al., 2011; Rose et al., 2012b; Tang et al., 2013), whereas two studies reported a connection between oxidative stress and inflammation in the brain (López-Hurtado and Prieto, 2008; Rose et al., 2012b). One of these studies linked low GSH levels, oxidative stress, mitochondrial dysfunction and inflammation in the brain of individuals with ASD (Rose et al., 2012b). Interestingly, only one study reported no evidence of inflammation in the CSF of individuals with ASD, whereas another study found a similar oxidative stress marker in both groups, although several findings suggestive of an adaptive response to oxidative stress were observed in the brains of individuals with autism (Muratore et al., 2013).
A recent systematic review of 112 individuals with ASD and concomitant mitochondrial disease found that only about 21% had a genetic abnormality that could account for the reported mitochondrial problem. Mitochondrial dysfunction found in some individuals with autism could be related to inflammation or immune dysregulation. For example, TNF-alpha is known to inhibit mitochondrial function (Suematsu et al., 2003; Vempati et al., 2007; Samavati et al., 2008) and elevations in TNF-alpha, from individuals with ASD compared to controls, have been reported in lymphocytes (Malik et al., 2011a), amniotic fluid (Abdallah et al., 2013), CSF (Chez et al., 2007) and brain samples (Li et al., 2009). GSH protects mitochondria against the adverse effects of TNF-alpha (Fernandez-Checa et al., 1997) and GSH deficiency can lead to impaired mitochondrial function (Vali et al., 2007). Interestingly, TNF-alpha (also known as cachexin) is known to decrease mitochondrial enzymatic function, including cytochrome c oxidase (complex IV) activity (Remels et al., 2010). Some studies, including one using brain tissue, have reported complex IV overactivity in individuals with ASD rather than an inhibition of this complex (Palmieri et al., 2010; Frye and Naviaux, 2011). It may be that active inflammation results in a compensatory increase in complex IV activity so that this complex becomes overactive once the inflammation has subsided (Frye and Naviaux, 2011); such a scenario could explain why complex IV is found to be both reduced and increased in multiple sclerosis lesions depending upon whether the lesion is undergoing active inflammation or whether the inflammation has subsided (Lu et al., 2000; Mahad et al., 2009).
In addition, oxidative stress may lead to impaired mitochondrial function (Fernandez-Checa et al., 1997). For example, the mitochondrial ETC is the predominant source and major target of free radicals (Fernandez-Checa et al., 1998; Trushina and McMurray, 2007). Free radicals impair mitochondrial function (Fernandez-Checa et al., 1997; Wallace, 1999). Mitochondria are protected from oxidative stress by GSH (Fernandez-Checa et al., 1998), although other antioxidant systems are also important (such as MnSOD). However, mitochondria lack the enzymes to produce GSH and are dependent on GSH production in the cytosol (Enns, 2003; James et al., 2009a), although mitochondria do possess glutathione reductase and therefore can regenerate GSH from GSSG. Depletion of GSH can lead to mitochondrial impairment and make cells more vulnerable to damage from free radicals which originate in the mitochondria (Fernandez-Checa et al., 1997). Several studies have reported lower GSH levels (James et al., 2006, 2009b) along with a lower mitochondrial GSH reserve (James et al., 2009a) in individuals with ASD compared to controls. Oxidative stress appears to be a common feature in individuals with ASD (Chauhan and Chauhan, 2006; Frustaci et al., 2012) and may play a role in the mitochondrial dysfunction reported in some children with ASD (Rossignol and Frye, 2012b).

Table 4 | Regions affected along with function/findings; organized by oxidative stress, mitochondrial dysfunction and immune dysregulation. Thalamus: sensory and motor relaying and gating, cortical rhythm generator. Pons: autonomic function, eye movements, motor and sensory relay; oxidative stress: elevated 3NT levels (Sajdel-Sulkowska et al., 2011).
A number of studies have reported that biomarkers of oxidative stress, mitochondrial dysfunction and immune dysregulation are correlated with autism severity. While none of the studies examining brain tissue in autism correlated findings with autism severity, some studies reported that brain areas affected by oxidative stress, mitochondrial dysfunction and immune dysregulation are areas responsible for brain functions that are typically impaired in ASD (Table 4). For example, areas involved in speech processing (Sajdel-Sulkowska et al., 2011;Rose et al., 2012b), memory, social interaction, and sensory and motor coordination (Sajdel-Sulkowska et al., 2011) were reported as having oxidative stress in some studies. Therefore, some of these physiological abnormalities in the brain may account for certain symptoms of autism. It is possible that treatment of these abnormalities may lead to a reduction in autism behaviors. For example, a number of studies have reported improvements in autism using nutritional supplements and medications which can support mitochondrial function (Geier et al., 2011;Frye and Rossignol, 2012b;Rossignol and Frye, 2012b;Fahmy et al., 2013), reduce oxidative stress (Dolske et al., 1993;Chez et al., 2002;Adams and Holloway, 2004;Rossignol, 2009;Adams et al., 2011;Rossignol and Frye, 2011;Hardan et al., 2012;Ghanizadeh and Moghimi-Sarani, 2013), and decrease inflammation (Stefanatos et al., 1995;Shenoy et al., 2000;Boris et al., 2007;Bradstreet et al., 2007;Asadabadi et al., 2013;Taliou et al., 2013). Additional studies are needed to determine if these types of treatments lead to changes in oxidative stress, mitochondrial dysfunction and immune abnormalities reported in the brains of some individuals with ASD.
CONCLUSIONS AND PERSPECTIVES
Overall, the studies reviewed above provide support for the idea that oxidative stress, mitochondrial dysfunction and inflammation/immune dysfunction, which are physiological abnormalities identified in non-CNS tissue in children with ASD, are also found to affect the CNS. A few studies demonstrated the connection between these physiological abnormalities. However, there were several limitations to the studies reviewed, including small sample sizes and inconsistencies in the techniques and biomarkers studied and the brain areas examined. Because of these limitations, at this time, it is difficult to know if the findings are localized to a certain portion of the brain or whether these abnormalities are more diffuse. Another open question is whether these abnormalities can be generalized to all children with ASD or whether they characterize only a subgroup of children with ASD. However, the consistent positive findings across studies suggest that these effects are not subtle and may be important in the pathological mechanisms that disrupt brain function in ASD.
Of interest is that many of the physiological abnormalities noted in the brain of children with ASD are also found in various other neurocognitive and psychiatric diseases. For example, studies in both Alzheimer and Parkinson disease have implicated impaired ETC function (primarily complex IV in Alzheimer and primarily complex I in Parkinson) and disturbed dynamics of mitochondrial fission and fusion (Moran et al., 2012b), two specific mitochondrial abnormalities that have been reported in ASD. Interestingly, mitochondria are being investigated for their role in axonal degeneration and repair in multiple sclerosis (van Horssen et al., 2012). Along with mitochondrial dysfunction, oxidative stress and inflammation have also been implicated in a wide range of neurodegenerative diseases including Alzheimer, Parkinson, multiple sclerosis, amyotrophic lateral sclerosis and Friedreich's ataxia (Calabrese et al., 2005; Nuzzo et al., 2013). As in ASD, inflammation, oxidative stress and mitochondrial dysfunction are seen in a wide range of psychiatric disorders (Dantzer et al., 2008; Ng et al., 2008; Shao et al., 2008; Burke and Miller, 2011). Thus, these pathophysiological mechanisms appear to be shared by many diseases that have cognitive and behavioral symptoms. Therefore, future research will need to investigate which pathophysiological mechanisms are shared among these diseases. Such knowledge may lead to novel treatments and strategies for preventing these pathophysiological processes, and thus neurocognitive and psychiatric diseases, from developing.
ACKNOWLEDGMENTS
The review did not receive any financial or grant support from any sources. | 9,137.8 | 2014-04-22T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Natural Trans-spliced mRNAs Are Generated from the Human Estrogen Receptor-α (hERα) Gene
The human estrogen receptor-α (hERα) gene is a complex genomic unit exhibiting alternative splicing and promoter usage in a tissue-specific manner. During the investigation of new hERα mRNA variants by rapid amplification of 5′ cDNA ends, we identified a cDNA in which the acceptor site of exon 1A, into which the different leader exons are normally alternatively spliced, was spliced accurately to the 3′ extremity of exon 1A (scrambled 1A→1A hERα cDNA). Reverse transcription-PCR and S1 nuclease mapping analysis revealed that 1A→1A hERα transcripts were not circular RNAs constituted by exon 1A only but corresponded to linear polyadenylated hERα RNAs composed of the eight coding exons of the hERα gene and characterized by a duplication of exon 1A. Genomic Southern blot experiments excluded the hypothesis of duplication of hERα exon 1A in the human genome. Therefore, these data suggested that 1A→1A hERα transcripts were likely generated by trans-splicing. The production of such transcripts by trans-splicing of pre-mRNAs generated from a chimeric gene formed by a single hERα exon 1A, exon 2, and their flanking intronic regions was demonstrated in transient transfection experiments. Therefore, in addition to the alternative cis-splicing, the hERα gene is also subject to natural trans-splicing.
The estrogen receptor-α (ERα) is a ligand-inducible transcription factor that belongs to the steroid, thyroid hormone, and retinoic acid receptor family (1-3). Like all members of this family, it modulates transcription of specific sets of genes by interacting either in a protein/DNA manner with cognate DNA sequences called responsive elements or in a protein/protein manner with other transcriptional factors (1-5).
ERα is a key component of a wide range of biological processes. Its main role is in the control of the reproductive functions such as the establishment and maintenance of female sex differentiation characteristics, reproductive cycle, and pregnancy (6,7). ERα is also involved in liver, fat, and bone cell metabolism, cardiovascular and neuronal activity, and embryonic and fetal development (6,7). Finally, due to the mitogenic effect of its ligand, ERα is intimately associated with the biology of endometrium and breast cancers (8-10).
ER status is used clinically both as a prognostic factor and as a target in the therapy of breast cancers (9). Patients with ER-positive tumors have a better prognosis than those with tumors that lack ER expression. The benefits of anti-estrogen therapy are largely limited to these patients, although quite a number of ER-positive tumors do not respond to endocrine therapy (8,9). Resistance to hormonal therapy has often been associated with genetic defects within ER biology (11,12). Thus, the identification of the molecular mechanisms controlling ERα expression and function, and of those that may impair ERα biology, has turned out to be a crucial step for understanding the involvement of the estrogen receptor in several physiological and pathological processes.
Mapped to the long arm of chromosome 6 (13), the human ERα gene is over 140 kb in length with a coding region split into eight exons (14). Our laboratory has recently shown that this gene is in fact a complex genomic unit exhibiting alternative splicing and promoter usage in a tissue-specific manner (15,16). Using the rapid amplification of cDNA ends (RACE) methodology, we have isolated and characterized several new hERα cDNA isoforms and demonstrated that the hERα transcripts are produced from a single gene by the use of multiple promoters (16). Most of these hERα transcripts (A-F) encode a common ERα protein, hERα 66, but differ in their 5′-untranslated region as a consequence of an alternative splicing of several upstream exons (1B-1F) to a common acceptor site located in exon 1A, 5′ to the translation initiation codon. A new class of hERα transcripts that lack the first coding exon (exon 1A) of the ERα gene was also identified (17). These Δ1A hERα transcripts originate from the E and F hERα promoters and encode the new hERα 46 isoform, which is truncated of the N-terminal 173 amino acids (17).
During the RACE investigation, we amplified a hERα cDNA fragment in which the 3′ extremity of exon 1A was spliced directly to the acceptor site of the same exon 1A that normally receives the alternative upstream exons 1B-1F. In the present study, we demonstrate that this RACE product was not an artifact but rather resulted from the amplification of a hERα cDNA with a duplication of exon 1A. These new hERα transcripts correspond to trans-spliced mRNAs.
RNA Isolation-Total RNA from cell lines and tissues was extracted with TRIzol (Invitrogen) as described by the manufacturer. Total RNA from human mammary gland, human endometrium, human brain, human liver, and human skeletal muscle was purchased from CLONTECH. Human pituitary RNA was kindly provided by Professor J. Duval (Université de Rennes, Rennes, France).
Plasmid Construction-The pCR-hERα Luc plasmid was constructed as follows: the coding region of the luciferase gene was amplified from the pGL2 vector (Promega) using flanking primers with BamHI restriction sites and was then inserted in the BamHI site of pCR 3.1 (Invitrogen) to obtain the pCR 3.1 Luc plasmid. The genomic fragments a, b, and c (see Fig. 6) were amplified from the GHERα 1 and 3 clones in Bluescript (14) using the following primers: XbaI-a5′ (5′-ACGTTCTAGATCGCGTTTATTTTAAGCCCAGTCTT-3′) and XhoI-a3′ (5′-ACGTCTCGAGCAGGTAGTAGGGCACCTGCTG-3′) for fragment a; XhoI-b5′ (5′-ACGTCTCGAGGAGAACGAGCCCAGCGCCTAC-3′) and the M13 primer for fragment b; and the M13 primer and KpnI-c3′ (5′-ACGTGGTACCAGCATAGTCATTGCACACTGC-3′) for fragment c. Fragment a was digested by XbaI and XhoI and subcloned into the XbaI/XhoI site of the pST 1 Blue vector to form pST 1 Blue vector a. Fragment b was digested by XhoI and EcoRI (site contained in Bluescript sequences) and subcloned into the XhoI/EcoRI site of pST 1 Blue vector a to form pST 1 Blue vector a+b. Fragment c was digested by EcoRI and KpnI and subcloned into the EcoRI/KpnI site of pST 1 Blue vector a+b to form pST 1 Blue vector a+b+c. Finally, pST 1 Blue vector a+b+c was digested by NheI and KpnI, and the fragment NheI-a+b+c-KpnI was then inserted in the NheI/KpnI site of the pCR 3.1 Luc plasmid to form the pCR-hERα Luc plasmid.
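As a quick consistency check on the cloning scheme above, each tailed primer should carry the recognition sequence of the enzyme it is named for (TCTAGA for XbaI, CTCGAG for XhoI, GGTACC for KpnI; these are the standard recognition sites). A minimal Python sketch of that check, using the primer sequences quoted above, might look like the following; the script is purely illustrative and not part of the original protocol:

```python
# Check that each cloning primer contains its intended restriction site.
# Recognition sequences are the standard ones; primer sequences are copied
# from the text above (this check is illustrative only).
RESTRICTION_SITES = {"XbaI": "TCTAGA", "XhoI": "CTCGAG", "KpnI": "GGTACC"}

PRIMERS = {
    "XbaI-a5'": ("XbaI", "ACGTTCTAGATCGCGTTTATTTTAAGCCCAGTCTT"),
    "XhoI-a3'": ("XhoI", "ACGTCTCGAGCAGGTAGTAGGGCACCTGCTG"),
    "XhoI-b5'": ("XhoI", "ACGTCTCGAGGAGAACGAGCCCAGCGCCTAC"),
    "KpnI-c3'": ("KpnI", "ACGTGGTACCAGCATAGTCATTGCACACTGC"),
}

for name, (enzyme, sequence) in PRIMERS.items():
    position = sequence.find(RESTRICTION_SITES[enzyme])
    if position >= 0:
        print(f"{name}: {enzyme} site found at position {position}")
    else:
        print(f"{name}: {enzyme} site missing")
```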
RACE-The trans-spliced hERα mRNA (1A→1A) was cloned by an inverse PCR method (18). Reverse transcription of MCF7 total RNA (10 μg) and second-strand synthesis were performed using a commercial kit (Invitrogen) as recommended by the manufacturer except that the hERα gene-specific primer IV (5′-CTCACAGGACCAGACTCCATAATGGTA-3′) located in exon 2 was used instead of the usual oligo(dT) primer (see Fig. 1). Subsequently, the cDNA was circularized in the presence of T4 DNA ligase and subjected to 35 cycles of PCR amplification using the sense primer X (5′-ACTCAACAGCGTGTCTCCGAG-3′) and the antisense primer VI (5′-TTGGATCTGATGCAGTAGGGC-3′) (see Fig. 1). The main PCR product was subcloned in the TA cloning vector pCR 2.1 (Invitrogen) and then sequenced by the dideoxy chain termination method.
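The inverse PCR step works because circularizing the first-strand cDNA turns two primers that face away from each other on the linear molecule into a convergent pair across the ligation junction, so the amplicon carries the formerly unreachable 5′ end. A toy sketch of that geometry, using invented sequences rather than the real hERα cDNA, is shown below:

```python
# Toy illustration of inverse PCR on a circularized cDNA. All sequences are
# invented; the point is only the primer geometry, not the real hERalpha cDNA.
def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

linear_cdna = "AAAACCCCGGGGTTTTACGTACGT"            # hypothetical first-strand cDNA
# Open the circle at an arbitrary point: the old 3'/5' junction now sits at index 12.
circular_view = linear_cdna[12:] + linear_cdna[:12]

sense_primer = "TTTTACGT"                            # anneals toward the old 3' end
antisense_primer = reverse_complement("AAAACCCC")    # anneals toward the old 5' end

start = circular_view.find(sense_primer)
end = circular_view.find(reverse_complement(antisense_primer)) + len(antisense_primer)
print("amplicon spanning the 5'/3' junction:", circular_view[start:end])
```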
2.5 μl of the reverse transcriptase reactions resulting from primer I were used in two rounds of 30-cycle PCR amplification (see Fig. 5B). The 5′ primer and nested primer used were X and VIII, respectively. The 3′ primer II (5′-ATTATCTGAACCGTGTGGGAG-3′) and the nested primer III (5′-CGTGAAGTACGACATGTCTAC-3′) were from the 3′-untranslated region of hERα cDNAs (exon 8). Both rounds of amplification were performed using the Expand™ long template PCR system (Roche Molecular Biochemicals) as recommended by the manufacturer.
Finally, single-stranded cDNAs reverse-transcribed from primer L1 were subjected to either a 30-cycle PCR amplification using the 5′ primer VIII and the 3′ primer L2 (5′-CGGGCCTTTCTTTATGTTTT-3′) (see Fig. 6A) or two rounds of 30-cycle PCR amplification using the 5′ primer X and nested primer XI with the 3′ primer L2 and nested primer VI (see Fig. 6B).
Modified S1 Nuclease Mapping-Biotinylated single-stranded DNA templates were used to prepare highly labeled single-stranded DNA probes by extension from a specific primer with T7 DNA polymerase in the presence of [α-32P]dCTP (3000 Ci/mmol) (19). The template of probe 1A→1A (see Fig. 2A) was an RT-PCR product that was amplified using the upstream primer XII (5′-GGCCCGCCGGCATTCTACAG-3′, located in exon 1A) with the downstream primer V and then subcloned downstream of T7 in the TA cloning vector pCR 2.1 (Invitrogen). A PCR reaction was performed using a biotinylated T7 primer with the M13 reverse primer to obtain the probe 1A→1A template. To prepare the template used to make probe 1A→1A-2 (see Fig. 4B), an RT-PCR reaction was performed with the 5′ primer XIII (5′-GGCCCGCCGGCATTCTACAGGTGGCCCGCCGGTTTCTGAC-3′, the primer mapping the splice junction 1A→1A) and the 3′ primer XIV (5′-CAGATTCCATAGCCATAC-3′, located in exon 2). The RT-PCR product was subcloned downstream of T7 in the TA cloning vector pCR 2.1 (Invitrogen), and a PCR reaction was performed using a biotinylated T7 primer with the M13 reverse primer.
Biotinylated PCR products were bound to streptavidin-coated magnetic beads (Dynal) as recommended by the manufacturer, and the nonbiotinylated DNA strands were removed by denaturation with 0.1 M NaOH. The 1A→1A and 1A→1A-2 S1 single-stranded DNA probes were obtained by extending the respective primers V (in exon 1A) and XIV (in exon 2) annealed to the corresponding biotinylated single-stranded template. After elution of the single-stranded DNA probes by alkaline treatment and magnetic separation, the probe was then purified on a sequencing gel. 10⁵ cpm of probe were coprecipitated with 30 μg of total RNA and then dissolved in 20 μl of hybridization buffer (80% formamide, 40 mM PIPES, pH 6.4, 400 mM NaCl, 1 mM EDTA, pH 8), denatured at 70°C for 10 min and hybridized overnight at 55°C. S1 digestions were then carried out as described previously (20), and the samples were electrophoresed through denaturing polyacrylamide/urea gels.
Southern Blot-20 μg of human genomic DNA (CLONTECH) were digested with EcoRI and BamHI restriction enzymes (Roche Molecular Biochemicals), resolved on a 0.8% agarose gel, transferred to a nylon membrane (Hybond N+, Amersham Biosciences), and hybridized with the random-primed 32P-labeled probe 1A, as recommended by the manufacturers. Probe 1A is a genomic fragment from exon 1A (+171 to +610 (21)) obtained by PCR amplification.
RESULTS
Evidence for the Existence of 1A→1A hERα Transcripts with the Donor Site of Exon 1A Joined to the Acceptor Site of Exon 1A-To amplify new 5′ mRNA extremities of the hERα gene, a 5′ RACE approach based on a variation of the inverse PCR technique was performed on MCF7 hERα cDNA synthesized from primer IV located in exon 2 (Fig. 1A). Sequence analysis of the main RACE product (282 bp) showed that it corresponded to scrambled 1A→1A hERα transcripts with the donor site of exon 1A joined to the acceptor site of exon 1A (Fig. 1B), a site where the alternative upstream exons 1B-1F are normally spliced (16). It should be noted that the hERα cDNA circularization step in the 5′ RACE approach was not required to amplify the 1A→1A hERα RACE product, which might explain its abundance. To confirm the existence of such hERα transcripts, an S1 nuclease mapping experiment was performed on total RNA from various tissues or cell lines. The single-stranded DNA S1 probe 1A→1A was prepared from a 1A→1A hERα RT-PCR product as described under "Experimental Procedures." This probe included 3′ end exon 1A sequences spliced to 5′ end exon 1A sequences and thus would not be completely protected if the standard transcripts were the only species present. After hybridizing probe 1A→1A with the RNA samples and S1 nuclease digestion, two protected fragments of 296 and 316 nucleotides were detected (Fig. 2A). As expected, the smallest fragment corresponded to normal A-F hERα mRNAs, which remained homologous to probe 1A→1A as far as the acceptor splice site of exon 1A and then diverged in their 5′ ends from the probe sequences complementary to 1A→1A transcripts (16). The level and pattern of distribution of these hERα mRNAs were as described previously (16). The second protected fragment corresponded in size to a full protection of a hERα-specific sequence of the probe and therefore resulted from a hybridization with 1A→1A hERα transcripts (Fig. 2A). It was only weakly detected from MCF7 and T47D RNA samples. To study the tissue distribution of 1A→1A hERα transcripts by a more sensitive approach, an RT-PCR analysis was performed on the RNA samples reverse-transcribed from a hERα gene-specific primer (V) chosen in exon 1A (Fig. 2B). This study showed that 1A→1A hERα transcripts were detected by RT-PCR in tissues and cell lines expressing a relatively high level of normal hERα transcripts, for instance the mammary gland and the cell lines MCF7, T47D, and ZR75, which derive from this tissue, the endometrium, and the liver (Fig. 2B). It should be noted, however, that no amplification of 1A→1A hERα transcripts was obtained from ovary despite the detection of normal hERα transcripts by S1 nuclease mapping in this tissue.

FIG. 1. Open boxes indicate the unique (1A-1F) and the two first common (1A, 2) exons encoding each hERα mRNA isoform. The initiation codon AUG and the acceptor splice site position in exon 1A are indicated. The approximate locations of primers used for the RACE are shown by short arrows. Primer IV, located in exon 2, was used to prime hERα cDNA synthesis by reverse transcriptase. After cDNA circularization, the 5′ RACE products were amplified with the sense primer X and the antisense primer VI located in the coding part of exon 1A. The oligonucleotide probe (P1) from the common part of exon 1A was used to confirm the specificity of the PCR products. In B, the hERα RACE products were amplified from MCF7 total RNA as described above. Yeast total RNA was used as a negative control. PCR products were electrophoresed through an agarose gel and transferred by Southern blot to a membrane that was then hybridized with the oligonucleotide probe P1 as described under "Experimental Procedures." Positions of migration of the molecular size markers are shown on the left side of the figure. In C, the sequence of the main RACE product revealed a hERα cDNA with the donor site of exon 1A joined to the acceptor site of exon 1A.

FIG. 2. 1A→1A hERα transcript distribution analysis. In A, S1 nuclease mapping analysis was performed as described under "Experimental Procedures" with the single-stranded probe 1A→1A and 30 μg of total RNA from various sources as indicated at the top of each lane. Yeast total RNA was used as a negative control. The location and the size of the single-stranded probe 1A→1A and the protected fragments obtained after S1 digestion are indicated. The probe was specific for 1A→1A hERα transcripts but was also able to partially protect the A/F hERα mRNA isoforms (Σ-(1A→1A hERα transcripts)) up to the splice site position. B, RT-PCR analysis. Approximate locations of primers are shown by short arrows. Primer V, located in exon 1A, was used to prime hERα cDNA synthesis by reverse transcriptase using total RNA from various sources as indicated at the top of each lane. Yeast total RNA was used as a negative control. The PCR amplification of 1A→1A hERα cDNA was performed with the sense primer X and the antisense primer VI, both located in exon 1A. An oligonucleotide probe P1 was used to confirm the specificity of the PCR products.
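The fragment sizes reported above follow a simple rule: S1 protection extends only as far as the transcript remains homologous to the probe, so the normal A-F mRNAs protect 296 nucleotides (up to the exon 1A acceptor site) while a 1A→1A transcript protects the full 316-nucleotide hERα-specific region. A minimal sketch of that interpretive logic (the helper function is an illustration, not an analysis of real data) might read:

```python
# Simplified model of the S1 readout for probe 1A->1A: protection extends only
# over the region where probe and transcript remain homologous. The two lengths
# are the values quoted in the text.
HER_SPECIFIC_PROBE_REGION = 316      # nt of probe complementary to hERalpha sequence
HOMOLOGY_UP_TO_ACCEPTOR_SITE = 296   # where normal A-F mRNAs diverge from the probe

def protected_fragment(transcript_class: str) -> int:
    if transcript_class == "1A->1A":
        return HER_SPECIFIC_PROBE_REGION       # probe fully protected
    if transcript_class in {"A", "B", "C", "D", "E", "F"}:
        return HOMOLOGY_UP_TO_ACCEPTOR_SITE    # protection stops at the exon 1A acceptor site
    raise ValueError(f"unknown transcript class: {transcript_class}")

for transcript in ("A", "1A->1A"):
    print(f"{transcript}: {protected_fragment(transcript)} nt protected fragment")
```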
1A→1A hERα Transcripts Likely Result from a Trans-splicing Reaction-To determine whether a hERα exon 1A duplication is present in the genome, human genomic DNA was digested with EcoRI and BamHI restriction enzymes and hybridized with an exon 1A probe (Fig. 3). The results of the Southern blot of both genomic digestions revealed a single hybridizing band, the size of which was in total agreement with the restriction enzyme map of the GHERα clones published previously (14). Therefore, exon 1A of the hERα gene was not duplicated.
Two other mechanisms might explain the detection of 1A→1A hERα transcripts: 1) the formation of circular 1A→1A hERα transcripts constituted by exon 1A only or 2) a trans-splicing reaction occurring naturally between two hERα pre-mRNAs (Fig. 4A). In this last case, 1A→1A hERα transcripts should contain additional exons of the hERα gene. To discriminate between these two hypotheses, an S1 nuclease mapping experiment was carried out using a probe designed to protect trans-spliced hERα transcripts with a 1A→1A-2 exon organization. Thus, if a trans-splicing reaction occurs for the hERα gene, then the corresponding protected fragment would be 624 nucleotides in size. On the other hand, the protection of a circular 1A→1A hERα transcript by probe 1A→1A-2 would give rise to a fragment of the size of exon 1A, 521 nucleotides. The S1 nuclease mapping analysis of MCF7 total RNA by probe 1A→1A-2 is shown in Fig. 4B. In addition to the 604-nucleotide fragment that results from a protection of probe 1A→1A-2 by normal hERα transcripts up to the acceptor splice site of exon 1A, the results also showed a protected fragment of 624 nucleotides in size, thus demonstrating the trans-splicing origin of the scrambled 1A→1A hERα transcripts. This result was strengthened by the detection of the 1A→1A-2 protected fragment in the poly(A+) RNA fraction, which indicated that 1A→1A hERα transcripts are polyadenylated molecules. Finally, no protected fragment corresponding in size to circular 1A→1A hERα transcripts was seen in this S1 nuclease mapping experiment.
Since 1A→1A hERα transcripts should contain the remaining exonic segments of the hERα gene that a trans-splicing process would be likely to generate, the exonic organization of 1A→1A hERα transcripts was investigated. Firstly, to verify that a full exon 1A was present in 5′ to the 1A→1A junction, hERα transcripts from various sources of RNA were reverse-transcribed from primer V in exon 1A, and two rounds of PCR were then performed to amplify a fragment of 1A→1A hERα cDNAs containing the anticipated sequences as illustrated in Fig. 5A. A PCR product of the expected size was amplified from the tissues or the cell lines in which the 1A→1A hERα transcript was detected previously by RT-PCR (Fig. 5B). The specificity of this product was further confirmed by Southern blot using the exon 1A-specific oligonucleotide probe P2. Secondly, to demonstrate that full-length 1A→1A hERα transcripts had hERα sequences from exon 1A through to exon 8 (3′ to the 1A→1A junction), PCR analysis was performed on single-strand cDNAs synthesized using a hERα gene-specific primer (I) chosen from the hERα mRNA 3′-untranslated region sequences (exon 8, Fig. 5B). 1A→1A hERα cDNAs were amplified by two rounds of PCR using the 3′ primer II and nested primer III located upstream from primer I in exon 8 in combination with the 5′ primer X and nested primer VIII (Fig. 5B). It should be noted that the first round of PCR amplified both 1A→1A and normal hERα cDNAs. Only the second round allowed 1A→1A hERα cDNAs to be specifically amplified. Results showed that the size of the amplified cDNAs was as expected, and after Southern blotting, the hybridization of these PCR products with various oligonucleotide probes recognizing specifically the eight different coding exons of the hERα gene demonstrated that sequences from exon 1A to exon 8 were present in 1A→1A hERα transcripts (Fig. 5B only shows the results obtained with the exon 1A-specific oligonucleotide probe P2). In conclusion, these data clearly demonstrated the existence of a new class of hERα mRNAs that presents a duplication of exon 1A and which is likely generated by a trans-splicing event between two hERα pre-mRNAs.
A Chimeric Gene Containing hERα Exon 1A, the 5′ Part of Exon 2, and Their Flanking Intronic Sequences Generates Trans-spliced 1A→1A Transcripts-To further define the mechanism generating 1A→1A hERα transcripts, a chimeric gene called pCR-hERα Luc was constructed and analyzed for its ability to generate 1A→1A trans-spliced transcripts after transient expression in the MCF7 cell line. pCR-hERα Luc was formed by the cytomegalovirus promoter, the hERα genomic region from exon 1B to an EcoRI restriction site in the 3′-flanking intronic region of exon 1A, a part of hERα exon 2 and its 5′-flanking sequence to an EcoRI restriction site, the luciferase coding region, and the 3′-untranslated region of the bovine growth hormone (see "Experimental Procedures" for the construction of pCR-hERα Luc) (Fig. 6A). To discriminate 1A→1A hERα cDNAs generated from the chimeric gene from those arising from the standard hERα gene expression in MCF7 cells, an XhoI restriction site was created in the 3′ extremity of exon 1A of the pCR-hERα Luc gene. Thus, total RNA prepared from MCF7 cells transiently transfected with pCR-hERα Luc was used to reverse-transcribe hERα Luc mRNA from primer L1 located in the 5′ end of the luciferase coding region. As illustrated in Fig. 6A, hERα Luc mRNA was accurately matured since the size of the hERα Luc cDNA PCR-amplified between exon 1A and the luciferase coding region indicated that exon 1A was spliced as expected to exon 2 with the removal of the ~2.5-kb intronic region. Then, in an attempt to amplify trans-spliced 1A→1A hERα Luc cDNAs, two rounds of PCR were performed on hERα Luc cDNAs reverse-transcribed from primer L1 as described in Fig. 6B. The result showed one main PCR product, the size of which was in agreement with the one expected from the amplification of 1A→1A cDNAs (Fig. 6B). This result was confirmed by Southern blotting and by hybridization of the PCR product with two oligonucleotide probes, P1 and P3, that recognized the 5′ and 3′ extremities of exon 1A, respectively. Finally, the digestion of the amplified cDNA by the restriction enzyme XhoI generated two fragments of the expected size, which were selectively identified by the two probes P1 and P3, thus confirming the pCR-hERα Luc chimeric gene origin of the amplified trans-spliced 1A→1A cDNA. Used as a control, trans-spliced 1A→1A hERα cDNAs generated from the endogenous hERα gene were also analyzed in parallel. These data demonstrated that a single model gene formed by hERα exon 1A, exon 2, and their flanking intronic regions is able to generate trans-spliced 1A→1A hERα transcripts and therefore contains all information required for this process.
FIG. 4. 1A→1A hERα transcripts are linear polyadenylated molecules. A, schematic diagram of the two hypotheses proposed to explain the detection of 1A→1A hERα transcripts: 1) circular RNAs constituted by exon 1A only or 2) linear RNAs formed by a trans-splicing reaction occurring naturally between two hERα pre-mRNAs. B, S1 nuclease mapping experiment carried out to discriminate between these two hypotheses. It was performed as described under "Experimental Procedures" with the single-stranded probe 1A→1A-2, designed to protect linear trans-spliced hERα transcripts with a 1A→1A-2 exon organization, and 30 μg of MCF7 total RNA, 30 μg of MCF7 poly(A)− RNA, or 0.1 μg of MCF7 poly(A)+ RNA mixed with 30 μg of yeast total RNA. Yeast total RNA (30 μg) was used as a negative control. The location and the size of the single-stranded probe 1A→1A-2 and the protected fragments obtained after S1 digestion are indicated. The probe was specific for 1A→1A hERα transcripts but was also able to partially protect the A/F hERα mRNA isoforms (Σ − (1A→1A hERα transcripts)) up to the splice site position. The probe was designed to contain vector sequence in its extremity (denoted by the thinner black line) to discriminate between undigested probes (<) and specific protected fragments. Positions of migration of the molecular size markers are shown on the left side of the figure.
DISCUSSION
In this investigation, we have demonstrated that the human estrogen receptor-α gene is able to generate novel hERα mRNAs by trans-splicing. The new class of hERα mRNAs presents a duplication of exon 1A and is referred to as trans-spliced 1A→1A hERα transcripts.
Heterogeneity in the 5′ ends of mRNAs generated by alternative promoter usage and splicing is a common feature among the members of the steroid/thyroid hormone/retinoic acid receptor gene family (22-25). For the human, mouse, rat, or chicken ERα genes, several 5′ end variants of ERα mRNAs produced by the splicing of alternative untranslated upstream exons to the first translated exon were reported (16, 26-28). Most of these ERα mRNA variants were identified by 5′ RACE. Surprisingly, when applied to human ERα mRNAs, this approach allowed us to amplify a new type of hERα cDNA. This variant, in contrast to the other amplified hERα cDNAs (A-F hERα cDNAs (16)), presented the 3′ part of exon 1A in 5′ to the acceptor of exon 1A, where normally the alternative upstream exons 1B-1F are spliced. The scrambled 1A→1A hERα cDNA showed an accurate junction between the donor site of exon 1A and the acceptor site of exon 1A, which might indicate that it arises from a natural phenomenon rather than a RACE artifact.
The existence of the 1A→1A hERα transcripts in estrogen target cells was further confirmed by RT-PCR and S1 nuclease mapping experiments. In contrast to a previous report on a genomic rearrangement in an estrogen-independent subclone of the MCF7 human breast cancer cell line in which hERα exons (exons 6 and 7) were duplicated in an in-frame fashion (29), the hypothesis of a duplication of hERα exon 1A in the human genome was ruled out after a genomic Southern blot experiment. Results always revealed a single hybridizing band, demonstrating that this segment of the hERα gene is not duplicated. Furthermore, 1A→1A hERα transcripts were detected in several healthy tissues, excluding a genomic rearrangement origin associated with a pathological process.
Exon scrambling is an event that has often also been associated with circular RNA molecules (30,31). Such RNAs were described for the testis-determining gene Sry in adult mouse testis (32), the human ets-1 gene (33), the human cytochrome p-450 2C18 gene in epidermis, and the rat androgen-binding protein gene in testis (31). Exons skipped during alternative pre-mRNA processing could indeed be present in a circular molecule that has the donor site of the 3′ exon joined to the acceptor site of the 5′ exon. Accordingly, if 1A→1A hERα transcripts are the result of such a process, it would be expected that they are circular molecules composed of one single exon, exon 1A, joined at the 5′ and 3′ splice junctions, as in the case of the circular Sry transcript in adult mouse testis (32). However, the presence in the transcripts of the other coding exons of the hERα gene, as well as the fact that they are polyadenylated molecules, clearly demonstrates that 1A→1A hERα transcripts are not circular RNAs but rather linear molecules that probably result from a trans-splicing event between two hERα pre-mRNAs.
FIG. 5. Exonic organization of 1A→1A hERα transcripts. A, RT-PCR analysis of the exonic organization in 5′ to the 1A→1A junction. Primer V, located in exon 1A, was used to prime hERα cDNA synthesis by reverse transcriptase using total RNA from various sources as indicated at the top of each lane. Yeast total RNA was used as a negative control. Primer VIII, which is located between the acceptor splice site of exon 1A and the ATG, was then used in a first round of PCR amplification with primer VI, which is nested to primer V in exon 1A. This PCR reaction should give rise to two hERα products as indicated on the schematic diagram. The shortest product corresponds to an amplification inside of exon 1A. To specifically reamplify 1A→1A hERα cDNAs, a second round of PCR reaction was performed with primers IX and VII. B, RT-PCR analysis of the exonic organization in 3′ to the 1A→1A junction. Primer I, located in exon 8, was used to prime hERα cDNA synthesis by reverse transcriptase using total RNA from various sources. Yeast total RNA was used as a negative control. Primer X, located in the 3′ part of exon 1A, was used in a first round of PCR amplification with primer II in exon 8. As mentioned previously for panel A, the first PCR reaction should give rise to two hERα products (see the schematic diagram). To specifically reamplify 1A→1A hERα cDNAs, a second round of PCR reaction was performed with primers VIII and III. An oligonucleotide probe from exon 1A (P2) was used to confirm the specificity of the PCR products in panels A and B. Positions of migration of the molecular size markers are shown on the left side of the figures.
Trans-splicing is a post-transcriptional process occurring during mRNA maturation in which RNA segments of two independent transcripts are spliced together to generate a new mRNA species. This mechanism was first demonstrated in trypanosomes (34) and subsequently reported in nematodes (35), flatworms (36), and plant cell organelles (37). In mammalian cells, trans-splicing events were suggested by computer analysis (38) and then by in vitro and in vivo experiments (39-42). Mammalian cell extracts have been demonstrated to have the ability to join RNA segments together by trans-splicing (39). More recently, trans-splicing reactions between synthetic pre-mRNA substrates were shown in in vitro studies and require either a downstream 5′ splice site or exonic enhancers (40,41). Finally, SV40 transcripts trans-spliced to each other were detected in cells transformed by an early SV40 DNA fragment (42). Naturally occurring pre-mRNA trans-splicing in mammalian cells has not been frequently reported. The first indications of its existence were based on cDNA sequencing experiments, but alternative cis-splicing could not be excluded. Recently, additional reports strengthened the idea that trans-splicing events occur in mammalian cells and contribute to mRNA generation. In rat liver cells, Caudevilla et al. (43) have identified carnitine octanoyltransferase mRNA variants with a duplication of exon 2 or exons 2 and 3, which is not found in genomic DNA. Splicing experiments carried out in vitro with exon 2 plus the 5′- and 3′-adjacent intronic sequences indicated that accurate joining of two exons 2 occurs by the trans-splicing mechanism. An mRNA variant of the acyl-CoA:cholesterol acyltransferase was also shown to derive from two discontinuous precursor RNAs produced from different chromosomes (44).
Corroborating these data, we demonstrated in the present study that natural trans-spliced 1A→1A mRNA is generated from the hERα gene, and such a process can be mimicked in vivo with a chimeric gene containing hERα exon 1A, the 5′ part of exon 2, and their flanking intronic sequences.
FIG. 6. A chimeric gene pCR-hERα Luc containing hERα exon 1A, the 5′ part of exon 2, and their flanking intronic sequences generates trans-spliced 1A→1A transcripts. In A, the plasmid pCR-hERα Luc was obtained after insertion of the PCR-amplified hERα genomic fragments a-c in pCR-Luc between the CMV promoter and the luciferase coding region, as described under "Experimental Procedures." The XhoI restriction site created in the 3′ extremity of exon 1A is indicated by an asterisk. To test the maturation of hERα Luc transcripts, total RNA prepared from MCF7 cells transiently transfected with pCR-hERα Luc was used to reverse-transcribe hERα Luc mRNA from primer L1 located in the luciferase coding region. A 35-cycle PCR reaction was then performed with primers L2 and VIII on hERα Luc cDNA as well as on the pCR-hERα Luc DNA used as a control. B, RT-PCR amplification of trans-spliced 1A→1A transcripts generated from the pCR-hERα Luc chimeric gene. Total RNA prepared from MCF7 cells transiently transfected with pCR-hERα Luc was used to reverse-transcribe hERα Luc mRNA from primer L1. Primer X, which is specific for exon 1A, was then used in a first round of PCR amplification with primer L2, which is nested to primer L1. A second round of PCR reaction was performed with the primers XI and VI as illustrated on the schematic diagram. As a control, the endogenous trans-spliced 1A→1A hERα transcript was also reverse-transcribed from primer IV and PCR-amplified using the primers XI and VI. After purification, the PCR products were or were not digested with the restriction enzyme XhoI, electrophoresed through an agarose gel, and transferred by Southern blot to a membrane, which was then hybridized with the oligonucleotide probes P1 and P3 specific for the 5′ and 3′ regions of exon 1A, respectively.
If 1A→1A hERα transcripts result from a trans-splicing event, it is likely that this process would generate two products: a long mRNA having a duplication of exon 1A (1A→1A hERα mRNA) and a short mRNA lacking exon 1A (Δ1A hERα mRNA). RT-PCR analysis clearly confirmed, in several ERα-positive tissues or cell lines, the presence of the long 1A→1A trans-spliced hERα mRNA composed of the eight coding exons of the hERα gene including a duplication of the first coding exon, exon 1A. Upstream of the main open reading frame that encodes hERα 66, 1A→1A hERα mRNA presents a second open reading frame, shared by the trans-spliced exon 1A and the 5′ part of the acceptor exon 1A, which would encode a protein of 156 amino acid residues, equivalent to the A/B domain of hERα. Such a protein would be unable to bind DNA directly but would contain the transactivation domain AF1, which could act constitutively by interacting with coactivators such as SRC-1 or the p68 and p72 RNA helicases (45,46) and as a result could act independently or could modulate the binding of the hERα protein to its target sites. Short mRNAs lacking exon 1A have been reported previously (17). They were shown to originate from the E and F hERα promoters and to be produced by the splicing of exon 1E directly to exon 2. The underlying mechanism was assumed to be alternative cis-splicing, but trans-splicing cannot be excluded. The Δ1A hERα transcripts encode an isoform of hERα, hERα 46, lacking the first 173 amino acids present at the N terminus of hERα 66 and therefore devoid of the A/B domain having the AF1 transactivation function. Detected in the ERα-positive breast carcinoma cell line MCF7, hERα 46 acts as an AF1-competitive inhibitor of hERα 66 (17).
In conclusion, the hERα gene was already known to be a genomic unit exhibiting significant alternative cis-splicing activities between the different untranslated first exons (16) and the coding exons (47). In this report, we demonstrate for the first time that the hERα gene is also subject to trans-splicing, a mechanism that generates hERα mRNA variants with a different exonic organization and potentially encoding new ERα proteins. In light of the central role that the ERα gene plays in the physiology of several tissues, in particular those involved in the reproductive function, such as endometrium and breast tissue, it is obvious that an increase in the occurrence of such a process might have physiological and/or pathological consequences for these tissues. The mechanisms that may modulate the trans-splicing activity of the ERα gene remain to be defined. | 7,945 | 2002-07-19T00:00:00.000 | [
"Biology"
] |
A Joint Automatic Modulation Classification Scheme in Spatial Cognitive Communication
Automatic modulation classification (AMC) is one of the critical technologies in spatial cognitive communication systems. Building a high-performance AMC model in intelligent receivers can help to realize adaptive signal synchronization and demodulation. However, tackling the intra-class diversity problem is challenging for AMC based on deep learning (DL), as 16QAM and 64QAM are not easily distinguished by DL networks. In order to overcome this problem, this paper proposes a joint AMC model that combines DL and expert features. In this model, the former builds a neural network that can extract the time-series and phase features of in-phase and quadrature component (IQ) samples, which improves the feature extraction capability of the network compared with similar models; the latter achieves accurate classification of QAM signals by constructing effective feature parameters. Experimental results demonstrate that our proposed joint AMC model performs better than the benchmark networks. The classification accuracy is increased by 11.5% at a 10 dB signal-to-noise ratio (SNR). At the same time, it also improves the discrimination of QAM signals.
Introduction
As NASA enters a new era of space exploration, communication links are shifting from point-to-point communications to network topologies. There are more diverse types of wireless links in space communications, such as planetary earth to earth, planetary earth to spacecraft, and space to earth [1]. To improve situational awareness, we seek to develop an AMC algorithm based on DL that is capable of identifying common signals in satellite communications and thus can efficiently identify users and distinguish interference sources.
Traditional AMC methods are mainly divided into two categories: the likelihood-based (LB) AMC [2,3] and the feature-based (FB) AMC [4][5][6]. The LB-AMC approach is based on the Bayesian theory in order to obtain the best estimate of modulation by minimizing the probability of misclassification, but it has the disadvantages of high computational complexity and narrow applicability. The purpose of FB-AMC is to find features that can distinguish different modulated signals, such as wavelet domain features [4], cyclic spectrum [5], and high-order statistics [6]. Furthermore, the performance of the FB classifier is significantly influenced by the quality of the features.
DL is a data-driven artificial intelligence approach that uses multilayer neural networks to extract data features automatically. O'Shea et al. [7] first proposed using a convolutional neural network (CNN) to process the IQ signals directly, and the average recognition rate was 75% at 10 dB in the RadioML 2016.10a dataset which includes 11 modulation classes. However, this CNN network only has two layers, so its classification performance is limited. Subsequently, O'Shea et al. [8] proposed an improved ResNet network to improve recognition performance. In addition, multiple deep CNNs were applied to boost the performance of AMC in [9][10][11]. However, these CNNs mostly used convolutional kernels with 3 × 1 dimensions, which cannot capture the long-term temporal features of IQ signals. Meanwhile, West [12] et al. first proposed a CLDNN network combining a CNN network and a long short-term memory (LSTM) network, which can extract long-term temporal features. This network got an average recognition accuracy of 85% at 0 dB in the RadioML 2016.10a dataset.
More and more neural networks are being used to improve AMC's performance. However, these networks are not very good at identifying intra-class diversity signals. Yu Wang et al. [13] proposed a data-driven fusion model which combines two CNN networks, one trained on the IQ signal dataset, and the other trained on the constellation map dataset. Inspired by face recognition, Hao Zhang et al. [14] proposed a two-stage training network that improves the model's ability to capture small intra-class scattering. The central loss function supervises the first stage, and the cross-entropy loss function supervises the second stage. Kumar Yashashwi et al. [15] used an attention model to synchronize and normalize signals, which improves the model's recognition of intra-class diversity signals. However, these works all face the problem of poor generalization ability. If we substitute another dataset, these methods may not be applicable.
Summarizing the previous work, we find that the improvement of neural networks in AMC is achieved by improving the ability to extract signal timing features. However, IQ signals contain not only timing features but also phase features. Therefore, when building a neural network, we consider extracting both the timing and phase features of the signal. In addition, neural networks are weak in extracting intra-class features, so cascading networks trained on different datasets [13] or supervised by other loss functions [14] still results in limited generalizability. Therefore, we choose to group 16QAM and 64QAM signals, which have similar intra-class features, into the same class and identify them through the expert feature method, so as to solve the problem indirectly. Figure 1 shows a satellite intelligent receiver system based on a zero-IF architecture. The AMC-driven intelligent receiver can identify the modulation type of the original signal without any prior information. Moreover, it can help subsequent modules, such as symbol synchronization, channel equalization, and signal demodulation [16]. The workflow of this intelligent receiver is as follows: the RF signal first passes through the band-pass filter (BPF) and low-noise amplifier (LNA) for frequency selection and amplification. Then, the signal is mixed with the local oscillator to generate the in-phase component I and the quadrature component Q. Next, the I and Q signals are amplified, filtered, sampled, and decimated to create the digital IQ baseband signal. Finally, the acquired IQ baseband signal is input into the AMC model to complete the identification of the signal modulation type.
The Joint AMC Model
This paper proposes a joint automatic identification model that combines the IQCLNet network and expert feature methods. As shown in Figure 2, the model is used to identify 11 modulated signals widely used in modern communication systems. When the receiver acquires unknown signals, the first stage uses IQCLNet to identify them; in this stage, QAM16 and QAM64 are treated as the same class, named QAMS. Then, the second stage uses the expert feature method to construct parametric features in order to identify the QAMS signals.
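To make the two-stage decision flow concrete, the sketch below shows how such a joint model could be wired together. It is only an illustration of the structure described above: the class list reflects the RadioML 2016.10a modulations with 16QAM/64QAM merged into "QAMS", and `stage1_model` and `qam_refiner` are placeholder callables standing in for the IQCLNet network and the expert-feature classifier developed later in the paper.

```python
import numpy as np

# Stage-1 label set: the 11 RadioML 2016.10a classes with 16QAM/64QAM merged into "QAMS".
STAGE1_CLASSES = ["8PSK", "AM-DSB", "AM-SSB", "BPSK", "CPFSK",
                  "GFSK", "PAM4", "QAMS", "QPSK", "WBFM"]

def joint_amc_predict(iq, stage1_model, qam_refiner):
    """Two-stage decision of the joint AMC model (illustrative sketch).

    iq           : array of shape (2, N) holding the I and Q baseband samples.
    stage1_model : callable returning class probabilities over STAGE1_CLASSES.
    qam_refiner  : callable mapping the same IQ samples to "QAM16" or "QAM64"
                   (e.g., the mu42-threshold classifier sketched further below).
    """
    probs = stage1_model(iq)                     # stage 1: DL network on raw IQ samples
    label = STAGE1_CLASSES[int(np.argmax(probs))]
    if label != "QAMS":
        return label                             # non-QAM classes are decided by stage 1
    return qam_refiner(iq)                       # stage 2: expert feature separates the QAM orders
```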
IQCLNet Network
In electromagnetic signal recognition, most DL network structures are borrowed from the network design in image identification. In image processing, the input pixel data format is M × N, which has an isotropic nature in the spatial relationship. Thus, the shape of the convolution kernel is generally square to perform the symmetric operation between two dimensions. However, in signal processing, the input IQ data format is N×. N is the time sampling point, reflecting the time series characteristics. Two corresponds to I and Q, reflecting the phase characteristics [17]. IQ data do not have the same nature between the two dimensions, so they cannot be operated symmetrically as image processing.
At present, the processing of IQ signals by DL networks mostly uses one-dimensional convolution kernels to extract the time-series features of the signals [7,[18][19][20] while ignoring the phase characteristics, so we must design a network which can extract the different features in two dimensions of the IQ signals. In this paper, we create the network structure as shown in Figure 3. Within each channel, the first convolutional layer uses a 1 × 2 convolutional kernel to extract the phase features of the signal, and the output data dimension is changed from N × 2 to N × 1; then, a 3 × 1 convolutional kernel is used to extract the short-time sequence features of the signal, followed by a cascaded layer of LSTM to extract the long-time sequence features of the signal [21]. Finally, the fully connected layer maps the output data to a more discrete space for classification.
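A minimal PyTorch sketch of this arrangement is given below. Only the kernel shapes (1 × 2 for the phase dimension, 3 × 1 for the time dimension) and the layer order follow the text; the channel count, LSTM width, number of classes, and the use of a single branch are illustrative assumptions and are not the values of the authors' Table 1.

```python
import torch
import torch.nn as nn

class IQCLNetSketch(nn.Module):
    """Conv(1x2) -> Conv(3x1) -> LSTM -> adaptive average pooling -> one FC layer."""

    def __init__(self, n_classes=10, channels=64, lstm_hidden=128):
        super().__init__()
        # Input: (batch, 1, N, 2), where N is the number of time samples and 2 = I/Q.
        self.phase_conv = nn.Sequential(                 # 1x2 kernel fuses I and Q: width 2 -> 1
            nn.Conv2d(1, channels, kernel_size=(1, 2)),
            nn.BatchNorm2d(channels), nn.ReLU())
        self.time_conv = nn.Sequential(                  # 3x1 kernel over the time axis
            nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0)),
            nn.BatchNorm2d(channels), nn.ReLU())
        self.lstm = nn.LSTM(input_size=channels, hidden_size=lstm_hidden, batch_first=True)
        self.pool = nn.AdaptiveAvgPool1d(1)              # mean of each channel over time
        self.fc = nn.Linear(lstm_hidden, n_classes)      # single fully connected classifier

    def forward(self, x):                                # x: (batch, 1, N, 2)
        x = self.time_conv(self.phase_conv(x))           # -> (batch, C, N, 1)
        x = x.squeeze(-1).permute(0, 2, 1)               # -> (batch, N, C) for the LSTM
        x, _ = self.lstm(x)                              # -> (batch, N, H)
        x = self.pool(x.permute(0, 2, 1)).squeeze(-1)    # -> (batch, H), independent of N
        return self.fc(x)                                # Softmax is applied in the loss / at inference
```

Because the pooling collapses the time axis to a single value per channel, the same classification head accepts IQ records of different lengths, which matches the compatibility argument made in the next paragraph.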
In the classification stage, an adaptive average pooling layer is adopted first, and the mean value of each channel's feature sequence is used as a new feature value; this not only reduces the parameters of the fully connected layer but also improves the generalization performance of the network. Next, only one fully connected layer is used for classification in order to reduce the number of parameters and the amount of computation. The specific implementation is as follows: first, the output size of the adaptive average pooling is set to be consistent with the number of channels C, and then a fully connected layer is formed with input size C and output size equal to the number of categories L. This operation makes the network compatible with IQ signals of different input lengths. Between the output of the convolutional layer and the activation function, a Batch Normalization (BN) operation is added to increase the robustness and convergence speed of the model; the feature extraction layers use ReLU as the activation function, and Softmax is the output function of the classification layer. The specific parameters of the IQCLNet network are shown in Table 1.
Expert Feature Method
The amplitude-phase modulated signal model of the QAM signal at the receiver side is expressed as follows, where r(t) is the received signal; g(t) is the impulse response of the shaping filter; T_0 is the codeword period; f_c is the carrier frequency; θ_c is the carrier phase; ε is the timing offset; N is the number of observation symbols; α_i is the zero-mean smooth complex random sequence, i.e., the transmit codeword sequence; √s and φ_i are the amplitude and phase of α_i, respectively; and ω(t) is stationary additive Gaussian noise with a zero mean and a one-sided power spectral density N_0 [22].
QAM signal modulation information is reflected not only in the phase variation but also in the amplitude variation. However, QAM signals have many types of phase change, so phase alone is not suitable for intra-class identification. Under ideal conditions, the instantaneous amplitude of 16QAM, 64QAM, and 256QAM signals takes 3, 9, and 32 distinct values, respectively, which differ significantly. The authors in [23] mention that the zero-center normalized instantaneous amplitude tightness characteristic parameter (µ42) reflects the denseness of the instantaneous amplitude distribution. Therefore, we can use µ42 to distinguish QAM signals of each order. µ42 is defined in Equation (8), where a_cn is the zero-center normalized instantaneous amplitude.
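Equation (8) itself is not reproduced in this excerpt; the sketch below uses the standard form of the parameter, µ42 = E[a_cn^4] / (E[a_cn^2])^2 with a_cn = a/mean(a) − 1, which matches the description above. The threshold value, and the side of the threshold on which each modulation falls, are assumptions to be calibrated from curves such as those in Figure 7a.

```python
import numpy as np

def mu42(iq):
    """Zero-center normalized instantaneous-amplitude tightness parameter (mu_42)."""
    x = iq if np.iscomplexobj(iq) else iq[0] + 1j * iq[1]   # accept complex or (2, N) I/Q input
    a = np.abs(x)                                           # instantaneous amplitude
    a_cn = a / a.mean() - 1.0                               # zero-center normalization
    return np.mean(a_cn ** 4) / np.mean(a_cn ** 2) ** 2

def classify_qam_by_mu42(iq, threshold):
    """Separate 16QAM from 64QAM by comparing mu_42 against a calibrated threshold.

    The comparison direction below is illustrative only; in practice it is read off
    the measured feature curves of the two modulations at the SNR of interest.
    """
    return "QAM16" if mu42(iq) >= threshold else "QAM64"
```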
Dataset
In this paper, we use a popular open-source dataset Radio ML2016.10a [24]. This dataset has 11 classes of modulated signals with SNR ranging from −20 dB to 18 dB, and the length of a single sample is 128. The details are shown in Table 2. All experiments are conducted on this dataset. Figure 4 demonstrates three feature extraction methods for IQ signals processed by convolutional neural networks. Among them, (a) is the method adopted in this paper, (b) is the method adopted in [18,19], and (c) is the method adopted in [7,20]. The experimental results of the three different convolution methods on the dataset are shown in Figure 5. The average recognition rates of (a), (b), and (c) are 62.8%, 58.2%, and 59.4%, respectively. (a) As the structure proposed in this paper, phase feature extraction is performed on the signal first, and then timing feature extraction is performed, so that it has the highest recognition rate. (b) first convolves with a 1D filter and then flattens the data, but this method can only extract time-domain features and cannot use phase features; (c) is the same as the first step in (b). After extracting temporal features, (c) performs dimensionality reduction in the IQ direction using MaxPooling. However, this method loses the amplitude and phase information of the signal. Furthermore, the convolution method (a) used in this paper shows better recognition results in both low SNR and high SNR conditions. Thus, the superiority of IQCLNet in this paper is verified.
The Effectiveness of IQCLNet Network
The joint AMC model we designed requires that the IQCLNet network can identify the QAMS effectively, so that the subsequent expert feature method can identify 16QAM and 64QAM accurately. Therefore, we provide the confusion matrices of IQCLNet under four different SNR conditions. The confusion matrix is a method of accuracy evaluation. The column represents the predicted category, the row represents the real category, and the darker the color of the square where the row and column intersect, the higher the accuracy is. As shown in Figure 6, the network cannot identify any signal under the extremely low SNR of −20 dB. When the SNR is −12 dB, the QAMS signal can already be distinguished from other signals. With the SNR improvement, the QAMS signal recognition can reach 100%. These experimental results demonstrate the effectiveness of the IQCLNet network.
Expert Feature Method Experiments
After solving the experimental verification of the IQCLNet network, we need to construct a classifier to distinguish the QAM signal. First, we should calculate the µ 42 of 16QAM and 64QAM signals at different SNRs. The two signals' feature parameter curves are shown in Figure 7a. We can distinguish the two signals clearly through the µ 42 feature parameter. Thus, taking the average of the µ 42 of the two signals as the threshold, according to the size relationship between µ 42 and the threshold line, we can distinguish 16QAM and 64QAM. Moreover, the two curves intersect at 0 dB, which may make the two signals difficult to distinguish around this SNR. Now we have constructed a classifier that can distinguish QAM signals. Next, we use it to identify the QAMS signal output by IQCLNet. At the same time, we use IQCLNet to identify the QAMS signal directly for comparison. The experimental results are shown in Figure 7b.
The IQCLNet method's recognition accuracy steadily improves with the increase of SNR, and it tends to be stable under high SNR conditions. Moreover, the overall average recognition rate is 60.4%. Compared with the IQCLNet network, the recognition effect of the expert feature method is significantly improved under low SNR conditions. However, since the characteristic curves of 16QAM and 64QAM intersect around 0 dB, the recognition rate will drop. Moreover, the overall average recognition rate is 77.9%, an increase of 17.5% compared to exclusive use of the IQCLNet method. This proves that the expert feature method has obvious advantages in identifying QAM signals.
The Joint AMC Model Results
We can easily derive the total recognition rate of the joint AMC model after obtaining the recognition rate of QAMS. We select the three models in Table 3 as comparison networks. CNN2 [7] is the first classical structure that uses a convolutional neural network to recognize modulation; CLDNN [12] is a classical structure in speech recognition tasks that has been successfully transplanted into the field of electromagnetic signal recognition; CNN_LSTM [25] is a well-designed network structure based on CLDNN which uses fewer parameters and obtains higher recognition accuracy. All three networks have been validated on the RML2016.10a dataset. The classification accuracies of all models are shown in Figure 8, taking 0 dB as the dividing line between high and low SNR. Under low SNR conditions, the average recognition rates of the three baseline networks are 30.8%, 29.1%, and 31.1%, respectively; the IQCLNet network and the joint AMC model reach 34.1% and 41.7%. Under high SNR conditions, the average recognition rates of the three baseline networks are 74.1%, 79.6%, and 83.4%, respectively; the IQCLNet network and the joint AMC model reach 89.2% and 90.9%. Experimental results show that, compared with the three baseline networks that only extract the timing features of the IQ signal, IQCLNet, with its additional phase feature extraction, is more effective. Moreover, since the recognition rate of 16QAM and 64QAM is improved, the joint AMC model, which adds an expert feature method after the IQCLNet network, improves further, especially at low SNR. In addition, we provide the confusion matrices of IQCLNet and the joint AMC model at 10 dB SNR in Figure 9a,b. It can be seen that the joint AMC model improves the recognition ability of 16QAM and 64QAM. Due to the limitations of volume, mass, and power consumption, and the influence of environmental factors such as space radiation and extreme temperature, the computing power and storage space of space-borne computers are very different from those of ground-based computers. Although deep neural networks have the advantages of strong feature extraction ability and high recognition accuracy, they also have the disadvantages of many network parameters and a large amount of calculation. Therefore, we compare the number of parameters and training time of all networks, and the experimental results are shown in Table 4. Compared with other networks, our proposed IQCLNet network has fewer parameters and higher computational efficiency, which is more conducive to deployment in satellite in-orbit applications.
Conclusions
In this paper, we propose an innovative joint AMC model to identify different modulated signals. The model is based on the high performance of the forward deep learning network IQCLNet, which can separate the QAMs accurately. Then, expert feature methods are used to construct feature parameters in order to identify 16QAM and 64QAM. It is concluded that the joint AMC model exhibits better recognition performance and intraclass diversity discrimination ability than the baseline network. In future research, we can consider communication as an end-to-end reconstruction optimization task, and use autoencoders to learn channel models, encoding and decoding implementations without prior knowledge. | 3,995.6 | 2022-08-29T00:00:00.000 | [
"Computer Science"
] |
Self-Lubricating Polytetrafluoroethylene / Polyimide Blends Reinforced with Zinc Oxide Nanoparticles
1School of Materials Engineering, Nanjing Institute of Technology, Nanjing, Jiangsu 211167, China 2Intelligent Composites Laboratory, Department of Chemical and Biomolecular Engineering, The University of Akron, Akron, OH 44325, USA 3Division of Machine Elements, Luleå University of Technology, 97187 Luleå, Sweden 4State Key Laboratory of Materials-oriented Chemical Engineering, Nanjing University of Technology, Nanjing, Jiangsu 210009, China 5College of Chemistry and Chemical Engineering, Northeast Petroleum University, Daqing, Heilongjiang 163318, China
Introduction
Lubrication is critical to the operational safety and reliability of industrial manufacturing and processing. Lubrication technology has been widely used in industrial applications, including roller bearings, journal bearings, and gears. Efficient lubrication is valuable to dissipate frictional heat, extend fatigue life, and reduce friction and wear [1]. Existing lubrication systems rely on the use of synthetic lubrication oil or mineral oil, which cannot be used in strictly regulated fields such as the pharmaceutical, food, and health care industries due to the potential product contamination [2]. Solid lubricants are considered the best option to control friction and wear if the usage of liquid lubricants is not allowed. However, solid lubricants should meet certain requirements in practical applications, such as mechanical strength, stiffness, fatigue life, thermal expansion, and damping [3].
Polymers are extensively used as solid lubricants in dynamic mechanical parts due to their unique properties such as high strength, light weight, excellent wear resistance, and solvent resistance [4]. Polytetrafluoroethylene (PTFE), also named "Teflon," is well known for its extremely low friction coefficient and excellent chemical resistance [5,6]. However, the major drawbacks of PTFE, poor wear resistance and severe creep deformation, restrict its wide use in practical applications. Therefore, fibrous fillers (glass fiber, carbon fiber, and whisker) [7][8][9] and spherical nanoparticles [10] are added to PTFE to improve the wear resistance. PTFE has also been demonstrated to be an effective filler in other polymers to improve the tribological properties of the polymer blends [11,12]. Polyimide (PI), a class of high-performance engineering plastics, is well known for its excellent mechanical properties and stability at high temperature, as well as superior dielectric properties and good chemical resistance, which have found wide applications in the aerospace, automobile, and microelectronics industries [13]. However, the intrinsic large friction coefficient and high wear rate of pure PI limit its use in dynamic motion systems [11,14]. Tremendous efforts have been devoted to reducing the friction coefficient and wear rate of PI by means of incorporating fibers [4,15], nanometer particles [16,17], solid lubricants [18], and so forth.
To the best of our knowledge, the effect of ZnO nanoparticles on the tribological and mechanical properties of PI-based nanocomposites has rarely been studied. In this work, PTFE/PI blend polymer was reinforced by different loadings of ZnO nanoparticles. The optimal loading was explored in association with the greatest tribological and mechanical properties. The microstructures of the worn surface, transfer film, and impact-fractured surface were also examined to understand the reinforcing effect of ZnO in the nanocomposites.
Experimental
2.1. Materials. Polyimide powder (YS-20, 1-10 μm) was purchased from Shanghai Research Institute of Synthetic Resins. PTFE was commercially obtained from DuPont (7A-J, 25 μm commercial product). The ZnO nanoparticles (average diameter ≤60 nm) were purchased from Nanjing Haitai nano materials Co. Ltd. Before use, all the materials were dried at 110 °C for at least 6 hours in an oven.
Preparation of PTFE/PI Composites.
In this work, the mass fraction of PTFE in the polymer blend is fixed at 15 wt%. ZnO nanoparticles were added into the PTFE/PI blend at different mass ratios: 1, 2, 3, 5, 8, and 12 wt%, respectively. The mixture was weighed accordingly and blended mechanically. Then, the powder mixture was compressed under a pressure of 20.0 MPa and heated to 365 °C in a mold at a heating rate of 8 °C/min. The compressed composite was held at 365 °C for 45 min and then cooled down to ambient temperature in the mold while keeping the pressure unchanged. For friction and wear tests, the block was cut into a ring-shaped sample with 26.0 mm outer diameter, 22.0 mm inner diameter, and 2.5-3.0 mm shoulder height, as seen in Figure 1(c).
Tribological Tests and Characterization.
The friction and wear tests were conducted with a ring-on-ring friction configuration, Figure 1(a). The counter-face material is AISI 1045 steel with a hardness of HRC 52, Figure 1(b). Sliding was performed under ambient dry friction conditions over a period of 1 hour at a sliding velocity of 0.69 m/s (550 rpm) or 1.4 m/s (1115 rpm) and a load of 100 N or 200 N. The ambient temperature was controlled at 25 °C and the relative humidity at 50 ± 5%. Before each test, the surfaces of the specimen and counterpart ring were polished to an 800-grit finish with a surface roughness of 0.2-0.4 μm and then cleaned with alcohol. The friction force was measured using a torque shaft fitted with strain gauges, and the friction coefficient was calculated from the friction force. At the end of each test, the wear volume loss was calculated from the weight loss of each specimen. Three duplicate tests were carried out to minimize the data scattering, and the average of the data was reported.
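The two quantities reported from each run follow directly from this description: the friction coefficient from the measured friction torque acting at the mean ring radius, and the wear volume from the weight loss. The helpers below are a generic sketch of that bookkeeping; the numbers in the example call are made up, and the material density is an input rather than a value quoted in the paper.

```python
def friction_coefficient(friction_torque_nm, normal_load_n, mean_radius_m):
    """mu = F_f / N, with the friction force recovered as torque / mean contact radius."""
    return (friction_torque_nm / mean_radius_m) / normal_load_n

def wear_volume_mm3(mass_loss_g, density_g_cm3):
    """Wear volume loss computed from the measured weight loss of the specimen."""
    return mass_loss_g / density_g_cm3 * 1000.0   # cm^3 -> mm^3

# Example call (illustrative values; 0.012 m is the mean radius of the 26/22 mm ring):
mu = friction_coefficient(friction_torque_nm=0.36, normal_load_n=100.0, mean_radius_m=0.012)
vol = wear_volume_mm3(mass_loss_g=0.015, density_g_cm3=1.9)
print(f"friction coefficient ~ {mu:.2f}, wear volume loss ~ {vol:.1f} mm^3")
```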
The morphology of the worn surfaces and impact-fractured surfaces of the composites was characterized by scanning electron microscope (SEM, QUANTA-200). The transfer films on the steel ring were examined by optical microscope.
Mechanical Tests.
The tensile tests were carried out on a universal tester (Model CMT4254) at room temperature. The deformation rate was 5 mm/min. The impact tests were performed on an impact test machine (Model XJJ-5). Impact and tensile tests were conducted according to Chinese National Standards GB/T16420-1996 and GB/T16421-1996, respectively. For tensile test sample preparation, each composition was molded into a narrow-waisted dumbbell-shaped specimen, and the size of the narrow part is 30 × 5 × 3 mm. For the impact test, unnotched specimens with dimensions of 40 × 3 × 2 mm (with a span between supports of 20 mm) were fractured by the impact mass at a speed of 2.9 m/s and an impact input energy of 1.0 J. All the reported values were averages of five effective measurements.
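For reference, the strengths reported later relate to these specimen geometries through simple area normalizations. The helpers below are a generic sketch of that reduction (peak force or absorbed energy divided by the cross-section of the gauge region); they are not the data-reduction procedures of the cited GB/T standards, and the numbers in the example calls are invented.

```python
def tensile_strength_mpa(max_force_n, width_mm, thickness_mm):
    """Tensile strength = peak force / cross-sectional area of the narrow part (N/mm^2 = MPa)."""
    return max_force_n / (width_mm * thickness_mm)

def impact_strength_kj_m2(absorbed_energy_j, width_mm, thickness_mm):
    """Unnotched impact strength = absorbed energy / cross-sectional area (J/mm^2 -> kJ/m^2)."""
    return absorbed_energy_j / (width_mm * thickness_mm) * 1000.0

# Examples using the cross-sections quoted above (5 x 3 mm tensile, 3 x 2 mm impact):
print(tensile_strength_mpa(max_force_n=900.0, width_mm=5.0, thickness_mm=3.0))        # 60.0 MPa
print(impact_strength_kj_m2(absorbed_energy_j=0.09, width_mm=3.0, thickness_mm=2.0))  # 15.0 kJ/m^2
```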
Results and Discussion
3.1. Tribological Properties. Figure 2 shows the wear volume loss and friction coefficient of ZnO/PTFE/PI nanocomposites as a function of ZnO loading at a 100 N load and a sliding speed of 1.4 m/s. With increasing ZnO loading, both wear volume loss and friction coefficient decrease, reach minimum values at 3 wt%, and rise continuously afterwards. Specifically, the wear volume loss of the nanocomposites reinforced with 3 wt% ZnO is 20% less than that of the PTFE/PI blend polymer. This loading effect is consistent with most of the literature reports that an appropriate loading of nanomaterials leads to optimized performance [25,26]. In this work, ZnO can be well dispersed and completely covered by the surrounding polymers at relatively low loadings, for example, 3 wt%. With further increases in ZnO loading, the polymer chains are not sufficient to cover the extremely large surface area exposed by the ZnO nanoparticles; thus the nanoparticles tend to agglomerate and negatively affect the interfacial bonding with the polymer matrix. The weak interfacial bonding becomes a weak joint, which can be easily peeled off, and thus worse tribological properties were observed. A similar phenomenon was also reported in carbon nanofiber/PTFE/PI nanocomposites [27].
The effects of sliding speed on the wear volume loss and friction coefficient of the various specimens at a load of 200 N are shown in Figures 3 and 4, respectively. It can be seen that the variation of both wear volume loss and friction coefficient follows the same trend at sliding speeds of 0.69 and 1.4 m/s. The wear volume loss of the ZnO/PTFE/PI nanocomposites decreases to a minimum value and then increases afterwards. The lowest wear loss is achieved with 3 wt% ZnO loading at the higher sliding speed of 1.4 m/s, while a larger ZnO loading of 8 wt% is required to reach the minimum value at the relatively lower speed of 0.69 m/s. The ZnO nanoparticles serve as rolling balls between the friction interfaces, which reduces the interfacial friction and improves the tribological properties [23]. At higher sliding speed, larger shear force and friction energy facilitate the "pulling-off" of ZnO nanoparticles from the polymer matrix and their accumulation at the interface. Therefore, an optimal amount of "rolling" ZnO nanoparticles would be accumulated at the interface from the nanocomposites with relatively lower ZnO loadings [27]. Generally, the friction coefficient decreases with increasing ZnO loading, Figure 4. Moreover, a significantly lower friction coefficient is observed at the higher sliding speed. At 1.4 m/s, more friction heat is generated at the sliding surface, which raises the contact surface temperature and causes the "softening" of polymer chains. With further accumulation of friction heat, micromelting occurs, which switches the plastic polymer chains into an elastoplastic state and facilitates the reduction of the friction coefficient [28].
To explore the role of ZnO on friction and wear behaviors of ZnO/PTFE/PI nanocomposites, the worn surfaces and transfer films of PTFE/PI, 3 wt% ZnO/PTFE/PI, and 12 wt% ZnO/PTFE/PI composites were comparatively investigated by SEM and optical microscope, Figures 5 and 6.
In Figure 5(a), the worn surface of PTFE/PI shows obvious nicks, shallow furrows, and plastic deformation, which indicates a dominant adhesive wear behavior [11]. After incorporating 3 wt% ZnO, the antiplough and anticut capacities of the PTFE/PI composites are significantly improved, Figure 5(b). When the ZnO loading is further increased to 12 wt%, peeled debris and deep furrows can be observed on the worn surface, Figure 5(c). These observations are in good agreement with the tribological results obtained in Figures 2-4.
Figure 6 displays the optical micrographs of the transfer films formed on the counterpart steel ring after the friction test. The transfer film of PTFE/PI appears rough and discontinuous, Figure 6(a); it could be easily scaled off from the wear track and negatively affects the wear resistance during sliding. Apparently, the transfer film for 3 wt% ZnO is continuous, uniform, and smooth, which is helpful to maintain stable friction, and thus the best tribological properties are obtained, Figure 6(b). When the ZnO loading increases to 12 wt%, the transfer film becomes nonuniform, which reveals that an excess amount of ZnO would hinder the formation of a smooth transfer film, Figure 6(c). As a result, the wear volume loss increased considerably at higher filler content. These results reveal that a suitable loading of ZnO in the PTFE/PI polymer blend favors the formation of smooth and continuous transfer films and contributes to the enhanced tribological properties.
Figure 7 illustrates the roles of ZnO nanoparticles during friction at different loadings. The "rolling ball" effect dominates the interfacial friction at relatively low ZnO loading due to the good dispersion and particle-polymer interaction, Figure 7(a). An excess amount of ZnO in the nanocomposites leads to severe agglomeration, which affects the integrity of the bulk nanocomposites and damages the transfer films at the friction interface, Figure 7(b). Therefore, peeled debris has been observed on the worn surface (Figure 5(c)), as well as a rough and noncontinuous transfer film on the counterpart ring (Figure 6(c)).
Mechanical Properties.
The mechanical properties of pure PTFE/PI and ZnO/PTFE/PI composites are shown in Table 1. The impact strength, tensile strength, and elongation-at-break of the PTFE/PI composites increase initially with the increase of ZnO loading, reach maximum values at 3 wt% ZnO, and then decrease with further increases in filler content. Compared with the PTFE/PI blend, the impact strength, tensile strength, and elongation-at-break of the 3 wt% ZnO/PTFE/PI composites are increased by 85, 5, and 10%, respectively. The results indicate that the mechanical properties of the PTFE/PI composites are consistent with the tribological properties. Similar correlations between tribological and mechanical properties were reported by Zhang et al. in PEEK composites [29].
Due to the high specific surface area of ZnO, a strong interfacial interaction between ZnO and the PTFE/PI matrix would be expected. ZnO is distributed uniformly in the matrix and covered with polymer chains completely when the filler content is relatively low, for example, 3 wt% in this work. It is well known that, in a polymer containing a preexisting crack under external stress, a small craze is often formed at the tip of the crack. ZnO could serve as a binder at the craze region to delay the craze growth rate. In this regard, extra energy would be required to debond ZnO from the polymer matrix before the crack develops further, which contributes to a significant improvement in tensile strength [30]. However, the interfacial bonding between the PTFE/PI matrix and ZnO would be damaged by severely agglomerated ZnO in excess amounts, which accounts for the poorer tensile strength of ZnO/PTFE/PI composites at larger ZnO loadings [23]. SEM micrographs of the impact-fractured surfaces of pure PTFE/PI and 3 wt% and 8 wt% ZnO/PTFE/PI are shown in Figures 8(a)-8(c). Obviously, the fracture surface appears relatively smooth for pure PTFE/PI but rough for 3 wt% and 8 wt% ZnO/PTFE/PI. The enhancement of impact strength can be explained by the cavitation mechanism of microsized rigid particles [31], which includes three stages: stress concentration, debonding, and shear yielding. ZnO nanoparticles act as stress concentrators, which lead to debonding at the interface between ZnO and the polymer matrix. Nanoparticles on the fracture surface have been considered effective evidence to support the debonding claim [32]. ZnO nanoparticles marked by arrows in Figures 8(b) and 8(c) clearly indicate the debonding and subsequent plastic void growth of the polymer. As shown in Figure 8(b), the voids caused by debonding can be clearly seen on the fracture surface, which absorbs a large quantity of energy upon fracture, reduces stress transfer to the crazing zone, and therefore enhances the mechanical properties [33]. However, the increase of the voids will reduce the bond strength and the impact strength of the
Figure 1: The schematic diagram of the tribological testing configuration.
Figure 2: Effect of ZnO loading on wear volume loss and friction coefficient. Load: 100 N; sliding speed: 1.4 m/s. | 3,112.2 | 2015-01-01T00:00:00.000 | [
"Materials Science"
] |
Possible Implication of a Single Nonextensive $p_T$ Distribution for Hadron Production in High-Energy $pp$ Collisions
Multiparticle production processes in $pp$ collisions at the central rapidity region are usually considered to be divided into independent"soft"and"hard"components. The first is described by exponential (thermal-like) transverse momentum spectra in the low-$p_T$ region with a scale parameter $T$ associated with the temperature of the hadronizing system. The second is governed by a power-like distributions of transverse momenta with power index $n$ at high-$p_T$ associated with the hard scattering between partons. We show that the hard-scattering integral can be approximated as a nonextensive distribution of a quasi-power-law containing a scale parameter $T$ and a power index $n=1/(q -1)$, where $q$ is the nonextensivity parameter. We demonstrate that the whole region of transverse momenta presently measurable at LHC experiments at central rapidity (in which the observed cross sections varies by $14$ orders of magnitude down to the low $p_T$ region) can be adequately described by a single nonextensive distribution. These results suggest the dominance of the hard-scattering hadron-production process and the approximate validity of a"no-hair"statistical-mechanical description of the $p_T$ spectra for the whole $p_T$ region at central rapidity for $pp$ collisions at high-energies.
Introduction
Particle production in pp collisions comprises many different mechanisms in different parts of the phase space. We shall be interested in particle production in the central rapidity region, where it is customary to divide the multiparticle production into independent soft and hard processes populating different parts of the transverse momentum space separated by a momentum scale p_0. As a rule of thumb, the spectra of the soft processes in the low-p_T region are (almost) exponential, F(p_T) ∼ exp(−p_T/T), and are usually associated with the thermodynamical description of the hadronizing system, the fragmentation of a flux tube with a transverse dimension, or the production of particles by the Schwinger mechanism [1][2][3][4][5]. The p_T spectra of the hard process in the high-p_T region are regarded as essentially power-like, F(p_T) ∼ p_T^{−n}, and are usually associated with the hard scattering process [6][7][8][9][10]. However, it was found already a long time ago that both descriptions could be replaced by a simple interpolating formula [11], F(p_T) = A (1 + p_T/p_0)^{−n},  (1) that becomes power-like for high p_T and exponential-like for low p_T. Notice that for high p_T, where we usually neglect the constant term, the scale parameter p_0 becomes irrelevant, whereas for low p_T it becomes, together with the power index n, an effective temperature T = p_0/n. The same formula re-emerged later to become known as the QCD-based Hagedorn formula [12]. It was used for the first time in the analysis of UA1 experimental data [13] and became one of the standard phenomenological formulas for p_T data analysis.
⋆ Presented by G. Wilk.
In the meantime it was realized that Eq. (1) is just another realization of the nonextensive distribution [14], F(p_T) = A [1 − (1 − q) p_T/T]^{1/(1−q)},  (2) with parameters q and T, and a normalization constant A, that has been widely used in many other branches of physics. For our purposes, both formulas are equivalent with the identification of n = 1/(q − 1) and p_0 = nT, and we shall use them interchangeably. Because Eq. (2) describes nonextensive systems in statistical mechanics, the parameter q is usually called the nonextensivity parameter. As one can see, Eq. (2) becomes the usual Boltzmann-Gibbs exponential distribution for q → 1, with T becoming the temperature. Both Eqs. (1) and (2) have been widely used in the phenomenological analysis of multiparticle production (cf., for example, [15-28]).
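Since Eqs. (1) and (2) are stated to be equivalent under n = 1/(q − 1) and p_0 = nT, and Eq. (2) reduces to the Boltzmann-Gibbs exponential for q → 1, both properties are easy to verify numerically. The snippet below is only an illustrative check with arbitrary parameter values.

```python
import numpy as np

def hagedorn(pt, A, p0, n):
    """QCD-based Hagedorn form, Eq. (1): power-like at high pT, exponential-like at low pT."""
    return A * (1.0 + pt / p0) ** (-n)

def tsallis(pt, A, T, q):
    """Nonextensive form, Eq. (2); tends to A*exp(-pt/T) as q -> 1."""
    return A * (1.0 - (1.0 - q) * pt / T) ** (1.0 / (1.0 - q))

pt = np.linspace(0.0, 20.0, 201)
A, T, n = 1.0, 0.15, 7.0                      # arbitrary illustrative values
q, p0 = 1.0 + 1.0 / n, n * T                  # the identifications used in the text
print(np.allclose(hagedorn(pt, A, p0, n), tsallis(pt, A, T, q)))      # True: same curve
print(np.allclose(tsallis(pt, A, T, 1.0 + 1e-9),
                  A * np.exp(-pt / T), rtol=1e-4))                    # q -> 1 limit
```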
We shall demonstrate here that, similar to the original ideas presented in [11,12], the whole region of transverse momenta presently measurable at LHC experiments (which spans now enormous range of ∼14 orders of magnitude in the measured cross-sections down to the low-p T region) [17][18][19] can be adequately described by a single quasi-power law distribution, either Eq. (1) or Eq. (2). We shall offer a possible explanation of this phenomenon by showing that the hard-scattering integral can be cast approximately into a non-extensive distribution form and that the description of a single nonextensive p T distribution for the p T spectra over the whole p T region suggests the dominance of the hard-scattering process at central rapidity for high-energy pp collisions.
Nonextensive distribution for $p_T$ spectra in pp collisions
The possibility of two components in the transverse spectra implies that a complete description will need two independent functions with different sets of parameters, each dominating over a different region of the transverse momentum space. The presence of two different components will be indicated by gross deviations when the spectrum over the whole transverse space is analyzed with only a single component. An example of the presence of two (or more) components of production processes can be clearly seen in Fig. 1 of [31], in the $p_T$ spectra in central (0-6%) PbPb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV from the ALICE Collaboration, where two independent functions are needed to describe the whole spectra, as described in [32,33]. For our purposes in studying produced hadrons in pp collisions, where the high-$p_T$ hard-scattering component is expected to have a power-law form with a power index $n$, either (1) or (2) can be written in terms of the transverse mass, where $n$ is the power index, $T$ is the 'temperature' parameter, and $m$ and $m_T = \sqrt{m^2 + p_T^2}$ are the rest mass and transverse mass of the produced hadrons, which are taken to be the dominant particles, the pions. It came as a surprise to us that for pp collisions at $\sqrt{s}$ = 7 TeV, the $p_T$ spectra within a very broad range, from 0.5 GeV up to 181 GeV, in which the cross section varies by 14 orders of magnitude, can still be described well by a single nonextensive formula with power index $n$ = 6.6 [34]. The good fits to the $p_T$ spectra over such a large range of $p_T$ with only three parameters, ($A$, $n$, $T$), raise intriguing questions:
• Why are there only three degrees of freedom in the spectra over such a large $p_T$ domain? Does it imply that there is only a single component, the hard-scattering process, contributing dominantly over the whole $p_T$ domain? If so, is there supporting experimental evidence from other correlation measurements?
• Mathematically, the power index $n$ is related to the parameter $q = 1 + 1/n$ in nonextensive statistical mechanics [14]. What is the physical meaning of $n$? If $n$ is related to the power index of the parton-parton scattering law, then why is the observed value so large, $n\sim 7$, rather than $n \sim 4$ as predicted naively by pQCD?
• Are the power indices for jet production different from those for hadron production? If so, why?
• Do multiple parton collisions play any role in modifying the power index $n$?
• In addition to the power law $1/p_T^{\,n}$, does the differential cross section contain other additional $p_T$-dependent factors? If they are present, how do they change the power index?
These questions were discussed and, at least partially, answered in [35]. Before proceeding to our main point of phenomenological considerations we shall first recapitulate briefly the main results of this attempt to reconcile, as far as possible, the nonextensive distribution with the QCD where, as shown in [35], the only relevant ingredients from QCD are hard scatterings between constituents resulting in the production of jets which further undergo fragmentation, showering, and hadronization to become the observed hadrons.
Approximate Hard-Scattering Integral
The answers to the questions posed above will be facilitated by an approximate analytical form of the hard-scattering integral. We start with the relativistic hard-scattering model as proposed in [6] and examined in [5,10,35]. We consider the collision of projectiles A and B in the center-of-mass frame at an energy $\sqrt{s}$ in the reaction $A + B \to c + X$, with $c$ coming out at midrapidity, $\eta\sim 0$. Upon neglecting the intrinsic transverse momentum and rest masses, the differential cross section in lowest-order parton-parton elastic collisions is given by the convolution of the structure functions $G_{a/A}$ and $G_{b/B}$ with the parton-parton invariant cross section, which is in turn related to $d\sigma(ab\to cX')/dt$. In the infinite-momentum frame the momenta can be written in light-cone form, and we denote the light-cone variable $x_c$ of the produced parton $c$ as $x_c = (c_0 + c_z)/\sqrt{s}$. The constraint $\hat s + \hat t + \hat u = 0$ then relates $x_b$ to $x_a$ and $x_c$. We consider only the special case of $c$ coming out at $\theta_c \sim 90°$. After integrating over $x_a$, we integrate over $x_b$ using the saddle-point method, expanding $f(x_b)$ about its minimum at $x_{b0}$. For simplicity, we assume $G_{a/A}$ and $G_{b/B}$ to have the same form. At $\theta_c \sim 90°$ in the CM system, the minimum value of $f(x_b)$ can be located explicitly and we obtain the hard-scattering integral. If the basic process $ab \to cX'$ is $gg \to gg$ or $qq' \to qq'$, the cross sections at $\theta_c \sim 90°$ [36] are of the same parametric size; in both cases, the differential cross section behaves as $\alpha_s^2/c_T^4$.
Parton Multiple Scattering
As the collision energy increases, the value of $x_c$ gets smaller and the number of partons and their density increase rapidly. Thus the total hard-scattering cross section increases as well [8]. The presence of a large number of partons in the colliding system results in multiple hard scatterings of a projectile parton on partons from the target nucleon. We find that for the process $a \to c$ in the collision of a parton $a$ with a target of $A$ partons in sequence, without a centrality selection, the $c_T$-distribution is given by a series [35] in which the terms on the right-hand side correspond to collisions of the incident parton with one, two and three target partons, respectively. Here, the quantity $A$ is the number of partons in the nucleon as a composite system, given by the integral of the parton density over the parton momentum fraction. This result shows that without centrality selection in minimum-biased events, the differential cross section will be dominated by the contribution from a single parton-parton scattering that behaves as $\alpha_s^2/c_T^4$ (cf. previous analyses of the multiple hard-scattering process in [37][38][39]). Multiple scatterings with $N > 1$ scatterers contribute terms of order $\alpha_s^{2N}$.
The Power Index in Jet Production
From the above results one gets the approximate analytical formula for the hard-scattering invariant cross section $\sigma_{inv}$. The power index $n$ here has the value $4 + 1/2$. Its value can be extracted by plotting $\ln \sigma_{inv}$ as a function of $\ln c_T$ (the slope in the linear section then gives the value of $n$, and the variation of $\ln \sigma_{inv}$ at large $\ln c_T$ gives the values of $g_a$ and $g_b$). One can also consider for this purpose a fixed $x_c$ and look at two different energies (as suggested in [40]). We follow an alternative method and analyze the $p_T$ spectra using a running coupling constant, where we have chosen $\Lambda_{QCD}$ to be 0.25 GeV to give $\alpha_s(M_Z^2) = 0.1184$ [41]. We identify $Q$ with $c_T$ and have chosen $C=10$ both to give $\alpha_s(Q\sim\Lambda_{QCD}) \sim 0.6$ in hadron spectroscopy studies [42] and to regularize the coupling constant for small values of $Q(c_T)$. We search for $n$ by writing the invariant cross section Eq. (16) for jet production accordingly. In the literature [43,44] the index $g_a$ for the structure function of a gluon varies from 6 to 10. Following [43] we shall take $g_a = 6$. As shown in Fig. 1 and Table I [46], the power index is $n$=4.78 for $R$ = 0.2 and $n$=4.98 for $R$ = 0.4 (Table I). The power index is $n$=5.39 for the CMS jet differential cross section in pp collisions at $\sqrt{s}$ = 7 TeV at the LHC within $|\eta| < 0.5$ and $R$ = 0.5 [47]. This latter $n$ value slightly exceeds the expected value of $n = 4.5$.
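As a numerical cross-check of the running-coupling choices quoted above ($\Lambda_{QCD}$ = 0.25 GeV giving $\alpha_s(M_Z^2) \approx 0.1184$, and $C$ = 10 giving $\alpha_s(Q\sim\Lambda_{QCD}) \approx 0.6$), the sketch below uses one simple regularized one-loop parameterization that reproduces both quoted numbers; the specific functional form is an assumption consistent with those constraints, not necessarily the exact expression used by the authors.

```python
import math

LAMBDA_QCD = 0.25   # GeV, as quoted in the text
C = 10.0            # regularization constant, as quoted in the text
M_Z = 91.19         # GeV, Z boson mass

def alpha_s(Q, n_f=3):
    """Assumed regularized one-loop running coupling:
    alpha_s(Q) = 12*pi / [(33 - 2*n_f) * ln(C + Q^2 / Lambda^2)].
    The additive constant C keeps the coupling finite as Q -> 0."""
    beta0 = 33.0 - 2.0 * n_f
    return 12.0 * math.pi / (beta0 * math.log(C + (Q / LAMBDA_QCD) ** 2))

# Cross-checks against the values quoted in the text:
print(alpha_s(M_Z))         # ~0.118, close to the quoted alpha_s(M_Z^2) = 0.1184
print(alpha_s(LAMBDA_QCD))  # ~0.58, close to the quoted ~0.6 at Q ~ Lambda_QCD
```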
Except for the CMS data at 7 TeV, which may need further re-examination, the power indices extracted for hadron jet production and listed in Table I are in approximate agreement with the value of $n$=4.5 in Eq. (16) and with the previous analysis of Arleo et al. [40], indicating the approximate validity of the hard-scattering model for jet production in hadron-hadron collisions, with the predominant $\alpha_s^2/c_T^4$ parton-parton differential cross section as predicted by pQCD.
From Jet Production to Hadron Production
The results in the last section indicate that the simple hard-scattering model, i.e., Eq. (18), adequately describes the power index of $n \sim 4.5$ for jet production in high-energy pp collisions. However, the power index for hadron production is considerably greater, in the range of $n\sim 6-10$ [34,40]. What is the origin of the increase in the power index $n$?
A jet $c$ evolves by fragmentation, showering, and hadronization, turning into a large number of hadrons in a cone along the jet axis. The showering of the partons goes through many generations of branching. If we label the (average) momentum of the $i$-th generation parton by $p_T^{(i)}$, each branching kinematically degrades the momentum of the showering parton by a momentum fraction $\zeta = p_T^{(i+1)}/p_T^{(i)}$. At the end of the terminating $\lambda$-th generation of the showering, and after hadronization, the $p_T$ of a produced hadron is related to the $c_T$ of the parent parton jet by the accumulated product of these fractions. It is easy to prove that if the generation number $\lambda$ and the fragmentation fraction $z$ are independent of the jet $c_T$, then the power law and the power index of the $p_T$ distribution are unchanged [35]. We note, however, that in addition to the kinematic decrease of $p_T$ as described by (20), the showering generation number $\lambda$ is governed by an additional criterion on the virtuality, which measures the degree of the off-mass-shell property of the parton. From the different parton showering schemes in PYTHIA [48], HERWIG [49], and ARIADNE [50], we can extract a general picture in which the initial parton with a large initial virtuality $Q$ decreases its virtuality by showering until a limit $Q_0$ is reached. There is a one-to-one mapping of the initial virtuality $Q$ onto the transverse momentum $c_T$ of the evolving parton, $Q(c_T)$ (or conversely $c_T(Q)$). Because of this mapping, the decrease in virtuality $Q$ corresponds to a decrease of the corresponding mapped momentum $\tilde c_T$, and the cut-off virtuality $Q_0$ maps into a transverse momentum $\tilde c_{T0} = \tilde c_T(Q_0)$. In each successive generation of the showering, the virtuality decreases by a virtuality fraction which corresponds, at least approximately, in terms of the corresponding mapped parton transverse momentum $\tilde c_T^{(i)}$, to a decrease by a transverse momentum fraction $\tilde\zeta = \tilde c_T^{(i+1)}/\tilde c_T^{(i)}$. The showering ends after $\lambda$ generations, when the mapped momentum reaches the cut-off value $\tilde c_{T0}$. We can therefore infer a relation between $c_T$ and the number of generations $\lambda$: the showering generation number $\lambda$ depends on the magnitude of $c_T$. On the other hand, kinematically, the showering degrades the transverse momentum of the parton $c_T$ to the $p_T$ of the produced hadron as given by Eq. (20), depending on the number of generations $\lambda$. Solving for $p_T$ as a function of $c_T$, one obtains a power relation with exponent $1-\mu$, where $\mu = \ln\zeta/\ln\tilde\zeta$ is a parameter that can be adjusted to fit the data. As a result of the virtuality ordering and the virtuality cut-off, the hadron fragment transverse momentum $p_T$ is thus related to the parton momentum $c_T$ nonlinearly by the exponent $1 - \mu$.
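The nonlinear relation can be made explicit under the assumptions described above (a constant kinematic fraction $\zeta$ per generation and a constant virtuality-mapped fraction $\tilde\zeta$ per generation until the cut-off $\tilde c_{T0}$ is reached); this is a reconstruction of the intermediate step, not a quotation of the original display equations:
\[
\tilde\zeta^{\,\lambda}\, c_T = \tilde c_{T0}
\;\Rightarrow\;
\lambda=\frac{\ln (\tilde c_{T0}/c_T)}{\ln \tilde\zeta},
\qquad
p_T \sim z\,\zeta^{\lambda}\, c_T
= z\, c_T\left(\frac{\tilde c_{T0}}{c_T}\right)^{\mu}
= z\,\tilde c_{T0}^{\;\mu}\, c_T^{\,1-\mu},
\qquad
\mu=\frac{\ln\zeta}{\ln\tilde\zeta},
\]
which reproduces the exponent $1-\mu$ and the definition of $\mu$ quoted in the text.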
After the fragmentation and showering of the parent parton $c_T$ into the produced hadron $p_T$, the hard-scattering cross section expressed in terms of the hadron momentum $p_T$ is obtained by substituting the nonlinear relation (24) between the parent parton momentum $c_T$ and the produced hadron $p_T$. Under the fragmentation from $c$ to $p$, the hard-scattering cross section for $AB \to pX$ therefore acquires a new power index $n'$. Thus, the power index $n$ for jet production can be significantly changed to $n'$ for hadron production, because the greater the value of the parent jet $c_T$, the greater the number of generations $\lambda$ needed to reach the produced hadron, and the greater the kinematic energy degradation. By a proper tuning of $\mu$, the power index can be brought into agreement with the observed power index in hadron production. For example, for $\mu$=0.4 one gets $n'$=6.2 and for $\mu$ = 0.6 one gets $n'$=8.2. Because the parton branching probability, the parton kinematic degradation, and the parton virtuality degradation depend on the coupling constant, and the coupling constant depends on the parton energy, we expect the quantity $\mu$ to depend on the pp collision energy. Consequently, $n'$ may change significantly with the collision energy.
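As a consistency check on the quoted numbers, if the change of variables $p_T \propto c_T^{\,1-\mu}$ is applied to the invariant cross section, the hadron power index would be related to the jet power index by $n' = 2 + (n-2)/(1-\mu)$; this specific relation is an inference from the change of variables, not a formula quoted from the text, but with $n = 4.5$ it reproduces the values quoted above:
\[
n'=2+\frac{n-2}{1-\mu}:
\qquad \mu=0.4 \Rightarrow n' \approx 6.17 \;(\text{quoted } 6.2),
\qquad \mu=0.6 \Rightarrow n' \approx 8.25 \;(\text{quoted } 8.2).
\]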
Regularization of the Hard-Scattering Integral
The power law (28) has been obtained for high $p_T$. In order to apply it to the whole range of $E_T$, we need to regularize it by replacing the power-law factor $1/p_T^{\,n'}$ by $1/(1 + m_T/m_{T0})^{n'}$. The quantity $m_{T0}$ measures the average transverse mass of the detected hadron in the hard-scattering process. The differential cross section $d^3\sigma(AB \to pX)/dy\,dp_T$ in (28) is then regularized accordingly. In this regularized expression for the production of a hadron with transverse momentum $p_T$, the variable $c_T(p_T)$ refers to the transverse momentum of the parent jet $c_T$ before fragmentation. We can relate $p_T$ to $c_T$ by using the empirical fragmentation function of Ref. [51], and we get [35] a relation that can be regarded as a linearized approximation of the nonlinear relation above. Comparisons with data from the CMS [17], ATLAS [18], and ALICE [19] Collaborations are shown in Fig. 2. We find that the experimental data give $n'$=5.69 and $m_{T0}$=0.804 GeV for $\sqrt{s}$=7 TeV and $n'$=5.86 and $m_{T0}$=0.634 GeV for $\sqrt{s}$=0.9 TeV. This indicates that there is indeed a systematic change of the power index from $n$ in jet production to a larger value $n'$ in hadron production. The fits to the low-$p_T$ region for the ALICE data can be improved with a larger power index $n'$, as we shall see below in Section 9.
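To make the regularized form concrete, the sketch below evaluates a distribution of the shape $A/(1 + m_T/m_{T0})^{n'}$ with the fitted values quoted above for $\sqrt{s}$ = 7 TeV; the normalization $A$ and the use of the pion mass for $m_T$ are illustrative assumptions.

```python
import numpy as np

M_PI = 0.13957   # GeV, charged pion mass (assumed dominant particle)
N_PRIME = 5.69   # fitted power index quoted for sqrt(s) = 7 TeV
M_T0 = 0.804     # GeV, fitted scale quoted for sqrt(s) = 7 TeV

def spectrum(p_T, A=1.0):
    """Regularized hard-scattering form A / (1 + m_T/m_T0)^n'.
    Finite as p_T -> 0 and power-like, ~ p_T^(-n'), at large p_T."""
    m_T = np.sqrt(M_PI**2 + p_T**2)
    return A / (1.0 + m_T / M_T0) ** N_PRIME

p_T = np.array([0.5, 2.0, 10.0, 50.0])
print(spectrum(p_T))
# At large p_T the local log-log slope approaches -n':
slope = np.gradient(np.log(spectrum(p_T)), np.log(p_T))
print(slope)   # tends toward ~ -5.69 at the highest p_T values
```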
Further Approximation of the Hard-Scattering Integral
We would like to simplify further the $p_T$ dependencies of the structure function in Eq. (31) and of the running coupling constant, expressing them as additional power indices in a way that facilitates the subsequent phenomenological comparison. For a parton $c$ coming out at mid-rapidity, the quantities $x_{a0}$, $x_{b0}$, and $x_c$ in Eqs. (10) and (31) take simple forms, and the structure-function factor and the denominator factor in Eq. (31) can be approximated for high energies with $\sqrt{s} \gg c_T$. We can relate $c_T$ to $p_T$ by Eq. (32) and further approximate the right-hand side in a form that is advantageous for subsequent purposes; for high energies with large $\sqrt{s}$, this approximation introduces an additional power index $n_g$. We therefore estimate that $n_g \sim 0.04$ and $0.007$ for $\sqrt{s}$ = 0.9 and 7 TeV, respectively.
The running coupling constant $\alpha_s$ is a monotonically decreasing function of $Q(c_T)$. It can be written approximately as an additional power of the transverse momentum, with index $n_\alpha$ chosen to minimize errors by matching $\alpha_s$ at two points of $p_T$. If we match $\alpha_s(p_T)$ at $p_T$=$\Lambda_{QCD}$=0.25 GeV and at $p_T$=100 GeV, then $n_\alpha = 0.36$. If we match $\alpha_s$ at $p_T$=$\Lambda_{QCD}$ and at $p_T$=20 GeV, then $n_\alpha = 0.46$. As a consequence of the above simplifying approximations, we can write the hard-scattering integral Eq. (31) in an approximate quasi-power-law form with $n = n' + n_g + n_\alpha$, where $n'$ is the power index after taking into account the fragmentation process, $n_g$ the power index from the structure function, and $n_\alpha$ the one from the coupling constant. We note that the predominant change of the power index from jet production to hadron production arises from the fragmentation process, because $n_g$ and $n_\alpha$ are relatively small. In reaching the above equation, we have approximated the hard-scattering integral $F(p_T)$, which may not be exactly of the form $1/[1 + m_T/m_{T0}]^n$, into such a form. It is then easy to see that, upon matching $F(p_T)$ with $A/[1 + m_T/m_{T0}]^n$ according to some matching criteria, the hard-scattering integral $F(p_T)$ will be in excess of $1/[1 + m_T/m_{T0}]^n$ in some regions and in deficit in others. As a consequence, the ratio of the hard-scattering integral $F(p_T)$ to the fit $1/[1+m_T/m_{T0}]^n$ will oscillate as a function of $p_T$. This matching between the physical hard-scattering outcome, which contains all physical effects, and the approximation of Eq. (37) may be one of the origins of the oscillations of the experimental fit with the nonextensive distribution (as can be seen below in Fig. 3).
Nonextensive Distribution as a Lowest-Order Approximation of the Hard-scattering Integral
In the hard-scattering integral Eq. (37), if we identify the scale parameter with $T$ and the power index with $n = 1/(q-1)$, and consider the produced particles to be relativistic so that $m_T \sim E_T \sim p_T$ and $E_T \sim E$ at mid-rapidity, then we obtain the nonextensive distribution of Eq. (2) as the lowest-order approximation of the QCD-based hard-scattering integral.
The convergence of Eq. (37) and Eq. (2) can be considered from the viewpoint of the reduction of a microscopic description to a statistical-mechanical description. From the microscopic perspective, hadron production in a pp collision is a very complicated process, as evidenced by the complexity of the evolution dynamics in the evaluation of the $p_T$ spectra in explicit Monte Carlo programs, for example in [48][49][50]. If one starts from the initial condition of two colliding nucleons, there are many intermediate and complicated processes entering into the dynamics, each of which contains a large set of microscopic and stochastic degrees of freedom. Along the way, there are stochastic elements in the picking of the degree of inelasticity, in picking the colliding parton momenta from the parent nucleons, in the scattering of the partons, in the showering evolution of the scattered partons, and in the hadronization of the fragmented partons. Some of these stochastic elements cannot be definitive, and many different models, sometimes with untestable assumptions, have been put forth. In spite of all these complicated stochastic dynamics, the final result of Eq. (37) for the single-particle distribution can be approximated to depend on only three degrees of freedom, after all is done, put together, and integrated. The simplification can be considered a "no-hair" reduction from the microscopic description to nonextensive statistical mechanics, in which all the complexities of the microscopic description "disappear" and are subsumed behind the stochastic processes and integrations. In line with statistical mechanics and in analogy with the Boltzmann-Gibbs distribution, we can cast the hard-scattering integral in the nonextensive form in the lowest-order approximation as a $q$-exponential in $E$ [52], where $E = \sqrt{m^2 + p^2}$ and $E=E_T=m_T$ at $y$=0. Here, the parameter $q$ is related physically to the power index $n$, the parameter $T$ to $m_{T0}$ and the average transverse momentum, and the parameter $A$ to the multiplicity (per unit rapidity) after integration over $p_T$. Given a physically determined invariant cross section in the log-log plot of the cross section as a function of the transverse hadron energy, as in Fig. 3, the slope at large $p_T$ gives the power index $n$ (and $q$), the average of $E_T$ gives $T$ (and $m_{T0}$), and the integral over $p_T$ gives $A$. Fig. 3 gives the comparisons of the results from Eq. (40) with the experimental $p_T$ spectra at central rapidity obtained by different Collaborations [17][18][19]. In these calculations, the effective temperature parameter is set equal to $T$=0.13 GeV, and the parameters $A$ and $q$, with the corresponding $n$, are given in Table 2. The dashed line (an ordinary exponential of $E_T$, i.e. the $q \to 1$ limit) illustrates the large discrepancy that arises if the distribution is described by the Boltzmann-Gibbs distribution. The results in Fig. 3 show that Eq. (40) adequately describes the hadron $p_T$ spectra at central rapidity in high-energy pp collisions. We verify that $q$ increases slightly with the beam energy but, for the present energies, remains always $q \simeq 1.1$, corresponding to a power index $n$ in the range of 6-8 that decreases as a function of $\sqrt{s}$.
(Fig. 3 caption: The temperature is set to be the same for all curves, $T$ = 0.13 GeV, with the normalization constant in units of GeV$^{-2}/c^3$. The corresponding Boltzmann-Gibbs (purely exponential) fit is shown as the dashed curve. For better visualization both the data and the analytical curves have been divided by a constant factor as indicated. The ratios data/fit are shown at the bottom, where a roughly log-periodic behavior is observed on top of the $q$-exponential one. Data are taken from [17][18][19].)
What interestingly emerges from the analysis of the data in high-energy pp collisions is that the good agreement of the present phenomenological fit extends to the whole $p_T$ region (or at least to $p_T$ greater than 0.2 GeV/c, where reliable experimental data are available) [34]. This is achieved with a single nonextensive distribution. On the other hand, the theoretical analysis demonstrates that the hard-scattering integral can be written, in the lowest-order approximation, as a nonextensive distribution with only three degrees of freedom. It is reasonable to infer that the dominant mechanism of hadron production over the whole range of $p_T$ at central rapidity and high energies is the hard-scattering process.
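A minimal numerical sketch of the single nonextensive form used in this comparison, with the temperature fixed at $T$ = 0.13 GeV and $q \approx 1.1$ as quoted, set against the ordinary Boltzmann-Gibbs exponential; the normalization and the exact $q$ value per collision energy are illustrative assumptions here.

```python
import numpy as np

T = 0.13        # GeV, effective temperature quoted in the text
Q_PARAM = 1.1   # nonextensivity parameter, approximate value quoted in the text
M_PI = 0.13957  # GeV, pion mass; E_T ~ m_T at y = 0

def tsallis(p_T, A=1.0, q=Q_PARAM, T=T):
    """Nonextensive (q-exponential) distribution, lowest-order approximation:
    A * [1 - (1-q) * E_T / T]^(1/(1-q)), with E_T = m_T at mid-rapidity."""
    E_T = np.sqrt(M_PI**2 + p_T**2)
    return A * (1.0 - (1.0 - q) * E_T / T) ** (1.0 / (1.0 - q))

def boltzmann_gibbs(p_T, A=1.0, T=T):
    """q -> 1 limit: ordinary exponential in E_T."""
    E_T = np.sqrt(M_PI**2 + p_T**2)
    return A * np.exp(-E_T / T)

p_T = np.array([0.5, 2.0, 10.0, 50.0])
print(tsallis(p_T))          # falls like a quasi-power law ~ p_T^(-1/(q-1)) at high p_T
print(boltzmann_gibbs(p_T))  # falls exponentially, underestimating the high-p_T tail
```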
The dominance of hard scattering also for the production of low-$p_T$ hadrons in the central rapidity region is supported by two-particle correlation data, where the two-particle correlations in minimum-$p_T$-biased data reveal that a produced hadron is correlated with a "ridge" of particles along a wide range of $\Delta\eta$ on the azimuthally away side, centering around $\Delta\phi \sim \pi$ [16,53,54]. The $\Delta\phi \sim \pi$ (back-to-back) correlation indicates that the correlated pair is related by a collision, and the $\Delta\eta$ correlation in the shape of a ridge indicates that the two particles are partons from the two nucleons and carry fractions of the longitudinal momenta of their parents, leading to the ridge in $\Delta\eta$ at $\Delta\phi \sim \pi$.
Conclusions and Discussions
Particle production in high-energy pp collisions at central rapidity is a complex process that can be viewed from two different and complementary perspectives. On the one hand, there is the successful microscopic description involving perturbative QCD and nonperturbative hadronization at the parton level, where one describes the detailed mechanisms of parton-parton hard scattering, the parton structure function, parton fragmentation, parton showering, the running coupling constant, and other QCD processes. On the other hand, from the viewpoint of statistical mechanics, the single-particle distribution can be cast into a form that exhibits all the essential features of the process with only three degrees of freedom. The final result of the process can be summarized, in the lowest-order approximation, by a power index $n$, which can be represented by a nonextensivity parameter $q=(n + 1)/n$; the average transverse momentum $m_{T0}$, which can be represented by an effective temperature $T = m_{T0}/n$; and a multiplicity constant $A$ that is related to the multiplicity per unit rapidity when integrated over $p_T$. Such a reduction from a microscopic description to a statistical-mechanical description can be shown both from theoretical considerations, by obtaining a simplified and approximate hard-scattering integral, and by comparing with experimental data. In the process, we uncover the dominance of the hard-scattering hadron-production process and the approximate validity of a "no-hair" statistical-mechanical description for the whole transverse momentum region in pp collisions at high energies. We emphasize also that, in all cases, the temperature turns out to be one and the same, namely $T$ = 0.13 GeV.
What we may extract from the behavior of the experimental data is that the scenario proposed in [11,12] appears to be essentially correct, except for the fact that we are not facing thermal equilibrium but a different type of stationary state, typical of a violation of ergodicity (for a discussion of the kinetic and effective temperatures see [55,56]).
As a concluding remark, we note that the data/fit plots in the bottom part of Fig. 3 exhibit intriguing, roughly log-periodic oscillations, which suggest corrections to the lowest-order approximation of Eq. (37) and some hierarchical fine structure in the quark-gluon system in which hadrons are generated. This behavior is possibly an indication of some kind of fractality in the system. Indeed, the concept of self-similarity, one of the landmarks of fractal structures, was used by Hagedorn in his definition of the fireball, as was previously pointed out in [21] and found in the analysis of jets produced in pp collisions at the LHC [57]. These small oscillations have already been preliminarily discussed in Section 8 and in [58,59], where the authors were able to mathematically accommodate the observed oscillations essentially by allowing the index q in the very same Eq. (40) to be a complex number (see also Refs. [60,61]; more details on this phenomenon, including a discussion of its presence in recent AA data, can be found in [33]). | 7,148.4 | 2014-12-01T00:00:00.000 | [
"Physics"
] |
Expansion of Child Tax Credits and Mental Health of Parents With Low Income in 2021
This cross-sectional study uses Household Pulse Survey data to investigate the association between the 2021 Child Tax Credit expansion and mental health among parents with low income.
Findings
In this cross-sectional study of a weighted sample of 546 366 adults, the expanded CTC benefits were associated with an approximate one-fourth decrease in anxiety symptoms for the primary beneficiaries whose household income was less than $35 000.
Introduction
Mental health is a pressing public health concern, serving as a fundamental pillar of both individual well-being and the broader health of society. The COVID-19 pandemic has substantially heightened concerns about mental health. 4,5 In July 2021, in response to the economic downturn caused by the pandemic, the US government temporarily expanded the Child Tax Credit (CTC), an income transfer program established in 1997 to provide financial aid for families with children, as part of the American Rescue Plan Act of 2021. Additional income of this kind may be linked to parental mental health through several channels. 7,8 For instance, additional income reduces financial burden and material hardships. It also allows parents to invest in resources that promote family well-being and child development.
Conversely, a scarcity of resources contributes to heightened stress levels, with detrimental outcomes for mental health. Additionally, resource constraints may impede effective parenting and strain family relationships, further exacerbating parental mental health issues.
The 2021 CTC expansion marked a pivotal transformation in US social policy. It raised the credit amount per child, expanded eligibility to include previously excluded parents with low income, and transitioned to a monthly advance payment system instead of lump sum disbursements. Prior to the expansion, the CTC was only partially refundable, leading to some families missing out on the full benefit. The changes took effect on July 15, 2021, with the Internal Revenue Service delivering advanced monthly installments of up to $250 per child aged 6 to 17 years and up to $300 per child younger than 6 years. This expansion aimed to provide financial relief and promote child development by making the credit nearly universal with more generous credit amounts and periodic distributions. 9,10 While we know that the 2021 CTC expansion has been shown to improve various aspects of family well-being and economic outcomes, 2,9-12 it is important to note that there are currently only a handful of relevant studies on the association of the policy with mental health. The aim of our study was to fill this crucial gap by identifying the association of the 2021 CTC expansion with parental mental health, as measured by depression and anxiety symptoms, using a refined methodology. Furthermore, we examined heterogeneous policy outcomes across various demographic characteristics based on findings from prior studies 11,13,14 in order to distinguish groups that may have been particularly vulnerable during the pandemic and investigate how individuals with different socioeconomic backgrounds responded to additional income from the policy change. Thus, we hypothesized that the policy expansion was associated with improved mental health among parents with low income. The HPS comprises a 20-minute online questionnaire. While the survey is limited by its relatively lower response rate compared with other federally sponsored surveys, it offers unique advantages in delivering timely and comprehensive information ranging from demographic characteristics to mental health indicators. 15 The current study analyzes data spanning from April 14, 2021, to January 10, 2022 (weeks 28-41). The policy expansion, marked by the first monthly payment on July 15, 2021, allowed us to create 2 distinct periods: the pre-policy expansion period (weeks 28-33) and the post-policy expansion period (weeks 34-41). It should be noted that the monthly payments were delivered over 6 months from July 15 to December 15, 2021, and households received the remainder of their credit in a lump sum payment after filing their taxes in the spring of 2022. 16 To mitigate potential confounding effects from variations in payment delivery methods, we focused our post-policy expansion period on July to December 2021.
Measures
The main mental health outcomes of interest consisted of self-reported depression and anxiety symptoms. Depression in the HPS was measured using a modified version of the 2-item Patient Health Questionnaire. The 2 questions asked how often respondents had been bothered by having little interest or pleasure in doing things and by feeling down, depressed, or hopeless. Similarly, anxiety symptoms were assessed using the 2-item Generalized Anxiety Disorder scale. Respondents were asked about the frequency of feeling nervous, anxious, or on edge and of not being able to stop or control worrying. The response categories of both parts of the questionnaire included not at all, several days, more than half the days, and nearly every day, with scores ranging from 0 to 3. We summed the scores for depression and anxiety, respectively, and created 2 binary variables indicating a high risk of depression or anxiety for scores of 3 or higher. 17 It should be noted that for the pre-policy expansion period, both depression and anxiety measures inquired about experiences over the past 7 days. However, in the post-policy expansion period, the questionnaire changed, and these measures now cover the last 2 weeks, potentially introducing measurement errors. To address the inconsistency in reference periods between the pre- and post-policy phases, we conducted a sensitivity test using alternative outcome measures. As the reference categories of the questions for depression and anxiety are based on the number of days (not at all, several days, more than half the days, or nearly every day), we adjusted these categories to a scale from 0 to 1, making the measures time insensitive. Specifically, we assigned 0 to not at all, while several days, more than half the days, and nearly every day were scored as 1. Subsequently, if the sum of the 2 questions for each symptom score was 2 or higher, we coded each depression and anxiety indicator as 1. The findings presented in eTable 4 in Supplement 1 indicate that the change in the reference period did not qualitatively change our findings, despite some inconsistencies in statistical significance in certain cases (ie, where the odds ratios [ORs] for depression were statistically significant in some cases, including the model for the full sample).
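The outcome construction described above can be made concrete with a short sketch; the column names are hypothetical placeholders, not the actual HPS variable names, and the scoring follows the description in the text (items scored 0-3, summed, binary indicator for a score of 3 or higher, plus the time-insensitive sensitivity recode).

```python
import pandas as pd

# Hypothetical item columns, each coded 0 = not at all, 1 = several days,
# 2 = more than half the days, 3 = nearly every day (names are placeholders).
df = pd.DataFrame({
    "phq_interest": [0, 2, 3],
    "phq_down":     [1, 2, 3],
    "gad_anxious":  [0, 1, 3],
    "gad_worry":    [0, 2, 3],
})

# Main outcomes: sum of the two items, flagged as high risk if the sum is >= 3.
df["depression"] = ((df["phq_interest"] + df["phq_down"]) >= 3).astype(int)
df["anxiety"] = ((df["gad_anxious"] + df["gad_worry"]) >= 3).astype(int)

# Sensitivity recode: each item dichotomized (0 = not at all, 1 = otherwise),
# symptom flagged if the sum of the two dichotomized items is >= 2.
for name, items in {"depression_alt": ["phq_interest", "phq_down"],
                    "anxiety_alt": ["gad_anxious", "gad_worry"]}.items():
    df[name] = ((df[items] > 0).sum(axis=1) >= 2).astype(int)

print(df)
```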
We used living with children in the households as a proxy for CTC treatment status. It is important to note that parents or guardians can still be eligible for the CTC even if they do not live with qualifying children under certain conditions. For instance, they may still qualify if they have lived with the children for more than half of the year. However, in this study, to ensure clarity and avoid complications related to family complexity and tax rules, we defined CTC eligibility based on the presence of resident children in the household. One methodological concern with our treatment variable is that there may be systematic differences between adults with children (eligible for the CTC) and those without (not eligible for the CTC). Such differences could potentially introduce bias.
We address such differences by balancing these 2 groups using a propensity score matching (PSM) method, as discussed in the following section.
Statistical Analysis
Propensity Score Matching
We matched adults living with children to those not living with children using a PSM method to minimize potential biases in the estimates of the association of the expanded CTC with mental health that may arise from imbalances between the 2 groups. Importantly, due to the cross-sectional design of the HPS, we cannot track the same observations longitudinally. In other words, it is not possible to match the 2 groups based on preexisting variables. In this context, we use Aerts and Schmidt's 18 multiple matching processes to convert cross-sectional data into quasi-panel data, enabling us to balance the treated and control groups using repeated cross-sectional data, as illustrated in the eFigure in Supplement 1. The following household characteristics were included to generate propensity scores: age, sex, self-reported race and ethnicity (Hispanic; non-Hispanic Black; non-Hispanic White; or non-Hispanic Asian, other race and ethnicity, or multiple races [grouped because of small sample sizes]), marital status, education (high school graduate or below, some college, and bachelor's degree or higher), household income (<$25 000, $25 000-$34 999, $35 000-$49 999, $50 000-$74 999, $75 000-$99 999, $100 000-$149 999, $150 000-$199 999, or ≥$200 000), and employment status. We included the race and ethnicity variable because prior studies showed that the pandemic had differential effects based on race and ethnicity. The public data did not disclose the races and ethnicities that comprised the other category because of small numbers. In the first matching stage (matching A), we matched individual i in the treated group (eligible for CTC) in the post-policy expansion period t1 with a nontreated (not eligible for CTC) twin h in the same period. In this stage, we used a 1-to-1 nearest neighbor matching method without replacement. Subsequently, in the second matching stage (matching B), we used the matched samples from matching A to match CTC-eligible individuals in the post-policy expansion period with a twin k from the pre-policy expansion period t0. In the final matching stage (matching C), using matched samples from matching A, we matched non-CTC-eligible individuals in the post period with a twin j in the pre period. In matching B and matching C, we used a nearest neighbor matching method with replacement, allowing the control units to be matched to multiple treated units, thus preserving observations. These 3 stages of matching enabled us to create balanced samples across the treated and nontreated units and over time and to address potential biases in estimating the effects of the CTC policy expansion. eTable 1 in Supplement 1 illustrates the results from the PSM balance diagnostics. It shows that bias between the CTC-eligible and non-CTC-eligible groups was significantly reduced (by ≥90%) for most of the variables used in the PSM process.
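A simplified sketch of the first matching stage (matching A) described above: propensity scores are estimated by logistic regression and 1-to-1 nearest-neighbor matching without replacement is performed within the post-expansion cross-section. The variable names are placeholders, and the greedy matching loop is a simplification of the multi-stage procedure used in the study.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_stage_a(df, covariates, treat_col="ctc_eligible"):
    """1-to-1 nearest-neighbor propensity-score matching without replacement
    within a single (post-expansion) cross-section."""
    X = pd.get_dummies(df[covariates], drop_first=True)
    ps = LogisticRegression(max_iter=1000).fit(X, df[treat_col]).predict_proba(X)[:, 1]
    df = df.assign(pscore=ps)

    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        if controls.empty:
            break
        # greedy nearest neighbor on the propensity score
        j = (controls["pscore"] - row["pscore"]).abs().idxmin()
        pairs.append((idx, j))
        controls = controls.drop(j)   # without replacement
    return pairs

# Usage (hypothetical columns): pairs = match_stage_a(post_df, ["age", "sex", "income_cat"])
```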
Triple-Difference Model
This study examined the association of the expanded CTC with mental health using a difference-in-difference-in-differences, or triple-difference, model with PSM samples. We used a logistic regression method because our mental health measures were binary. In the triple-difference model, Ŷ_iwst is the dependent variable for individual i surveyed in week w residing in state st. The POST variable is an indicator variable set to 1 for the post-policy expansion period after the first payment of the expanded CTC on July 15, 2021, and coded 0 before July 15. The CTC variable takes a value of 1 if individuals were living with CTC-qualifying children, which we used as a proxy for CTC eligibility, and is coded 0 otherwise. The POVERTY variable equals 1 if the household income was less than $35 000. This variable captures individuals who were potentially excluded from the CTC before the policy expansion and became newly eligible in response to the policy expansion in 2021. The triple-interaction term estimates the effects of the expanded CTC on mental health for the primary beneficiaries of the expansion. The β1 variable is the coefficient of interest in this study. The X variable denotes all self-reported covariates that are potentially associated with the CTC eligibility status and mental health, including sex, age, race and ethnicity, marital status, education, household income, employment status, and number of children in the household. We include time dummies (γ_w) and state dummies (η_st) to account for time-varying trends that were consistent across individuals and heterogeneity between states, all of which may affect our main outcomes of interest.
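A sketch of the triple-difference logistic regression described above, written with a statsmodels formula; since the display equation was not preserved, this should be read as a plausible reconstruction with hypothetical column names, where the coefficient of interest is the three-way interaction term and its exponential gives the reported odds ratio.

```python
import numpy as np
import statsmodels.formula.api as smf

def triple_diff_logit(df, outcome="anxiety"):
    """Triple-difference logistic regression: the three-way interaction
    post x ctc x poverty captures the association for the primary beneficiaries."""
    formula = (
        f"{outcome} ~ post * ctc * poverty"
        " + sex + age + C(race_eth) + C(marital) + C(income_cat) + C(educ)"
        " + employed + n_children + C(week) + C(state)"
    )
    # cov_type='HC1' requests robust standard errors (assumed available in the
    # installed statsmodels version).
    res = smf.logit(formula, data=df).fit(cov_type="HC1", disp=0)
    beta1 = res.params["post:ctc:poverty"]
    odds_ratio = np.exp(beta1)
    ci = res.conf_int().loc["post:ctc:poverty"].pipe(np.exp)
    return odds_ratio, ci

# Usage on the matched sample (hypothetical): or_, ci = triple_diff_logit(matched_df)
```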
Results
Our weighted sample comprised 546 366 adults with a mean (SD) age of 43.02 (14.54) years. More than one-half of the sample were female (52.9% compared with 47.1% male) and non-Hispanic White (57.7%, compared with 12.2% non-Hispanic Black and 10.5% non-Hispanic Asian, other race and ethnicity, or multiple races), and Hispanic individuals accounted for 19.6% of the sample. The most common education level was high school graduate or less (36.0%). Regarding income distribution, the highest frequency was observed in the $50 000 to $74 999 range (16.1%), followed by the $100 000 to $149 999 range.
a The source is our own analyses of data from the Household Pulse Survey, April 14, 2021, to January 10, 2022. 15 Sample weights were applied.
b The other category reflects all races and ethnicities available in the public use data, which was not expanded further due to small numbers.
The Figure shows the estimated ORs and the 95% CIs from the triple-difference model with PSM samples for the full sample and for subgroups defined by demographic characteristics, including sex, age, race and ethnicity, marital status, and education. The ORs were derived from the triple-interaction term, representing the estimated associations of the expanded CTC with mental health for the primary beneficiaries with household incomes below $35 000. The CTC expansion was not significantly associated with a change in depression (OR, 0.857; 95% CI, 0.694-1.059). The OR for anxiety symptoms for the full sample was 0.730 (95% CI, 0.598-0.890), implying a significant reduction in anxiety symptoms associated with the CTC expansion. Our findings for the full sample suggest that the expanded CTC benefits delivered from July to December 2021 may have contributed to a nearly one-fourth decrease in anxiety symptoms for the primary beneficiaries whose household income was less than $35 000.
Next, we performed a set of subgroup analyses by demographic characteristics using separate regression models for each subgroup. The subgroup analysis by sex showed that the OR for anxiety symptoms among female respondents was statistically significant (0.694; 95% CI, 0.544-0.886), suggesting a favorable association of the policy with anxiety symptoms for the female group.
(Figure note: The source of these findings is our analyses of data from the Household Pulse Survey, April 14, 2021, to January 10, 2022. 15 Odds ratios (ORs) are plotted as point estimates with 95% CIs. The ORs are derived from triple-difference models in which the primary exposure is a triple-interaction term between a binary variable representing that the interview was conducted after the CTC expansion (July 15, 2021), an indicator for CTC eligibility, and a binary variable for whether household income was below $35 000. All logistic regressions are adjusted for sex, age, race and ethnicity, marital status, income, education, employment status, and number of children in the household. Biweekly fixed effects and state fixed effects also were accounted for. Robust SEs were applied in the logistic regression models. a The other category reflects all races and ethnicities available in the public use data, which was not expanded further due to small numbers.)
However, no association with mental health was observed among male respondents. We further stratified the full PSM sample based on age into 2 groups: working age (17-60 years) and older adults (≥61 years). The OR for anxiety symptoms for the working-age group was statistically significant (0.706; 95% CI, 0.569-0.876), while the OR for the same measure for older adults was not (0.964; 95% CI, 0.672-1.382). Again, the ORs for depression were consistently not significant for both age groups. In the subsequent subgroup analysis by race and ethnicity, the expanded CTC benefits did not show an association with any of the mental health measures for any of the racial and ethnic groups examined, with the exception of anxiety symptoms among non-Hispanic White respondents (OR, 0.677; 95% CI, 0.513-0.894), which corresponds to an almost one-third improvement. The next subgroup analysis by marital status showed that the extended CTC benefits were not associated with mental health measures for either married or single parents. Finally, the subgroup analysis based on education level (high school or less, some college, and bachelor's degree or higher) showed that the policy expansion was associated with a decrease in anxiety symptoms for the some college (OR, 0.699; 95% CI, 0.534-0.915) and bachelor's degree or higher (OR, 0.665; 95% CI, 0.469-0.943) groups.
As a robustness check, we also estimated the regression models by ordinary least squares (eTable 2 in Supplement 1), which were qualitatively consistent with the main findings from the logistic regression models, and performed estimations using the raw sample before PSM (eTable 3 in Supplement 1), which supported that the results derived from the PSM sample were more favorable. Furthermore, changing the reference period did not qualitatively change the findings, as discussed in the Measures section (eTable 4 in Supplement 1).
In summary, our analyses highlight that, on average, the extended CTC benefits were associated with an improvement in anxiety symptoms among parents with low income.The subgroup analyses indicated that the positive associations of the policy with anxiety symptoms were particularly pronounced among the female, working-age, non-Hispanic White, some college, and bachelor's degree or higher subgroups.However, the policy expansion had null associations with depression.
Discussion
In this cross-sectional study, we investigated the association of the 2021 CTC expansion with mental health outcomes among parents with low income, with a specific focus on depression and anxiety symptoms. We found that the expanded CTC was associated with alleviating parental anxiety symptoms. However, distinct variations existed within specific segments of the population.
In general, our findings align with several prior studies that indicated that the expanded CTC had favorable associations with alleviating mental health outcomes, including depression and anxiety symptoms, 11,13 while diverging from those of Glasner et al, 14 who reported no short-term association of the CTC expansion with measures of life satisfaction and anxiety and depression symptoms. Given that several studies have shown that the expanded CTC significantly improved financial burdens and economic difficulties, including poverty and material hardship, 10,12,19 all of which are strongly associated with mental health, it is plausible to posit that the positive outcomes on these economic aspects may have partially or entirely translated into enhancements in parental mental health, as evidenced in the present study.
The findings from the subgroup analyses indicate that less disadvantaged groups experienced notable improvements in mental health outcomes. For example, working-age, non-Hispanic White, and more highly educated parents experienced a reduction in anxiety symptoms. In contrast, relatively more disadvantaged groups, including parents from racial and ethnic minority groups, older adults, and parents with less education, did not exhibit any observable improvements in their mental health. These findings raise the possibility that these groups may have been excluded from fully experiencing the advantages of the policy expansion. Furthermore, considering the disproportionate negative effects of the pandemic and the heightened challenges faced by more marginalized populations, 4,20,21 these results show the intersectional outcomes of the pandemic. In
other words, the intersections of multiple aspects of inequalities, specifically age, race and ethnicity, and education, within our sample experiencing poverty may result in compounded outcomes when individuals experience multiple disadvantaged positions. 22 Thus, the CTC expansion, on its own, may not suffice in addressing the broader mental health issues that could be more pronounced among these highly vulnerable demographics.
Even more intriguingly, our findings differ in several ways from those of previous studies 11,13 that used the same data from the HPS. First, our study found that the expanded CTC was associated with an improvement solely in anxiety symptoms, not depressive symptoms. In contrast, Batra et al 11 and Cha et al 13 reported that the policy expansion was associated with an enhancement in both symptoms. Second, these prior studies reported more pronounced positive policy associations among racial and ethnic minority groups. Such a disparity in results from the subgroup analyses underscores the need for more rigorous methodological approaches to identify the policy associations. The distinctiveness of our results may be attributable to the methodological rigor we have meticulously integrated. To be more specific, by using the PSM technique, the current study achieved better balance between the treated and untreated groups, accounting for heterogeneity between the 2 groups in estimating the true outcomes of the CTC expansion. This enhanced methodological approach instills higher confidence in the reliability and validity of our findings, setting our research apart from previous research. In addition, we provide in eTable 3 in Supplement 1 estimations using the raw sample before the PSM. The results from the raw sample indicate that the estimates derived from the PSM sample are more favorable, reinforcing the validity of our use of PSM.
Limitations
The current study is not without limitations. First, the HPS is limited by its low response rates. 24,25 The national-level weighted response rates from weeks 28 to 41 range from 5.4% to 7.4%, according to the HPS technical documentation for phases 3.1 to 3.3. 26 Consequently, we emphasize the need for careful interpretation of the results, especially for subgroups, as quality matters more than quantity. 23 Nevertheless, it is important to highlight that Parolin et al 27 found that the HPS sample closely mirrors the Current Population Survey population estimates. Second, the literature generally suggests that people with better health may be more likely to participate voluntarily in health surveys. 28 This inclination may lead to an underestimation of mental health issues, especially among individuals at higher risk. Therefore, the combination of low response rates and the self-reported nature of mental health data requires caution in interpreting the results. Third, our model only considers the number of children eligible for the CTC. The benefit levels of the expanded CTC differed based on the age of the child, providing a larger amount for younger children up to age 6 years.
However, due to the data availability issue, we were unable to incorporate the number of children by age in the model.
Conclusions
Although the 2021 CTC expansion held the potential to enhance economic outcomes, 10,12,19 which in turn may have led to improved mental health as our findings suggest, it was temporary and expired by the end of 2021. Reverting the program to its former state was expected to result in an immediate reduction in the benefit level, which could potentially exacerbate parental mental health conditions and child poverty. 13 Consequently, the expiration of the program raised substantial public health and economic concerns about its potential negative impact on the well-being of vulnerable populations, underscoring the need for continued support and comprehensive policy solutions, particularly as economic recovery from the pandemic is prolonged and continues to place additional strain on vulnerable populations. Debates continue regarding the program's permanence, alongside federal- and state-level consideration of similar programs. In fact, there are ongoing political efforts to revive the policy expansion, as evidenced by President Biden's 2024 fiscal year budget proposal 29 and the introduction of the American Family Act of 2023 by Representatives Rosa DeLauro, Suzan DelBene, and Ritchie Torres. In this context, our findings may provide valuable evidence for policy makers and offer essential insights to inform decision making and shape future policies.
These findings of improved parental anxiety symptoms with the CTC expansion may provide valuable evidence to inform policy makers' consideration of making the CTC expansion permanent or transforming it into a universal program.
This cross-sectional study used data from the Household Pulse Survey (HPS) collected by the US Census Bureau in collaboration with federal agencies. The data collection started on April 23, 2020, with the primary objective of gathering nationally representative information on the experiences of US households during the pandemic. This study was exempt from review and informed consent under the Common Rule (45 CFR 46) given the use of secondary deidentified data. This study follows the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cross-sectional studies.
Figure. Outcomes of the Child Tax Credit (CTC) Expansion in 2021 on Mental Health Among Parents With Low Income, April 2021 to January 2022.
The Table presents weighted descriptive statistics for 4 groups, categorized by period and CTC eligibility, for the final analytic sample with nonmissing values on the outcome measures and key covariates.
28. Cheung KL, Ten Klooster PM, Smit C, de Vries H, Pieterse ME. The impact of non-response bias due to sampling in public health studies: a comparison of voluntary versus mandatory recruitment in a Dutch national survey on adolescent health. BMC Public Health. 2017;17(1):276. doi:10.1186/s12889-017-4189-8
29. Maag E. Biden's child tax proposal would help many but presents administrative challenges. Tax Policy Center, Urban Institute & Brookings Institution. March 31, 2023. Accessed October 1, 2023. https://www.taxpolicycenter.org/taxvox/bidens-child-tax-proposal-would-help-many-presents-administrative-challenges
eTable 2. Effect of the CTC Expansion in 2021 on Mental Health Among Low-Income Parents Estimated From the OLS Model, April 2021-January 2022
eTable 3. Effects of the CTC Expansion in 2021 on Mental Health Among Low-Income Parents Using the Non-PSM Sample, April 2021-January 2022
eTable 4. Sensitivity Test: Effects of the CTC Expansion in 2021 on Mental Health Among Low-Income Parents Using Alternative Mental Health Measures, April 2021-January 2022 | 5,611.2 | 2024-02-01T00:00:00.000 | [
"Economics"
] |
Ethics of the Attention Economy: The Problem of Social Media Addiction
Social media companies commonly design their platforms in a way that renders them addictive. Some governments have declared internet addiction a major public health concern, and the World Health Organization has characterized excessive internet use as a growing problem. Our article shows why scholars, policy makers, and the managers of social media companies should treat social media addiction as a serious moral problem. While the benefits of social media are not negligible, we argue that social media addiction raises unique ethical concerns not raised by other, more familiar addictive products, such as alcohol and cigarettes. In particular, we argue that addicting users to social media is impermissible because it unjustifiably harms users in a way that is both demeaning and objectionably exploitative. Importantly, the attention-economy business model of social media companies strongly incentivizes them to perpetrate this wrongdoing.
A final issue with the term internet addiction is that, despite the fact that we have been using the term as if it refers to a single kind of addiction, the term seems actually to encompass several distinct addictions. Davis (2001) distinguishes between addiction to the internet in all of its forms, which we will call general internet addiction, and addiction to specific activities that are accessed through the internet, which we will call specific internet addictions, such as social networking, gaming, gambling, information searching, and accessing online pornography. Young (1996, 1998b), one of the first to coin the term internet addiction, later argued that the term should be understood as an umbrella term encompassing five subtypes of specific internet addictions: cyber sexual addiction (cybersex and cyberporn), cyber relationship or social media addiction (online social interactions), net compulsions (gambling, shopping, or day trading), information overload (web surfing and information searching), and computer addiction (game playing) (Young, Pistner, O'Mara, & Buchanan, 1999). In this article, our primary focus will be on the specific internet addiction that Young categorizes as cyber relationship addiction or what others have called social media addiction or social networking addiction (Kuss & Griffiths, 2011). A secondary focus of our discussion, however, will be directed at general internet addiction, particularly because much of the research on internet-related addiction has been on its generalized form.
General internet addiction has a surprisingly high prevalence among both adults and the young. In their meta-analysis of thirty-one international studies, Cheng and Li (2014) estimated that 6 percent of the world's population had become addicted to the internet. A survey by Durkee et al. (2012) found that about 4.4 percent of European adolescents were addicted to the internet, while Bányai et al. (2017) found that 4.5 percent of Hungarian adolescents were addicted. Koukia, Mangoulia, and Alexiou (2014) found that the prevalence of internet addiction among Greek university students was 4.5 percent. Anderson (2001) found that 9.8 percent of US college students who used the internet were addicted, while an online survey by Cooper, Morahan-Martin, Mathy, and Maheu (2002) found that 9.6 percent of US respondents were addicted. Thatcher and Goolam (2005) estimated that as many as nine million Americans were addicted to the internet. Studies of Asian populations have found significantly higher prevalence rates than those of Western groups. The studies of Yen, Yen, Chen, Tang, and Ko (2007, 2009) concluded that about 18 percent of Chinese high school students and about 12 percent of Chinese college students were addicted to the internet, while a further study found that about 20 percent of Taiwanese adolescents were internet addicted. Internet addiction is clearly a large and global problem.
Although research on the epidemiology of social media addiction is not as mature as the research on general internet addiction, a few studies have attempted to look at how widespread social media addiction (as distinct from general internet addiction) is in the general population and among younger users. (While some of these studies rely on the Griffiths conceptualization of addiction, we note that others use overlapping but different diagnostic criteria.) Cabral (2011), for example, surveyed 313 users of social media in the United States and found that 59 percent of them felt they were addicted to social media, while Olowu and Seri (2012) surveyed 884 students in Nigeria and found that 27 percent of them felt that they were addicted; the studies of Cabral (2011) and of Olowu and Seri (2012), however, were based on self-reports. A study of Chinese college students by Wu, Cheung, Ku, and Hung (2013) using a self-designed validated diagnostic instrument found that 12 percent of their sample was addicted. A study of young Peruvian subjects by Wolniczak, Cáceres-DelAguila, Palma-Ardiles, Arroyo, Solís-Visscher, and Paredes-Yauri (2013), also using a newly constructed validated diagnostic instrument, found that 8.6 percent of their sample were addicted to social media. A study of 1,870 Indian students using yet another validated diagnostic instrument found that 36.9 percent of social media users in the sample were addicted (Ramesh Masthi, Pruthvi, & Phaneendra, 2018). Taken together, these studies suggest that social media addiction is an important problem, but because studies have relied on self-reports or newly developed and different instruments, it is difficult to say with precision how extensive the problem is. The lesson to be taken from the studies, however, is that social media addiction is a global issue that appears to be widespread among young people as well as adults.
While the specific mechanisms social media companies use to render their platforms addictive have changed over time, three of these design elements are common and worth pointing out: first, the use of intermittent variable rewards (or what is sometimes called the slot machine effect) (Griffiths, 2018; Harris, 2019; Williams, 2018; Wu, 2016); second, design features that take advantage of our desires for social validation and social reciprocity; and third, platform designs that erode natural stopping cues. 3 We describe these briefly here and will return to them in the next section, but we note that these are not the only addictive mechanisms internet companies use. 4 Wu (2016: 187), paraphrasing the cognitive scientist Stafford (2006), notes that "the most effective way of maintaining a behavior is not with a consistent, predictable reward, but rather with what is termed 'variable reinforcement'-that is, rewards that vary in their frequency or magnitude." When a user opens the Twitter app, the user is brought to a blue loading screen. One might think that this loading screen must be due to slow hardware or a slow connection; however, Morgans (2017) notes that this delay in loading is yet another way to generate intermittent variable rewards. Pinterest takes a slightly different approach: "As the user scrolls to the bottom of the page, some images appear to be cut off. Images often appear out of view below the browser fold. However, these images offer a glimpse of what's ahead, even if just barely visible. To relieve their curiosity, all users have to do is scroll to reveal the full picture. . . . As more images load on the page, the endless search for variable rewards of the hunt continues" (Eyal, 2014: 110). More generally, the "pull-to-refresh" feature seen in a number of social media platforms mimics the motion and variable reward schedule of a slot machine (Harris, 2019; Williams, 2018).
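To make the contrast between fixed and variable reinforcement concrete, the following minimal Python sketch simulates a hypothetical "pull-to-refresh" gesture under a variable-ratio schedule (the reward arrives unpredictably) alongside a fixed-ratio schedule (the reward arrives on every Nth pull). The reward probability, the threshold, and the function names are illustrative assumptions of ours, not a description of any actual platform's implementation.

```python
import random

def pull_to_refresh(p_reward=0.3):
    """One 'pull' under a variable-ratio schedule: new content arrives with
    some probability, so the user cannot predict which pull will 'pay out'
    (the slot machine effect described above)."""
    return random.random() < p_reward

def fixed_schedule(pull_count, every_n=3):
    """Contrast case: a fixed-ratio schedule rewards every Nth pull, which is
    predictable and, on the behavioral account cited above, less effective at
    sustaining the behavior."""
    return pull_count % every_n == 0

if __name__ == "__main__":
    random.seed(0)
    for i in range(1, 11):
        variable = "new posts" if pull_to_refresh() else "nothing"
        fixed = "new posts" if fixed_schedule(i) else "nothing"
        print(f"pull {i:2d}: variable -> {variable:9s} fixed -> {fixed}")
```

On the account quoted from Wu (2016), it is the unpredictability of the first column, rather than the average rate of reward, that does the motivational work.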
In addition to generating intermittent variable rewards, social media platforms have introduced reward schemes designed to take advantage of our desire for social validation and reciprocity, among other psychological tendencies and needs. One notable example is Snapchat's use of "snapstreaks," a running tally of the number of consecutive days a user has exchanged photographs or "snaps" with another user (Griffiths, 2018). Teens often face immense pressure to maintain these streaks (Bosker, 2016). Similarly, Facebook's "like" button taps into social reciprocity (and social validation); as Alter (2017: 128) notes, "it's hard to exaggerate how much the 'like' button changed the psychology of Facebook use." Most social media platforms have now introduced social reward schemes similar to the "like" button.
Third, the erosion of natural stopping cues is most prominently seen in the use of infinite scrolls (Harris, 2019;Williams, 2018). Prior to infinite scrolls, when a user arrived at the bottom of a webpage, there was a natural stopping cue-that is, the end of the page. The user at that point would have faced some decisions: whether to press the link to load the next page, whether to exit the platform, and so on. Introducing infinite scrolls removed the opportunity to make such decisions. Now, as the user scrolls, the platform automatically populates the next page, thereby removing stopping cues that would have previously given the user the opportunity to reflect, even for a moment, on whether that user should continue using the platform.
Crucially, the more time users spend on social media platforms, the more data social media companies have about what works and what does not, which in turn allows them to further refine their platforms. As Alter (2017: 4) puts it, "the people who create and refine tech . . . run thousands of tests with millions of users to learn which tweaks work and which ones don't-which background colors, fonts and audio tones maximize engagement and minimize frustration. As an experience evolves, it becomes an irresistible, weaponized version of the experience it once was. In 2004, Facebook was fun; in 2016, it's addictive." Given the prevalence of social media addiction (and of internet addictions more broadly) and the ethical significance of the issues raised by addicting users to social media (issues we discuss in detail later), it is clear that social media addiction is a serious problem that managers of social media companies (as well as policy makers, public health officials, educators, and parents) would do well to address.
THE IMPERMISSIBILITY OF MAKING SOCIAL MEDIA ADDICTIVE
The research and related literatures on internet addiction, on balance, then, would seem to suggest that there is a substantial social media addiction problem, and one that gives rise to various harms. We turn now to making three distinct, but related, moral arguments about this problem. First, we argue that in light of the kinds of harms associated with internet addictions, it is wrong to use social media platforms to addict users, and these harms are not justified by the benefits those technologies produce. Second, we argue that users of social media platforms are injured in a way that is demeaning, thereby adding insult to the injury. Third, we argue that addicting users to social media constitutes a particularly objectionable form of exploitation. These arguments, we believe, show not only that it is wrong to design social media platforms that addict users but also why it is wrong.
The Harm Argument
Much of the literature on internet addiction has examined the harmful effects of both general internet addiction and addiction to social media, and several studies have confirmed their association with a wide range of harmful effects. Generalized internet addiction has been associated with poor performance at school because the addicted student fails to devote enough time to his or her studies (Fitzpatrick, Burkhalter, & Asbridge, 2019). It has also been associated with poor work performance because the addicted worker spends excessive amounts of time surfing the internet at work (Beard, 2002). A study focused on addiction to social media by Shakya and Christakis (2017) found that the more time young people spent on social media, particularly Facebook, the unhappier they were. Another study found that the more time adolescents spent on social media, the more depressed they became (Raudsepp & Kais, 2019). Kross et al. (2013) found that the longer people remained on Facebook, the more negative a mood they later reported.
It is significant that many of the harms associated with both general and specific internet addictions have a shared source: the time the addict spends on the technology. As the addicted person devotes more time to social media, the individual will necessarily have less time to devote to school, work, sleeping, caring for himself or herself, interacting with family, and face-to-face socializing with friends. As a result, the person's school, work, health, and social life often suffer. The individual's familial and other face-to-face social relationships will atrophy, leading one to become more isolated. In addition, as the empirical studies reviewed earlier show, the greater the amount of time the addicted person spends on the internet, the more that person will feel anxious and depressed. Moreover, even when the addicted person is not on social media, the addiction continues to put demands on their time. An individual who is addicted to social media, for example, finds themself repeatedly throughout the day shifting attention away from other activities to check social media feeds. Each time the person returns to their other activities, the individual not only needs additional time to refocus attention on those other activities but is able to give only limited attention to those other activities (Ward, Duke, Gneezy, & Bos, 2017). This repetitive fracturing of attention, then, decreases the time and attention the addict can devote to school, work, or socializing. These harms are not negligible, and they are morally significant. To understand their moral significance, it will help if we set them against a plausible view of what human dignity requires. Toward that end, we here adopt the capabilities approach developed by Nussbaum (1997, 2000a, 2001, 2003, 2011a, 2011b) and Sen (1985, 1992, 1999). The capabilities approach has, of course, been subjected to criticisms (Giri, 2000; Menon, 2002; Pogge, 2002, 2010)-a number of which are addressed by Nussbaum (2000b, 2007, 2019)-and there are critical differences between Nussbaum and Sen, the two major proponents of the view (Nussbaum, 2003). However, we here adopt the approach as articulated by Nussbaum, not only because the approach remains plausible to us despite her critics, but because it has also been endorsed by a large number of philosophers and has become part of the theoretical foundations of contemporary international development policies, including the United Nations' Human Development Index (Stanton, 2007). 5 Nussbaum (2003: 40) proposes ten "human capabilities" that, she argues, are required by "the dignity of the human being and . . . a life worthy of that dignity." Among these are the following seven: 1) life; 2) bodily health; 3) senses, imagination, and thought (being able to sense, imagine, think, and reason in a "human" way informed by education); 4) emotions (being able to experience love, grief, longing, gratitude; not having one's emotional development blighted by fear and anxiety); 5) practical reason (the ability to form a conception of the good and engage in reflection about the course of one's life); 6) affiliation (being able to live with others, show concern for them, engage in social interaction with them); and 7) play (being able to laugh and enjoy recreational activities). 6
Nussbaum argues that these capabilities are "entitlements" of every person and that if we use the "language of rights," we can say that every individual has a "human right" to these capabilities (Nussbaum, 2011b: 36).
5 Beyond human development and human rights, the broader capability approach has had far-reaching influence on a number of fields, including welfare economics, environmental policy, gender studies, and global public health (Robeyns, 2016).
6 In addition to these seven, Nussbaum includes three other capabilities that are not directly relevant to our argument; these three are bodily integrity (freedom to move from place to place, security from violence, and choice in matters of reproduction), other species (being able to live with concern for and in relation to animals, plants, and the world of nature), and control over one's political and material environment.
The harms that social media addiction-and internet addictions in general-inflict on the addict offend against these seven human capabilities that, according to Nussbaum, are required by human dignity and to which every person has a right. 7 Specifically, studies that have examined the harms social media addicts suffer and the corresponding capabilities they impair include the following: 1) Life: Several studies (Luxton, June, & Fairall, 2012; Twenge, Joiner, Rogers, & Martin, 2017) have shown that those who manifest an internet addiction, including a social media addiction, are more likely than others to have suicidal ideation. A recent meta-analysis of these studies by Cheng et al. (2018) showed that persons who have any kind of internet addiction not only think of suicide but also have significantly higher rates of planning and of actually attempting suicide.
2) Bodily health: A number of studies (Andreassen, 2015; Kim, Park, Kim, Jung, Lim, & Kim, 2010; Koc & Gulyagci, 2013; Wolniczak et al., 2013) found that compared to nonaddicts, adolescents who had a social media addiction, as well as those with other forms of internet addiction, suffered from poor sleep quality, used more alcohol and tobacco, ate irregularly, and had poor diets. Kojima et al. (2019) found that in general, adolescents addicted to the internet (including those addicted to social media) engaged in less exercise and physical activity and had less sleep. All of these factors, of course, undermine a person's bodily health.
3) Senses, imagination, and thought: Researchers have found an association between social media addiction (and other internet addictions) and a decline in the ability to reason accurately, think clearly, and engage in activities that require concentrated thought (Judd, 2014; Junco, 2012; Karpinski, Kirschner, Ozer, Mellott, & Ochwo, 2013; Kirschner & Karpinski, 2010).
4) Emotions: Those who are addicted to social media, and to the internet in general, suffer a number of emotional deficits, including depression, low self-esteem, social anxiety, alienation from family and peers, hostility toward others, and poor interpersonal relationships (see the review of Andreassen, 2015).
6) Affiliation: King and Delfabbro (2018) found that those with internet addictions in all its forms engage in fewer social activities, spend less time with family and friends, and experience less family closeness.
7) Play: Although those affected specifically with online gaming addiction may engage in excessive amounts of online gaming, all others afflicted with social media addiction or any other form of internet addiction have little time to participate in sports or any other kind of recreational activities, much less to "enjoy" them (Kim et al., 2010; Zhang, 2012).
The harms associated with social media addictions, then, are substantial moral injuries inflicted on the users they encumber. If we accept Nussbaum's (2003: 40) argument-as we do-they are harms that strike at the "central requirements of a life with dignity." To use Nussbaum's language, inflicting such harms violates the addicted person's rights (Nussbaum, 2011a). Inflicting such harms, then, is, prima facie at least, morally wrong.
An objection might be raised to our argument at this point. We argue that internet addiction (particularly social media addiction) imposes serious harms on users. To support our argument, we have cited a number of studies, many of which are correlational studies, that show that addiction to social media is associated with certain detrimental conditions, such as depression and anxiety. It may be objected, however, that although correlational studies may show that addiction to social media is associated with these detrimental conditions, they do not show that the addiction to social media (or, more generally, to the internet) causes those harmful conditions. Recent critical reviews of the research on social media have, in fact, pointed out that the correlational studies do not adequately distinguish cause from effect (Odgers & Jensen, 2020; Orben & Przybylski, 2019). Moreover, it is possible that the causality is bidirectional (Zink, Belcher, Kechter, Stone, & Leventhal, 2019). Indeed, some studies have found evidence that depression and anxiety lead some people to become addicted to the use of social media, while the addiction to social media leads others to fall prey to depression and anxiety (Gamez, 2014; Li et al., 2018).
A number of longitudinal and experimental studies, however, have addressed the causality issue, and these provide grounds for believing that, even if in some cases conditions such as depression and anxiety lead some users to become addicted to the use of social media, nevertheless, addiction to social media causes a significant number of users to fall prey to these detrimental conditions. A longitudinal study by van den Eijnden, Meerkerk, Vermulst, Spijkerman, and Engels (2008) found that the addicted use of chat and messenger features, now a core part of most social media platforms, at one point in time predicted the development of depressive symptoms six months later, while depressive symptoms did not predict later addiction to social media. The authors speculated that being addicted to social media might lead to the displacement of face-to-face interactions with friends and family and that the reduction in such face-to-face interactions results in depressive symptoms. Lam and Peng (2010) found that young subjects who engaged in the addicted use of the internet at the beginning of their study developed depressive symptoms nine months later. A three-year longitudinal study by Shakya and Christakis (2017), employing 5,208 US subjects, found that the more addicted their respondents became to the use of Facebook over three years, the more their physical health later declined, the poorer their mental health became, the lower they assessed their life satisfaction to be, and the higher their body mass index became. On the other hand, the more time their respondents devoted to interacting with real-world friends, the better they later fared on these measures. In an experimental study, Kross et al. (2013) followed eighty subjects over two weeks, asking them to text their responses to a questionnaire assessing their subjective well-being five times per day for fourteen days. Their study found that the more frequently their subjects interacted with Facebook, the lower their subsequent subjective well-being. A randomized experimental study by Tromholt (2016) found that when Facebook users were asked to give up their use of Facebook, they experienced fewer symptoms of depression a week later compared to those who continued to use Facebook. A similar experimental study by Hunt, Marx, Lipson, and Young (2018) followed 143 subjects over a four-month period and found that individuals who stopped using social media subsequently showed a reduction in their levels of depression, while those in a control group that did not stop using social media showed no changes. The decline in depression was strongest in those who were most highly depressed when they stopped using social media. The evidence, then, supports the position that addiction to social media is a cause of the harmful conditions that the correlational studies have found to be associated with such addiction.
It may also be objected that just because an act harms, or imposes risks of harms, does not necessarily render it morally impermissible; after all, many surgeries, medicines, and so on, cause harms, but we nevertheless deem the harms justified because of the compensating benefits the act produces. One might argue that the aggregate benefits produced by websites like Facebook greatly outweigh (and may justify) the aggregate harms due to addiction. Facebook and other social media websites, for example, have allowed billions of people to communicate and interact in ways that have been of enormous benefit. They have allowed many people to go online and build new relationships or recover old relationships with distant family and friends, to share their expertise and knowledge with others, to educate themselves about what is happening in the world, to communicate in times of crisis, and to organize entire social movements. In other words, social media also produces benefits, particularly by enhancing communication. Such benefits are not negligible, and the benefits Facebook and other such websites have produced may very well outweigh the harms they produce. But this objection fails to consider the fact that the immense benefits associated with the internet in general, and social media in particular, do not require the use of the mechanisms that have given these websites their addictive character. Much of the communicative and social interaction benefits social media websites deliver can be produced even if social media companies did not introduce the addictive mechanisms that they have designed into their websites, such as the intermittent variable rewards, social validation rewards, and elimination of natural stopping cues that we discussed earlier. These addictive mechanisms are not necessary to provide the communicative, relationship-building, educative, and organizational benefits social media has provided. The internet companies that build social media websites, then, build mechanisms into their websites that end up harming their users by addicting them, though they could provide similar valuable forms of social communication without those mechanisms. 10 Social media addiction is not a necessary part of delivering the benefits these products provide.
We conclude that it is morally wrong, then, to inflict on users the kinds of addictions that afflict many users as a result of the way social media companies construct their platforms and that the benefits produced by those platforms cannot justify the assaults on human dignity that result from the harms associated with those addictions.
The Adding Insult to Injury Argument
Not only are social media websites designed in ways that harm their users by addicting them but they add insult to the injury in a way that demeans and thus disrespects their users. To bring out this point, it will help first to briefly touch on a key feature of the design of social media platforms: adaptive algorithms.
Social media companies use so-called adaptive algorithms that continuously refine their platforms such that they can become more addictive for each user. The algorithms embedded in social media adjust the content they feed each particular user such that each user will remain engaged with the platform for ever longer periods of time (Lanier, 2018; Rader & Gray, 2015). 11 The algorithms do this by monitoring the amount of time particular kinds of content keep the particular user engaged with the platform, and they use that data to continuously adjust the content so that the particular user remains engaged with the platform for ever lengthening periods of time (Lee, Hosanagar, & Nair, 2018). The user's engagement with social media, then, produces an addictive feedback loop: the more one uses the platform, the more data the platform's algorithm has about what keeps that particular user engaged, and the more the algorithm feeds that particular user precisely the content that will keep them engaged even longer, and so the more addictive the platform becomes for that particular individual (Chessen, 2018; Schou & Farkas, 2016).
10 We are not here claiming that social media companies are intentionally harming their users. Rather, we are saying that social media companies make decisions that-regardless of their intentions-end up addicting users and thereby end up inflicting morally significant harms on users. Whether the social media firms intend to perform the action under that description (of intending to harm) is an issue on which we here take no position. For an overview of philosophical theories of intention, see Setiya (2018).
Of course, employing user data to influence content and presentation decisions is not new. Television has used Nielsen ratings to make both content and presentation decisions. What is new, however, is the level of granularity with which the adaptive algorithms are able to tailor their platforms to specific individuals and to do so continuously, automatically, and in real time. As Wharton professor Jonah Berger puts it, "social media is like a drug, but what makes it particularly addictive is that it is adaptive. It adjusts based on your preferences and behaviors" (Knowledge@Wharton, 2019).
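To make the adaptive feedback loop described above concrete, the sketch below implements a simple epsilon-greedy selection rule that favors whichever content category has historically kept a simulated user engaged the longest. The category names, the dwell-time model, and all parameters are our own illustrative assumptions; this is a toy loop, not a claim about how any particular platform's recommender system is built.

```python
import random

# Hypothetical content categories; real systems work with far richer signals.
CATEGORIES = ["friends", "news", "outrage", "cute_animals"]

def simulated_dwell_seconds(category):
    """Stand-in for the observed engagement with one item (arbitrary numbers)."""
    base = {"friends": 20, "news": 15, "outrage": 40, "cute_animals": 30}[category]
    return max(1.0, random.gauss(base, 5))

def pick_category(avg_dwell, epsilon=0.1):
    """Epsilon-greedy choice: mostly exploit the category with the highest
    average observed dwell time, occasionally explore another one."""
    if random.random() < epsilon:
        return random.choice(CATEGORIES)
    return max(avg_dwell, key=avg_dwell.get)

if __name__ == "__main__":
    random.seed(1)
    totals = {c: 0.0 for c in CATEGORIES}
    counts = {c: 1 for c in CATEGORIES}  # start at 1 to avoid division by zero
    for _ in range(200):
        avg = {c: totals[c] / counts[c] for c in CATEGORIES}
        chosen = pick_category(avg)
        totals[chosen] += simulated_dwell_seconds(chosen)
        counts[chosen] += 1
    share = {c: round(counts[c] / sum(counts.values()), 2) for c in CATEGORIES}
    # The loop gradually concentrates on whatever keeps this simulated user
    # engaged longest, mirroring the feedback loop described in the text.
    print(share)
```

The point of the sketch is only to show the structure of the loop: engagement data flow back into the selection rule, so the more the (simulated) user engages, the more tightly the selection is tuned to that user.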
One might object that all addictions are characterized by tolerance, so that the more a person consumes an addictive substance, the more addicted that individual becomes. That is, the more a vulnerable person consumes alcohol, smokes cigarettes, or snorts cocaine, typically, the more addicted the person becomes to each of these things. So how is the rise in the addictive potential of social media different? While addictive substances change the addicted person by increasing the person's desire or craving for the substance, the adaptive algorithms of addictive social media websites change the website itself to increase its own addictive potential for each particular user. In other words, the more a person uses a social media platform, the more addictive the platform itself becomes (and in turn, the greater the propensity and likelihood of addicting the user or making the user more addicted). Cigarettes do not change themselves to become more addictive for each particular smoker; however, the more a person uses a social media website, the more addictive the website itself becomes for that particular individual.
Crucially, then, there is an added insult in the way the social media platform's addictive potential is increased: the social media companies involve the individual in the very process that makes the platform more addictive to that individual. Not only are social media companies inflicting the harms associated with the addiction but they get the user to contribute to their ability to do this. The user is being used against oneself, given that by using the social media platform, the user provides the data that make the platform itself more addictive for that individual. This adds a demeaning insult to the harms that accompany social media addictions and makes social media companies' act of addicting their users particularly perverse. 12 To highlight the nature of the demeaning insult, it will help to consider insults in a different context: paternalistic policies. Shiffrin (2000: 207) argues that paternalistic policies "convey a special, generally impermissible, insult to autonomous agents." This sort of insult has been characterized as "effectively telling citizens that they are too stupid to run their lives, so Big Brother will have to tell them what to do" (Anderson, 1999: 301). 13 More simply, the thought such paternalistic policies and interventions express is the insulting thought that "you do not know best with regard to your own matters" (Cornell, 2015: 1316) and that "we know better than you what's good for you" (1317). Now the insult in getting a person to contribute to making addictive the very thing to which that person becomes addicted expresses something worse than the insult involved in paternalism. 14 The insult involved when a social media website uses the person to harm themself is not the insult that a person does not know what is best for them (the insult expressed in some acts of paternalism); rather, it expresses the demeaning idea that the person's interests do not matter at all-a paradigmatic instance of disrespect. The insult involved in some cases of paternalism might be preferable, given that in such cases, at least what's best for you is a consideration in the decision calculus, even if it is condescending. But the insult involved in the case of social media is one that disrespects users through expressing the demeaning thought that the companies do not care whether it is better or worse for the user because the user does not matter; the user's interests do not figure into the social media company's decision-making.
11 Our point is not that spending lots of time on a social media platform is equivalent to addiction. As noted in section 1, one must satisfy additional conditions to be addicted. However, excessive time spent on social media is a particularly salient observable feature that does not rely on user reports about his or her mental state and is defeasible evidence of addiction.
The demeaning insult involved in the way social media companies addict users-by getting them to provide the data they will use to addict them-is a further reason why addicting users to social media is morally wrong. We will next turn to building on the argument in this and the previous sections to advance our final argument-that addicting users to social media constitutes a wrongful form of exploitation.
The Exploitation Argument
Much of the contemporary philosophical attention to exploitation (Mayer, 2007; Sample, 2003; Valdman, 2009; Vrousalis, 2018; Wertheimer, 1996; Zwolinski & Wertheimer, 2017), including in business ethics (Arnold, 2010; Arnold & Bowie, 2003; Berkey, 2020; Powell & Zwolinski, 2012; Snyder, 2010, 2013; Zwolinski, 2008, 2009), has been directed at what we might call the "hard case" of exploitation, that is, understanding why and to what extent exploitation is wrong, when in many exploitative arrangements (e.g., sweatshops and price gouging), both parties are better off than they would be without the arrangement. But there is an easier case of exploitation: the case of exploitation that harms the exploited party. We will argue that addicting users to social media is just such a case. In section 2.1, we discussed the harms involved in addicting users to social media. Now, we turn our attention to why addicting users to social media is a form of exploitation and one that is morally objectionable. Wood (1995, 2016) has provided an important account of exploitation that has had influence in a diverse range of contexts (e.g., Arnold & Valentin, 2013; Healy, 2010; Miller, 2010; O'Neill, 2013; Rogers, Mackenzie, & Dodds, 2012). Wood (1995) holds that exploitation involves taking advantage of a person's vulnerability to advance one's own ends. He notes, "To exploit someone or something is to make use of him, her, or it for your own ends by playing on some weakness or vulnerability in the object of your exploitation" (Wood, 2005).
12 The demeaning insult is analytically distinct from the harm because the harm can be realized without doing so in an insulting way (as is the case with other businesses that sell harmful products). Given this, the two are not one and the same, even if the insult and the harm are contingently linked. We thank an anonymous reviewer for asking us to clarify this point.
13 Citation due to de Marneffe (2006: 80).
14 See Caulfield (2019) for an account of the value of assessing various problems in business ethics through an expressive lens.
But not all acts of taking advantage of another's weakness or vulnerability for one's own ends are morally objectionable-that is, not all acts of exploitation are morally objectionable. For example, it is not wrong in basketball to exploit a defender's lapse in attention to pass the ball to a teammate for an easy layup, nor is it objectionable for an attorney to exploit a weakness in the opposition's argument (Wood, 1995: 152). So, what makes an act of exploitation morally objectionable? For an act of exploitation to be of a wrongful kind, it must involve disrespect toward the object of exploitation (Arnold, 2010; Wood, 1995).
We will build on the argument in the previous subsection and argue that addicting users to social media involves a wrongful form of exploitation. We can characterize the components of the morally objectionable form of exploitation in which we are interested as follows: exploiting X involves 1) taking advantage of X's vulnerability to 2) advance one's own ends 3) in a way that disrespects X. 15 In section 2.2, we already discussed the demeaning insult that disrespects the user when social media companies design their websites in ways that addict their users. So now, we will focus on 1) and 2): how social media companies advance their own ends through taking advantage of their users' vulnerability.
According to Wood (1995), then, an act is exploitative only if the exploiter advances his or her ends (even if the exploiter does not benefit all things considered) through the interaction with the object of exploitation. This is clearly satisfied in the interaction between social media companies and their users. Social media companies, in fact, are among the most lucrative of all businesses, and given that their profitability stems largely from advertisements directed at users (PwC, 2018), it is clear that social media companies are advancing their own ends when they get users to engage and remain engaged with their social media platforms. This point is uncontroversial, and we will not say more about how social media companies benefit themselves through their interactions with their users. However, for the interaction between social media companies and users to be exploitative, the companies must advance their ends in a certain way: they must do so by taking advantage of the vulnerability of the users. So, we now turn our attention to how social media companies take advantage of the vulnerability of social media users to advance their own ends.
15 Since Wood's (1995) article on exploitation, there have been numerous accounts of exploitation. The debate surrounding the concept of exploitation is an active area of research. For some overviews of the state of the debate on exploitation, along with some worries with Wood's account, see Vrousalis (2018) and Zwolinski and Wertheimer (2017). That said, Wood's key insight that exploitation involves taking advantage of another's vulnerability for one's benefit strikes us as capturing a critical aspect of exploitation. Moreover, it has been a particularly important account in the realm of business ethics (Arnold [2010] calls it "perhaps the most compelling empirical account of exploitation"). So, while acknowledging that there are a variety of accounts of exploitation available, we think it plausible that Wood's account captures a key component of exploitation, even if his account ultimately falls short of offering an exhaustive set of individually necessary and jointly sufficient conditions for the concept of exploitation.
There are two sources of vulnerability in social media users. The first source of vulnerability is seen in the garden-variety type of exploitation that exists between drug dealers and their addicted buyers. This vulnerability is based on the addicted person's powerful and sometimes desperate craving for the addictive object that is the usual outcome of becoming addicted to the object. Wood (1995: 143) notes that "an addict's need or desire for drugs, for example, is clearly a vulnerability which pushers may [exploit]." 16 Similarly, social media companies exploit the desire or craving to use their platforms that is the result of becoming addicted to those platforms, and the companies profit when this craving leads their users to engage with the platforms.
The second source of vulnerability is rooted in the pervasiveness and importance of the internet in our lives. Even if a user were to overcome the first source of vulnerability (i.e., were to overcome his or her addiction), the user must continue to contend with this second source. The second source of vulnerability is based on the fact that the same powerful desires or cravings that are the result of becoming addicted to an object in the first place can be reignited by environmental cues even after the addict has managed to overcome the addiction (Lu et al., 2002; Niaura, Rohsenow, Binkoff, Monti, Pedraza, & Abrams, 1988). Several studies have shown that objects or situations that are associated in the addict's memory with the object of his or her addiction will arouse the desires and cravings that originally accompanied the addiction, even years after the addict was presumed to have overcome the addiction (Conklin, 2006; Siegel, 1999). A former drug addict, for example, may begin to experience such cravings when seeing drug paraphernalia or watching a movie with scenes of people using drugs (University of Guelph, 2019; Wolter, Huff, Speigel, Winters, & Leri, 2019). In a similar way, people who have recovered from an addiction to social media (or some other form of internet addiction) may again experience a craving to engage with social media when they see others using a computer or smartphone or when they themselves use a computer or smartphone for some purpose unrelated to social media (Ko et al., 2013). Unfortunately, because of the pervasiveness of the internet and its unavoidability in our lives, this second source of vulnerability is inescapable in contemporary life.
16 See also Mayer (2007: 137): "It is usually thought to be wrong to exploit another person's attributes, for example when a pusher takes advantage of an addict's craving and sells her more drugs."
In other words, the pervasiveness and importance of the internet in our lives create an inescapable vulnerability to exploitation that makes addicting users to social media especially invidious. Addictions to many other activities and goods-for example, gambling, heroin, marijuana, television, and, to a lesser extent, alcohol-are such that one can get through life without having to be in situations where one is exposed to the environmental cues that can reignite craving for the addictive object. One can maintain a productive life even if one, for example, avoids going to casinos, removes oneself from the environment in which heroin use was common, or gets rid of the television. But it is virtually impossible in today's world to avoid use of the internet. While one can get on with a fairly productive life with little or no exposure to heroin, television, or gambling, it is extremely difficult to get by in contemporary society without exposure to the internet.
Moreover, it is not just that the internet is pervasive; it also plays a legitimate and essential role in many of our lives (Jackson, 2011). Many professional jobs require one to use email. Students at universities rely on the internet, as universities use online portals for grades and assignments, email communications, and entire courses. Health care professionals often convey test results through the internet. One report indicates that a majority of employers are less likely to hire a person without an active online presence (Harris Poll, 2017). Some employers strongly encourage employees to be active on social media and to post their experience as employees so that they can serve as brand ambassadors (Cervellon & Lirio, 2017). Many university social groups rely heavily on social media. Alerts and active shooting warnings are often disseminated through social media platforms by local governments, university security departments, and regional police departments; in some cases, changes to national and foreign policy are announced through the social media accounts of government officials. In other words, the internet's reach into our lives is much deeper and wider than the reach of other addictive substances, and that constant exposure provides the cues that produce the cravings of social media addiction. This gives social media businesses innumerable opportunities not only to addict but also to readdict users.
The pervasiveness feature is perhaps most worrying in the context of children and teens. 17 Unlike many addictive substances and activities that are illegal for minors, the internet is entirely licit. A fifth grader cannot go to a store to purchase cigarettes or alcohol. Similarly, teenagers and children are not permitted to gamble in casinos. Yet there are few barriers to a child's internet use, and in fact, children face a significant cost to not using the internet. 18 Children and teens, then, are exposed to the internet at a time when they lack full moral agency and are most susceptible to addiction (Chambers, Taylor, & Potenza, 2003; Jordan & Andersen, 2017). In addition, some individuals, adults as well as children, have characteristics that make them particularly vulnerable to becoming addicted to the internet. Some studies have shown, for example, that users with low self-control (Li, Dang, Zhang, Zhang, & Guo, 2014; Özdemir, Kuzucu, & Ak, 2014) and neuroticism (Kuss, Griffiths, & Binder, 2013) are particularly vulnerable to internet addiction. The pervasiveness feature of the internet means that individuals with such vulnerabilities will find it particularly difficult to avoid becoming addicted.
17 The Royal College of Psychiatrists recently released a report calling on the British government to require social media companies to provide data so researchers can further study the mental health effects of social media on children (Dubicka & Theodosiou, 2020). We thank an anonymous reviewer for bringing this to our attention.
18 Of course, some social media companies might note that a child needs to be of a certain age to sign up, but this has been almost entirely ineffective given the ease with which one can input a different age when signing up (Coughlan, 2016).
Addicting users, given our current context, then, constitutes a form of morally objectionable exploitation. Social media companies exploit the vulnerabilities of potential targets who are vulnerable not because of deviant preferences but because our society now relies heavily on the internet. Internet companies have a vast number of potential addicts who cannot simply follow Nancy Reagan's infamous mantra to "just say no." To conclude, given how pervasive the internet is in our lives, and how difficult it is for most of us to forgo the internet, addicting users to social media involves an especially invidious sort of exploitation. By inflicting addiction on their users, social media businesses engage in a form of morally objectionable exploitation.
Summary
In this section, we argued that addicting users to social media is impermissible because it involves unjustifiably harming them in a way that is demeaning and objectionably exploitative. We argued that addicting users to social media harms them in ways that violate their rights and that these harms are not justified given that whatever benefits social media may provide, they can be realized without addiction. Second, the way in which social media companies have users contribute to making the platforms themselves more addictive, we argued, is particularly perverse because it involves a demeaning insult. Furthermore, addicting users is a morally objectionable form of exploitation that is especially troubling because the pervasiveness and legitimate role the internet plays in our lives create for some users an inescapable vulnerability to such exploitation.
In what follows, we will discuss the nature of the business model used by social media companies and how it incentivizes this wrongful kind of behavior.
A BUSINESS THAT INCENTIVIZES WRONGDOING
Many kinds of businesses (both technology and nontechnology businesses) provide products that addict their users. But addiction is merely a contingent feature of the business model of most of them. For example, a cigarette company would not object if a customer bought its product and threw it in the garbage, used the cigarettes to build model bridges, transformed them into modern art, or used the product in any other way apart from smoking, so long as the customer continued to purchase the product. 19 In other words, the cigarette company would be indifferent to whether a customer ever actually smoked its cigarettes, as long as its revenues continued to flow at the same or an increased rate. 20 Something similar is true even for some addictive technology products that do not have an attention-economy business model. For example, consider subscription-based digital streaming services (e.g., Netflix): the contemporary popularity of the term binge watching is perhaps in large part due to such services. But so long as their customers purchase or renew their memberships, it is immaterial to these subscription streaming services whether or not those customers binge watch a given television series. This is not to say that these subscription-based streaming services do not employ mechanisms that render their platforms addictive: automatically rolling over into the next episode is a feature designed to keep users on the platform by eliminating natural stopping cues (e.g., having to end an episode and click into a new one). But the point is that it is not a necessary feature of the business model of companies with subscription-based streaming services that customers continue to watch the companies' shows. As long as customers renew or purchase their memberships, their failure to binge watch is not a significant problem for these companies. Perhaps it is even beneficial to subscription-based streaming services; assuming a company pays royalties on a per use basis, the company could lower its costs, and it would perhaps even be able to narrow its bandwidth infrastructure costs. To be clear, we are not saying that all of its customers would continue to buy and renew their subscriptions to these streaming services if they did not find the content addictive; rather, we are pointing out that making a platform addictive is not an essential feature of the subscription-based streaming service business model.
But attention-economy businesses-of which social media businesses are the paradigmatic example and our primary focus-have a business model that exhibits an important difference: it hinges on keeping users active on a platform for prolonged periods of time. The longer a user is active and engaged on a social media platform, the more profitable it is for the social media company. This is because the longer the user remains engaged with the platform, the more likely it is that this user will be exposed to, influenced by, and engaged with advertisements, and so the more the social media company can charge its advertisers (Lanier, 2018; McNamee, 2019; Price, 2018). Users of social media, unlike users of cigarettes, alcohol, or junk food, are not the source of the companies' revenues. The revenues of social media companies come from advertisers, not users. As the familiar slogan goes, with social media, you are not the customer, you are the product. 21 Thus built into the business model of social media is a strong incentive to keep users online for prolonged periods of time, even though this means that many of them will go on to develop addictions (Alter, 2017; Price, 2018). And, as we have argued, the significant harms of social media addiction have a temporal dimension: they are primarily related to the amount of time the person who becomes addicted spends on social media. 22 Given the arguments from the previous section-that addicting users to social media is impermissible because it inflicts unjustified harms in a way that is demeaning and objectionably exploitative-social media businesses have a strong incentive to engage in wrongdoing.
19 There is, of course, the possibility that cigarette companies would want you to smoke them for the purpose of getting other people to think it is trendy. But insofar as you are able to make it look like you are smoking, it would be irrelevant to them whether you in fact smoked.
20 None of this is intended by way of apologetics for the many serious ethical worries that arise due to cigarette businesses. We acknowledge the innumerous public health consequences of cigarettes and the cigarette companies' efforts to thwart democratic processes through troubling lobbying efforts and their attempts to influence the research agendas of universities.
21 The fact that user data are also sold is another point that supports the notion that users' attention is the product.
To be clear, we are not claiming that ad-based businesses are the only ones with a strong incentive to capture their customers' attention for as long as possible. We are arguing, rather, that insofar as a company's business model is an attention-economy business model (of which the ad-based models of social media companies are a paradigmatic example), this model generates a strong incentive to design websites in ways that addict users. The prime incentive for an attention-economy internet business is to get its users to devote prolonged periods of time to its website, and devoting their time to the website, for those who subsequently become addicted, is a primary source of the harms associated with their addiction.
The attention-economy business model is not novel, of course: both radio and TV programming have long run on such a model (Wu, 2016). Worries about addiction to radio and TV were also raised when these technologies first came to market (Meerloo, 1954; Sussman & Moran, 2013). And as ad-based attention-economy businesses, TV and radio also have an incentive to addict. But, as we have argued, technologies such as adaptive algorithms allow social media companies to target and continuously maximize their addictive potential at the individual level in ways that radio and television, currently, at least, cannot do.
Not all social media users, of course, become addicted, and there are a variety of reasons why particular individuals are vulnerable to becoming addicted to various behaviors, including genetics, environmental factors, and individual vulnerabilities (Browne et al., 2019; Kim & Hodgins, 2018). But a particularly important cause of internet addiction in general, and social media addiction in particular, is the design elements that internet companies embed in their platforms (Alter, 2017; Price, 2018). Some of the design elements that addict users to internet platforms were originally developed by engineers who drew on behavioral psychology to keep gamblers seated before the computerized slot machine monitors (electronic gambling machines, or EGMs) that have largely replaced other forms of gambling in casinos (Abbott, 2017; Breen & Zimmerman, 2002; Schüll, 2014). According to Yücel, Carter, Harrigan, van Holst, and Livingstone (2018: 20), these EGMs "are intentionally designed with carefully constructed design elements … that modify fundamental aspects of human decision-making and behaviors, such as classical and operant conditioning, cognitive biases, and dopamine signals." Having been developed in the gambling industry, it was an easy step to adapt these design techniques to early computer games and then to the design of internet websites (Alter, 2017: 136-39; Courtwright, 2019). But web design researchers have gone on to develop new addicting technologies that combine engineering and behavioral psychology to ensure website users will be "persuaded" to behave as the designer wants (Alter, 2017; Fogg, 2003; Lanier, 2018; Price, 2018). Engineers trained in university programs where such techniques are researched, developed, and taught (e.g., at the Persuasive Technology Lab of Stanford University, located in Silicon Valley) are hired by Silicon Valley companies to use those techniques-for example, variable reinforcement (Cash et al., 2012)-to design websites that entice users to remain engaged for ever longer periods of time (Andersson, 2018; Leslie, 2016; Simone, 2018) and that ultimately addict many of them (Alter, 2017; Lanier, 2018; Young & de Abreu, 2011). Users who are vulnerable become "hooked" (Eyal, 2014). The morally significant harms suffered by those who become addicted to social media, then, are the result of the design decisions of people who work in the companies that own and create those platforms.
22 The content of what users are exposed to also is plausibly linked to the harms. For example, exposure to content involving self-harm is linked to higher rates of suicidal ideation (Arendt, Scherr, & Romer, 2019). We thank an anonymous reviewer for raising this point about the relevance of the content that users encounter.
Our aim has not been to argue that social media firms intentionally addict their users; rather, we have focused exclusively on characterizing the moral dimensions of the act of social media firms designing platforms in ways that result in addicting users (whatever the intentions might have been of the people designing these platforms). Making claims about any given agent's intentions with respect to an action requires a kind of evidence that we have not aimed to provide. (We may note, however, that there is a plausible case to be made that some individuals at social media companies at least knew that users would become addicted and thus harmed by their website designs.) 23 Our chief aim in this section has been to discuss some of the distinctive incentives created by the business model of social media companies and to highlight how these incentives-particularly the incentive to monopolize the user's time-have led to the time-related harms inflicted on those who become addicted to social media.
IMPLICATIONS FOR THEORY AND PRACTICE
Some implications for both theory and practice are now worth noting. Several decades ago, the importance of not divorcing business ethics and engineering ethics was most prominently raised by the case of the Ford Pinto (Danley, 2005). Our article provides further support for why certain issues in design ethics and engineering ethics are not merely of tangential relevance to business ethics but themselves raise distinctly business ethics issues. While the specific issues of how exactly to design a social media platform may turn on questions of engineering and its ethics, many of these decisions are made by the company's managers and are prompted by the incentive structure of the company. Scholars should not see engineering ethics questions as divorced from business ethics, and vice versa. Second, well-intentioned teachers often encourage children to become more tech savvy, with an eye to preparing them for college and beyond. But this emphasis on technology in K-12 education, given the high addictive potential of social media, should be carried out with full awareness of its costs. More than that, it is not self-evident that a more technologically advanced class is a more pedagogically advanced class. Nor is it obvious that use of technology allows our children to become better-however we may understand this term-graduates or even citizens of our communities. Indeed, the founders and executives of many Silicon Valley tech companies-employees of the very firms that create the most addictive platforms-have opted to send their children to low-tech schools that do not integrate computers, tablets, or other electronic devices into their curricula (Archibald, 2018). 24 Furthermore, although much research has focused on the so-called digital divide (the disparity in access to the technology needed for educational and professional success between low- and high-income communities) (Rideout & Robb, 2019; van Dijk, 2006), there is a different kind of digital divide-call it the digital use divide-where teens in low-income communities are exposed to nearly two hours more per day of screen time than teens in wealthier communities (Rideout & Robb, 2019). While it is important to ensure that children of low-income communities have access to the resources required for educational and professional success, understanding the digital use divide takes on added importance, given the potential to addict.
23 For example, Chamath Palihapitiya, a former vice president at Facebook, stated, "The short-term dopamine-driven feedback loops we've created are destroying how society works… . I feel tremendous guilt… . I think … we kind of knew something bad could happen" (quoted in Lanier, 2018: 9). Sean Parker, the first president of Facebook, stated, "We need to sort of give you a little dopamine hit every once in a while because someone liked … a photo or a post or whatever … because you're exploiting a vulnerability in human psychology… . The inventors, creators-it's me, it's Mark [Zuckerberg], it's Kevin Systrom on Instagram, it's all of these people-understood this consciously. And we did it anyway" (quoted in Lanier, 2018: 8).
Third, the design features firms use to make their platforms more addictive could be used for the opposite purpose: to empower users to have a healthier relationship with social media. Importantly, these are fixes tech companies could implement with relative ease. For example, Apple has implemented features into its most recent iOS operating system to alert the user to his or her phone's usage statistics (e.g., hours spent on the device, number of instances a user turned on his or her phone). Something similar could be done for social media. Harris (2018) suggests other helpful design features, including alerting users to the estimated time they would spend were they to log in to a given website, alerting them to how long ago they logged in, and more. These suggestions are akin to the sorts of suggestions Sunstein and Thaler (2008) make in their discussion of nudges: in the same way that use of an opt-out on an organ donor form dramatically increases the number of donors, social media companies could assume that users opt out of the use of addictive aspects of technology unless they explicitly opt in (see also Goldstein, Johnson, Herrmann, & Heitmann, 2008).
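As a rough sketch of what an opt-in default and a usage alert of the kind suggested above might look like, the fragment below models per-user settings in which engagement-maximizing features (such as autoplay and an infinite feed) are switched off unless the user explicitly enables them, together with a simple session-time prompt. The feature names and the 30-minute threshold are hypothetical choices of ours, not features of any existing platform or of Apple's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EngagementSettings:
    """Hypothetical per-user settings: engagement-maximizing features are
    opt-in (disabled by default) rather than opt-out."""
    autoplay_enabled: bool = False
    infinite_scroll_enabled: bool = False
    usage_alert_minutes: int = 30  # illustrative threshold

def usage_alert(settings: EngagementSettings, minutes_this_session: float) -> Optional[str]:
    """Return a prompt once the current session exceeds the user's threshold."""
    if minutes_this_session >= settings.usage_alert_minutes:
        return (f"You have been here for {minutes_this_session:.0f} minutes "
                "this session. Keep going?")
    return None

if __name__ == "__main__":
    settings = EngagementSettings()      # defaults: addictive features stay off
    settings.autoplay_enabled = True     # the user explicitly opts in to one feature
    print(usage_alert(settings, 42))     # prompts the user to pause and reflect
```

Framed this way, the same design machinery that currently removes stopping cues could instead be used to reintroduce them.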
Fourth, insofar as social media firms continue to render their products more addictive, this fact should be made plain to their users. This is especially so given that, as we discussed, the platforms themselves are increasing in addictive potential due to the use of adaptive algorithms. Imagine if every time you bought coffee from your neighborhood café, the coffee, without your knowledge, spiked in addictiveness. This, obviously, would be troubling. But it might be made less bad if the café were to tell you that it would increase the addictiveness of your coffee each time you purchased coffee there. Tech firms, similarly, owe it to their users to make it clear that they are employing the users' usage data in ways that will not only make the experience better but may also make the platform more addictive. Moreover, doing so would help to lessen the force of the insult we discussed in section 2.2 on adding insult to injury. In short, even if technology firms continue to addict users, they ought to be transparent to users about the ways in which they are incorporating tools of behavioral psychology to design mechanisms that may elicit addiction.

24 For example, Steve Jobs famously did not allow his children to use iPads (Bilton, 2014).
Fifth, we should take seriously the possibility of ridding ourselves of social media until it takes on a form with a dramatically changed incentive structure and is designed to empower users in their decisions regarding its use (Lanier, 2018;Newport, 2019). And policy makers have an important role to play. 25 Although it is unlikely that policy makers would (or even should) pursue measures as drastic as prohibiting social media, policy makers should lower the barriers users face to exit social media. If a user wants to quit Facebook, for example, this is the process the user must go through at present. First, the user must click an unlabeled down arrow at the top right of the screen, and then "Settings." After doing so, the user is presented with a menu of thirteen options, including "Privacy," "General," "Security and Login," "Your Facebook Information," and "Blocking." The user would need to know that the correct option to choose is "Your Facebook Information." Once that is selected, the user is brought to a menu of an additional five options, one of which says "Deactivate and Delete." One might think the process is now complete. Not quite. The user is then presented with a choice to "Deactivate Account or Permanently Delete"-with the default, preselected option being the former. Suppose the user selects "Deactivate Account." After doing so, the user is brought to a different page, at the top of which it asks, "Are you sure you want to deactivate your account?" followed by five algorithmically curated photographs of that user's friends. Above each photograph, it notes that the friend will miss the user; so, for example, above Anjali's photo, it says "Anjali will miss you," along with a prompt to send Anjali a message (which would then thwart the deactivation process).
Suppose the user remains on course. Facebook next requires the user to select from one of ten reasons for leaving. Each of the listed reasons generates a pop-up window. For example, suppose the user's selected reason for leaving is "I spend too much time using Facebook"; that option pops up a window with the following response: "One way to control your interaction with Facebook is to limit the number of emails you receive from us. You can control what emails you receive [by clicking this link]." Let's suppose the user exits this window and continues the deactivation process. Once the user selects his or her reason for deactivating, the user must then select a further box to opt out of receiving future emails from Facebook (and select whether the user wants to keep using Facebook's messenger platform). Suppose the user opts out of receiving emails from Facebook and declines to continue using Facebook's messenger platform. Now the user can press the deactivate button. After doing so, once again, a notification pops up asking whether the user is sure about wanting to deactivate. If the user answers in the affirmative, the user can then press "Deactivate Now," which will conclude the deactivation process.
Suppose that, after deactivating, the user wants to return to Facebook. What must the individual do to reactivate? Facebook states, "You can reactivate your Facebook account at any time by logging back into Facebook or by using your Facebook account to log in somewhere else." That is, simply log back in. Given that many users habitually log in, some users may inadvertently log in. And once a user has logged back in (inadvertently or not), if the user wants to deactivate, the user must restart the entire deactivation process.
Suppose that, instead of deactivating, the user wants to permanently delete. The user needs to go through the same process discussed earlier for deactivating, but the user is instead ultimately brought to a screen that notes, "Your account is scheduled for permanent deletion. Facebook will start deleting your account in 30 days." If at any point during that thirty-day window the user inadvertently logs back in, the deletion is canceled, and the user must begin the process all over, with a renewed thirty-day period. This recommendation for policy makers is thus a simple one: require lower barriers to exit.
CONCLUSION
Social media companies have designed their platforms in ways that render their platforms addictive. Moreover, this is precisely what the attention-economy business model of social media companies strongly incentivizes them to do. Our article shows why scholars and policy makers should not treat social media addiction as the same sort of phenomenon as other addictions. We argued that a special kind of wrongdoing is involved in social media companies addicting their users: it unjustifiably harms users in a way that is both demeaning and objectionably exploitative.
"Philosophy",
"Business"
] |
Transport Properties of Methyl-Terminated Germanane Microcrystallites
Germanane is a two-dimensional material consisting of stacks of atomically thin germanium sheets. Its easy and low-cost synthesis holds promise for the development of atomic-scale devices. However, to become an electronic-grade material, high-quality layered crystals with good chemical purity and stability are needed. To this end, we studied the electrical transport of annealed methyl-terminated germanane microcrystallites in both high vacuum and ultrahigh vacuum. Scanning electron microscopy of the crystallites revealed two types of behavior, which arise from differences in the crystallite chemistry. While some crystallites are hydrated and oxidized, preventing the formation of good electrical contacts, the four-point resistance of oxygen-free crystallites was measured with multiple-tip scanning tunneling microscopy, yielding bulk transport with a resistivity smaller than 1 Ω·cm. When normalized by the crystallite thickness, the resistance compares well with the resistance of hydrogen-passivated germanane flakes reported in the literature. Along with the high purity of the crystallites, the thermal stability of the resistance at 280 °C makes methyl-terminated germanane suitable for complementary metal oxide semiconductor back-end-of-line processes.
Introduction
The aromatic bonds of graphene can be saturated with hydrogen atoms. This process leads to a 2D hydrocarbon called graphane [1,2], where the flat morphology of graphene evolves into a buckled sheet. This symmetry breaking results in a band gap opening to a value between 3.5 and 3.7 eV, depending on the resulting configuration of the buckling (boat or chair/twist) [3,4]. Such a wide band gap hampers the use of graphane in electronics and has motivated the synthesis of silicon- and germanium-based analogues of graphane, called silicane and germanane, respectively [5][6][7]. Both silicane and germanane have a reduced theoretical band gap compared to graphane: 2.9 eV and 1.9 eV, respectively [8]. The experimental value has been found to be even lower, between 1.4 and 1.6 eV for germanane [6,9], with carrier mobilities of tens of cm²·V⁻¹·s⁻¹ [10], raising hope for its use as an active channel in field effect transistors [11].
When compared to bulk Si and Ge, silicane and germanane offer the compelling combination of a quantized thickness and an atomic flatness characteristic of 2D materials. Both assets reduce the scattering mechanisms involved in electrical transport and thus favor a high carrier mobility. However, it has been shown that the hydrogen termination is unstable against thermal treatment [5], calling for more robust functionalization, such as methyl-terminated surfaces, which can restrict the formation of traps and thereby minimize carrier scattering [12,13]. While theoretical calculations predicted that the carrier mobility in methyl-terminated germanane could reach 10⁴ cm²·V⁻¹·s⁻¹ [14], the experimental works on the transport properties of methyl-terminated germanane remain scarce and are limited to ensembles of flakes [15].
Here, we took advantage of multi-probe scanning tunneling microscopy in ultrahigh vacuum (UHV) to characterize the transport properties of individual microcrystallites. Prior to the electrical measurements, our study revealed different behaviors between microcrystallites under the electron irradiation of a scanning electron microscope despite their annealing at 180 °C. While a fraction of the microcrystallites was well resolved, many microcrystallites became charged. Given that germanane is a layered material [16,17], and that layered materials are known to easily intercalate atoms and molecules [18], we first examined the chemical properties of the microcrystallites to identify the origin of the charging effects. By combining energy-dispersive X-ray (EDX), Raman, and cathodoluminescence (CL) spectroscopies, we show that incomplete dehydration, or partial oxidation, accounts for the charging of the microcrystallites and predominantly affects the microcrystallites with lateral dimensions exceeding ~5 µm. In contrast, the stable microcrystallites are always free of oxygen, which is suitable for the formation of good electrical contacts. Measurements of their four-point probe resistance are consistent with bulk transport. While the resistance values normalized by the microcrystallite thickness are comparable to those reported for H-passivated germanane flakes [10,19], the methyl-terminated germanane microcrystallites are found to be thermally robust at 280 °C, a temperature more compatible with standard complementary metal oxide semiconductor (CMOS) technological processes.
Materials and Methods
The methyl-terminated germanane was synthesized as follows: a three-neck round-bottom flask was taken into a N2-filled glovebox, to which iodomethane and acetonitrile were added. The flask was then connected to a Schlenk line and immersed in liquid nitrogen until the solution was frozen into a solid. CaGe2, water, and a stir bar were added to the flask while the contents were frozen. The contents of the flask had a molar ratio (CaGe2:iodomethane:water:acetonitrile) of 1:30:10:60. The flask was evacuated and refilled with nitrogen three times, and the methylation proceeded for seven days at room temperature. At this point, the reaction mixture was again frozen by immersing the flask in liquid nitrogen and loaded into a glovebox filled with N2. The methyl-terminated germanane was separated using vacuum filtration, washed with acetonitrile, and then dried under vacuum on a Schlenk line. The material was finally redispersed in an isopropanol solution.
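To make the reagent proportions concrete, the short Python sketch below converts the 1:30:10:60 molar ratio into masses and approximate liquid volumes for a hypothetical 1 g batch of CaGe2. The batch size is arbitrary, and the molar masses and densities are nominal literature values rather than quantities taken from the synthesis described above.

```python
# Hypothetical batch calculation for the 1:30:10:60 molar ratio
# (CaGe2 : iodomethane : water : acetonitrile) described in the text.
# Molar masses (g/mol) and densities (g/mL) are nominal literature values.

molar_mass = {"CaGe2": 185.34, "CH3I": 141.94, "H2O": 18.02, "CH3CN": 41.05}
density = {"CH3I": 2.28, "H2O": 1.00, "CH3CN": 0.786}  # g/mL near room temperature
ratio = {"CaGe2": 1, "CH3I": 30, "H2O": 10, "CH3CN": 60}

cage2_mass_g = 1.0                            # hypothetical batch size
n_cage2 = cage2_mass_g / molar_mass["CaGe2"]  # moles of CaGe2

for species, r in ratio.items():
    moles = n_cage2 * r
    mass_g = moles * molar_mass[species]
    line = f"{species:6s}: {moles * 1000:6.2f} mmol, {mass_g:6.2f} g"
    if species in density:                    # liquids: also report a volume
        line += f", {mass_g / density[species]:6.2f} mL"
    print(line)
```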
The structural and chemical analysis of the methyl-terminated germanane microcrystallites considered in this study was reported in Ref. [15]. Transmission electron microscopy (TEM), X-ray diffraction (XRD), Fourier transform infrared (FTIR), and Raman spectroscopies revealed the formation of fine crystallites with a methyl functionalization of the sheets. Recent additional selected area electron diffraction (SAED) in the TEM confirms the structural quality of the flakes and microcrystallites when methyl-terminated germanane is stored in isopropanol ( Figure S1).
For this new study, the crystallites were deposited on a native oxide layer at the surface of a p-type B-doped Si(111) wafer by drop casting the crystallites from the isopropanol solution. Two scanning electron microscopes (SEM) were used to perform the characterization of the microcrystallites. The accelerating voltage and probe current in both microscopes were set at 5 kV and 100 pA, respectively. The first microscope (Zeiss Gemini) was installed in an ultrahigh vacuum (UHV) system (nanoprobe, Omicron Nanotechnology), with a base pressure lower than 5 × 10⁻¹⁰ mbar. It was used to guide the positioning of four tips of a multiple-tip scanning tunneling microscope (STM) on the crystallites. As the use of good electrical contacts is essential to measure the resistance of a microcrystallite, tungsten tips were prepared by electrochemical etching in NaOH and thoroughly annealed in UHV to remove the thin oxide layer covering the tips. In order to determine the microcrystallite thickness, an STM tip was brought into the tunneling range above the substrate in the vicinity of the microcrystallite. It slowly scanned the surface and, once the edge of the microcrystallite was detected, safely retracted to keep the tunneling current constant. Based on the variation of the piezo tube along the z-direction, the height profile was measured (Figure S2). The STM tip could also be used to manipulate the microcrystallite and flip it around to visualize its morphology. The cross-sectional SEM view allowed the microcrystallite thickness to be measured (Figure 1c1,c2). The thinnest microcrystallite was about a hundred nanometers thick (Figure S2c), which corresponds to a stacking of about a hundred layers [18], whereas the large majority of microcrystallites had a thickness of about 800 nm. As for the transport measurements, one of the tips was first brought into electrical contact with the surface of a microcrystallite with the Si substrate grounded. The substrate was then disconnected from the ground and the three other tips were approached to the surface of the microcrystallite in tunneling mode, so that the current flowed through the microcrystallite only. The final approach was monitored with the tunneling current. Stable electrical contacts were obtained when the current saturated, yielding electrical resistances between the first tip and one of the other contacted tips in the range 1-10 MΩ. The tips were positioned with an in-line four-point geometry. Injection of the current I through the outer tips and measurement of the voltage drop V between the two inner tips provided access to the four-point resistance R4p = V/I. The four-point resistance R4p is independent of the contact resistance [20], but it is important to minimize the drift of the piezo tube and maintain steady contacts during the transport measurements; otherwise, the V(I) characteristics might deviate from a straight line (Figure S3). The second SEM was a ZEISS ULTRA 55 scanning electron microscope combined with a Quanta 200/Flash 4010 EDS detector (Bruker, Billerica, MA, USA) or a CLUE system (Horiba, Kyoto, Japan). The fully automated compact optical spectroscopy module (R-CLUE) with a retractable parabolic mirror offers colocalized Raman, cathodoluminescence (CL), and photoluminescence (PL) imaging, as well as spectroscopy characterization of individual microcrystallites. The Raman spectroscopy measurements were carried out using an iHR320 spectrometer with a laser operating at 532 nm wavelength as an excitation source.
The spectra were calibrated by setting the silicon phonon mode at 520.5 cm⁻¹.
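As a concrete illustration of the four-point analysis described in the Methods, the following Python sketch extracts R4p = V/I as the slope of a simulated linear V(I) sweep; the current range, resistance value, and noise level are invented for illustration and are not the actual measurement parameters.

```python
import numpy as np

# Illustrative four-point analysis: fit a simulated V(I) sweep to a line.
# The slope of V versus I is the four-point resistance R_4p = V/I,
# which is independent of the tip contact resistances.
rng = np.random.default_rng(0)

i = np.linspace(-5e-6, 5e-6, 21)                 # injected current (A), hypothetical range
r_true = 1.2e3                                   # assumed "true" resistance (ohm)
v = r_true * i + rng.normal(0.0, 2e-5, i.size)   # simulated voltage drop with noise (V)

slope, offset = np.polyfit(i, v, 1)              # linear fit: V = R_4p * I + offset
print(f"R_4p = {slope:.1f} ohm (offset = {offset * 1e6:.2f} uV)")
```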
Results
The methyl-terminated germanane microcrystallites were first examined with the UHV-SEM of the nanoprobe system. Prior to their observation, they were annealed for 3 h in the UHV preparation chamber. Indeed, the first reflection in the XRD pattern, which corresponds to the interlayer distance, was measured at 2θ = 7.9° [15], and a comparison of this reflection with the literature suggests a hydration of the microcrystallites [18]. Although water is typically desorbed at 120 °C in UHV, the annealing was performed at a higher temperature of 185 °C to increase the efficiency of the desorption process, as water molecules are intercalated between adjacent GeCH3 layers deep into the microcrystallites. Figure 1 shows a selection of microcrystallites with lateral dimensions in the micrometer-scale range. Surprisingly, two types of behavior were found under the electron beam: some microcrystallites appear bright and are well resolved (Figure 1a), whereas the contrast of other microcrystallites shows strong fluctuations (Figure 1b). We note that the stable microcrystallites are not altered by a longer exposure to the electron beam. In contrast, the fuzziness of the blurry microcrystallites vanishes upon manipulation with a metal STM tip, as illustrated by the comparison shown in Figure 1c,d. As the STM tip was grounded, we attribute the improved stability of the SEM image to a discharging of the microcrystallite through the electrical contact with the STM tip. A statistical analysis of the occurrence of instabilities in SEM images was performed on 75 microcrystallites, where both types of microcrystallites were observed side by side (Figure S4), revealing that the largest microcrystallites are more likely to become charged (Figure 1d).
To understand the origin of the charging effects, the samples were transferred to a SEM capable of performing EDX spectroscopy. Despite a brief exposure to air, the same behaviors were found. As shown in Figure 2a, the microcrystallite to the right appears stable, whereas three other microcrystallites, delineated by blue arrows, exhibit charging, although the accelerating voltage and the beam current were minimized to 1 kV and 100 pA, respectively. EDX revealed that these microcrystallites contained a large amount of oxygen, in contrast with the absence of oxygen for the well-resolved microcrystallite (Figure 2b,c). As the detection of oxygen can be caused by the oxidation of germanium or the presence of water molecules still trapped in the microcrystallites, further analyses of the microcrystallites were performed with Raman spectroscopy implemented in the SEM setup. Figure 2d shows three typical spectra, acquired on a well-resolved microcrystallite and on two microcrystallites prone to charging effects, respectively. The well-resolved microcrystallite shows a strong peak at 299 cm⁻¹ and a small and broad band in the range 525-635 cm⁻¹, with both features also present when the analysis was quickly performed in air (Figure S5b), consistent with previous results [15]. We attribute the first peak to the E2g doubly degenerate longitudinal and transversal modes of the Ge-Ge bonds in germanane [6]. The second one arises from two contributions: the second-order phonon modes, similar to what is observed in bulk Ge [21], and the excitation of a Ge-C vibrational mode, a signature of the methyl-terminated germanane. This mode was measured at 594 cm⁻¹ in Ref. [15] and is known to occur at 573 cm⁻¹ in Fourier transform infrared spectroscopy [12,22]. As for the charged microcrystallites, two distinct spectra were obtained. In the first case (bottom), the spectrum shows a strong shift of the main peak to 289 cm⁻¹, with a broad shoulder towards lower wave numbers. This shoulder can be caused either by the presence of amorphous Ge or by the oxidation of the microcrystallite. As a small peak was also measured at 440 cm⁻¹, characteristic of the vibrational modes of GeO2 [23][24][25], we can identify these microcrystallites as oxidized, in agreement with the EDX analysis. The second type of spectrum resembles that of the well-resolved microcrystallites, albeit with a shift of the main peak to 293 cm⁻¹ and the occurrence of an ill-defined band around 167 cm⁻¹. An aging analysis of microcrystallites in air (Figure S5c,d), which shows a similar magnitude in the shift of the main Raman peak upon a three-week exposure to ambient air, suggests a hydration of the flakes. Regarding the low-frequency band, it could be assigned to the A1g out-of-plane transversal optical mode of Ge-Ge bonds. This mode is expected at a frequency below the E2g mode in germanane [26] and, due to the heavier mass of the methyl group compared to hydrogen, could occur at an even lower frequency than the one measured at 228 cm⁻¹ in H-terminated germanane [6]. However, water is known to produce a strong Raman peak in this low-frequency region [27]. This is also true for H2O molecules under nano-scale confinement [28]. Comparison to Raman spectroscopy performed in air, where a small peak is measured at 162 cm⁻¹ (Figure S5b), supports this hypothesis. Moreover, a very weak CL signal, centered at an energy of 1.82 eV (Figure 2e), was measured, in contrast to the other microcrystallites, where no luminescence was detected.
This agrees with the photoluminescence observed in air (Figure S5a), indicating that water molecules are still present in the microcrystallites. Therefore, three types of microcrystallites are found: oxygen-free microcrystallites, which show a high stability under the electron beam; hydrated microcrystallites; and oxidized microcrystallites. We tested a longer annealing time at 180 °C for further improvement of the hydrated microcrystallites, but did not notice any change in the probability of finding stable microcrystallites with SEM. In order to understand why intercalated water molecules cannot be desorbed from the hydrated microcrystallites, Raman mapping of crystallites was performed in air. This revealed a variation of the main Raman peak as a function of spatial location within a crystallite: 301.6 cm⁻¹ at the center of the crystallite and 304.1 cm⁻¹ at its edge (Figure S5c). We attribute this spatial variation of the main Raman peak to the chemical environment of the germanane sheets. Indeed, the energy of the E2g mode depends on the nature of the ligands, with the methyl group yielding the highest energy in comparison with H, CH2CH=CH2, or CH2OCH3 functionalization [29]. It also depends on the ratio between the numbers of H and CH3 ligands when their relative stoichiometry changes. As a result, we suspect the hydrated microcrystallites to be only partially functionalized with methyl groups, making the complete desorption of intercalated molecules more difficult, as water molecules can react with poorly passivated sites. As the time in air increases, the crystallite further interacts with ambient humidity, which red shifts the E2g mode (Figure S5d), further supporting an incomplete methyl passivation.
Based on all of these observations, we suspect a higher degree of polycrystallinity in the largest microcrystallites, with grain boundaries facilitating the ingress of solvent and water molecules during their long storage in isopropanol. As a result, these molecules can further react with germanium atoms through the numerous defects at the grain boundaries, accounting for the detection of oxygen in the EDX experiments. Interestingly, when a charged microcrystallite is manipulated with the STM tips, it can be cleaved (Figure 3). Following the cleavage, the smaller crystallites do not exhibit a fuzzy contrast, in comparison with the biggest one, which is still glittering under electron irradiation. The improved stability of the contrast observed on the small crystallites points to a higher structural quality. The microcrystallites were electrically characterized with multiple-tip STM. Charged microcrystallites were investigated, but it was not possible to obtain stable electrical contacts. This situation is caused by the difficulty of keeping the charged microcrystallites immobile when polarized tips are approached or contacted. Indeed, microcrystallites can move because of the electrostatic repulsion induced by the polarized STM tips (Figure S6). Moreover, when contact is achieved, due to their partial oxidation, the insulating character of these microcrystallites results in highly resistive contacts, which preclude the injection of current into the microcrystallites. Conversely, on the well-resolved microcrystallites (Figure 4a,c,g), the V(I) characteristics were rather linear, as shown in Figure 4c. Analysis of the resistance as a function of the tip separation revealed two types of behaviors among these microcrystallites. For microcrystallites where the separation of the tips is in the range of the microcrystallite thickness, the resistance increases with the tip spacing. For example, the microcrystallite seen in Figure 4a has an electrical resistance that increases linearly by more than 4 kΩ when the tip separation varies between 0.7 µm and 2.5 µm (Figure 4d). Due to a crystallite thickness which is smaller than the outer tip separation, the current distribution is compressed at the bottom of the crystallite, raising its electrical resistance (Figure 4b). Conversely, for thicker microcrystallites where the tip separation is smaller than the microcrystallite thickness, the current predominantly flows near the surface. Hence, the resistance is inversely proportional to the distance between the tips. Such an example is illustrated in Figure 4f for the microcrystallite observed in Figure 4e. While strikingly different, both behaviors are consistent with a three-dimensional transport [20]. At small tip distances, despite the finite size of the microcrystallite, the four-point resistance verifies Ohm's law for a homogeneous and isotropic semi-infinite three-dimensional resistive material, R4p = ρ3D/(2πd), with ρ3D and d as the bulk resistivity of the microcrystallite and the tip separation, respectively. At larger distances, the four-point resistance verifies the relationship R4p = ρ3D·d/S, where S is the cross-section of the microcrystallite. Estimating a 2.5 µm² cross-section for the microcrystallite in Figure 4a and fitting the data points of Figure 4b,d with the second and first relationships, respectively, yields the consistent resistivities of 0.78 Ω·cm and 0.66 Ω·cm. A confirmation of a three-dimensional transport is provided by changing the probe arrangement.
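The two regimes discussed above lend themselves to simple linear fits. The sketch below illustrates how the bulk resistivity could be extracted from resistance-versus-spacing data in each regime, using the relationships R4p = ρ3D/(2πd) and R4p = ρ3D·d/S quoted in the text; the data points and the 2.5 µm² cross-section are illustrative stand-ins, not the measured values behind Figure 4.

```python
import numpy as np

# Regime 1 (tip spacing small compared with thickness): R = rho / (2*pi*d) -> fit R versus 1/d
# Regime 2 (spacing comparable to the thickness, cross-section S):  R = rho*d/S -> fit R versus d

d1 = np.array([0.3e-6, 0.5e-6, 0.8e-6, 1.2e-6])   # tip spacings (m), hypothetical
r1 = np.array([4100.0, 2500.0, 1550.0, 1050.0])   # four-point resistances (ohm)

d2 = np.array([0.7e-6, 1.2e-6, 1.8e-6, 2.5e-6])   # tip spacings (m), hypothetical
r2 = np.array([1800.0, 3100.0, 4700.0, 6500.0])   # four-point resistances (ohm)
S = 2.5e-12                                        # cross-section (m^2), i.e. 2.5 um^2

rho_semi_infinite = np.polyfit(1.0 / d1, r1, 1)[0] * 2.0 * np.pi  # slope of R vs 1/d equals rho/(2*pi)
rho_finite_section = np.polyfit(d2, r2, 1)[0] * S                 # slope of R vs d equals rho/S

print(f"rho (semi-infinite fit):  {rho_semi_infinite * 100:.2f} ohm*cm")
print(f"rho (finite-section fit): {rho_finite_section * 100:.2f} ohm*cm")
```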
Instead of contacting the same top plane, the source tip can be placed in contact with an edge facet, with the potential probes positioned across the microcrystallite relative to the source electrodes. The ground and potential-detection roles are then easily commutable. Such a situation is illustrated in Figure 4g. This might give rise to slight deviations in the V(I) characteristics due to small changes in the injection of electrons when the tip sourcing the current is in contact with rough facets. Overall, however, the V(I) curves yield resistances around 1 kΩ (Figure 4h), in agreement with the resistances found before. Although the microcrystallites can be seen as parallel resistors, their manipulation with the STM tips reveals a defective layered morphology, as shown in the SEM side views of Figure 1c. This inhomogeneous structure, with a random presence of dislocations and grain boundaries, renders the estimation of the resistivity of a single methyl-terminated germanane layer impossible.
However, it is interesting to compare the resistance of methyl-terminated germanane microcrystallites with the resistance of H-terminated germanane flakes found in the literature. Although the H-terminated germanane flakes show similar lateral sizes, they were much thinner [9,18]. Hence, we normalized the resistance by multiplying it by the measured thickness of the microcrystallite. Figure 5a summarizes the values of the resistances measured for different microcrystallites with similar electrode spacings, in the range 0.5-3.0 µm. For annealing temperatures around 180 °C, the electrical measurements show similar results for the CH3-terminated microcrystallites and the H-terminated flakes. At a temperature of 210 °C, the resistance of the H-terminated germanane flakes strongly decreases, whereas annealing the methyl-terminated germanane microcrystallites at 280 °C for 12 h does not lead to any significant change in resistance. The stability of the electrical conduction at varying temperatures is supported by the structural analysis of the microcrystallites. As seen in Figure 5b-d, observation of the edge of a microcrystallite thin enough to exhibit a single diffraction pattern does not show any significant modification in the TEM image or in the SAED pattern, highlighting the robustness of the methyl groups. The passivation of Si and Ge surfaces with hydrogen is known to be fragile. The surfaces easily react with water and organics in air, accounting for the lack of PL in H-terminated germanane [12]. The resistivity of H-terminated germanane drops after an annealing at 210 °C, due to hydrogen desorption and the possible transformation of the layers into germanene layers [19]. In contrast, the Ge-C bonds are stronger and more resistant to oxidation. The absence of any electrical modification shows the strong thermal stability of the methyl group, which is consistent with a previous study where the methyl desorption was found to occur at 420 °C [13].
Conclusions
In summary, methyl-terminated germanane microcrystallites intercalate molecules, water in particular, which can lead to reactive processes and unwanted chemical modifications. While the release of the intercalated water molecules upon annealing in UHV is more efficient for the smaller microcrystallites, a straightforward identification of the chemical quality of the microcrystallite can be easily performed with SEM by identifying the microcrystallites that do not charge under electron irradiation. These microcrystallites show ohmic behavior, which, in contrast to the H-terminated flakes, is found to be stable for annealing temperatures higher than 200 °C, offering stronger reproducibility for future experiments involving germanane flakes in a field effect transistor configuration.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/nano12071128/s1, Figure S1: TEM and SAED analyses of the microcrystallites, Figure S2: Height characterization of the microcrystallites, Figure S3: Influence of the electrical contact on the V(I) characteristics, Figure S4: Additional example of SEM image acquired at different length scales, Figure S5: Photoluminescence and Raman spectroscopy of methyl-terminated flakes performed in air, Figure S6: Jump of a charged microcrystallite due to electrostatic repulsion.
"Materials Science",
"Physics"
] |
Effect of ultrasonic and Nd:YAG laser activation of irrigants on the push-out bond strength of fiber post to the root canal
Abstract Objective: This in vitro study aimed to compare the efficacy of irrigants using various irrigation activation methods on the push-out bond strengths of fiber posts to root canals luted with self-adhesive resin cement (SARC). Methodology: Forty-eight decoronated human canines were used. The specimens were divided into four groups corresponding to the post-space irrigation process and were treated as follows: the distilled water (DW) (Control) group received 15 mL of DW; the sodium hypochlorite (NaOCl)+ethylenediaminetetraacetic acid (EDTA) group was treated with 5 mL of 5.25% NaOCl, 5 mL of 17% EDTA, and 5 mL of DW; the passive ultrasonic irrigation (PUI) group was treated with 5 mL of 5.25% NaOCl, 5 mL of 17% EDTA, and 5 mL of DW, and each irrigant was agitated with an ultrasonic file; and the laser activated irrigation (LAI) group was treated with 5 mL of 5.25% NaOCl, 5 mL of 17% EDTA, and 5 mL of DW, and each irrigant was irradiated with an Nd:YAG laser. Fiber posts were luted with SARC, and a push-out test was performed. Data were analyzed using one-way analysis of variance and the Tukey HSD test. Results: The bond strength values obtained for the groups were as follows: Control (10.04 MPa), NaOCl+EDTA (11.07 MPa), PUI (11.85 MPa), and LAI (11.63 MPa). No statistically significant differences were found among the experimental groups (p>0.05). The coronal (12.66 MPa) and middle (11.63 MPa) root regions showed a significantly higher bond strength compared with the apical (9.16 MPa) region (p<0.05). Conclusions: Irrigant activation methods did not increase the bond strength of fiber posts to the root canal.
Introduction
Endodontically treated teeth with excessive loss of coronal structure can be restored by using fiber posts. Advantages of fiber posts include credible mechanical properties and a low modulus of elasticity similar to that of dentin, so there is little risk of causing root fracture. Another advantage is that the post is translucent, which allows the passage of the light necessary for cement polymerization and thus allows the post to be bonded to the dentin wall. 1 When the post is luted to the root canal, two interfaces occur. One of them is between the post and the cement, and the other is between the cement and the root dentin. 2 The most commonly reported mode of clinical failure of fiber posts is debonding from root dentin. 3 The main problem believed to adversely affect the bonding strength of fiber posts is the blockage of cement adhesion through obstruction of the dentinal tubules by gutta-percha and sealer remnants, dentin debris, and the smear layer. 4 In addition to external factors such as irrigants, types of adhesives, and endodontic sealers, dentin-related factors such as dentin status and the orientation of dentin tubules also affect these interfaces. 5 Irrigation after post space preparation helps remove the smear layer and may increase the bond strength between the cement and the root canal dentin. 6 Frequently suggested irrigation protocols are 5.25% sodium hypochlorite (NaOCl) and 17% ethylenediaminetetraacetic acid (EDTA). NaOCl is used to remove the organic content of the smear layer, while EDTA is often used to remove its inorganic content. 7 Unfortunately, irrigants are unable to completely remove the filling material from the canals. 8 Therefore, several activation techniques have been developed to more effectively remove pulp tissue, microorganisms, the smear layer, and dentin debris from the root canal system, such as passive ultrasonic irrigation (PUI) and laser activated irrigation (LAI). 9 PUI is the ultrasonic activation of an irrigant in the root canal via an ultrasonically oscillating small file placed in the root canal after it has been shaped. 10 Recently, LAI has been introduced as an activation method for irrigation solutions that uses the transfer of pulsed energy by means of various laser systems. 11 Self-adhesive resin cement (SARC) systems are utilized to overcome the technical problems of multistep applications and to shorten the duration of clinical application. The main adhesive characteristic of SARC is attributable to a chemical reaction between phosphate methacrylates and hydroxyapatite; this cement presents limited infiltration into the tooth tissue. 12 The connection between the post, dentin, and cement is important for restoration stability and longevity. 13 Hence, the effect of different irrigation methods on post-dentin bonding strength should be investigated after post-space preparation. Therefore, in this study, we aimed to evaluate the effect of irrigant activation techniques on the push-out bond strength of fiber posts. The null hypothesis tested was that irrigant activation techniques do not affect the push-out bond strengths of fiber posts to root dentin.
Specimen preparation
Forty-eight freshly extracted human maxillary canines were selected for this study. Teeth with a single straight root canal and developed apices fulfilled the inclusion criteria. All teeth were stored in 0.1% thymol until the experimental procedure. Periapical radiographs were taken from both the mesiodistal and buccolingual sides to ensure that there was only one straight canal in each tooth. Teeth that presented prior endodontic treatment and fracture lines were excluded from this study.
Each specimen was decoronated using a low-speed saw (Mecatome T180; Presi, Eybens, France) under water cooling to provide a uniform root length of 15 mm. All root canals were prepared to size R50 with the RECIPROC system (VDW, Munich, Germany). Irrigation was performed using 5 mL of 5.25% NaOCl and 5 mL of 17% EDTA solution for 1 min, with 5 mL of 5.25% NaOCl between instrument changes. Distilled water (DW; 5 mL) was used for a final irrigation. Finally, the canals were dried using sterile paper points. All instrumented teeth were obturated with gutta-percha cones and AH Plus sealer (Dentsply De Trey, Konstanz, Germany) using the cold lateral compaction technique, and the canals were covered with a temporary filling material (Cavit-G; 3M ESPE, Seefeld, Germany). All specimens were stored at 37°C and 100% humidity for 7 days, after which the temporary fill was removed. The fiber post was then luted with SARC under mild pressure, and the SARC was polymerized with a light-emitting diode unit (Elipar S10, 3M ESPE, Neuss, Germany). All specimens were stored at 37°C and 100% humidity for 24 hours.
The specimens were embedded in acrylic blocks to be cut with a low-speed saw. The samples were cut horizontally to obtain 1-mm-thick sections, giving six slices from each root, two each from the coronal, middle, and apical parts of the root. The 2nd, 4th, and 6th slices were selected for the push-out test as the samples representing the coronal, middle, and apical parts, respectively.
Push-out test
The push-out bond strength was measured using a universal testing machine (Autograph AGS X; Shimadzu Co, Japan). The push-out test was applied at a crosshead speed of 0.5 mm/min using a 1-mm diameter metallic plunger.
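The text does not spell out how the failure load recorded by the testing machine is converted into a bond strength in MPa, so the following is only a sketch of the calculation commonly used for 1-mm root slices, assuming the bonded area is the lateral surface of a slightly tapered (frustum-shaped) post segment; the example load and diameters are invented.

```python
import math

def push_out_bond_strength(failure_load_n, d_coronal_mm, d_apical_mm, thickness_mm):
    """Bond strength (MPa) = failure load (N) / lateral bonded area (mm^2).

    The bonded area is modeled as the lateral surface of a truncated cone
    (frustum), a common choice for tapered fiber posts; for a cylindrical
    post the two diameters are simply equal.
    """
    r1, r2 = d_coronal_mm / 2.0, d_apical_mm / 2.0
    slant = math.sqrt((r1 - r2) ** 2 + thickness_mm ** 2)
    area = math.pi * (r1 + r2) * slant   # lateral frustum area, mm^2
    return failure_load_n / area         # N/mm^2 is numerically equal to MPa

# Invented example: 35 N failure load, 1-mm-thick slice, post diameters 1.5 mm and 1.4 mm
print(f"{push_out_bond_strength(35.0, 1.5, 1.4, 1.0):.2f} MPa")
```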
Results
The means and standard deviations of the push-out bond strength values are presented in
Intragroup comparisons revealed that bond strength values decreased in the coronal-to-apical direction; the difference among root regions was not significant in the Control and LAI groups (p>0.05), but the apical root region showed significantly lower values than the coronal root region in the NaOCl+EDTA and PUI groups (p<0.05) (Table 1).
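For readers who want to reproduce this kind of comparison, the sketch below runs a one-way ANOVA followed by a Tukey HSD post hoc test on invented bond-strength values drawn around the group means reported above; it uses scipy and statsmodels and is not the authors' dataset or analysis script.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented push-out bond strength values (MPa) for the four irrigation groups.
rng = np.random.default_rng(1)
groups = {
    "Control":    rng.normal(10.0, 2.0, 12),
    "NaOCl+EDTA": rng.normal(11.1, 2.0, 12),
    "PUI":        rng.normal(11.9, 2.0, 12),
    "LAI":        rng.normal(11.6, 2.0, 12),
}

# One-way ANOVA across the four groups
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

# Tukey HSD post hoc pairwise comparisons
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```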
The frequency of each type of bond failure mode is given in Table 2. The most common failure mode was adhesive (51.4%), followed by mixed failure between dentin and resin cement (41.7%) and cohesive failure within the resin cement (6.9%).
Discussion
In many cases, teeth whose restorations have failed because of cementation failure can be restored with a post after endodontic treatment. 1 The purpose of this in vitro study was to evaluate the effects of various irrigation methods on the push-out bond strengths of fiber posts luted to root dentin with SARC. Our results revealed that the examined irrigation methods slightly affected fiber post bond strength, but the effect was not statistically significant. In accordance with these results, the null hypothesis was accepted.
After the post cavity was prepared, the presence of a smear layer consisting of sealer, gutta-percha, and debris was observed on the dentin surfaces examined with a scanning electron microscope. 20 When etch-and-rinse systems are used, removing the smear layer becomes essential in order to achieve a hybrid layer. 4 When endodontic treatment is applied for the first time or during retreatment, a combination of NaOCl and EDTA is considered effective irrigation for smear removal, but the smear layer cannot be completely removed. 21 Self-adhesive cement, in contrast, does not require hybrid layer formation, which is considered an advantage for post retention. 12

Table 2 - Failure mode percentages with respect to the post space irrigation procedure
Conclusions
The findings of this study showed that irrigant activation methods did not increase the bond strength of fiber post to dentin luted with SARC, and apical root regions exhibited a significantly lower bond strength than the coronal and middle root regions.
"Medicine",
"Materials Science"
] |
PESTICIDAL POTENTIAL OF ETHNOBOTANICALLY IMPORTANT PLANTS IN NEPAL – A REVIEW
Received 23 July 2020; Accepted 11 August 2020; Available online 25 August 2020
Pests are considered a major problem in agriculture as they cause various degrees of loss. The use of synthetic pesticides to control these pests has resulted in pest resurgence, pest resistance, environmental degradation, and lethal effects on non-target organisms in agro-ecosystems. To minimize or replace the use of synthetic pesticides, botanical pesticides are important alternatives. They possess toxic effects against pests, including repellent, antifeedant, and antibiosis effects on insect growth. In Nepal, among 5,345 species of flowering plants, 324 species have pesticidal properties. Some of the botanicals, like neem, tobacco, sweet flag, garlic, mint, ginger, Artemisia, Sichuan pepper, Adhatoda, basil, drumstick, Jatropha, Polygonum, Lantana, chinaberry, etc., are widely used in pest management, and much research has been done to explore the potential of these botanicals. This study aims to review the insecticidal potential of these important ethnobotanical plants. The biopesticides made from these botanicals were found to be effective against various pests. However, their efficacy was found to be variable and often lower than that of synthetic pesticides.
INTRODUCTION
Nepal's agriculture sector is the most significant contributor to the national GDP, engaging two-thirds of its total population. A large number of crops are grown for food, fiber, shelter, fuel, animal feed, fodder, medicine, and so on. Infestations of insect pests are one of the major challenges in agriculture. Different kinds of pests (insects, mites, rodents, birds, slugs, snails, etc.) and disease-causing organisms (bacteria, fungi, viruses, nematodes, etc.) attack all types of crops, which leads to various degrees of loss (Neupane, 2004). Globally, it is estimated that yield loss due to arthropods, diseases, and weeds accounts for about 35% in major crops and may exceed 50% when pest control options are limited (Oerke et al., 2012). In some cases, losses may be even higher, up to total crop failure (Abate et al., 2000).
The application of pesticides for the control of agricultural pests has increased rapidly since their introduction in Nepal in the early sixties. The largest quantity of pesticides is used in rice (40-50%), followed by grain legumes (14-20%), fiber crops (13-15%), and vegetables and fruits (10-20%) (Manandhar & Palikhe, 1999). The annual import of pesticides in Nepal is about 211 mt a.i., of which 29.19% are insecticides, 61.38% fungicides, 7.43% herbicides, and 2% others, and the average amount of pesticide used in Nepal is 142 g a.i./ha, which is very low compared with other Asian countries (Sharma et al., 2012). Chemical pesticides are economical, reliable, and easy to use and have a strong and immediate effect against pests. However, these chemicals not only control the target pests but also harm non-target organisms (parasitoids, predators, plant pollinators, soil microorganisms, aquatic organisms, etc.) and wild animals. Such chemicals, when used repeatedly at high doses, lead to pest outbreaks, resistance, and resurgence; in the long run, insects develop resistance to insecticides.
Synthetic pesticides pose serious health risks to workers during manufacture, formulation, and field application. Most Nepalese farmers are unaware of pesticide types, levels of poisoning, safety precautions, and the potential hazards to health and the environment. Vegetable crops carrying high amounts of pesticide residue are sold in the market. Growers do not follow the certified waiting periods (the time between the last application of a pesticide and the harvest of a crop) for several pesticides on vegetable crops (Shrestha & Neupane, 2002). Such residues can create hormonal imbalance and have high and acute residual toxicity (Pretty, 2012).
Biopesticides were developed as safer alternatives to synthetic pesticides. According to Mazid et al. (2011), "Bio-pesticides are naturally occurring substances from living organisms (natural enemies) or their products (microbial products, phytochemicals) or their by-products (semiochemicals) that can control pests by nontoxic mechanisms."
3) Biochemical pesticides
In Nepal, the use of locally available plants for pest control is one of the traditional methods, and farmers have been using such plants since ancient times. Most Ayurvedic plants also possess pesticidal properties (Neupane, 2004). Botanical pesticides are easy to grow and are easily found in our surroundings. Besides, the low cost, low toxicity, and environmentally friendly characteristics of these pesticides make them preferable (Palikhe, 2002). Botanical pesticides do not pollute the environment as they are easily decomposed by microorganisms (Dubey et al., 2010). This study aims to document the pesticidal potential of the ethnobotanicals found in Nepal; its specific objective is to assess the effectiveness of these ethnobotanicals against various pests.
REVIEW METHODOLOGY
A rigorous desk study was done to collect and synthesize information in line with the topic of study. Various research papers, review articles, commentaries, and reports were carefully read and screened for data compilation and subsequent analysis. The scientific databases referred to for this purpose included Scopus, ScienceDirect, PubMed, SciFinder, ResearchGate, academia.edu, and Google Scholar.
Pesticides in Nepal
In Nepal, a total of 170 different pesticides (by common name), sold under 3,035 trade names, had been registered up to 2018 for use under the Pesticides Act and Rules. Most pesticides used in Nepal are imported from India, and some from China, Japan, and other countries, based on registration. Pesticides are distributed in Nepal only in the form of finished products.
Opportunities of Biopesticides in Nepal
The demand for biopesticides is rising steadily with organic crop cultivation and increasing health consciousness among people in all parts of the world, as biopesticides are safe, have few application restrictions, are easily degradable, and possess superior residue and resistance management potential. When used in integrated pest management systems, the efficacy of biopesticides can be equal to or better than that of conventional products.
The biopesticide sector has huge scope in Nepal. Due to its rich biodiversity, Nepal offers plenty of scope for biopesticides. The rich traditional knowledge base of the highly diverse indigenous communities in Nepal may provide valuable clues for developing newer and more effective biopesticides. The increasing awareness of organic and residue-free food would certainly warrant increased adoption of biopesticides by farmers.
Trend of Bio-pesticide Import
The negative impacts of synthetic pyrethroids and increasing pesticide resistance have increased interest in alternative control methods, with emphasis being placed on botanical pesticides and biological control. Biopesticides help farmers transition away from highly toxic conventional chemical pesticides. The data show that the import of biopesticides is increasing rapidly from 147.02 a.i.
Demerits of using botanicals
The low toxicity or non-toxicity of these compounds makes them less harmful to humans and the environment, but also less potent against the target organisms. Botanical pesticides may require frequent applications when used instead of synthetic chemical pesticides (Dodia et al., 2010).
Botanical pesticides are naturally derived from plants and formulated specifically for their ability to control insects. They are not true insecticides, since many are merely feeding deterrents and their effect is slow. Botanical insecticides are easily degraded by sunlight, air, and moisture; they lack persistence and wide-spectrum activity, and they are not necessarily available season-long. Most of them have no established residue tolerance, and not all plant products applied by growers have been scientifically verified. Botanicals may not kill insects immediately, but they quickly stop their feeding. Botanicals tend to be less expensive but are not widely available, and the potency of some botanicals may differ from one source or batch to the next. They also have poor water solubility and are not generally systemic, which is important for effective control of sucking pests. Phytotoxicity is another problem of botanical pesticides; neem oil-based products, for example, are often phytotoxic to tomato, brinjal, and ornamental plants at high oil levels (Nawaz et al., 2016).
Commonly used botanicals in Nepal
In Nepal, around 324 species of botanicals are found and among them, in
Mode and specificity of action of Neem as bio-pesticide
• Oviposition deterrence
Azadirachtin blocks the neuro-secretory cells, which disrupts adult maturation, egg production, and egg deposition in aphids. Vijayalakshmi et al. (1985) and Vimala et al. (2010) observed that the reproductive potential of Myzus persicae fed on a diet containing azadirachtin was less than half that of aphids fed on a control diet within the first 26 hours.
• Repellent
According to Shannag et al. (2015), the three products Azatrol, Triple Action Neem Oil, and Pure Neem Oil at higher concentrations were able to repel aphids feeding on sweet pepper plants.
• Antifeedant
When crops infested with Spodoptera litura are treated with neem products, the azadirachtin, salannin, and melandriol present cause a vomiting-like sensation, and the insect does not feed on the neem-treated surface (Jeyasankar et al., 2010; Vijayalakshmi et al., 1985).
• Growth Regulation
The neem component azadirachtin suppresses the activity of ecdysone, so the larva fails to molt and ultimately dies. It also causes malformation and sterility in emerging adults and inhibits chitin formation (Vijayalakshmi et al., 1985).
Effectiveness of neem products
Neem products have been found effective against more than 350 species of arthropods, 12 species of nematodes, 15 species of fungi, 3 viruses, two species of snails, and one crustacean species (Nigam et al., 1994), including some 200 species of insects (Uchegbu et al., 2011).
A study on the effects of Azatrol 1.2% (azadirachtin A and B), Triple Action Neem Oil (70% neem oil), and pure neem oil against aphids under greenhouse conditions showed that aphid colonization was reduced by 50-75% one week after application, and that a second application one week after the first caused total elimination of the aphids. Feeding was suppressed, but neem could not achieve complete inhibition of food intake (Shannag et al., 2015). A cold extract obtained after soaking the leaves for one week was found to have effective insecticidal properties against storage insect pests (Vimala et al., 2010). A neem seed kernel powder mixture can be used for the control of the cotton leafhopper on okra (Neupane, 2000).
Asuro (Justicia adhatoda)
Justicia adhatoda Linn. is a shrub widespread throughout the tropical regions of Southeast Asia (Chakraborty & Brantner, 2001). The leaves and flowers of Asuro have been found to contain significant amounts of phenols, flavonoids, and alkaloids in addition to protein and carbohydrate. The presence of these bioactive secondary metabolites in the leaves and flowers of Justicia adhatoda Linn. is correlated with their medicinal applications (Sarangthem, 2014).
Extracts of Asuro showed antifeedant (76.33%), larvicidal (62.33%), pupicidal (22.05%), and ovicidal (58.86%) effects. On the contrary, the extracts of Vitex negundo and Justicia adhatoda prolonged the larval and pupal durations of S. litura. This indicates that the selected medicinal plants may be a potent source of natural antifeedant, ovicidal, and larvicidal activities against the important polyphagous lepidopteran field pest Spodoptera litura. Justicia adhatoda was found to be effective in reducing the feeding rate of Spodoptera litura larvae, with maximum antifeedant activity in ethanol extracts of Justicia adhatoda at a 5% extract concentration (Sukanya Rajput, 2018).
Sadek (2003) reported that the extract of Asuro leaves exhibits feeding deterrent properties when applied using the leaf disc method against Spodoptera littoralis. Anuradha et al. (2010) reported the deterrent effect of Asuro leaf extract on the last instar of Spodoptera litura at various concentrations (25, 50, 75, and 100%). Due to the toxic effect of the plant extracts, the maximum number of treated larvae died in spite of less food consumption (Anuradha, 2010).
Tobacco (Nicotiana tabacum)
Tobacco (Nicotiana tabacum) contains nicotine and other alkaloids which are synaptic poisons; they mimic the neurotransmitter acetylcholine and exhibit agonistic effects on most nicotinic acetylcholine receptors (Brack, 2018). Rizvi and his team concluded that tobacco extract at 2% controlled the cotton mealybug when the infestation was at an initial stage (Rizvi et al., 2015). Tobacco decoction (250 g tobacco + 30 g liquid soap + 4 liters of water, boiled for 30 minutes), sprayed at 1:4 parts water, was found effective in controlling the tobacco caterpillar (Spodoptera litura F.), mustard sawfly (Athalia lugens proxima), and leaf miners (Phytomyza horticola) on vegetable crops (Mainali et al.). According to Ubina et al. (1994), tobacco spray reduced bean fly and bean aphid populations by 89% and 97%, respectively, and tobacco dust reduced tomato cutworm and bean fly populations by 89% and 79%, respectively. Leafhopper, thrips, and corn earworm were also reduced by 50-69%.
Sweet flag (Acorus calamus)
Sweet flag (Acorus calamus), native to India, Central Asia, and Eastern Europe, is found today in many temperate and sub-temperate areas of the globe. In Nepal, the herb is available up to an altitude of 2,000 meters. Bojho, as it is known locally, is found in sedge meadows that are prone to flooding, at the edges of small lakes and ponds, and in marshes, swamps, seeps and springs, and wetland restorations. The plant contains β-asarone in its stolons, which is considered the main substance that acts as an insecticide (Giri et al., 2013). Acorus calamus stolon dust at 5 g/kg of potato tubers showed high efficacy in protecting potato tubers against the potato tuber moth for about three to four months in farmers' rustic potato stores (Giri et al., 2013). The bulb of the sweet flag can be used as an insecticide, insect repellent, and contact poison (Dahal, 1995).
Garlic (Allium sativum)
Garlic (Allium sativum) is a herb that contains numerous vitamins, minerals, and trace elements. Many studies have shown that garlic can be used as a repellent against some plant pests and diseases (Ramasasa, 1991). Sulfur compounds such as DAS, DADS, DATS, methylallyl disulfide, methylallyl trisulfide, 2-vinyl-4H-1,3-dithiin, and (E,Z)-ajoenes are present in essential garlic oil (Aggarwal et al., 2013). These constituents could be used for the control of serious fruit and vegetable pests (Upadhyay, 2016). Two of the major constituents, methyl allyl disulfide and DATS, were found to be active against Motschulsky and Tribolium castaneum (Herbst). Similarly, essential oils of garlic repelled and caused lethality in Sitophilus zeamais L.
Ginger (Zingiber officinale)
Ginger (Zingiber officinale) is one of the most common herbs used as a pesticide. Prophylactic and therapeutic cadmium detoxification effects of ginger have been reported in many studies (Egwurugwu et al., 2007). 6-Dehydroshogaol, zingerone, and 3-hydroxy-1-(4-hydroxy-3-methoxyphenyl)butane extracted from ginger showed moderate insect growth regulatory (IGR) and antifeedant activity against Spilosoma obliqua, and significant antifungal activity against Rhizoctonia solani (Agarwal et al., 2001). Extract of ginger can help in the control of American bollworm, aphids, planthoppers, thrips, whitefly, root-knot nematodes, brown leaf spot of rice, mango anthracnose, and yellow vein mosaic (Sridhar et al., 2002). Higher concentrations of ginger residue were found effective for the protection of crops against C. maculatus adult emergence (Amuji et al., 2012).
Sichuan pepper (Zanthoxylum armatum)
Timur (Zanthoxylum armatum) is commonly used in daily life as a condiment and in therapeutic remedies. Different plant parts of Z. armatum also have insecticidal potential; however, this potential has not yet been determined against many agricultural pests, including the leaf worm. In a study by Kaleeswaran et al. (2018), the n-hexane pericarp extract of Z. armatum showed strong antifeedant, ovicidal, and larvicidal properties against Spodoptera litura. Some research shows that it has insecticidal properties against Plutella xylostella (Kumar et al., 2016) and Pieris brassicae (Kaleeswaran et al., 2019). In a case study carried out in some parts of the country, Timur was found to be used by farmers for the preparation of botanical pesticides (Kaphle & Bastakoti, 2016).
Chinaberry (Melia azedarach)
Chinaberry (Melia azedarach) is highly recognized for its insecticidal properties. Biologically active triterpenoids with an alimentary effect are responsible for this property: they inhibit feeding and also cause death and malformations in subsequent generations (Vergara et al., 1997). M. azedarach senescent leaf extract proved lethal to 100% of a larval population of Spodoptera frugiperda (Bullangpoti et al., 2012). Similarly, in a study on the diamondback moth, chinaberry extracts were found to be toxic to larvae, which died due to failure in molting (Chen et al., 1996).
Lantana (Lantana camara L.)
Lantana (Lantana camara L.) is a perennial shrub, exotic to Nepal; owing to its invasive growth it is also called an unwanted shrub (Vaidya et al., 2005). In Nepal, Lantana camara extract and its powder are widely used to check plant diseases, whether bacterial or fungal, to increase soil fertility, and to cure human diseases (Vaidya & Bhattarai, 2009). Lantanolic acid and lantic acid are the active principles present in lantana, which show growth-inhibiting and repellent activity against insect pests (Nirmal et al.). Chopped leaves and tender stems of Lantana camara mixed with potato tubers at 300-330 g/8 kg were found effective in controlling potato tuber moth in storage (Pradhan, 1987). It contains a variety of chemical substances such as triterpenes, iridoid and phenylethanoid glycosides, naphthoquinones, and flavonoids (Ghisalberti, 2000). Rajashekar et al. (2014) reported lantana to be effective against storage pests, while Muzemu et al. (2011) reported that different plant extracts act as biopesticides against aphids on rape (Brassica napus). L. camara contains camaric and oleanolic acids, which may have larvicidal or ovicidal properties (Ghimire et al., 2015). The same work found that a 50% concentration of L. camara leaf extract at 48 hours of exposure and above was deleterious to root-knot nematode (Ghimire et al., 2015).
Titepati (Artemisia vulgaris)
Titepati is distributed throughout Nepal at 300-2500 m and is common along waysides and in the margins of cleared forest (Rai et al., 2012). Artemisia vulgaris L., a perennial aromatic shrub with a bitter taste, is considered a medicinal plant, and its water extract contains active components such as psilostachyin A, psilostachyin C, exiguaflavanone A, maackiain, and fernenol, with both antibacterial and medicinal value (Rai et al., 2012). Chopped pieces of Artemisia vulgaris leaves (20 g/kg potato) were also effective in reducing potato tuber moth damage (Giri et al., 2013). Fresh leaf extract of artemisia kept in water for one hour (1:4 parts) and sprayed at 25, 50, and 100 g/liter of water was found effective in controlling red pumpkin beetle in summer squash (Neupane, 1993). Chopped foliage of titepati at 5 mt/ha mixed into the soil controlled red ant in a potato field (Gc et al., 1997).
Mint (Mentha arvensis)
Essential oils and chemical constituents derived from different species of Mentha have been found effective against fungal and bacterial plant pathogens as well as storage insects such as Callosobruchus and Tribolium species (Singh & Pandey, 2018). An aqueous extract of Mentha arvensis at 200 g per 1.33 liters of water applied to cauliflower foliage prevents attack by the mustard aphid (Lipaphis erysimi) (Vaidya, 2000).
Traditionally, mint is used to cure gastrointestinal diseases, neurological disorders, and diarrhea (Sharma, 2003), and its leaf paste is used to treat swelling (Parihaar et al., 2014). According to Ayaz et al. (2016), in addition to these medicinal properties, 124 compounds have been identified in the plant, among which several bioactive antibacterial, antifungal, and insecticidal compounds were found.
Sajiwan (Jatropha curcas)
Sajiwan (Jatropha curcas) is considered a multipurpose plant because of its many uses. It is used as a live fence, as it can prevent or control erosion and also reclaim land (Openshaw, 2000). Seed oil is used to make biodiesel, twigs are used to brush teeth to treat gum problems, and the latex is mixed with mustard oil and applied to itches on the body. Along with these other uses, sajiwan can also be used as a biopesticide; in the laboratory, insecticidal activity of seed extracts of Jatropha curcas was observed against homopteran (peach aphid), lepidopteran (cabbage butterfly), and coleopteran (rice weevil) insect pests (Li et al., 2006).
CONCLUSIONS
Traditionally, farmers have identified and used a variety of plant products and extracts for pest control. As an alternative to synthetic pesticides, ecologically safe methods must be developed to control insect pests of field crops and stored food products. Organic pest management appears to be an attractive alternative with lower economic costs. Combined use of botanicals with microbial pesticides increases efficacy, reduces cost per application, and delays the development of resistance. Although botanical pesticides are safe for the environment, human health, and natural enemies, they cannot completely replace synthetic pesticides. It is therefore logical to combine existing technical knowledge and skills with scientific technologies. In several studies, botanical pesticides have shown better activity than synthetic pesticides, and many more such studies should be conducted on the effects of these botanicals against harmful insect pests. To make botanicals more versatile, more formulations should be developed. Identification, documentation, conservation, and promotion of existing indigenous knowledge and skills should be undertaken to protect intellectual property rights and to feed this knowledge into formal research for effective technology development. Networking and a coordinated effort of all stakeholders are crucial to harness the abundant in-house resources available for pest management.
ACKNOWLEDGEMENT
First and foremost, praises and thanks to God, the Almighty, for his showers of blessings upon us. The completion of this study would not have been possible without the continuous support, valuable time, and guidance of our respected teacher, Mr. Subodh Khanal. Besides him, we would also like to thank Mr. Arun GC for motivating and advising us. We are really thankful to our dear friend Miss Sabina Regmi for technical guidance. Last but not least, we would like to thank our beloved parents for supporting us in each and every situation of our lives.
"Biology"
] |
The 7T-MPW-EDDI beamline at BESSY II
The materials science beamline EDDI is operated in the Energy Dispersive DIffraction mode and provides hard synchrotron X-rays in an energy range of about 8-150 keV for a multitude of experiments, ranging from in-situ studies of thin film deposition, through the investigation of liquid-phase processes, to the analysis of residual stress distributions in complex components and technical parts. For high-temperature experiments or measurements under external mechanical load, various devices such as heating stations and a tensile/compression load test rig are available. Besides the sample environment for pure diffraction experiments, a tomography/radiography setup is provided which allows combined, simultaneous diffraction-plus-imaging investigations.
Introduction
The EDDI beamline started user service in April 2005. It is operated in the energy-dispersive (ED) mode of diffraction, employing the direct white photon beam provided by a superconducting 7T multipole wiggler. With a usable energy range of about 8-150 keV, it is first of all dedicated to the analysis of structural and property gradients in the near-surface zone of polycrystalline materials, thin film systems, and technical parts and components (Genzel et al., 2007). The main advantage of the ED diffraction mode compared with the angle-dispersive (AD) mode is that the former yields complete diffraction patterns (inclusive of the fluorescence lines originating from the elements the investigated material consists of) for fixed but arbitrary positions of both sample and detector. Since each diffraction line hkl originates from a different average information depth, the ED mode provides an additional parameter that can be used for the depth-resolved analysis of residual stresses, crystallographic texture, and the material's microstructure, respectively (Genzel et al., 2011; Apel et al., 2011). Together with the high-flux synchrotron radiation, which facilitates time-resolved studies, the two features of a fixed scattering arrangement plus a multitude of simultaneously recorded diffraction lines offer a variety of experimental possibilities in different fields of materials science.
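The relation between line position (photon energy) and lattice spacing in the ED mode follows directly from Bragg's law. The short sketch below is our own illustration, not part of the EDDI beamline software; the ferrite lattice parameter and the 2θ = 16° diffraction angle are assumptions chosen purely for the example.

```python
# Illustration only (not EDDI beamline software): photon energies of diffraction
# lines in the energy-dispersive mode at a fixed diffraction angle 2*theta.
# From Bragg's law, E_hkl [keV] = h*c / (2 * d_hkl * sin(theta))
#                              ~= 6.199 / (d_hkl[Angstrom] * sin(theta)).
import math

HC_KEV_ANGSTROM = 12.398  # h*c in keV*Angstrom (approximate)

def ed_line_energy(d_hkl_angstrom: float, two_theta_deg: float) -> float:
    """Photon energy (keV) at which the hkl reflection appears for a fixed 2*theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return HC_KEV_ANGSTROM / (2.0 * d_hkl_angstrom * math.sin(theta))

# Example: bcc iron (a ~ 2.866 Angstrom) lines at an assumed angle 2*theta = 16 deg.
a = 2.866
for hkl in [(1, 1, 0), (2, 0, 0), (2, 1, 1)]:
    d = a / math.sqrt(sum(i * i for i in hkl))
    print(hkl, f"{ed_line_energy(d, 16.0):.1f} keV")
```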
Fig. 1 shows the hutch of the EDDI experimental station, which, in contrast to most of the other facilities at BESSY II, is firmly connected to the EDDI beamline (cf. chapter 4). The diffractometer system consists of two units in the form of an X-cradle segment (5-axis positioner, mounted on the basic Θ-Θ diffractometer) for light and small samples, and a 4-axis positioner for large and heavy samples. The two-detector setup at the back wall of the hutch allows simultaneous data acquisition in two different measuring directions, i.e. orientations of the diffraction vector with respect to the sample reference system. The radiography/tomography + diffraction measurement option available at EDDI is shown in Fig. 2. After being partially absorbed by the sample, the directly transmitted beam is converted into visible light by a LuAG scintillator and then mirrored into the optical system of a fast CMOS camera. The part of the beam that is diffracted by the sample passes the light-conversion components without being absorbed and is recorded by a Ge solid-state detector. This setup makes it possible to perform fast in-situ imaging (radiography/tomography) and diffraction analysis simultaneously on one and the same sample and, therefore, to track phase transformations and (micro)structure evolution during dynamic processes such as metal foaming (García-Moreno et al., 2013).
Instrument Applications
Due to the features of ED diffraction mentioned above and the very flexible setup, EDDI is a multipurpose instrument applicable in various fields of materials science. Typical applications are:
• Phase analysis (qualitative and quantitative)
• Residual stress analysis
• Texture analysis
• Microstructure analysis (domain sizes and microstrain)
• In situ investigations (e.g. under high temperature or external load)
• Measurements with high spatial resolution (slit widths down to approx. 10 µm possible)
• Simultaneous measurements with two detectors
• Simultaneous radioscopy/tomography and diffraction
Source
The insertion device is a superconducting 7T multipole wiggler with the parameters summarized in Table 1. The wiggler's critical energy is 13.5 keV at 1.7 GeV. Fig. 3 shows its energy spectrum, recorded by means of a germanium solid-state detector (Canberra) both directly and with different attenuators in the beam.
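As a quick plausibility check of the quoted value (our own back-of-the-envelope calculation, not taken from the beamline documentation), the critical energy follows from the standard synchrotron relation E_c [keV] ≈ 0.665 · B [T] · E² [GeV²]:

```python
# Back-of-the-envelope check of the quoted critical energy using the standard
# synchrotron relation E_c [keV] ~= 0.665 * B [T] * E^2 [GeV^2].
B_tesla = 7.0        # peak field of the superconducting multipole wiggler
E_ring_gev = 1.7     # BESSY II electron energy
E_crit_kev = 0.665 * B_tesla * E_ring_gev ** 2
print(f"critical energy ~ {E_crit_kev:.1f} keV")   # ~13.5 keV, matching the text
```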
Optical Design
The overall beamline layout is shown in Fig. 4. Because the beamline is exclusively designed for ED diffraction, direct use is made of the white beam as emitted by the wiggler. The only optical elements are an absorber mask and two slit systems at different positions along the beamline, which are needed to reduce the beam cross-section. Additionally, two filter systems equipped with attenuators of different materials and thicknesses are available to suppress low-energy photons in order to prevent sample heating. The samples can be mounted on different positioning units, and data acquisition can be performed either with one detector in pure vertical scattering geometry (standard setup) or by means of a two-detector setup (cf. Fig. 1).
Figure 1: Top: Experimental hutch of the EDDI beamline. The inset depicts the 5-axis positioning unit and the laser + CCD camera system for sample alignment. Bottom: The two-detector setup at the back wall of the hutch.
Figure 2: The radiography/tomography + diffraction setup. The left inset depicts the rotation table; the right inset schematically shows the X-ray path of the transmitted (red) and the diffracted (green) beam, respectively (García-Moreno et al., 2013).
"Materials Science",
"Engineering"
] |
Nelder-Mead Based Iterative Algorithm for Optimal Antenna Beam Patterns in Ad Hoc Networks
Directional antennas shape transmission patterns to provide greater coverage distance and reduced coverage angle. Use of adaptive directional antenna arrays can minimize interference while also being more energy efficient. When used in an ad-hoc network, this reduces interference among transmitting nodes and thereby increases throughput. Such “smart antennas” use digital beamforming based on signal processing algorithms to compute the appropriate weights to form effective antenna patterns. Smart antennas require the knowledge of the signal received at each antenna in the antenna array, thereby increasing the complexity of hardware and cost. Also, conventional smart antennas optimize results for each individual node, while it is preferable to have a global optimal solution. A problem that has not been addressed is how to compute individual beam patterns that maximize some measure of global network performance. Historically, the focus has been on finding node antenna patterns that give locally optimal performance. In this paper, we investigate a low hardware complexity beamforming approach aimed at improving global performance that uses average Noise-to-Signal ratio as the performance measure. Given a multi-hop route from source to destination, beam patterns are shaped to maximize average signal-to-noise ratio across all nodes on the route, which reduces bit-error rates and extends battery and network lifetime. The antenna weights are sequentially adjusted across all nodes in the route to achieve optimization across the network. By using phase-only weights, hardware costs are minimized. The performance of the algorithm using different path loss models is explored.
Introduction
Ad hoc networks are wireless networks capable of autonomous communication independent of pre-established infrastructure. Energy efficiency is an important consideration as it determines node and network lifetime. Usually, communication takes place using omni-directional antennas that radiate signals in all directions. This not only wastes transmission power at the node but also acts as a source of interference to other nodes.
Power efficiency can be improved using smart antennas to direct the beam in the desired direction while minimizing gain in interference directions. Smart antennas use digital beamforming (DBF) methods, which require separate transceiver chains, A/D and D/A converters, and DSPs for each antenna in the array (Figure 1(a)). Given access to each antenna signal, DBF can adaptively manipulate antenna weights to maximize SNR in real time. The downside is that DBF is unsuitable when low cost and complexity are required by the application. More importantly, such adaptive algorithms give locally optimal performance and don't typically consider global cost and performance constraints.
Analog beamforming (ABF) is a low complexity alternative to smart antennas.
The system relies on a single transceiver and power splitter/combiner (Figure 1(b)). Computer-controlled analog amplifiers/attenuators and phase shifters are used to form the desired beam patterns. The disadvantage of ABF is that the computer only has access to the combined received signals. Furthermore, bidirectional amplifiers or step attenuators add cost and complexity to the system.
Figure 1. (a) DBF architecture (Image source: [24]). (b) ABF architecture (Image source: [24]).
Therefore, the research described here focuses on techniques that use phase-only weights.
Since there is little established previous research on network-optimized antenna beamforming [1], we initially focus on fixed non-mobile networks such as wireless mesh sensor networks where node locations are known a-priori.
For a given route, we assume N nodes are active for relaying packets across a known route, and each node has an M-antenna array of omnidirectional antennas.All other nodes in the network are considered to be interference sources.
The SNR at each node can be predicted using simple path loss models (such as the two-ray model or the Walfisch-Ikegami model [2]), along with known positions of all nodes to estimate signal and interference powers.
We use SNR averaged over all network nodes as the measure of network performance. This is straightforward to compute and directly relates to network performance metrics such as bit error rate and battery life. To accomplish the optimization, we first find weights w_j, j = 1, 2, ..., N, for each node j that minimize the average network noise-to-signal ratio (NSR). Once all N node weight vectors w_j, j = 1, 2, ..., N, are known, the weights are recomputed iteratively until the average NSR reaches a stable minimum. This provides a beamforming solution that intends to improve the performance of the network globally.
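As a small illustration of the performance measure (the variable names are ours, not the authors'), the route-average NSR is simply the arithmetic mean of the per-node interference-plus-noise-to-signal ratios:

```python
# Minimal sketch of the fitness function: route-average noise-to-signal ratio.
# Variable names are illustrative; P_sig[i] and P_int[i] would come from the
# path-loss model and the current antenna gain patterns.
import numpy as np

def average_nsr(P_sig: np.ndarray, P_int: np.ndarray, P_noise: float = 0.0) -> float:
    """Arithmetic mean of (interference + noise) / signal over the nodes on the route."""
    return float(np.mean((P_int + P_noise) / P_sig))

# Example: three route nodes with received signal and total interference powers in watts.
P_sig = np.array([2e-9, 1.5e-9, 3e-9])
P_int = np.array([4e-10, 6e-10, 2e-10])
print(f"average NSR = {average_nsr(P_sig, P_int):.3f}")
```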
This approach can be simplified since each node in the route only communicates with its previous and next hop neighbors and all other nodes are treated as interference. Thus, "average NSR" is taken to be the arithmetic mean over all the nodes along the route. Initially, only one route is considered; network nodes that are not in the current route are assumed to be interference nodes and their beam patterns remain omnidirectional, or unchanged if assigned a pattern previously.
This paper is organized as follows: In Section 2, we discuss related work on adaptive beamforming in ad hoc networks. Section 3 provides background theory on phased array antennas and the optimization technique used to obtain the optimized beam patterns in this paper. Section 4 explains the beamforming algorithm that provides an improved global solution increasing the overall average signal-to-noise ratio (SNR) of the network. Section 5 outlines a brief discussion of noise in antenna systems. The simulation and analysis of results of the proposed beamforming algorithm are presented in Section 6. Section 7 concludes this paper with a discussion of performance issues and future work to be done on the developed algorithm.
Related Work
The problem of finding a globally optimal solution that minimizes power consumption or maximizes the signal-to-noise ratio is challenging in ad hoc networks. Previous researchers [3] [4] [5] [6] have proposed sub-optimal solutions using cooperative beamforming, where nodes in an ad hoc network cooperate to act as antenna arrays. The authors of [3] formulate an optimization problem in a generalized scenario with multiple primary and secondary receivers that maximizes the weighted sum transmission rate of secondary destinations while maintaining the asynchronous interference at the primary receivers below their target thresholds. One of the main limitations of this approach is the huge amount of feedback overhead involved in the cooperative formation control algorithms [6] [7] [8]. Other researchers propose non-cooperative beamforming [1] [9] [10], which involves selfish nodes that do not cooperate with other communicating nodes and use smart antenna approaches for adaptive beamforming. The problem of finding a global solution is challenging mainly because of the lack of a natural ordering of the actions in ad hoc networks [9]. Zeydan et al. [9] point out that simple changes, such as variations in the power of one node pair, affect the signal-to-interference-plus-noise ratio (SINR) of other node pairs and vice versa.
Thornburg et al. [11] assess the performance of millimeter wave (mmWave) devices in reducing interference through directional antennas and building blockages. They formulate the performance of mmWave ad hoc networks in a stochastic geometry framework under the assumption of adaptive directional beamforming and simulate beam patterns using a sectored model [12]. The sectored model represents beam patterns under the assumption that the antenna array can provide enough degrees of freedom to form the purported beam, but it can result in errors or sub-optimal solutions if the antenna array is unable to produce the desired beam.
The authors of [13] study the impact of directional antennas and beamforming schemes on the connectivity of cognitive radio ad hoc networks. They evaluate performance using randomized beamforming and center-directed beamforming in ad hoc networks. However, they do not employ any adaptive beamforming techniques to adapt to changes in the network.
Anbaran et al. have proposed a method using smart antennas that delivers beamforming performance close to that of phased array antennas without any constraints on the antenna spacing, and compare it to the conventional Electrically Steerable Passive Array Radiator (ESPAR) [13] system. An ESPAR antenna delivers a low-cost solution for analog adaptive beamforming. It consists of one center element connected to the source and several surrounding parasitic elements reactively terminated to ground. The beam pattern can be controlled by adjusting the value of the reactance that terminates the parasitic elements. This method is efficient but results in relatively larger beam widths and higher side-lobe levels. Smart antennas, in contrast, require information about the signal at each antenna in the array, which increases the hardware cost. The authors of [14] focus on using smart antennas in ad hoc networks and also provide simulation results with a seven-element ESPAR antenna using QualNet.
Kalman filter [15] based adaptive array processing is fast and efficient but requires transceivers and additional circuitry at each antenna of the node, which adds to the cost and complexity of the circuit. Our approach uses a single transceiver and phase-only weights to reduce hardware complexity and cost. The major objective of [16] is to study the overall efficiency of an ad hoc network in terms of the antenna pattern and the length of the training sequence used by the beamforming algorithms. The authors conclude from their simulation results that radiation patterns with smaller beam widths and lower side-lobes result in higher network capacity. Reference [17] describes an approach that makes use of a directional antenna to improve the performance of multicasting in ad hoc wireless networks. The antenna beam width at the network nodes is determined in such a way that both the node's transmit power and the interference among simultaneous transmissions are reduced while the signal power at the intended receivers remains unchanged.
In this paper, we provide a low-hardware-complexity phased array antenna beamforming technique that provides a network-wide optimized solution and delivers a global improvement in performance. We assume that the optimal route from source to destination is known a priori and can be obtained from any convenient routing protocol. All the antenna weights are calculated centrally (not in a distributed fashion at individual nodes) to improve the signal-to-interference-plus-noise ratio (SINR) at each node. This makes our approach a separate layer that is independent of the routing protocol used and can be added on top of existing networks. Similar to cooperative techniques, beamforming is done taking into account terrain and node locations, but our method does not require internode communication and the associated overheads. Like smart antennas, we adjust beams to adapt to local terrain and other node signals, but antenna weights are computed off-line and prior to network setup, with periodic updates made as needed. Smart antennas perform local optimization, while our method seeks to optimize globally across the network. Also, the hardware complexity of the proposed system is much lower than that of smart antenna-based radios.
Phase Only Weights
The transmitted signal can be represented in complex form; to avoid unnecessary calculations we can safely assume the transmitted signal to be r(t) = s(t)e^{jω₀t}.
A uniform linear array consisting of M antenna elements is shown in Figure 3. Consider a single instant of time t, giving a snapshot of the wavefront for a signal arriving from direction θ. As is evident from Figure 3, the wavefront reaches successive antenna elements with a relative delay determined by the element spacing and the arrival direction, so that if the wavefront at antenna #1 is s(t)e^{jω₀t}, the signal at antenna m arrives with a corresponding delay. We assume a "low-pass narrow-band" signal s(t) whose bandwidth is much smaller than the carrier frequency f₀; therefore the inter-element delay reduces to a pure phase shift, and the signal at the receiver for antenna m is s(t)e^{j(m−1)kd sinθ}. For the entire array, the received signal is the vector x(t) = h(θ)s(t) + v(t), where s(t) is a narrow-band message signal and v(t) is white noise. The receiver down-converts the signal, resulting in a complex base-band signal y(t). For a uniform linear array, the steering vector is h(θ) = [1, e^{jkd sinθ}, ..., e^{j(M−1)kd sinθ}]^T, where k, d, and θ are the wave number, antenna element separation distance, and direction of arrival (DOA), respectively. Note that h(θ) must be modified for each specific antenna array geometry to give proper delay characteristics in the direction θ. Now, it is possible to find a linear filter K that minimizes the effects of noise without distorting the signal.
The gain in direction θ is G(θ) = |w^H h(θ)|², where w is the weight vector applied to the antenna elements, which depends on the optimization method.
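The following sketch is our own illustration (half-wavelength element spacing and the square-law gain definition above are assumptions); it builds the ULA steering vector and evaluates the gain pattern for a given phase-only weight vector:

```python
# Sketch of the uniform-linear-array steering vector h(theta) and the resulting
# gain for phase-only weights w = exp(j*phi). Assumptions (ours): half-wavelength
# element spacing and gain defined as |w^H h(theta)|^2.
import numpy as np

def steering_vector(theta: float, M: int, d_over_lambda: float = 0.5) -> np.ndarray:
    """h(theta) for an M-element ULA; theta is the direction of arrival in radians."""
    k_d = 2.0 * np.pi * d_over_lambda          # k*d with k = 2*pi/lambda
    m = np.arange(M)
    return np.exp(1j * m * k_d * np.sin(theta))

def gain(phi: np.ndarray, theta: float) -> float:
    """Array gain in direction theta for phase-only weights w = exp(j*phi)."""
    w = np.exp(1j * phi)
    return float(np.abs(np.vdot(w, steering_vector(theta, len(phi)))) ** 2)

# Example: steer an 8-element array toward 30 degrees by conjugate phasing.
M, theta0 = 8, np.radians(30.0)
phi = np.angle(steering_vector(theta0, M))      # phases aligned to the look direction
print(gain(phi, theta0), gain(phi, np.radians(-40.0)))  # high gain at 30 deg, lower elsewhere
```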
There are several ways to find the weight vector w. For example, it is possible to find w that minimizes the output noise power while holding G(θ_s) = 1 in the signal direction θ_s. This is called a "Minimum Variance Distortion-less Response" (MVDR) filter [18]. This method is effective but requires knowledge of the noise-plus-interference covariance matrix, which in turn requires access to the individual antenna signals and is therefore not possible using analog beamforming. In addition, MVDR weight magnitudes are unconstrained and thus not suitable as phase-only weight vectors.
However, in most cases the optimal gain pattern G_opt(θ) is not known a priori, so we use the NSR as the fitness function, as this directly relates to global network performance and determines the resulting gain pattern G(θ).
Nelder-Mead Algorithm
The Nelder-Mead (NM) algorithm [19] is one of the most widely used methods for non-linear unconstrained optimization. The Nelder-Mead method attempts to minimize a scalar-valued non-linear function of n real variables using only the function values, without any derivative information. The algorithm works on a simplex of n-dimensional vectors x. Let x_i, i = 1, ..., n+1, denote the list of points in the current simplex. Because we seek to minimize the function f, x_1 is referred to as the best point and x_{n+1} as the worst point. Four scalar parameters, reflection (ρ), expansion (χ), contraction (γ), and shrinkage (σ), are specified for the Nelder-Mead method.
The following indicates one iteration of the Nelder-Mead algorithm [20]:
• The n + 1 vertices are ordered such that f(x_1) ≤ f(x_2) ≤ ... ≤ f(x_{n+1}).
• The reflection point x_r is computed from the centroid of the n best points; if it is better than the worst point (but not better than the best), x_r is accepted and the iteration terminates.
• If f_r < f_1, the expansion point x_e is computed and the value of the function f_e at x_e is evaluated; the iteration is terminated after retaining either x_e (f_e < f_r) or x_r (f_e > f_r).
• Otherwise, contraction is performed by computing the contracted point x_c, and the new simplex is obtained using the contracted point.
• If contraction also fails, the simplex is shrunk: the function is evaluated after replacing all the points by x_1 + σ(x_i − x_1), except for the best point. The new vertices x_1, v_2, ..., v_{n+1} are used for the update in the next iteration.
Before discussing the performance of NM solutions applied to an entire network, we first focus on individual antenna array performance by comparing NM-based array solutions to the popular ESPAR antenna array described earlier. We modeled an antenna array with 7 antennas arranged in a circular geometry similar to the ESPAR antenna used by [14] for simulation, as shown in Figure 4. In contrast to the ESPAR antenna, our antenna array has all elements connected to the source, and the phases are individually adjusted to control the beam pattern. Both antenna arrays were designed to maximize the output SINR under the desired and interfering signals, which requires maximum gain in the signal direction and minimum gain in all others. Comparison of the beam patterns (Figure 5) with the simulation results from [14] shows that the NM-designed array has a much narrower beam width and lower side-lobe levels than the ESPAR antenna.
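As a concrete, hedged illustration of how phase-only weights can be searched with Nelder-Mead, the sketch below uses SciPy's implementation (not the authors' MATLAB code) to maximize gain toward a desired direction while suppressing one interferer for a single node; the 7-element linear geometry, the chosen directions, and the optimizer options are assumptions for the example.

```python
# Illustrative use of the Nelder-Mead simplex method (SciPy implementation, not the
# authors' MATLAB code) to find phase-only weights for a single node: maximize gain
# toward the desired direction and minimize it toward an interferer, i.e. minimize NSR.
import numpy as np
from scipy.optimize import minimize

M = 7                                # number of array elements (assumed)
theta_sig = np.radians(0.0)          # desired signal direction
theta_int = np.radians(120.0)        # interference direction

def steering(theta):
    # The paper uses a circular geometry; a ULA is assumed here purely for brevity.
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

def nsr(phi):
    w = np.exp(1j * phi)                              # phase-only weights
    g_sig = np.abs(np.vdot(w, steering(theta_sig))) ** 2
    g_int = np.abs(np.vdot(w, steering(theta_int))) ** 2
    return g_int / g_sig                              # interference-to-signal ratio

result = minimize(nsr, x0=np.zeros(M), method='Nelder-Mead',
                  options={'maxiter': 5000, 'xatol': 1e-6, 'fatol': 1e-9})
print("optimized NSR:", result.fun)
```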
Algorithm
In this section, we describe a technique for finding network-optimized beam patterns (and the associated complex phase-only antenna weights) for all nodes along a route. This is a joint solution, where individual beam patterns depend on the antenna patterns of adjacent nodes. The approach is iterative, such that the iteration proceeds in a sequential manner from node to node along the route. The average NSR of the nodes along the route is calculated at each iteration, and NM is used to minimize the average NSR by trying different values of the antenna weights. The scheme is repeated until convergence.
The algorithm is summarized in the following steps:
• Given all node locations, compute the distance d_ij and the angle θ_ij between nodes i and j.
• For a given source and destination node, use a routing protocol to determine which nodes are members of the route. Nodes not included in the route are treated as interference sources.
• Calculate the path loss between the transmitting and receiving nodes using a suitable path loss model.
• Compute the received power at node i, P_Rij, due to signal source j with transmit power P_Tj, where i and j represent nodes on the route. Then compute the total received signal power at node i using P^S_Ri = Σ_j P_Rij. Antenna gains are calculated using Equation (3).
• Similarly, the total interference power at node i can be calculated as P^I_Ri = Σ_j P_Rij, where P_Rij now represents the received power from interference node j, i.e., node j is not a node on the route.
• Ambient noise can be included by computing a suitable noise temperature and using it to calculate the noise power N_i.
• Assume an initial weight vector for the antennas at each node to compute an initial gain G_i(θ); using this, calculate the received signal power, interference power, and noise power. From these values, calculate the NSR at each of the nodes in the route as well as the route-average NSR.
• Apply NM to compute the weight vector at each node using the route-average NSR as the fitness function.
• Using the obtained weight vector for the i-th node, calculate the gain in the direction of the j-th node, G_i(θ_ij), where the weight vector at node i is defined by the antenna phase vector φ_i. NM minimizes the route-average NSR by trying different values of φ_i to obtain the weights. Note that each candidate weight vector affects the beam pattern of the current node, thereby changing the node's NSR as well as the average NSR for the route.
The iteration proceeds in a sequence along the nodes in the route and is repeated until convergence. Each time the weight vector at a node is calculated, it considers the refined weight vectors of its neighbors from the previous iteration. Convergence is reached when there is no longer a significant reduction in NSR in a complete pass of the algorithm through the route.
Each node along the route must communicate with its previous and next hop neighbors. In each pass, the algorithm tries to refine the weight vector at each node such that the average noise-to-signal ratio is minimized. As the algorithm reduces the average NSR by minimizing the individual terms of the summation, it always tries to improve the SNR at each node. Also, as the iteration proceeds in sequence, all the nodes in the route are favored equally.
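A compact sketch of this route-wide sequential loop is shown below. All names, the node coordinates, the free-space path-loss model, the per-node ULA geometry, and the use of SciPy's Nelder-Mead are our own assumptions for illustration; this is not the authors' MATLAB implementation.

```python
# Sketch of the route-wide sequential optimization loop described above.
# Assumptions (ours): free-space path loss, gain |w^H h|^2, a lambda/2-spaced ULA
# per node, and SciPy's Nelder-Mead; not the authors' MATLAB implementation.
import numpy as np
from scipy.optimize import minimize

M = 9                                   # antennas per node
nodes = np.array([[0, 0], [1000, 0], [2000, 0], [3000, 0],   # route nodes 0-3
                  [1500, 800], [500, -600]])                  # interference nodes 4-5
route = [0, 1, 2, 3]
interferers = [4, 5]
P_tx = 10.0                             # W
wavelength = 3e8 / 900e6                # 900 MHz

def angle_between(i, j):
    d = nodes[j] - nodes[i]
    return np.arctan2(d[1], d[0])

def path_gain(i, j):                    # free-space (Friis) channel gain, unit antenna gains
    dist = np.linalg.norm(nodes[j] - nodes[i])
    return (wavelength / (4 * np.pi * dist)) ** 2

def array_gain(phi, theta):
    h = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))     # lambda/2-spaced ULA
    return np.abs(np.vdot(np.exp(1j * phi), h)) ** 2

def route_average_nsr(phis):
    nsr = []
    for idx, i in enumerate(route):
        neighbors = [route[k] for k in (idx - 1, idx + 1) if 0 <= k < len(route)]
        sig = sum(P_tx * array_gain(phis[j], angle_between(j, i)) *
                  array_gain(phis[i], angle_between(i, j)) * path_gain(i, j)
                  for j in neighbors)
        intf = sum(P_tx * array_gain(phis[i], angle_between(i, j)) * path_gain(i, j)
                   for j in interferers)          # interferers remain omnidirectional
        nsr.append(intf / sig)
    return np.mean(nsr)

phis = {i: np.zeros(M) for i in route}            # initial phase vectors (all zeros)
for sweep in range(5):                            # passes through the route until convergence
    for i in route:                               # refine one node at a time
        def fit(phi_i, i=i):
            trial = dict(phis); trial[i] = phi_i
            return route_average_nsr(trial)
        phis[i] = minimize(fit, phis[i], method='Nelder-Mead').x
    print(f"sweep {sweep}: average NSR = {route_average_nsr(phis):.3e}")
```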
Noise in Antenna Systems
Along with the desired and interference signals from various sources, the antenna system also receives noise from radiating sources of natural origin. These sources include cosmic noise, noise from the sun, noise from the ground, etc.
Apart from these noise sources, the receiving system and the amplifiers used with the antennas also contribute to the system noise. Usually, the noise power received by an antenna is represented using an antenna noise temperature, and the noise from different sources can be combined in an additive manner.
The antenna noise temperature is the temperature of an equivalent fictitious resistor that would give rise to the same noise per unit bandwidth as the antenna output at a given frequency. The received noise power per unit bandwidth is given by S_a = kT_a, where k is Boltzmann's constant and T_a is the noise temperature of the antenna. T_a is computed by integrating the product of the antenna gain G(θ, φ) and the sky brightness T(θ, φ) over the entire solid angle. There are empirical formulae available to calculate the different factors that contribute to the sky brightness. For example, [21] provide a formula for approximating the cosmic noise temperature in terms of the absolute temperature T (in Kelvin), the wavelength λ in meters, and the frequency f in MHz. The thermal noise in the receiving system will also have a noise temperature, in addition to the noise from natural sources.
The amplifier not only adds noise but also amplifies the noise at its input by a factor equal to the amplifier gain. Other factors, such as noise due to lossy elements, also contribute to the system noise temperature. In general, the system noise temperature is obtained by adding the antenna noise temperature and the equivalent noise temperatures of the receiver components referred to the input.
Therefore, the noise power received by a receiver of bandwidth B would be kT_sys·B.
The system noise temperatures of typical directive antennas vary between 40 K and 3000 K depending on the frequency of operation, and [22] provide a graph of the noise temperatures of directive antennas for usual environmental conditions that serves as an interim standard for most performance calculations. In our simulations, the operating frequency is 900 MHz, and the system noise temperature at this frequency is about 60 K at a beam elevation angle of 90° from the zenith. The noise power of the antenna system is calculated as the product of the system noise temperature (on the order of 10² K at 900 MHz), Boltzmann's constant (on the order of 10⁻²³), and the receiver bandwidth. For an absolute worst-case analysis, we assume a low-pass filter in our system, so the bandwidth of our receiving system is equal to 900 MHz (on the order of 10⁸). In reality, a receiver would use a bandpass filter, and we could reasonably assume a bandwidth of no more than 30 MHz. Therefore, the noise power in our system is approximately on the order of 10⁻¹² W. Considering a scenario of an ad hoc network with a typical node separation of 1 km and a transmitter power of 10 W, using the Friis equation of radio propagation to calculate the power received by an omni-directional antenna gives a value on the order of 10⁻⁹ W. This shows that the noise power would be at least 3 orders of magnitude less than the interference power received in an ad hoc network. Hence, we ignore the noise power in our computation of optimal weights for the antennas.
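The order-of-magnitude argument above can be reproduced in a few lines. The 60 K temperature, 900 MHz and 30 MHz bandwidths, 1 km spacing, and 10 W transmit power are the values quoted in the text; the script itself is only our illustration.

```python
# Reproducing the order-of-magnitude comparison from the text: thermal noise power
# versus free-space (Friis) received power at 900 MHz. Only values quoted in the
# text are used; the script is our illustration.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T_sys = 60.0              # K, system noise temperature at 900 MHz (from the text)
B_worst = 900e6           # Hz, absolute worst-case bandwidth
B_typ = 30e6              # Hz, more realistic band-pass bandwidth

P_noise_worst = k_B * T_sys * B_worst
P_noise_typ = k_B * T_sys * B_typ

f = 900e6
lam = 3e8 / f             # wavelength, m
P_tx = 10.0               # W
d = 1000.0                # m, typical node separation
P_rx = P_tx * (lam / (4 * math.pi * d)) ** 2   # Friis equation, unit antenna gains

print(f"noise power (worst case): {P_noise_worst:.1e} W")   # ~7e-13 W
print(f"noise power (30 MHz BW):  {P_noise_typ:.1e} W")     # ~2e-14 W
print(f"received power at 1 km:   {P_rx:.1e} W")            # ~7e-9 W
```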
Simulation and Analysis
The simulations were performed using Matlab R2015a [24] under the assumption of a known hop sequence from source to destination. Figure 6 shows an example route through a network comprising seven nodes in total, with hop sequence 1-2-3-4. Each node in the hop sequence is shown along with its final computed linear gain pattern. Nodes 5, 6, and 7 represent interfering nodes whose positions were chosen randomly. The operating frequency was chosen to be 900 MHz. Each node has a square-grid antenna array containing nine antennas with λ/4 grid spacing. The transmitter power was fixed at 10 W.
Figure 6 shows that the gain pattern for each node provides maximal gain toward both the previous and the next node in the hop sequence. For example, node 1 tries to point its beam toward node 2, avoiding interference from nodes 5 and 6. Figure 7 shows the average network NSR converging quickly from 8 dB to −22 dB after about six iterations, providing an improvement in average NSR of 30 dB. The average NSR of the network using only omni-directional antennas is also shown in Figure 7 for comparison. To see whether there is an improvement in performance, the SNR at each desired node of the network before and after optimizing the antenna weights is shown in Figure 8, which shows an improvement in SNR at each node of approximately 26 dB. We also show the minimum transmitter power required at each node to maintain an SNR of 10 dB and found a significant reduction in total transmitter power compared with using omnidirectional antennas, which can be seen in Figure 9. Similarly, we fixed the SNR at 10 dB (the approximate lower limit for acceptable bit error rates) and calculated the transmitter power required at each node to maintain that SNR. Figure 11 shows a plot of the total transmitter power required for 1000 different random instances of interference node positions. It is evident from the plot that the directional antenna arrays outperform the omnidirectional ones and minimize the total power consumption in the network.
Figure 10. Average SNR using a fixed transmitter power of 10 W, for both directional and omni-directional cases for 1000 random trials.
We present results from experiments performed using the free-space propagation model and the Walfisch-Ikegami model (WIM) [23]. In WIM, we used an urban area with an average building height of 10 m and a building separation of 14 m, assuming the streets to be 8 m wide. WIM is validated for base station heights ranging from 4-50 m and receiving antenna heights ranging from 1-3 m. In the case of ad hoc networks, as the same antenna system is connected to a transceiver, we assumed the antenna height to be 3.5 m in our simulations.
Comparing Figure 11(a) and Figure 11(b), we can see that the total transmitter power required by nodes using directional antennas is lower when WIM is used to calculate the path loss than when the FSM is used. This seems counter-intuitive, since the path loss calculated by WIM is always greater than or equal to the path loss calculated using the FSM, resulting in a higher received signal strength in the case of the FSM. However, this is true not only for the desired nodes but also for the interfering nodes (which are larger in number), thereby decreasing the SNR. That is why more transmitter power is needed to maintain a given SNR when the FSM is used to calculate the path losses.
In this approach, the improvement in performance is due to directional gain as well as nulling of interference. The gain pattern for the same topology as in Figure 6 is shown in Figure 12 and discussed below.
Conclusion
A low-cost, low-complexity, and energy-efficient solution for adaptive beamforming in ad hoc networks was proposed to increase the overall average SNR of the network. The approach uses the Nelder-Mead simplex method of unconstrained optimization to find antenna weights that provide a global solution for optimal beam patterns for a given network topology. This can lower the bit error rate, increase throughput, and extend network life. The proposed method does not require transceivers and additional circuitry for each antenna in the array. One potential problem with the current implementation is that suboptimal solutions may occur when the algorithm settles into a local minimum. Ways to reduce this problem, such as randomly visiting nodes during the iteration, are being investigated. Another potential issue is that the average NSR may not be the best fitness function to use, since it can be affected by a few high or low NSR outlier values along the route. Alternative fitness functions to be considered might include total route transmission energy, network lifetime, or average network throughput. Currently, the proposed algorithm is suitable only for stationary networks such as wireless sensor networks. This is mainly because considerable processing time is required to compute new beam patterns for every node in the network, which would be required each time the network changes. For example, a network with 3 desired nodes and an interference node takes approximately 3.5 seconds to converge on an Intel Xeon Sandy Bridge CPU (2 GHz). The use of high-performance computing and neural networks can potentially improve the convergence speed and make the beamforming algorithm suitable for mobile ad hoc networks.
Figure 2. Block diagram of the receiver.
Figure 3. Uniform linear array of M elements.
For this research, we use the Nelder-Mead (NM) search algorithm to find phase-only weights minimizing a desired fitness function. NM is one of the most widely used methods for nonlinear unconstrained optimization. To approximate an optimal gain pattern G_opt(θ), we enforce the phase-only constraint by fixing the weights as w = e^{jφ}, so that the resulting G(θ) is the gain pattern generated by the phase-only weights w = e^{jφ}.
Figure 8. The initial and final SNR at each desired node (1-4) in the network.
Figure 9. Comparison of the minimum power required at each node using omni-directional antennas and Nelder-Mead optimized arrays.
Figure 10 shows the histogram, on a decibel scale, of the average SNR for the considered topology over 1000 trials for two cases: 1) only omni-directional antennas are used, and 2) optimized antenna arrays are used. For each trial, different random interference locations were used. The average SNR for the directional case is considerably greater (by about 35 dB) than that for the omnidirectional case.
The gain pattern for the same topology as in Figure 6 is shown in Figure 12. Consider the communication between nodes 1 and 2. Node 1 has a directional gain of 16 dB toward node 2 (0°) and node 2 has a gain of about 11 dB in the direction toward node 1 (180°), providing a combined gain of 27 dB. The data markers corresponding to the example angular locations are shown as black squares in Figure 12. As an example of interference nulling, the closest interference in the topology lies below node 1 (~270°), and the algorithm places a null with the least gain in that direction. This improves the signal-to-noise ratio of the communicating nodes significantly.
Figure 12. Beam pattern of the network nodes plotted on a linear scale. The data markers indicate the gain in the communicating directions and also the null in the direction of interference.
"Engineering",
"Computer Science"
] |