Partial Information Decomposition and the Information Delta: A Geometric Unification Disentangling Non-Pairwise Information

Information theory provides robust measures of multivariable interdependence, but classically does little to characterize the multivariable relationships it detects. The Partial Information Decomposition (PID) characterizes the mutual information between variables by decomposing it into unique, redundant, and synergistic components. This has been usefully applied, particularly in neuroscience, but there is currently no generally accepted method for its computation. Independently, the Information Delta framework characterizes non-pairwise dependencies in genetic datasets. This framework has developed an intuitive geometric interpretation for how discrete functions encode information, but lacks some important generalizations. This paper shows that the PID and Delta frameworks are largely equivalent. We equate their key expressions, allowing results in one framework to be applied to open questions in the other. For example, we find that the approach of Bertschinger et al. is useful for the open Information Delta question of how to deal with linkage disequilibrium. We also show how PID solutions can be mapped onto the space of delta measures. Using Bertschinger et al. as an example solution, we identify a specific plane in delta-space on which this approach's optimization is constrained, and compute it for all possible three-variable discrete functions of a three-letter alphabet. This yields a clear geometric picture of how a given solution decomposes information.

Introduction

The variables in complex biological data frequently have nonlinear and non-pairwise dependency relationships. Understanding the functions and/or dysfunctions of biological systems requires understanding these complex interactions. How can we reliably detect interdependence within a set of variables, and how can we distinguish simple, pairwise dependencies from those which are fundamentally multivariable? An analytical approach formulated by Williams and Beer frames these questions in terms of the Partial Information Decomposition (PID) [1]. The PID proposes to decompose the mutual information between a pair of source variables X and Y and a target variable Z, I(Z : X, Y), into four non-negative components:

I(Z : X, Y) = U_X + U_Y + R + S.

The constituent terms are: the unique informations, U_X and U_Y, which represent the amounts of information about Z encoded by X alone and by Y alone; the redundant information, R, which is the information about Z encoded redundantly by both X and Y; and the synergistic information, S, which is the information about Z contained in neither X nor Y individually, but encoded by X and Y taken together. An illustration of this decomposition, the associated governing equations, and examples characterizing each type of information are all shown in Figure 1. It was shown that PID components can distinguish between dyadic and triadic relationships which no conventional Shannon information measure can distinguish [2].

Figure 1. (A) An illustration of the decomposition (following [3]) and its governing equations. The system is underdetermined. (B) Sample binary datasets which contain only one type of information. For (i), where Z = X, X contains all information about Z and Y is irrelevant, such that U_X is equal to the total information and all other terms are zero. For (ii), where Z = X = Y, X and Y are always identical and thus the information is fully redundant.
For (iii), where Z is the XOR function of X and Y, both X and Y are individually independent of Z, but fully determine its value when taken jointly.

The problem with this approach is that its governing equations form an underdetermined system, with only three equations relating the four components. To actually calculate the decomposition, an additional assumption must be made to provide an additional equation. Williams and Beer proposed a method for the calculation of R in their original paper, but this has since been shown to have some undesirable properties [1]. Much of the subsequent work in this domain has consisted of attempts to define new relationships or formulae to calculate the components, as well as critiques of these proposed measures [3]. These proposed measures include (as an incomplete list): a measure based on information geometry [4]; an intersection information based on the Gács-Körner common information [5]; the minimum mutual information [6]; the pointwise common change in surprisal [7]; and the extractable shared information [8]. Another noteworthy putative solution is that of [9], which requires solving an optimization problem over a space Q of probability distributions, but is rigorous in that it follows directly from reasonable assumptions about the unique information. However, it is unclear how to sensibly generalize this approach to larger numbers of variables. Nonetheless, there has been considerable interest in using the PID approach to gather insights from real datasets, particularly within the neuroscience community [10][11][12][13][14][15].

Independently, an alternative approach to many of these same questions has been formulated, focusing on devising new information theory-based measures of multivariable dependency. In genetics, non-pairwise epistatic effects are often crucially important in determining complex phenotypes, but traditional methods are sensitive only to pairwise relationships; there is thus particular interest in methods to identify the existence of synergistic dependencies within genetic datasets. Galas et al. [16,17] quantified the non-pairwise information between genetic loci and phenotype data with the Delta measure, ∆(Z : X, Y). Briefly, given a set of variables {X, Y, Z}, ∆(Z : X, Y) quantifies the change in co-information when considering the variables {X, Y, Z} as opposed to only {X, Y} (we hereafter denote ∆(Z : X, Y) as ∆_Z, ∆(X : Y, Z) as ∆_X, and so on). In its simplest application, the magnitudes of {∆_X, ∆_Y, ∆_Z} can be used to detect and quantify non-pairwise interactions [16,17].

Recent work showed that the delta values encode considerable additional information about the dependency. Sakhanenko et al. [18] defined the normalized delta measures δ = (δ_X, δ_Y, δ_Z), which define an "information space", and considered the δ-values of all possible discrete functions Z = f(X, Y). Fully mapping the specific example set of functions where {X, Y, Z} are all discrete variables with 3 possible values, they found that the 19,683 possible functional relationships Z = f(X, Y) mapped onto a highly structured plane in the space of normalized deltas (as shown in Figure 2). Different regions of this plane correspond to qualitatively different types of functional relationships; in particular, completely pairwise functions such as Z = X and completely non-pairwise functions such as Z = XOR(X, Y) map onto the extremes of the plane (see Figure 2; note that this paper defines "XOR" for a ternary alphabet as XOR(X, Y) ≡ (X + Y) mod 3).
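To make the scale of this function space concrete, the short Python sketch below enumerates all 3^9 = 19,683 ternary functions Z = f(X, Y) and builds the ternary XOR table used above. This is our own illustrative code (variable names are our choices), not part of the paper's released implementation [20].

```python
import itertools

# A discrete function Z = f(X, Y) on a three-letter alphabet is fully
# specified by its output for each of the 9 input pairs (x, y); with 3
# possible outputs per pair, there are 3**9 = 19,683 such functions.
ALPHABET = range(3)
input_pairs = list(itertools.product(ALPHABET, repeat=2))  # 9 pairs

all_functions = list(itertools.product(ALPHABET, repeat=len(input_pairs)))
print(len(all_functions))  # 19683

# The ternary "XOR" used in the paper: XOR(X, Y) = (X + Y) mod 3.
xor_table = {(x, y): (x + y) % 3 for x, y in input_pairs}
print(xor_table[(1, 2)])  # 0
```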
Since discrete variables such as these occur naturally in genetics, this suggests that relationships between genetic variables may be usefully and intuitively characterized by their δ-coordinates. The difficulty in practice is that the coordinates are constrained to this plane only when X and Y are statistically independent, which is not the case in many real datasets, e.g., in genetic datasets in the presence of linkage disequilibrium.

Figure 2. All discrete functions Z = f(X, Y) of a three-letter alphabet mapped onto the plane of normalized deltas. Functions with a full pairwise dependence on X or Y map to opposite lower corners, whereas the fully synergistic XOR (i.e., the ternary extension XOR(X, Y) ≡ (X + Y) mod 3) maps to the uppermost corner.

In this paper, we show that the Partial Information Decomposition approach and the Information Delta approach are largely equivalent, since their component variables can be directly related. The δ-coordinates can be written explicitly in terms of PID components, which leads us to an intuitive understanding of how δ-space encodes PID information by casting the components into a geometric context. We then apply our results to two different approaches to solving the PID problem, first that of Bertschinger et al. [9] and then that of Finn and Lizier [19]. We show that the sets of probability distributions, Q, used by Bertschinger et al. can be mapped onto low-dimensional manifolds in δ-space, which intersect with the δ-plane of Figure 2. This approach is theoretically useful for the Information Delta framework, since it factors out the X, Y dependence in the data, thereby accounting for linkage disequilibrium between genetic variables. We suggest an approach for the analysis of genetic datasets which would return both the closest discrete function underlying the data and its PID under the Bertschinger solution, and which would require no further optimization after the initial construction of a solution library. This yields a low-dimensional geometric interpretation of the optimization problem, and we compute the solution for all possible three-variable discrete functions of alphabet size three. For these same functions, we then compute the PID components using the Pointwise PID approach of Finn and Lizier [19]. This visualization yields an immediate comparison of how each solution decomposes information. Since our derived relationship between the frameworks is general, it could be similarly applied to any putative PID solution as demonstrated here. Code to replicate these computations and the associated figures is freely available [20].

Interaction Information and Multi-Information

An important body of background work, which served as a foundation for both the Information Decomposition and Information Delta approaches, involves the interaction information, II. II can be thought of as a multivariable extension of the mutual information [21]. Unlike the mutual information, however, the interaction information can assume negative values. What does it mean for the interaction information to be negative? It was once common to interpret II > 0 as implying a synergistic interaction, and II < 0 as implying a redundant interaction between the variables. As detailed in [1] and discussed in the following sections, this interpretation is mistaken. Interactions can be both partly synergistic and partly redundant, and the interaction information indicates the balance of these components.
For a set of variables ν_n = {X_1, ..., X_n}, II can be defined as [22]:

II(ν_n) = −∑_{τ_i ⊆ ν_n} (−1)^(|ν_n| − |τ_i|) H(τ_i), (3)

where |ν_n| is the total number of variables in the set, and the sum is over all possible subsets τ_i (where |τ_i| is the total number of variables in each subset). H(τ_i) is the joint entropy of the variables in subset τ_i. The interaction information, II, is very similar to a measure called the co-information, CI [23]. These measures differ only by their sign: for an even number of variables they are identical (e.g., II(X, Y) = CI(X, Y)), and for an odd number of variables they are of opposite sign.

An additional, useful measure is the "multi-information", Ω, introduced by Watanabe [24] and sometimes called the "total correlation", which represents the sum of all dependencies between the variables and is zero only if all variables are independent. For a set of n variables ν_n = {X_i} it is defined as:

Ω(ν_n) = ∑_{i=1}^{n} H(X_i) − H(X_1, ..., X_n). (4)

Information Decomposition

Consider a pair of "source variables" X, Y which determine the value of a "target variable" Z. Assume that we can measure the mutual information each source carries about the target, I(Z : X) and I(Z : Y) (which we abbreviate as I_Z:X and I_Z:Y), as well as the mutual information between the joint distribution of {X, Y} and Z, I(Z : X, Y) (which we abbreviate as I_Z:XY). These mutual informations can be written in terms of the entropies (which we abbreviate using subscripts, e.g., H(X, Y) ≡ H_XY):

I_Z:X = H_X + H_Z − H_XZ,
I_Z:Y = H_Y + H_Z − H_YZ,
I_Z:XY = H_XY + H_Z − H_XYZ.

These mutual informations can be decomposed into components which measure how much of each "type" of information they contain, as follows:

I_Z:X = U_X + R,
I_Z:Y = U_Y + R, (6)
I_Z:XY = U_X + U_Y + R + S,

where U_X and U_Y are the unique informations, R is the redundant information, and S is the synergistic information, as described previously in Section 1. This is an underdetermined system which requires an additional equation to render it solvable. Many of the current and previous efforts to define such an equation (for example, several proposals on how to directly compute the value of R from data), as well as the limitations of those efforts, have been nicely summarized in [3].

Solution from Bertschinger et al.

One solution to this problem came from Bertschinger et al. [9], who proposed that the unique information be approximated as:

U_X = min_{q ∈ Q} I_q(Z : X | Y),

where I_q denotes an information evaluated under the distribution q, and the set Q is defined as follows. Let Ψ be the set of all joint probability distributions of X, Y, and Z. Then we define Q as the set of all distributions, q, which have the same marginal probability distributions p(X = x, Z = z) and p(Y = y, Z = z) as our dataset. That is,

Q = {q ∈ Ψ : q(X = x, Z = z) = p(X = x, Z = z) and q(Y = y, Z = z) = p(Y = y, Z = z) for all x, y, z}. (8)

Please note that in [9], this set of probability distributions is denoted as ∆_P, which we change here to Q to avoid notational confusion with the information deltas. Similarly, its elements are indicated by Q in the original paper. Here we indicate the distributions, elements of the set Q, by a lowercase q for consistency with our notation for probability distributions. Put another way, we consider all possible probability distributions that maintain the marginals p(X = x, Z = z) and p(Y = y, Z = z) implied by our data. The relationship between X and Y (p(X = x, Y = y), and consequently the joint distribution p(X = x, Y = y, Z = z)) is allowed to vary.

The minimization criterion is perhaps more intuitive when viewed as follows: the unique information U_X can be thought of as the smallest possible increase in the interaction information when the variable X is added to the set {Y, Z}. For example, if there exists a probability distribution in Q for which II(Y, Z) = II(X, Y, Z), then the addition of X adds no unique information about Z and U_X = 0.
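As a concrete illustration of Equations (3) and (4), the Python sketch below computes II and Ω directly from a joint probability tensor p[x, y, z], using binary XOR with i.i.d. uniform inputs as a test case. This is our own illustrative code, not part of the paper's released implementation [20].

```python
import itertools
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability array."""
    p = np.atleast_1d(np.asarray(p, dtype=float))
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def joint_entropy(p, keep):
    """Entropy of the marginal of p over the axes listed in `keep`."""
    drop = tuple(i for i in range(p.ndim) if i not in keep)
    return entropy(np.sum(p, axis=drop) if drop else p)

def interaction_information(p):
    """II as in Equation (3): an alternating sum over subset entropies."""
    n = p.ndim
    return -sum(
        (-1) ** (n - len(tau)) * joint_entropy(p, tau)
        for r in range(n + 1)
        for tau in itertools.combinations(range(n), r)
    )

def multi_information(p):
    """Omega as in Equation (4): marginal entropies minus the joint entropy."""
    return sum(joint_entropy(p, (i,)) for i in range(p.ndim)) - entropy(p)

# Binary XOR with i.i.d. uniform inputs.
p = np.zeros((2, 2, 2))
for x, y in itertools.product(range(2), repeat=2):
    p[x, y, (x + y) % 2] = 0.25
print(interaction_information(p), multi_information(p))  # 1.0 1.0
```

For XOR both quantities equal one bit, and the positive sign of II reflects the synergy-dominated balance discussed above.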
The core assumption of the Bertschinger et al. approach is that the unique and redundant informations depend only upon the marginal distributions p(X = x, Z = z) and p(Y = y, Z = z). This solution is rigorous in the sense that the result follows directly from this assumption, without any ad hoc assumptions about how the components are related.

Information Deltas and Their Geometry

Consider a set of three variables ν_n = {X, Y, Z}. Using Equation (3), we can write the co-information in terms of the entropies:

CI(X, Y, Z) = H_X + H_Y + H_Z − H_XY − H_XZ − H_YZ + H_XYZ.

The differential interaction information ∆_Z is the change in the interaction information when a given variable Z is added to the set. This can be written in terms of CI and then the conditional mutual information:

∆_Z = CI(X, Y) − CI(X, Y, Z) = I(X : Y | Z).

These measures can be normalized by the multi-information for the three variables, Ω(X, Y, Z) (which we abbreviate as Ω_XYZ), which by Equation (4) we can write in terms of the entropies as:

Ω_XYZ = H_X + H_Y + H_Z − H_XYZ.

The normalized measures are then:

(δ_X, δ_Y, δ_Z) = (∆_X, ∆_Y, ∆_Z)/Ω_XYZ. (13)

If Z is a function of X and Y, and if X and Y are i.i.d., then δ = (δ_X, δ_Y, δ_Z) lies within a highly structured plane, where different regions of the plane correspond to qualitatively different types of interactions. Figure 2 shows the mapping of all possible functions onto this highly structured plane. The normalized deltas can be expressed as:

δ_X = I(Y : Z | X)/Ω_XYZ, δ_Y = I(X : Z | Y)/Ω_XYZ, δ_Z = I(X : Y | Z)/Ω_XYZ. (14)

The normalized deltas can also be written in terms of joint mutual informations; for example, since I(Z : X, Y) = I(Z : X) + I(Y : Z | X), we can write all normalized deltas in this form:

δ_X = (I_Z:XY − I_Z:X)/Ω_XYZ, δ_Y = (I_Z:XY − I_Z:Y)/Ω_XYZ, δ_Z = (I_X:YZ − I_X:Z)/Ω_XYZ. (16)

By inverting the previous equations, we can then write:

I_Z:XY = (Ω_XYZ/2)(1 + δ_X + δ_Y − δ_Z), (17a)
I_Z:X = (Ω_XYZ/2)(1 − δ_X + δ_Y − δ_Z), (17b)
I_Z:Y = (Ω_XYZ/2)(1 + δ_X − δ_Y − δ_Z). (17c)

Specifically, Equation Set (16) can be inverted to yield Equation (17a), and Equation Set (14) can be inverted to yield Equations (17b) and (17c).

Information Decomposition in Terms of Deltas

With Equations (6) and (17), we can equate the expressions for the mutual informations in their delta and information decomposition forms:

U_X + R = (Ω_XYZ/2)(1 − δ_X + δ_Y − δ_Z),
U_Y + R = (Ω_XYZ/2)(1 + δ_X − δ_Y − δ_Z), (18)
U_X + U_Y + R + S = (Ω_XYZ/2)(1 + δ_X + δ_Y − δ_Z).

From the above relations we can derive:

S − R = (Ω_XYZ/2)(δ_X + δ_Y + δ_Z − 1).

In other words, the difference between the synergy and the redundancy increases as we get farther from the origin in δ-space. Also:

U_X − U_Y = Ω_XYZ (δ_Y − δ_X),

so the distance from the diagonal in the (δ_X, δ_Y)-plane is proportional to the difference between the unique informations. These striking relationships are visualized in Figure 3.

Figure 3. The δ-space encodes the balance of synergy/redundancy along one diagonal, and the balance of unique information in each source along the other.

Relationship between Diagonal and Interaction Information

Considering again the relation for S − R above and using Equation (13), we can write:

S − R = (Ω_XYZ/2)(δ_X + δ_Y + δ_Z − 1) = II(X, Y, Z),

where II(X, Y, Z) is the interaction information between the variables. This replicates the important result that II(X, Y, Z) = S − R from the original Williams and Beer paper [1].

The Function Plane

When the variables are related by a discrete function (as defined in [18]), and X and Y are i.i.d., the function will lie on a plane defined by:

δ_X + δ_Y − δ_Z = 1.

Thus, the distance d of a coordinate above the plane is given by

d = (1/√3)(δ_Z − δ_X − δ_Y + 1),

and so, from Equation (18),

d = (2/√3) I_X:Y/Ω_XYZ,

which vanishes exactly when X and Y are independent.

Transforming Probability Tensors within Q

As noted previously, there is no generally accepted solution for completing and computing the set of PID equations. Our results connecting the PID to the information deltas have therefore, up to this point, been agnostic on this question. All equations in the previous section follow from the basic PID formulation and the delta coordinate equations. This means they are true for any putative solution, but it also brings us no closer to an actual solution to the PID problem; we can still only compute the differences between PID components.
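The fact that only differences are pinned down can be checked numerically: from the governing equations alone, S − R = I_Z:XY − I_Z:X − I_Z:Y and U_X − U_Y = I_Z:X − I_Z:Y, both plain Shannon quantities. A minimal sketch (our own illustrative code):

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a joint probability array."""
    q = p[p > 0]
    return float(-np.sum(q * np.log2(q)))

def pid_differences(p):
    """Given a joint tensor p[x, y, z], return (S - R, U_X - U_Y).

    Only these differences follow from the three governing equations:
      I(Z:X)   = U_X + R
      I(Z:Y)   = U_Y + R
      I(Z:X,Y) = U_X + U_Y + R + S
    """
    px, py, pz = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))
    pxz, pyz, pxy = p.sum(1), p.sum(0), p.sum(2)
    I_zx = H(px) + H(pz) - H(pxz)
    I_zy = H(py) + H(pz) - H(pyz)
    I_zxy = H(pxy) + H(pz) - H(p)
    return I_zxy - I_zx - I_zy, I_zx - I_zy

# Binary XOR: S - R = +1 bit (purely synergistic), U_X - U_Y = 0.
p = np.zeros((2, 2, 2))
for x in range(2):
    for y in range(2):
        p[x, y, (x + y) % 2] = 0.25
print(pid_differences(p))  # approximately (1.0, 0.0)
```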
We therefore now extend our analysis by using the solution of Bertschinger et al. [9] to fully compute the PID for the functions in Figure 2. We wish to emphasize, however, that the following approach could be used equally well to gain a geometric interpretation of any alternate solution to the PID.

Consider a probability tensor for an alphabet size of N, where we use the notation p_ijk = p(X = i, Y = j, Z = k). What transformations are permissible that will preserve the distribution within the set Q (as defined in Equation (8))? Please note that we can obtain the marginal distributions simply by summing over the appropriate tensor index. For example, summing along the first index yields the marginal distribution p(Y = y, Z = z). To stay in Q, then, we require that the sums along the first and second indices both remain constant.

For an alphabet size of N = 2, we can parameterize the set of all possible transformations quite simply: all possible changes to each layer of the tensor can be captured with a single parameter. For example, increasing p_111 will require that p_121 and p_211 be decreased by the same amount, as the row and column sums must remain constant (which, in turn, determines p_221). Each layer can be modified independently, and thus the second layer has an independent parameter. For a given probability tensor with N = 2, then, the probability tensor for any distribution in Q can be fully parameterized with two parameters, and thus the corresponding coordinates in delta-space are at most two-dimensional. In practice, we find that N = 2 functions have delta-coordinates that are restricted to a one-dimensional manifold. Consider, for example, the AND function:

p_ij1 = [[1/4, 1/4], [1/4, 0]], p_ij2 = [[0, 0], [0, 1/4]],

where rows index X and columns index Y. We can describe all possible perturbations which remain in Q by the parameterization:

q_ij1 = p_ij1 + α [[+1, −1], [−1, +1]], q_ij2 = p_ij2 + β [[+1, −1], [−1, +1]].

However, it can be seen that we must have β = 0, as all probabilities must remain in the range p ∈ [0, 1]. The parameter α, on the other hand, can fall within the range α ∈ [0, 1/4]. Since all possible perturbations can be captured by varying a single parameter, Q must therefore be mapped to a one-dimensional manifold in δ-space.

The layers of a probability tensor become significantly harder to parameterize for N = 3. Consider a single layer (p_ijk for fixed k, with i, j ∈ {1, 2, 3}) of a probability tensor. The permissible transformations to this layer can be parameterized by adding a perturbation with vanishing row and column sums:

[[α, β, −α−β], [γ, δ, −γ−δ], [−α−γ, −β−δ, α+β+γ+δ]], (30)

subject to the constraints that every perturbed entry must remain within [0, 1]. Clearly, these relations are too complicated to lend any immediate insight into the problem. However, it is a simple matter to use the above inequalities to calculate permissible values of the parameters (α, β, γ, δ) and to plot the corresponding delta coordinates. This is done for randomly generated sample functions in Figure 4. In this case, the delta coordinates have a complex distribution but are nonetheless restricted to a plane in delta-space (the vertically oriented red plane in Figure 5).

Figure 4. A set Q consists of all probability distributions p(X = x, Y = y, Z = z) that share the same marginal distributions p(X = x, Z = z) and p(Y = y, Z = z). Each Q maps onto a set of points with a complex distribution, but which is constrained to a simple plane in δ-space.

δ-Coordinates in Q Are Always Restricted to a Plane

In the N = 2 case, delta-coordinates were parameterized by a single variable such that they must be restricted onto a line. In the N = 3 example, they are restricted onto a plane. Will larger alphabets map Q onto a three-dimensional volume? If not, is it possible to get a non-planar two-dimensional manifold, or are the coordinates always restricted to a plane?
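Before turning to the general proof, the N = 2 AND example above is easy to verify numerically. The sketch below (our own illustrative code) sweeps α over [0, 1/4], confirms that the marginals defining Q are untouched, and traces I_q(Z : X | Y), whose minimum is the Bertschinger unique information U_X.

```python
import numpy as np

def H(p):
    q = p[p > 0]
    return float(-np.sum(q * np.log2(q)))

def cond_mi_z_x_given_y(p):
    """I(Z : X | Y) = H(X,Y) + H(Y,Z) - H(Y) - H(X,Y,Z) for a tensor p[x,y,z]."""
    return H(p.sum(2)) + H(p.sum(0)) - H(p.sum((0, 2))) - H(p)

# AND function with i.i.d. uniform binary inputs.
p_and = np.zeros((2, 2, 2))
p_and[0, 0, 0] = p_and[0, 1, 0] = p_and[1, 0, 0] = p_and[1, 1, 1] = 0.25

for alpha in np.linspace(0.0, 0.25, 6):
    q = p_and.copy()
    # The single permissible move in the z = 0 layer: row and column sums fixed.
    q[0, 0, 0] += alpha
    q[0, 1, 0] -= alpha
    q[1, 0, 0] -= alpha
    q[1, 1, 0] += alpha
    # The marginals defining Q are untouched:
    assert np.allclose(q.sum(1), p_and.sum(1))  # p(X = x, Z = z)
    assert np.allclose(q.sum(0), p_and.sum(0))  # p(Y = y, Z = z)
    print(f"alpha = {alpha:.2f}   I_q(Z:X|Y) = {cond_mi_z_x_given_y(q):.4f}")

# The Bertschinger unique information U_X is the minimum of this curve,
# reached here at alpha = 1/4 (fully correlated inputs).
```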
We will now prove that Q is always constrained to a plane, regardless of the alphabet size. We first note the following:

Lemma 1. For any set Q, the only joint entropies which vary between the distributions in Q are H_XY and H_XYZ.

Proof. The definition of Q preserves the marginal distributions by construction. H_XZ and H_YZ being constant is a trivial consequence of holding p(X = x, Z = z) and p(Y = y, Z = z) constant, which is the condition defining Q. From these constant marginal distributions, we can calculate the distributions p(X = x), p(Y = y) and p(Z = z), which are therefore also constant, as are their corresponding entropies. Only two entropic quantities, H_XY and H_XYZ, vary between the different distributions in Q.

By considering just their effect on the delta coordinates, we can now show the following:

Theorem 1. In any set Q of distributions with equal marginal distributions p(X = x, Z = z) and p(Y = y, Z = z), the delta-coordinates (δ_X, δ_Y, δ_Z) will be restricted to a plane. This is true for any alphabet size.

Proof. We begin by making several notational definitions to simplify the algebra which follows, first for the joint entropies which vary within Q:

d ≡ H_XY, h ≡ H_XYZ.

We then define quantities which collect the constant entropy terms:

c_1 ≡ H_X + H_Y + H_Z, c_2 ≡ H_XZ − H_X, c_3 ≡ H_YZ − H_Y, c_4 ≡ H_XZ + H_YZ − H_Z.

In terms of these quantities we can now write the normalized delta coordinates as follows:

δ_X = (d + c_2 − h)/(c_1 − h), δ_Y = (d + c_3 − h)/(c_1 − h), δ_Z = (c_4 − h)/(c_1 − h).

Solving for d in the δ_X and δ_Y equations and equating the results yields:

(δ_X − δ_Y)(c_1 − h) = c_2 − c_3.

And the δ_Z equation allows us to solve for h:

h = (c_4 − c_1 δ_Z)/(1 − δ_Z).

Plugging this into the equation above yields an equation which simplifies to:

(δ_X − δ_Y)(c_1 − c_4) = (c_2 − c_3)(1 − δ_Z). (32)

Since c_1, c_2, c_3 and c_4 are all constant over Q, this defines a plane in (δ_X, δ_Y, δ_Z)-space.

Equation (32) not only shows that the points in Q are bound to a plane, but also implies that this plane always contains the line defined by δ_X = δ_Y and δ_Z = 1. Therefore, for any function in Figure 2, we can trivially compute the plane in which the corresponding Q is contained.

Figure 5. The same function's Q mapped onto δ-space as in Figure 4, viewed from a different angle. Q is constrained to a plane in δ-space. This plane, highlighted in red, contains the δ-coordinates of the function f (indicated by the red dot) as well as the line (δ_X = δ_Y, δ_Z = 1) (indicated by the solid red line).

PID Calculation for All Functions

For the set of probability distributions Q, Bertschinger et al. [9] provide the following estimators for the PID components:

U_X = min_{q∈Q} I_q(Z : X | Y),
U_Y = min_{q∈Q} I_q(Z : Y | X), (33)
S = I(Z : X, Y) − min_{q∈Q} I_q(Z : X, Y),
R = I(Z : X) + I(Z : Y) − min_{q∈Q} I_q(Z : X, Y).

If we numerically compute the set Q for a given function f (i.e., by generating a distribution such as the one shown in Figure 4 via the parameterization of Equation (30)), these estimators become trivial to compute. Figure 6 shows the computed values of the PID components for all of the functions shown in Figure 2. There is a clear geometric interpretation here: functions in the lower left/right corners consist almost entirely of U_X and U_Y, respectively. Functions approaching the top corner become increasingly synergistic, with a higher proportion of S. Functions are most redundant towards the lower center of the plane, though no single function is primarily R.

Figure 6. The same set of functions as in Figure 2. Each function is colored by the fraction of the total information in each PID component, as computed using the solution of [9]. There is a clear geometric structure to the decomposition which matches the previously discussed intuition about δ-space.

Alternate Solutions: Pointwise PID

The Pointwise Partial Information Decomposition (PPID) of Finn and Lizier [19] is an alternate approach to solving the PID problem. It is motivated by the fact that the entropy and mutual information can be expressed as expectation values of pointwise quantities, which measure the information content of a single event.
For example, the event (X, Z) = (x_1, z_1) has the associated pointwise mutual information:

i(x_1 : z_1) = log [ p(x_1, z_1) / (p(x_1) p(z_1)) ],

and the overall mutual information between the two variables is the expectation value of this pointwise quantity, taken over all possible events. It is important to note that while the overall mutual information is non-negative, the pointwise mutual information can be negative. Finn and Lizier decompose this pointwise quantity into two non-negative components, the "specificity" i⁺(x_1 → z_1) and the "ambiguity" i⁻(x_1 → z_1), and argue that:

i(x_1 : z_1) = i⁺(x_1 → z_1) − i⁻(x_1 → z_1), where i⁺(x_1 → z_1) = −log p(x_1) and i⁻(x_1 → z_1) = −log p(x_1 | z_1).

They similarly decompose the redundancy R into a pointwise specific redundancy r⁺ and a pointwise specific ambiguity r⁻, and argue for the following definitions:

r⁺ = min_i i⁺(a_i → z), r⁻ = min_i i⁻(a_i → z),

where {a_i} are the values of each of the source variables in a particular realization (e.g., if we have two source variables X, Y predicting Z, then the event (X, Y, Z) = (x_1, y_2, z_3) has {a_i} = {x_1, y_2}). The expectation value of the difference of these quantities then yields the redundancy:

R = E[r⁺ − r⁻],

from which the rest of the PID components follow. See [19] for a full discussion of the motivations and axioms which these definitions satisfy (including a discussion of the relationship between this formulation and that of Bertschinger et al. [9], and how many aspects of [19] are arguably pointwise adaptations of the assumptions in [9]).

One consequence of this approach is that the PID components are no longer necessarily non-negative. There is extensive discussion of the interpretation of this in [19], but one example, RDNERR, is particularly informative. In our probability tensor notation, we can write this function as:

p_ij1 = [[3/8, 1/8], [0, 0]], p_ij2 = [[0, 0], [1/8, 3/8]].

This can be interpreted as follows: X is always equal to Z. Y is usually equal to Z, but occasionally (with probability 1/4) makes an error. What should we expect the PID components to be in this case? The PPID yields (R, U_X, U_Y, S) = (1, 0, −0.81, 0.81), which implies the following interpretation: the information about Z is encoded redundantly by both X and Y, but Y carries unique misinformation about Z due to its tendency to make errors. If all components were strictly positive, we would likely draw a different conclusion: both X and Y encode some information about Z, with X encoding additional unique information. In this way, different solutions lead to slightly different interpretations of the nature of the relationship between the variables.

In Figure 7, we compute the PPID for all functions Z = f(X, Y) and map them onto δ-space, just as we did in Figure 6. Comparing Figures 6 and 7 immediately highlights key differences in how each method decomposes information. For example: in Figure 6, the top corner is purely synergistic, the lower-left corner has information solely in X, and the lower-right corner has information solely in Y; in Figure 7, the top corner has zero redundancy, the lower-left corner has misinformation in Y, and the lower-right corner has misinformation in X. It is not our goal here to argue which result is more correct. Instead, we wish to highlight how comparing Figures 6 and 7 readily yields subtle insights into how the two approaches differ in decomposing information. It also yields immediate insights into the subtleties of how we might interpret coordinates in δ-space.

Figure 7. The same set of all 3-letter functions Z = f(X, Y) mapped onto a plane in δ-space, as in Figure 6. The colorscale shows the amount of information in each component, now computed using the pointwise solution of Finn and Lizier [19].
In this formulation, the PID components are the average difference between two subcomponents, the specificity and the ambiguity, and can be negative when the latter exceeds the former. Visualizing this solution immediately highlights the differences in how it decomposes the information of functions and leads to an alternate interpretation of δ-space.

Conclusions

The key overall result of this paper is that the PID problem can be mapped directly into the previously defined "information landscape" represented by the "delta space" of [18]. This theoretical framework is simple and has a geometric interpretation which was well worked out previously. The simple set of relations between the frameworks, as explicated in Equation (18) and visualized in Figure 3, anticipates a much deeper set of geometric constraints.

We build upon this general relationship using the solution of Bertschinger et al. [9]. Using this solution, we parameterize the permissible transformations to a discrete function to numerically generate the distribution set Q, and prove in Theorem 1 that this set is mapped onto a plane in delta-space. The optimization problem defined by this approach is cast in terms of our variables in Equation (33), and the various extrema can be extracted directly from our parameterization and mapping procedure. Code which replicates these computations and generates the figures within this paper is freely available [20].

These results suggest the following approach for computing the PID components, if using the solution from [9], and given the added assumption that there is some function Z = f(X, Y) which best approximates the relationship between the variables. The steps are these:

1. Construct a library (set) of distributions {Q_1, Q_2, ..., Q_N} for all functions f_i(X, Y). Specifically, record the δ-coordinates spanned by each distribution (e.g., as plotted in Figure 4) along with the corresponding function and its PID component values.

2. For a set of variables in data for which we wish to find the decomposition, compute its δ-coordinates and then match them to the closest Q_i (a minimal sketch of this matching step is given below). This immediately yields the corresponding function and PID components.

If this approach proves to be practical, it would have several clear advantages. First, the computational cost of the library construction would be incurred only once, and would not need to be repeated for any subsequent analysis. The cost of the library construction is itself quite tractable (for example, exactly this computation was done to generate Figure 6). Second, it addresses an open problem in the use of Information Deltas for the case in which the source variables are not independent, for example, in applications to genetics in the presence of linkage disequilibrium. Specifically, this approach relaxes the common assumption in [18] that X and Y must be statistically independent.

The practical application of this approach to data analysis requires further development, which is beyond the scope of this paper. Specifically, real data will contain noise, such that the computed δ-coordinates will not lie perfectly within the δ-coordinates of any set Q_i. The naïve approach of simply taking the closest Q_i may therefore be insufficient in general. Future work will characterize the response of δ-coordinates to various levels of noise within the data, to enable the computation of p(Q_i | δ, α) (i.e., the probability that the variables belong to the set Q_i given their observed coordinates δ and some noise level α).
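Concretely, the matching step might look like the following sketch. The library structure and all names here are our own hypothetical choices, not those of the released code [20]:

```python
import numpy as np

# Hypothetical library entry: delta-coordinates sampled from a function's Q,
# plus that function's identity and its PID components under the solution of [9].
library = [
    {"function_id": 0, "deltas": np.random.rand(50, 3), "pid": (0.1, 0.2, 0.3, 0.4)},
    # ... one entry per discrete function f_i(X, Y)
]

def match_to_library(observed_delta, library):
    """Step 2 of the proposed analysis: return the library entry whose sampled
    delta-coordinates come closest (in Euclidean distance) to the observation."""
    def distance(entry):
        return np.min(np.linalg.norm(entry["deltas"] - observed_delta, axis=1))
    return min(library, key=distance)

best = match_to_library(np.array([0.9, 0.9, 0.9]), library)
print(best["function_id"], best["pid"])
```

As noted above, a plain nearest-neighbor rule is only a starting point; a probabilistic matching p(Q_i | δ, α) would be needed for noisy data.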
Future work will extend the δ approach to larger sets of variables, so as to fully characterize a higher-dimensional δ-space and its relationship to the PID. Much of the complexity of each framework is contained in these higher-order relationships. Future work will also consider additional solutions to the PID problem beyond the solutions of [9,19] considered here. All equations in Section 3 are general and agnostic to the precise solution used for the actual PID computation, and it should be straightforward to generate figures similar to Figure 6 for different solutions to show how they differ in mapping information components onto the function plane. This will provide interpretable geometric comparisons between solutions and also immediately highlight all functions for which the results offer differing interpretations, as seen in Figures 6 and 7. We anticipate that this direct comparison of how different solutions map the information content of discrete functions will provide a powerful visual tool for understanding the differing consequences of putative solutions, and thus that our unification of these frameworks will be useful in resolving the open question of how best to compute the PID.

Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:

PID Partial Information Decomposition
II Interaction Information
CI Co-Information
U_X Unique Information in X
U_Y Unique Information in Y
R Redundant Information
S Synergistic Information
PPID Pointwise Partial Information Decomposition
PREDICTING PRODUCTIVITY LOSS CAUSED BY CHANGE ORDERS USING THE EVOLUTIONARY FUZZY SUPPORT VECTOR MACHINE INFERENCE MODEL

Change orders in construction projects are very common and result in negative impacts on various project facets. The impact of change orders on labor productivity is particularly difficult to quantify. Traditional approaches are inadequate to calculate the complex input-output relationships necessary to measure the effect of change orders. This study develops the Evolutionary Fuzzy Support Vector Machine Inference Model (EFSIM) to more accurately predict change-order-related productivity losses. The EFSIM is an AI-based tool that combines fuzzy logic (FL), the support vector machine (SVM), and the fast messy genetic algorithm (fmGA). The SVM is utilized as a supervised learning technique to solve classification and regression problems; FL is used to quantify vagueness and uncertainty; and the fmGA is applied to optimize model parameters. A case study is presented to demonstrate and validate EFSIM performance. Simulation results and our validation against previous studies demonstrate that the EFSIM predicts the impact of change orders significantly better than other AI-based tools, including the artificial neural network (ANN), the support vector machine (SVM), and the evolutionary support vector machine inference model (ESIM).

Introduction

Changes during construction projects are very common, making construction one of the most complex industries. Changes can involve adding to or reducing the scope of project work, or correcting or modifying an original design. Change orders in the construction industry have negative effects on aspects such as cost, quality, time, and organization. While most change order items (e.g., material, scheduling, rework, equipment) can be relatively easy to measure, quantifying the impact on labor productivity is typically more complicated (Hanna et al. 1999a). Many studies have reported on the impact of change orders on labor productivity. The methods used in the literature to calculate productivity loss can be grouped into the three categories of (1) regression analysis (Leonard 1988; Moselhi et al. 1991; Ibbs 2005), (2) artificial neural networks (ANN) (Moselhi et al. 2005), and (3) statistical-fuzzy methods (Hanna et al. 2002). Previous studies (Hanna et al. 2002; Moselhi et al. 2005) have reported that the ANN and statistical-fuzzy methods outperform regression analysis. However, no existing method is fully suitable for calculating productivity loss, because prediction accuracies remain outside acceptable limits.

Construction projects are complex undertakings full of uncertainty and vagueness. Developing a deterministic mathematical model to predict productivity loss is difficult and expensive. An inference model (Cheng, Wu 2009) offering high accuracy and low cost is one feasible approach to predicting productivity loss. Inference models derive new facts from historical data. The human brain can learn previous information and deduce new facts from that information. Artificial intelligence (AI) can be employed to develop models that simulate human brain functions. AI is concerned with computer systems able to handle complex problems using techniques such as the artificial neural network (ANN), the support vector machine (SVM), and fuzzy logic (FL). AI-based inference models thus offer a promising solution for predicting productivity loss. Several AI hybrid systems developed in recent years have solved various construction management problems (Cheng, Wu 2009; Cheng, Roy 2010).
In an AI hybrid system, fusing different AI techniques can achieve better results than a single AI technique, because the advantages of one technique can compensate for the disadvantages of another (Cheng, Wu 2009). The Evolutionary Fuzzy Support Vector Machine Inference Model (EFSIM) (Cheng, Roy 2010) was proposed to further improve prediction accuracy. EFSIM is an artificial intelligence (AI) hybrid system that fuses fuzzy logic (FL), the support vector machine (SVM), and the fast messy genetic algorithm (fmGA). In EFSIM, FL deals with vagueness and uncertainty; the SVM acts as a supervised learning tool; and the fmGA works to optimize the FL and SVM parameters. EFSIM significantly reduces the level of human intervention required and can be used by professionals who do not have a background in AI (Cheng, Roy 2010).

The objective of this research is to use EFSIM to predict productivity loss caused by change orders. The feasibility and capability of the proposed method are evaluated and compared with other methods, including ANN, SVM, and ESIM (Cheng, Wu 2009). Validation against previous studies (Moselhi et al. 2005) is also carried out to demonstrate the proposed model's performance.

Productivity loss caused by change orders

Change can be defined as any modification in the original scope, time, or cost of the work (Hester et al. 1991). A change order is issued to formally announce the change and modify the contract between the contractor and owner (Hester et al. 1991). Keane et al. (2010) grouped the causes of change into four categories (owner-related, consultant-related, contractor-related, and non-party-related) and the effects of change into five categories (cost-related, quality-related, time-related, organization-related, and other effects).

Preliminary research into calculating the effects of change orders on labor productivity was accomplished by Leonard (1988). This research attempted to identify the effects of change orders on labor productivity in 90 cases facing change-order-related productivity losses. Results indicated a significant correlation between change orders and productivity loss. However, there were limitations to Leonard's study, including the limited number of variables and subjective evaluation (Hanna et al. 1999a, b). This preliminary study motivated other researchers to develop this field further. Two studies used a statistical method to quantify the impact of change orders on labor productivity in mechanical and electrical construction projects (Hanna et al. 1999a, b). These studies used the delta method as an efficiency indicator and regression analysis to analyze questionnaire data. Hanna et al. (2002) improved the method by using a statistical-fuzzy technique to quantify the cumulative impact of change orders. Unfortunately, that technique is difficult for stakeholders to implement due to complicated calculation steps and poor prediction results. A neural network model (Moselhi et al. 2005) was developed to estimate the impact of change orders on labor productivity, including the timing effect of change orders. Analysis results showed that this model estimated the impact of change orders on productivity more accurately than those previously described. However, this method could gain better prediction results by fusing the neural network model with other AI techniques.

Fuzzy logic (FL)

FL is a popular AI technique invented by Zadeh in the 1960s.
FL has been used in forecasting, decision making, and action control in environments characterized by uncertainty, vagueness, presumptions, or subjectivity (Bojadziev, G., Bojadziev, M. 2007). In general, FL systems have four major components: fuzzification, a fuzzy rule base, an inference engine, and defuzzification. Fuzzification is a process that uses membership functions (MFs) to convert the value of each input variable into a corresponding linguistic variable degree. Fuzzy rules represent relations between input and output fuzzy sets and form the basis for fuzzy logic to obtain fuzzy output. The inference engine uses the result of fuzzification to simulate the human decision-making process based on fuzzy implications and the available rules. Lastly, defuzzification reverses the fuzzification process and converts the fuzzy set into crisp output.

The advantages of FL in handling vagueness and uncertainty depend heavily on the appropriate distribution of the membership functions (MFs), the number of rules, and the selection of proper fuzzy set operations. Greater problem complexity increases the difficulty of constructing the MFs and rules (Ko 2002). Some researchers have treated this drawback as an optimization problem, because determining MF configurations and fuzzy rules is complicated and problem-oriented. To overcome such difficulties, some researchers have fused FL with AI optimization techniques such as genetic algorithms and ant colony optimization (Ishigami et al. 1995; Martinez et al. 2008). These optimization methods have demonstrated their ability to minimize time-consuming operations and reduce the level of human intervention necessary to optimize MFs and fuzzy rules.

Support vector machine (SVM)

The SVM (Vapnik 1995) is an AI paradigm already used in a wide range of applications. The SVM is a learning tool for solving classification and regression problems. It works by plotting input vectors into a higher-dimensional feature space, where the optimal hyperplane is identified with the help of a kernel function, K(x_i, x_j). The radial basis function (RBF) kernel has been recommended to general users as a first choice due to its ability to analyze higher-dimensional data, its use of only one hyperparameter in searches, and its fewer numerical difficulties (Hsu et al. 2003). The SVM has achieved performance levels comparable to or higher than those of traditional learning tools (Burges 1998; Yongqiao et al. 2005). However, the SVM's generalization ability and prediction accuracy are determined by the penalty (C) and kernel (γ) parameters. To overcome this drawback, an optimization technique (e.g., the fmGA) may be used to identify the optimum values of both parameters simultaneously (Cheng, Wu 2009).

Fast messy genetic algorithm (fmGA)

The fmGA is a recently developed machine learning and optimization tool based on the genetic algorithm approach (Goldberg et al. 1993). The fmGA is an improvement on messy genetic algorithms (mGAs), which were initially developed to overcome linkage problems in simple genetic algorithms (sGAs) resulting from a parameter coding problem that sometimes generates suboptimal solutions (Deb, Goldberg 1991). Unlike sGAs, which use fixed-length strings to represent possible solutions, the fmGA applies messy chromosomes to form strings of various lengths that can efficiently find optimal solutions for large-scale permutation problems (Feng, Wu 2006). The fmGA contains two loop types: inner and outer (Fig. 1). The process starts with the outer loop.
First, a competitive template (randomly generated or problem-specific) is generated. In the inner loop, the fmGA operation has three phases: an initialization phase, a primordial phase, and a juxtapositional phase. In the initialization phase, an adequately large population contains all possible building blocks (BBs) of order k; the fmGA performs probabilistically complete initialization (PCI) by generating n chromosomes randomly and evaluating their fitness values. The primordial phase contains two operations, namely threshold selection and building-block filtering. In this phase, "bad" genes that do not belong to BBs are filtered out so that, in the end, the result encloses a high proportion of "good" genes belonging to BBs. In the juxtapositional phase, fmGA operations are similar to sGA operations: selection for "good" genes is used together with a cut-and-splice operator to form a high-quality generation that may contain the optimal solution. The next outer loop begins after the respective inner loop is finished. The competitive template is replaced by the best solution found so far, which becomes the new competitive template for the next outer loop. The whole process is repeated until the maximum number of eras (k_max) is reached. The fmGA can also be performed over epochs (e_max); an epoch is the complete process between the first era and the maximum era (k_max), and epochs can be performed as many times as desired. The algorithm is terminated once a good-enough solution is obtained or no further improvement is made.

Evolutionary fuzzy support vector machine inference model

The evolutionary fuzzy support vector machine inference model (EFSIM) is a hybrid AI system developed by Cheng and Roy (2010) that fuses the three different AI techniques of fuzzy logic (FL), the support vector machine (SVM), and the fast messy genetic algorithm (fmGA). In this complementary system, FL deals with vagueness and approximate reasoning; the SVM acts as a supervised learning tool to handle the fuzzy input-output mapping; and the fmGA works to optimize the FL and SVM parameters. In EFSIM, the fuzzy inference engine and fuzzy rules of the FL system are replaced by the SVM. However, the SVM's generalization ability and prediction accuracy are determined by the penalty (C) and kernel (γ) parameters, and improper tuning of these parameters will affect the accuracy of the prediction model. To overcome this shortcoming, EFSIM utilizes the fmGA to search simultaneously for the optimum SVM parameters and FL parameters. The architecture of EFSIM is shown in Figure 2. EFSIM involves eight major steps, beginning with the training data and ending with the optimal prediction model. An explanation of the major steps is given below:

1) Training data. Final data for training are obtained from the data preprocessing output. Data preprocessing used in this study included data cleaning, attribute reduction, and data transformation.

2) Fuzzification. Each normalized input attribute from the previous step is converted into membership grades corresponding to the specific membership function (MF) set generated and encoded by the fmGA. This model uses trapezoidal and triangular MF shapes (see Fig. 3) that, in general, may be specified by their summit points and widths. This study used the Summit and Width Representation Method (SWRM) (Ko 2002) to encode complete MF sets (Fig. 3(c)). Figure 4 illustrates the fuzzification process.

3) SVM training model. The SVM addresses the complex relationship between the fuzzy input and output variables.
The fuzzification output, in the form of membership grades, is the fuzzy input for the SVM. The SVM trains on the dataset to obtain the prediction model, with the penalty (C) and kernel (γ) parameters randomly generated and encoded by the fmGA. This study used the RBF kernel as a reasonable first choice (Hsu et al. 2003).

4) Defuzzification. This is the reverse of the fuzzification process. Once the SVM finishes the training process, the output numbers are expressed in terms of a fuzzy set; these are then converted into crisp numbers. Employing the fmGA, the model generates a random defuzzification parameter (dfp) substring and encodes it for conversion of the SVM fuzzy output. This evolutionary approach is simple and straightforward, as it uses the dfp as a common denominator for the SVM output.

5) fmGA parameter search. The fmGA is utilized to search simultaneously for the fittest shapes of the MFs, the dfp, the penalty parameter (C), and the RBF kernel parameter (γ). In the fmGA, the chromosome that represents a possible solution for the searched parameters consists of four parts: the MF substring, the dfp substring, the penalty parameter substring, and the kernel parameter substring (Cheng, Roy 2010). The chromosome is encoded as a binary string and consists of two segments: FL and SVM. Figure 5 illustrates the chromosome structure.

6) Fitness evaluation. A fitness function, designed to measure model accuracy and good generalization properties (Ko 2002), is now developed to evaluate the fitness value. This function describes the fittest shape of the MFs, the optimized dfp number, and the SVM parameters. The fitness function combines a weighted accuracy term and a weighted model complexity term, as expressed in Eqn (1):

fitness = c_aw · s_er + c_cw · mc, (1)

where c_aw represents the accuracy weighting coefficient; s_er represents the prediction error between the actual output and the desired output; c_cw represents the complexity weighting coefficient; and mc represents the model complexity, which can be quantified by counting the number of support vectors.

7) Termination criteria. The process terminates when the termination criterion is satisfied; while it is still unsatisfied, the model proceeds to the next generation. As EFSIM uses the fmGA, the termination criterion used in this study was either the era number (k) or the epoch number (e).

8) Optimal prediction model. The loop stops when the termination criterion is fulfilled. This condition means that the prediction model has identified the input/output mapping relationship with the optimal C, γ, and dfp parameters and is ready to predict new facts.

Historical data

Data used in this research were drawn from 102 cases: 33 cases from Assem's (2000) investigation of the adverse effects of change orders and 69 cases from Leonard's (1988) investigation of change order impacts. A summary of the cases is shown in Table 2.

Data preprocessing

Data preprocessing is an important stage in data analysis that resolves the "unclean" nature of real-world data (Zhang et al. 2003). Several data preprocessing techniques, such as data cleaning, attribute reduction, and data transformation, were employed in this study. A systematic data-preprocessing flowchart (Fig. 6) was developed to obtain better prediction results, and the historical data were analyzed using this flowchart to obtain the training data. Data cleaning can be applied to fill in missing values and remove noisy data (univariate and multivariate outliers) (Han, Kamber 2007; Shahi et al. 2009).
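As a small illustration of this cleaning step, the sketch below imputes missing values and drops 3-sigma univariate outliers. The column names, values, and thresholds are hypothetical placeholders, not the actual attributes of the change-order dataset:

```python
import numpy as np
import pandas as pd

# Illustrative records with a missing value in each column.
df = pd.DataFrame({
    "pct_change_hours": [12.0, 40.0, np.nan, 18.0, 300.0],
    "project_size": [10.0, 25.0, 14.0, np.nan, 22.0],
})

# 1) Fill in missing values, here with each attribute's median.
df = df.fillna(df.median())

# 2) Remove univariate outliers, here any record with a value
#    more than 3 standard deviations from the attribute mean.
z = (df - df.mean()) / df.std()
df = df[(z.abs() <= 3).all(axis=1)]

print(df)
```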
Attribute reduction was applied to reduce the dimensionality of the data attributes and help reduce computational time by eliminating unnecessary attributes. Two methods, correlation analysis (CA) and principal component analysis (PCA), were employed to compare attribute reduction results. CA is the simplest way to assess input-output relationships, while PCA is used to identify strong predictor variables in a dataset. Data transformation techniques such as normalization, where attribute data are scaled to fall within a small specified range, may improve the accuracy and the efficiency of mining algorithms involving distance measurements (Han, Kamber 2007; Shahi et al. 2009). The function used to normalize the data in this study is shown in Eqn (2):

x_norm = (x_i − x_min) / (x_max − x_min), (2)

where x_norm is the normalized value; x_i is the observed value; x_min is the minimum of the data; and x_max is the maximum of the data.

Final data

A total of 96 records were used to train the prediction model. Two kinds of analyses were performed to compare the performance of the attribute reduction methods: Analysis 1 used CA to reduce attributes and Analysis 2 used PCA to do the same. As shown in Table 3, CA and PCA identified 6 and 4 attributes, respectively, as significant factors. Both analyses transformed the data into values ranging between 0 and 1. (Notes to Table 3: (a) frequency of change orders: the ratio of the number of change orders to the actual duration in months; (b) average size of change orders: the ratio of change order hours to the number of change orders; (c) type of impact (1, 2, or 3): 1 represents change orders as the only cause of productivity loss; 2 or 3 represents change orders plus 1 or 2 additional major causes of productivity loss.) Table 4 shows example input and output data from Analysis 1.

Cross-validation

Cross-validation is a statistical technique that assesses how accurately a predictive model will perform by dividing the data into two segments, one used to learn or train the model and the other used to test or validate it. 10-fold cross-validation has been shown to give the best performance in simulation (Borra, Di Ciaccio 2010). In 10-fold cross-validation, the original data are randomly partitioned into 10 equally (or approximately equally) sized segments. Ten independent training and testing estimations are then performed such that, within each estimation, a different fold of the data is used for testing while the remaining 9 folds are used for training (Fig. 7). We then calculated the average of each performance measure to obtain the cross-validation accuracy.

Performance measures

This research used the following four performance measures to evaluate EFSIM:

1. Root mean square error. The root mean square error (RMSE) is the square root of the average squared distance between the values predicted by the model and the observed values. The RMSE can be used to calculate the variation of errors in a prediction model and is very useful when large errors are undesirable. The RMSE is expressed using the following equation:

RMSE = sqrt( (1/n) ∑_{j=1}^{n} (y_j − ŷ_j)² ), (3)

where y_j is the actual value; ŷ_j is the predicted value; and n is the number of data samples.

2. Mean absolute error. The mean absolute error (MAE) is the average absolute value of the residual (error). The MAE is a quantity used to measure how close forecasts or predictions are to the eventual outcomes. The MAE is expressed using the following equation:

MAE = (1/n) ∑_{j=1}^{n} |y_j − ŷ_j|. (4)

3. Mean absolute percentage error. The mean absolute percentage error (MAPE) is a measurement of prediction accuracy. It represents the prediction percentage error.
Small denominators can cause problems in the MAPE, because they generate large MAPE values that distort the overall value. The MAPE is expressed using the following equation:

MAPE = (100%/n) ∑_{j=1}^{n} |(y_j − ŷ_j)/y_j|. (5)

4. Training time. Training time represents the time taken by the proposed model to train on the data and obtain the optimum prediction model.

To obtain an overall comparison, a normalized reference index (RI) (Chou et al. 2011) was created by combining the four performance measures (RMSE, MAE, MAPE, and training time). The RI was obtained by calculating the average of each normalized performance measure, with normalized values ranging from 1 (best) to 0 (worst). The equation of the RI can be described as follows:

RI = (1/n) ∑_i (x_max,i − x_i) / (x_max,i − x_min,i),

where x_i is the value of measurement indicator i (RMSE, MAPE, MAE, or training time); x_max,i is the maximum value of that indicator among all prediction methods; x_min,i is the minimum value of that indicator among all prediction methods; and n is the number of measurement indicators.

Model performance

A systematic methodology was established above to calculate prediction performance. The database records contain several attributes related to productivity loss caused by change orders. Data preprocessing was done to improve data quality; in this stage, two kinds of analysis correspond to the two attribute reduction methods. We performed Analyses 1 and 2 to compare the performance of each: Analysis 1 employed the CA method and Analysis 2 employed the PCA method, with each implementing training and testing processes in accordance with 10-fold cross-validation. In the testing process, each fold validates the performance of the proposed model. A comparison with other methods, such as ESIM (Cheng, Wu 2009), ANN, and SVM, was developed to show that EFSIM is more accurate and reliable. Several performance measures (RMSE, MAE, MAPE, and training time) were employed to evaluate the proposed model.

Table 5 summarizes our comparison of the Analysis 1 and 2 results. The optimal EFSIM parameters for Analysis 1 were C = 31 and γ = 0.574, found in fold 3; for Analysis 2, the optimal EFSIM parameters were C = 31 and γ = 0.566, found in fold 9. Analysis 1 earned better results on all EFSIM and ESIM performance measures except MAPE, while Analysis 2 obtained better results for SVM and ANN. Analysis 1 had a longer EFSIM training time than Analysis 2 because of its larger number of attributes. However, the difference between the two analyses in terms of the MAPE performance measure was not significant. Table 5 also shows that EFSIM was significantly more accurate than the other AI techniques in both Analysis 1 and Analysis 2. Longer computation time is required for the EFSIM model due to the FL paradigm: the more attributes in a training process, the more training time is needed to obtain the prediction model.

Table 6 shows the average Analysis 1 and 2 performance values. The best model, with the smallest RMSE value, is EFSIM (2.98%). Moreover, in terms of training time, both ANN and SVM train relatively quickly, while ESIM requires more time and EFSIM requires the most time. This is due to the FL paradigm, which requires more computational time during the training process, and to the status of ESIM and EFSIM as hybrid AI techniques; the longer computational time is a trade-off necessary to obtain greater accuracy. Figure 8 illustrates the performance described in Table 6. The normalized RI provides a general measurement by combining all performance measures.
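The four measures and the RI combination are straightforward to compute. A minimal sketch following Eqns (3)-(5) and the RI definition above is given below; the score values are made up purely for illustration and are not the paper's Table 6 results:

```python
import numpy as np

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)))

def mae(y, y_hat):
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(y_hat))))

def mape(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs((y - y_hat) / y)) * 100.0)

def reference_index(scores):
    """Normalized RI over a {method: [rmse, mae, mape, time]} table.
    All four indicators are 'smaller is better', so each column is rescaled
    to (max - x) / (max - min), giving 1 for the best method and 0 for the
    worst, then averaged across indicators for each method."""
    table = np.array(list(scores.values()), dtype=float)
    lo, hi = table.min(axis=0), table.max(axis=0)
    normalized = (hi - table) / (hi - lo)
    return {m: float(row.mean()) for m, row in zip(scores, normalized)}

# Illustrative (fabricated) scores for two methods:
scores = {"MODEL_A": [3.0, 2.1, 9.0, 600.0], "MODEL_B": [4.2, 3.0, 12.0, 30.0]}
print(reference_index(scores))
```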
Validation with previous studies
We compared the performance of EFSIM against other methods, including the general regression model (Moselhi et al. 1991), the regression model for electrical work (Hanna et al. 1999b), and the neural network model (Moselhi et al. 2005). Our validation used a dataset of 33 records from Assem (2000) as training data and change-order data from the literature (Bruggink 1997) as testing data. The attributes in this dataset were (1) the timing impact of change orders (TP_i); (2) work type; and (3) type of impact (TI), which could be either change orders only or change orders plus 1 or 2 additional causes of productivity-related impact. TP_i is the ratio of actual change-order hours to planned hours in each of the five construction periods (i = 1 to 5):

TP_i = HCO_i / PH_i

where TP_i is the timing impact of change orders in period i; HCO_i is the actual change-order hours during period i; PH_i is the planned hours during period i; and i is the period in which change orders occur, ranging from 1 to 5. This dataset used the NECA (1983) distribution for electrical work and the trapezoidal distribution of Bent and Thuman (1994) for other types of work to distribute the planned hours across the five construction periods. The cases were analyzed using EFSIM and the results were compared with the previous studies reported in Moselhi et al. (2005). Table 7 shows the comparisons among all methods. The results demonstrate that the EFSIM model proposed in this study outperforms all other models in estimating the impact of change orders on productivity: EFSIM obtained the smallest average error (7.90%) and the lowest average absolute error of any model. This shows that EFSIM improves prediction accuracy and reliability.

Conclusions
This research proposes a hybrid AI technique, EFSIM, to predict productivity loss caused by change orders. EFSIM, developed by fusing the complementary AI techniques FL, SVM, and fmGA, achieves prediction results superior to traditional techniques. The developed model reduces the level of human intervention needed to elicit MF shapes from questionnaire surveys and expert judgment, and it successfully identifies optimum penalty and kernel parameters. EFSIM is easy to apply, convenient for new users, and usable by professionals without AI domain knowledge. Test results show that EFSIM prediction performance is superior to other prediction methods such as ESIM, ANN, and SVM, although EFSIM requires the longest training time. Validation results against previous studies in predicting the impact of change orders on productivity loss also indicate that EFSIM provides the smallest error margin among competing methods. These results show great potential for EFSIM as a tool to accurately predict change-order-related productivity loss. Moreover, the developed model helps project managers make adjustments for productivity loss caused by change orders. Finally, this research demonstrates a hybrid artificial intelligence paradigm, FL-SVM-fmGA, for facilitating decision making in the construction industry.
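As a closing worked illustration of the timing-impact attribute TP_i used in the validation dataset above, the snippet below evaluates TP_i = HCO_i / PH_i over the five construction periods; the hour figures are invented for the example.

```python
def timing_impact(hco_hours, planned_hours):
    """TP_i = HCO_i / PH_i for each of the five construction periods (i = 1..5)."""
    assert len(hco_hours) == len(planned_hours) == 5
    return [hco / ph for hco, ph in zip(hco_hours, planned_hours)]

# Hypothetical project: change-order hours vs. planned hours per period.
print(timing_impact([10, 40, 55, 30, 5], [200, 400, 500, 400, 200]))
# -> [0.05, 0.1, 0.11, 0.075, 0.025]
```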
SYNERGISTIC EFFECT OF COPPER OXIDE NANOPARTICLES FOR ENHANCING ANTIMICROBIAL ACTIVITY AGAINST K. PNEUMONIAE AND S. AUREUS

This study aimed to assess the antimicrobial activity of copper oxide nanoparticles (CuO NPs) synthesized by a green thermal method based on maize starch. Klebsiella pneumoniae colonies appeared mucoid and gave positive results in several biochemical tests, while Staphylococcus aureus produced pigmented colonies surrounded by a yellow halo due to mannitol fermentation. After a 24-hour incubation period, CuO NPs reduced the growth of pathogenic K. pneumoniae to 0.52 ± 0.04 cell/ml compared with 1.60 ± 0.01 cell/ml for the control, while S. aureus growth was 0.79 ± 0.07 cell/ml compared with 1.90 ± 0.01 cell/ml for the control. Regarding the biological effect of enhancing antimicrobial activity, the percentage of resistant responses decreased from 66.6% to 22.2% when copper oxide nanoparticles were used; likewise, the S. aureus sensitivity test showed the resistance percentage decreasing from 55.5% to 33.3% at 24 hours.

Synthesis of CuO NPs. The starch suspension was stirred for 10 min until a white color was reached. The suspension turned deep blue when the starch solution was added to the copper sulphate solution under continuous stirring. An excess of 0.5 M ammonia solution (15 drops) was then added until the pH of the solution reached 10, after about 20 min. The mixture was placed in a microwave (LG, Korea) for 2 min, forming CuO nanoparticles as a black suspension. After centrifugation for 10 min at 8000 rpm, the precipitate was washed several times with double-distilled water (DDW) and ethanol to free it from organic impurities, sulphate, and ammonia, and then oven-dried at 150 °C for 2 hours to obtain the expected black CuO NP powder (18). In a previous study (1), the nanoparticles were shown to be very pure, globular in shape, and uniform, with particle diameters ranging from 47.41 to 109.49 nm, an average crystallite size of 9.8 nm, and an average size distribution (d50) of 71.17 nm. A stock CuO suspension was prepared by dissolving 0.001 g of the black powder in 5 ml of DDW; a sonicator (Heraeus, Germany) was then used to homogenize and finely disperse the suspension for further analysis.

Bacterial isolation
K. pneumoniae was isolated by streaking a loopful from a brain heart infusion broth culture, previously isolated from a pus specimen, onto MacConkey agar and EMB agar for primary selection of pathogenic K. pneumoniae (3, 4, 5, 17). Following MacFaddin (21), the biochemical tests employed were catalase, oxidase, motility, urease, indole, methyl red (MR), Voges-Proskauer, and citrate utilization. In addition, the Api 20E system kit (Bio-Merieux, France) and the VITEK 2 system kit (Bio-Merieux, France) were used to confirm the identification of K. pneumoniae. S. aureus had likewise been previously identified from a purulent wound and on mannitol salt agar. The bacterial isolates were activated in brain heart infusion broth (Himedia) and incubated overnight at 37 °C.

Antibacterial activity tests
The antimicrobial activity of CuO NPs at a concentration of 40 μg/ml was investigated against two strains: K. pneumoniae, representing Gram-negative organisms, and S. aureus, representing Gram-positive bacteria.
The bacterial strains were activated in sterile nutrient broth (Himedia, India). A bacterial suspension was prepared by inoculating a single colony into nutrient broth for 24 h, with the turbidity adjusted to the 0.5 McFarland standard. In brief, 100 µl of 40 μg/ml CuO NPs was added to 5 ml of sterile NB medium, which was then inoculated with 0.1 ml of the activated bacterial strain and incubated at 37 °C for 24 hours. Controls were included: 0.1 ml of nanoparticles inoculated into nutrient broth medium only, at 37 °C. Finally, bacterial growth after incubation was measured using a UV-Vis spectrophotometer (DRG, USA) at 625 nm (18). The mean of triplicate readings was recorded for each bacterial strain.

CuO NPs synergistic effect with diverse antibacterial agents
K. pneumoniae and S. aureus incubated with CuO NPs (40 μg/ml) were evaluated for synergistic effects with the diverse antimicrobial agents commonly used in this type of test. The Kirby-Bauer disk diffusion method on Mueller-Hinton agar (MHA) plates was used to determine the synergistic effect (7). The antimicrobial disks (Bioanalyse, Turkey) were Methicillin (ME) (10 μg), Trimethoprim/sulphamethoxazole (SXT) (25 μg), Amoxicillin/clavulanic acid (AMC) (30 μg), Gentamicin (CN) (10 μg), Levofloxacin (LEV) (5 μg), Ciprofloxacin (CIP) (10 μg), Cefixime (CFM) (5 μg), Cefotaxime (CTX) (5 μg), and Amikacin (AK) (30 μg) (Table 1). CuO NPs (40 μg/ml) were mixed with activated bacterial strain suspension (1.5 × 10^8 CFU/ml, McFarland 0.5). 0.1 ml of the mixture was spread uniformly over Mueller-Hinton agar (Himedia, India) plates with a swab, after which the antimicrobial disks were dispensed. The plates were incubated for 24 h at 37 °C, and the zone of inhibition (ZOI) in millimeters (mm) around each antimicrobial disk was then measured and compared with that of activated bacteria (0.1 ml) inoculated directly onto a Mueller-Hinton agar plate (control). The degree of sensitivity was determined according to the rules of the National Committee for Clinical and Laboratory Standards Institute (NCCLS) (37).

Characteristics of bacterial isolates
The colonies of K. pneumoniae appeared mucoid on MacConkey agar (Figure 1), and biochemical testing identified positive results for catalase, Simmons' citrate, urease, and Voges-Proskauer, with negative results in the MR, indole, oxidase, and motility tests (Table 2). The Api 20E system result agreed with the previous biochemical tests (Figure 2); these findings are similar to those of Patel et al. (25). On the other hand, mannitol fermentation by S. aureus on mannitol salt agar produced lush, pigmented colonies surrounded by a yellow halo (Figure 3).

Figure 1. K. pneumoniae on MacConkey agar. Table 2. Biochemical tests of the K. pneumoniae isolates. Figure 2. A positive Api 20E result for K. pneumoniae. Figure 3. S. aureus on mannitol salt agar.

Antibactericidal tests
The bactericidal impact of the biosynthesized copper oxide nanoparticles at a 40 μg/ml concentration (200 μl of stock solution added to 5 ml DW) was tested against pathogenic K. pneumoniae and S. aureus strains.
After the 24-hour incubation period, the growth of pathogenic K. pneumoniae was recorded as 0.52 ± 0.04 cell/ml compared with 1.60 ± 0.01 cell/ml for the control (Table 3, Figure 4), while S. aureus showed a growth absorption of 0.79 ± 0.07 compared with 1.90 ± 0.01 cell/ml for the control (Table 3, Figure 5). These results show that the 40 μg/ml concentration of copper oxide nanoparticles has an antimicrobial impact against both Gram-negative and Gram-positive bacteria, and all values differed significantly from the controls. The structure of the bacterial cell wall, the particle size, and the degree of attachment between the organisms and the NPs together determine this bactericidal impact (20). The great quantity of amine and carboxyl groups on the bacterial cell surface raises the attraction of Cu ions towards both bacteria and is ascribed to the CuO NP template, leading to the greater sensitivity of these groups to the CuO NPs (19). Diverse mechanisms have been proposed to explain the antibacterial behavior of metal oxides. In agreement with our current study, (6) reported higher antimicrobial behavior overall, attributed to smaller nanoparticle sizes that help the nanoparticles pass through the bacterial membrane and react with its components.

CuO NPs synergistic impact with diverse antibacterial agents
The bacterial strains (K. pneumoniae and S. aureus) were incubated with 40 μg/ml of CuO NPs for 24 h. The sensitivity of the bacterial strains to diverse antimicrobials was then tested by the disk diffusion method as suggested by (13), following the NCCLS guidelines. The sensitivity results for K. pneumoniae showed that, with copper oxide nanoparticles and their biological enhancement of antimicrobial activity, responses were converted from resistant to sensitive for Methicillin, Ciprofloxacin, Cefotaxime, and Cefixime at 24 hours compared with the resistant control. Furthermore, K. pneumoniae isolates showed a high sensitivity rate (100%) to Gentamicin, Levofloxacin, and Amikacin, and the percentage of resistance decreased from 66.6% to 22.2% when copper oxide nanoparticles were used at 24 hours (Table 4, Figure 6). The S. aureus sensitivity test likewise showed changes in the level of resistance to antimicrobial agents: Trimethoprim/sulphamethoxazole and Cefotaxime were converted from resistant to sensitive at 24 hours when copper oxide nanoparticles enhanced antimicrobial activity, compared with the resistant control; thus, the resistance percentage decreased from 55.5% to 33.3%. S. aureus (MRSA) showed a high sensitivity rate (100%) to Gentamicin, Levofloxacin, Ciprofloxacin, and Amikacin, but a high degree of resistance (100%) to Methicillin, Amoxicillin/clavulanic acid, and Cefixime (Table 5, Figure 6). In addition, the sensitivity of K. pneumoniae to copper oxide nanoparticles and their biological enhancement of antimicrobial activity was higher than that of S. aureus. The structural and compositional contrasts of the cell membrane could account for the variation in the impact of the antimicrobial agents and CuO nanoparticles against K. pneumoniae and S. aureus (15).
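As an aside on how growth figures of the form mean ± SD arise from triplicate spectrophotometer readings, the sketch below computes them from hypothetical OD625 triplicates chosen to reproduce the K. pneumoniae values reported above; the individual readings are illustrative, not the study's raw data.

```python
import statistics

def growth_summary(od_readings):
    """Mean and sample SD of triplicate OD625 readings for one bacterial strain."""
    return statistics.mean(od_readings), statistics.stdev(od_readings)

# Hypothetical triplicates mirroring the reported K. pneumoniae values.
treated = [0.48, 0.52, 0.56]
control = [1.59, 1.60, 1.61]
for label, reads in [("CuO NPs", treated), ("control", control)]:
    m, s = growth_summary(reads)
    print(f"{label}: {m:.2f} +/- {s:.2f}")   # -> 0.52 +/- 0.04 and 1.60 +/- 0.01
```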
Gram-positive bacteria have thicker peptidoglycan cell walls than the thin peptidoglycan walls of Gram-negative bacteria, and the lower antibacterial effect of CuO on Gram-positive bacteria arises because CuO NPs have difficulty penetrating the thicker peptidoglycan layer (32). Multidrug resistance among bacteria, genetic mutations, and the indiscriminate use of antibacterial agents may underlie the rise of resistance to most recent antibacterial agents, as the results of the present study indicate and as mentioned by Stock and Wiedemann (30). In addition to mutation, highly resistant organisms arise from resistance genes carried on plasmids.

CONCLUSION
According to the findings of these experiments, copper nanoparticles are beneficial as an antibacterial agent, and resistance rates to antibiotics decreased after 24 hours.

Fig. 4. Antibacterial impact of green-produced CuO NPs at 40 μg/ml concentration against K. pneumoniae; identical letters indicate non-significant differences.
Broadband Rotary Joint Concept for High-Power Radar Applications

To allow antenna movements in azimuth and elevation in high-power radar applications, rotary joints are essential. They allow the rotation of a transmission line and are therefore important transmission line components. In the present paper, a broadband rotary joint concept for high-power W-band radar applications is proposed. To avoid a twist of the polarization plane of a linearly polarized mode, such as HE11, a combination of two broadband polarizers is used. A cross polarization of Xpol ≤ −20 dB can be achieved within the considered frequency range from 90 GHz to 100 GHz, which corresponds to a suitable value for radar applications.

Introduction
As already predicted in the late 1970s [1], space debris has become a major issue for the use of space [2, 3]. In particular, the amount of space debris in low earth orbit (LEO) is increasing rapidly [3]. To detect and map space debris, high-performance radar sensors can be used [3, 4]. Owing to the enormous progress in the field of high-power microwave technology, such radar sensors can also be operated in high-frequency bands such as the W-band [5]. Due to the high bandwidth available there, very high resolutions can be achieved [5, 6]. In the near future, even W-band transmission powers in the range of 100 kW may be achieved [7]. To realize a W-band radar sensor with such a high transmission power, a suitable high-power amplifier and an appropriate transmission line are required. The transmission line connects the output of the high-power amplifier with the antenna feed. Due to the high power, overmoded transmission lines are required. A suitable transmission mode is the HE11 hybrid mode in a corrugated waveguide, for which low ohmic loss and small mode conversion can be achieved [8, 9].

In principle, for a highly overmoded HE11 transmission line, a rotary joint can be realized by a simple transmission line gap. Due to the low HE11 field components at the waveguide walls, no major influences are expected. The HE11 mode power loss through a transmission line gap can be estimated by equation (1) [10], with L as the gap length, λ as the free-space wavelength, and D = 2a as the waveguide diameter. The HE11 power loss can already be neglected for Lλ/(2a²) ≤ 1/100. A commonly utilized standard waveguide diameter for high-power transmission lines is D = 63.5 mm; with a wavelength of λ = 3 mm it follows that L ≤ 6.7 mm. For a technical realization, this is noncritical. However, rotation of such a rotary joint twists the polarization plane of a linearly polarized mode like HE11. To avoid this, the circularly polarized HE11 mode can be used for transmission: owing to the circular symmetry, no twist of the polarization plane occurs. To change the polarization of the HE11 mode, suitable broadband polarizers are required: a first polarizer converts the incident linearly polarized HE11 mode to a circularly polarized wave, and a second polarizer restores the linearly polarized HE11 mode after the transmission line gap. This provides a simple broadband rotary joint concept for high-power radar applications. In the following, the reachable cross polarization for such a rotary joint concept is investigated. The paper is organized as follows: Section 2 starts with a suitable polarizer design and discusses possible polarization errors. In Section 3, simple formulas to calculate the cross polarization of the proposed rotary joint concept are derived. Section 4 addresses the issue of mode conversion due to diffraction. Section 5, the conclusions, closes with a summary of the considered aspects and the proposed rotary joint concept.
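A quick numerical check of the gap-length criterion above is straightforward; the sketch below evaluates L ≤ 2a²/(100λ) for the quoted waveguide diameter and wavelength.

```python
# Maximum rotary-joint gap length from the negligible-loss criterion
# L * lambda / (2 * a**2) <= 1/100 (with D = 2a), as given in the text.
wavelength_mm = 3.0      # free-space wavelength at ~100 GHz
diameter_mm = 63.5       # standard high-power waveguide diameter
a_mm = diameter_mm / 2.0

L_max_mm = 2.0 * a_mm**2 / (100.0 * wavelength_mm)
print(f"L_max = {L_max_mm:.1f} mm")  # ~6.7 mm, matching the paper's estimate
```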
Polarizer
A linearly polarized wave can be converted to circular polarization by introducing a 90° phase shift between orthogonal field components [11]. For high-power microwave applications, reflection grids [12] are widely used: electrical field components parallel to the grid are reflected at the surface of the grid, while electrical field components perpendicular to the grid penetrate into the grid and are reflected at the bottom. The resulting time delay can be used for polarization tailoring. Figure 1 shows a rectangular reflection grid with grid height h, grid period p, and grid width a. In the ideal case, an exact phase shift of 90° and an amplitude balance of orthogonal field components of 0 dB can be achieved within the considered frequency range. At a real polarizer, a phase error Δφ and an amplitude error ΔA occur; in general, a frequency-dependent elliptical polarization results (Eq. (2)). In the ideal case (purely circular polarization), Δφ = 0 and ΔA = 1 apply.

The design of a suitable polarization grid for high-power W-band radar applications is discussed in detail in [13]: good broadband frequency behavior can be achieved for a grid width a = 0.25 mm, a grid period p = 1 mm, and a grid height h = 0.48 mm. To avoid critical field strengths and electrical arc breakdown, curvature radii at the upper end of the grid are used (R = 0.1 mm [13]). Figure 2 shows the corresponding phase error Δφ (in blue, solid) and amplitude error ΔA (in brown, solid) for the grid parameters introduced above, within the frequency range from 90 GHz to 100 GHz [13]. Note that the amplitude error ΔA is scaled in decibels. Figure 2 shows that the phase error can be limited to Δφ ≤ ±5.8°. The small variation of the amplitude error ΔA is a result of numerical uncertainties [13] and can be neglected. For comparison, results of a less suitable parameter combination are also shown (a = 1.06 mm, p = 1.75 mm, and h = 0.42 mm, dashed curves [13]); the polarization errors are much larger in this case.

Cross Polarization
In the following, the cross polarization of the proposed rotary joint concept is derived. For simplification, a plane wave approximation is used; for a waveguide diameter D ≥ 12λ, the results of a plane wave approximation can be transferred to a HE11 wave [12]. (Figure 2 caption: Phase error Δφ and amplitude error ΔA of a rectangular phase grid with suitable (solid curves) and less suitable (dashed curves) parameter combinations [13]; the grid width a, grid period p, and grid height h are defined in Fig. 1.) Two identical polarizers are used. Figure 3 shows a corresponding CAD (computer-aided design) model, which will be used for future full-wave simulations.

Principle Approach
As input, an ideal 45° linearly polarized plane wave with unit amplitude is assumed. The propagation direction shall be in the positive z-direction (E_z = 0). Therefore, the other two field components are given by

E_x = (1/√2) sin(ωt − kz),  E_y = (1/√2) sin(ωt − kz).   (3)

At the first polarizer, the E_x component is delayed by φ_x = π/2 + Δφ, where the present phase error is accounted for by Δφ. As already mentioned, the amplitude error ΔA can be neglected. With sin(x + π/2) = cos(x) it follows that

E_x = (1/√2) cos(ωt − kz + Δφ),  E_y = (1/√2) sin(ωt − kz).   (4)

A rotation by ϕ of the rotary joint can be described by the rotation matrix

R(ϕ) = [cos ϕ, −sin ϕ; sin ϕ, cos ϕ].   (5)

Applying Eq. (5) to Eq. (4) yields the rotated field components of Eq. (6).
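The full two-polarizer chain can be sketched numerically with Jones calculus. The snippet below applies the first delay, the joint rotation, and the second delay (completed in the text that follows) to a 45° linear input, then splits the output into the two 45° planes as in Eq. (8) and takes the power ratio. The sign and ordering conventions here are assumptions of this sketch rather than the paper's exact formulation; with the worst-case phase error Δφ = 5.8° it yields a cross polarization of roughly −20 dB at unfavorable rotation angles, consistent with the stated worst case.

```python
import numpy as np

def cross_pol_dB(phi_deg, dphi_deg):
    """Jones-calculus sketch: delay E_x, rotate by phi, delay E_y, then split
    the output into the two 45-degree planes and take the power ratio."""
    phi, dphi = np.radians(phi_deg), np.radians(dphi_deg)
    E = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)     # 45-deg linear input
    P1 = np.diag([np.exp(-1j * (np.pi / 2 + dphi)), 1.0])    # first polarizer
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])              # joint rotation
    P2 = np.diag([1.0, np.exp(-1j * (np.pi / 2 + dphi))])    # second polarizer
    E_out = P2 @ R @ P1 @ E
    c = 1.0 / np.sqrt(2)
    co, cross = np.array([[c, c], [-c, c]]) @ E_out          # 45-deg basis split
    return 10.0 * np.log10(abs(cross) ** 2 / abs(co) ** 2)

for phi in (30, 60, 90, 120, 150):
    print(phi, round(cross_pol_dB(phi, 5.8), 1))   # worst case near phi = 90 deg
```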
At the second polarizer, the E′_y component is delayed by φ_y = π/2 + Δφ. With sin(x + π/2) = cos(x) and cos(x + π/2) = −sin(x), Eq. (7) follows. The cross polarization is defined as the ratio of the time-averaged signal power in the desired polarization plane to the time-averaged signal power in the orthogonal polarization plane. Equation (7) describes a 45° linearly polarized wave. To separate the orthogonal polarization planes, an appropriate rotation matrix is used, which rotates the coordinate system by 45° (Eq. (8)). The cross polarization then follows as Eq. (9), with the inner product ⟨x, y⟩ defined as in [14]. Using a small-angle approximation, the introduced equations lead to the cross polarization of Eq. (12).

Figure 4 shows the numerically calculated cross polarization, using Eqs. (7), (8), and (9) without the small-angle approximation. The polarization errors introduced in Section 2 are used (see Fig. 2, solid lines); the amplitude error ΔA is neglected. Figure 4 shows that only for ϕ = 180° is the cross polarization frequency independent. This is related to the constructive/destructive superposition of the individual polarization errors of the first and second polarizers. However, the cross polarization is always below −20 dB, which corresponds to a suitable value for radar applications [15]. Furthermore, the worst-case value of Xpol ≈ −20 dB occurs only at the frequency band edges at 90 GHz and 100 GHz; within the frequency range, the cross polarization is even better. Comparison of the results shown in Fig. 4 with the approximation derived in Eq. (12) shows good agreement. The rotary joint concept will be experimentally proven later, when the whole transmission line is manufactured.

Mode Conversion
In the utilized polarizer arrangement, mode conversion due to diffraction occurs [10, 16]. The amount of mode conversion loss of the HE11 mode can be estimated with Eq. (1) and L = D [10]. For radar applications, the exact mode content is also important: spurious modes could impair the cross polarization and the sidelobe level of the radar antenna. For a high-performance radar sensor, these effects have to be taken into account. To calculate the mode conversion at a miter bend, the gap theory is commonly employed [10, 16, 17]; the miter bend is represented as a gap with gap length L = D. To determine the electric/magnetic field components at the output plane, the field components radiated from the input plane are calculated at the location of the output plane. The field components at position r⃗, radiated from an arbitrary aperture A, can be calculated for kR ≫ 1 by Eq. (12) [18], with k = 2π/λ, Z = 120π Ω, and the equivalent electric/magnetic current densities defined in [11, 18]. The vector r⃗₀ represents a point at the radiating aperture A, E⃗₀/H⃗₀ the electric/magnetic field components at the aperture, and n⃗ the normal vector orthogonal to the aperture. The assumption kR ≫ 1 implies that the distance R = |r⃗₀ − r⃗| is larger than ≈ 2λ [18]. This does not imply that Eq. (12) is limited to the far-field condition R > 2D²/λ [11], with D as the largest aperture dimension. In [10, 17], analytical approaches are used to calculate the HE11-mode power loss with good results; however, simplifications are used that impair the accuracy for high-order modes [10]. For radar applications, the HE11-mode power loss and the exact mode content are important. Therefore, Eq. (12) is solved numerically. With the MATLAB Parallel Computing Toolbox, the required computing time can still be limited to a few minutes.
To determine the mode content at the output plane, the electric field components are interpreted as the superposition of the normal modes within the corrugated waveguide [19], with ϑ_n^± as the complex mode amplitude in the forward/backward direction and e⃗_n as the electric field components of the nth normal mode (Eq. (15)). Equation (15) is multiplied by the conjugated transverse electric field components e⃗_⊥m of the mth normal mode and then integrated over the waveguide cross section (Eq. (16)). By the power orthogonality condition [19, 20], each summand with n ≠ m vanishes, which yields Eq. (17). With ϑ_m^− = 0, just the transverse components of the electric field are required; this can be used to reduce the computational effort needed to calculate the field components at the output plane. Due to the circular symmetry, only HE1n modes have to be considered for the mode decomposition [10, 17]. As discussed in [10, 16], the gap theory neglects mode conversion to high-order modes close to cut-off. However, due to their high ohmic wall losses, these modes are less critical for radar applications, since they will be damped and not radiated from the radar antenna.

Figures 5 and 6 show simulation results for the mode conversion to parasitic low-order modes as a function of ka = 2π·a/λ. In Fig. 5, the corresponding mode power is plotted, and in Fig. 6 the phase relation relative to the HE11 mode. As predicted by theory [10, 16], mode conversion decreases with increasing ka. Both figures show that an appropriate choice of the waveguide diameter is essential. At f = 95 GHz, a suitable waveguide diameter is D = 63.5 mm (ka ≈ 63); mode conversion due to diffraction is less critical here. In principle, the diffraction losses can be further reduced by choosing a larger diameter. However, above a certain value, mode conversion losses due to alignment tolerances (flange offsets and tilts) of the waveguide segments become dominant. Corresponding calculations are out of the scope of the present paper and will be addressed in an upcoming manuscript.

Conclusions
The present paper addresses a broadband rotary joint concept for high-power W-band radar applications, within the frequency range from 90 GHz to 100 GHz. A combination of two broadband polarizers and a simple transmission line gap is proposed as the rotary joint. Broadband frequency behavior and negligible power loss can be achieved. (Fig. 5 caption: Power of the parasitic low-order modes. Fig. 6 caption: Phase relation of the parasitic low-order modes, relative to the HE11 mode.) At the frequency band edges of 90 GHz and 100 GHz, a worst-case cross polarization of Xpol ≈ −20 dB occurs, which corresponds to a suitable value for radar applications; within the whole considered frequency band, the cross polarization is even lower. For a high-performance radar sensor, spurious modes may be critical, as they may impair the cross polarization and the sidelobe level of the radar antenna. At the center frequency of 95 GHz, an appropriate waveguide diameter is D = 63.5 mm, for which mode conversion due to diffraction is less critical.
Crevasse Propagation on Brittle Ice: Application to Cycloids on Europa

Existing lineaments on the surface of the Jovian moon Europa are thought to be the result of ongoing brittle crack formation in the elastic regime. Arcuate features are called cycloids and can be modeled using linear elastic fracture mechanics. Here, we build on existing terrestrial models of rift propagation and extend them to cycloids on the moon. We propose that these cracks tend to grow as a series of nearly instantaneous events, spaced by periods of inactivity. The behavior is similar to what is observed on Antarctic ice shelves, where rifts can remain dormant for years. We argue that dormant periods between growth events could explain the presence of cycloids on Europa even without invoking secular motion of the crust. Furthermore, being able to model propagation events and their timing should help future missions exploring the moon.

Introduction
Observations from past planetary missions, in particular the Galileo spacecraft, revealed lineaments and fractures across Europa's surface. Various fracture models and formation scenarios have been proposed, although available data are not sufficient to fully understand how the surface of the moon deforms (e.g., Rhoden et al., 2015). However, one of the main requirements for the formation of these types of cracks is the existence of liquid water underneath the icy crust (Carr et al., 1998; Pappalardo et al., 1999). Radio Doppler tracking and magnetic measurements performed by Galileo revealed a water ocean below the global shell (Anderson et al., 1998; Khurana et al., 1998; Kivelson et al., 2000). Although stratification scales are still poorly constrained, the ocean should be around 100 km deep (e.g., Sohl et al., 2002), while the thickness of the frozen ice shell ranges from a few to more than 30 km (e.g., Billings & Kattenhorn, 2005). Tidal stress produced by gravitational interaction with Jupiter is the main force driving cold ice on Europa to fracture and form these crevasses (e.g., Greenberg et al., 1998; Hoppa et al., 1999a, 1999b).

Some of the lineaments are characterized by arcuate and cuspate segments of around 100 km in length. These features are called cycloids, and their orientation can be explained by formation due to tensile cracking (Hoppa et al., 1999b, 2001). In the model of Hoppa et al. (1999b), cycloidal arcs form in response to the constantly changing tidal stress magnitude and orientation along Europa's orbit. The stress eventually reaches the ice strength and allows the crack to evolve and follow curved patterns. The rate of change of the tidal stress can be used to derive an apparent growth rate of cycloids of roughly 3 km/hr. By constraining the growth rate, Hoppa et al. (1999b) assumed that a cycloidal arc is formed during a single orbit of the moon around Jupiter. However, low temperatures at the surface of Europa (about 100 K; Spencer et al., 1999) and high strain rates (due to the constantly rotating tidal stress) also imply that the ice deforms elastically (Rist & Murrel, 1994). Therefore, cracks in brittle ice should propagate at high fractions of the speed of sound (Lee et al., 2003). The difference in magnitude between the two speeds can be explained by the fact that cracks in ice generally evolve at nearly instantaneous rates through several discrete events, similarly to what happens on terrestrial ice shelves.
Indeed, on Earth, ice shelf rifts form through episodic cracking events that follow periods of inactivity, when the crack is dormant (Banwell et al., 2013). As a consequence, the apparent growth rate, as measured for example by remote sensing, is substantially slower than the actual propagation rate. Marshall and Kattenhorn (2005) improved on the cycloid growth model of Hoppa et al. (1999b) by describing the cusp formation in response to resolved shear stress at the cusp itself, which results from tail-cracking processes. Hurford et al. (2007) further updated the cycloid model with the introduction of a secular stress contribution and the variation of material parameters along the cycloid. This allowed improved fits of existing cycloids by introducing a shift in longitude due to secular rotation of the shell (e.g., due to nonsynchronous rotation). Furthermore, Rhoden et al. (2010) implemented a parameter-searching algorithm that is capable of quantitatively inferring rotational parameters of Europa by matching cycloid shapes with observations of the Galileo mission. The latter model suggests that diurnal stress due to nonzero eccentricity, obliquity, and physical libration represents the tidal model that best reproduces the shape of observed cycloids. In summary, cycloids have been extensively studied, although their propagation rate has not been sufficiently described in past models. We contribute to this question by introducing a fracture mechanics approach that is capable of modeling episodic and nearly instantaneous cracking events.

On Earth, the wide availability of in situ data and remote sensing observations can be used as a reference to better understand how ice sheets and crevasses evolve. Among numerical models of ice fractures, one of the most straightforward approaches is linear elastic fracture mechanics (LEFM; Tada et al., 2000). This type of model approximates ice as an elastic medium, allowing for discontinuities in the structure. For example, Van der Veen (1998a, 1998b) adopts LEFM to calculate equilibrium lengths of surface and bottom crevasses on terrestrial glaciers. Other models that compute horizontal propagation of ice rifts in Antarctica use techniques that can be derived from LEFM theory (Hulbe et al., 2010; Larour et al., 2004b). In addition, LEFM models of crevasse propagation have been tested and validated against a series of observational campaigns in Iceland (Mottram & Benn, 2009). The LEFM approach has previously been applied to Europa with a focus on the potential of cracks to penetrate through the entire ice crust, offering a direct connection between the surface and the deep ocean (Crawford & Stevenson, 1988; Lee et al., 2005; Qin et al., 2007; Rudolf & Manga, 2009). The LEFM methodology could also provide insights into the directions and angles of tail-crack formation at the surface (Marshall & Kattenhorn, 2005). However, no previous study has dealt with horizontal propagation of cycloids on Europa using terrestrial-based models of ice rifts. Although knowledge of surface deformation on Europa is limited, fracture mechanics tools and terrestrial analogs provide first-order estimates, which could be validated by future missions. In the current work, we follow the LEFM approach of Larour et al. (2004b) to compute crevasse propagation rates of cracks in brittle ice.
As a starting point, we fix orbital and rheological parameters, and we use elastic fracture mechanics to model the episodic propagation events. Unlike past studies, which assumed that a cycloidal arc is formed continuously, our model can calculate the growth rate of cycloids by including dormant periods between propagation events. Periods of inactivity between single events depend on the local stress-strain rate state and the length of the existing crack. After presenting the LEFM methodology and the results of our model, we discuss the implications that these new findings bring to our knowledge of Europa and to its future exploration.

Methods
Surface stresses on Europa can be calculated through the computation of the tidal potential for internally differentiated spherical bodies (Harada & Kurita, 2006; Jara-Orué & Vermeersen, 2011; Wahr et al., 2009). Tidal stresses on the ice layer change in magnitude and orientation as a function of Europa's orbital position around Jupiter. These stress fluctuations are thought to drive crevasse propagation on the moon, which can be modeled through LEFM. On Europa, the diurnal components of the tidal potential (varying over one orbital period of the moon, 3.55 days) are due to nonzero eccentricity and nonzero obliquity. Among other models, Jara-Orué and Vermeersen (2011) compute global deformation tensors from the application of Laplace transform-based normal mode theory to tidal responses. We use the analytical formulas of the time-dependent tensor representing surface stresses provided by Jara-Orué and Vermeersen (2011), for a nonzero eccentricity of 0.0094 (Wahr et al., 2009) and a nonzero obliquity of 0.1° (Bills, 2005). We neglect the contribution of relaxation modes, because the time scale of the simulation in the elastic regime is significantly smaller than the Maxwell time we adopt: 9,000 years. Further details of how the tidal stress is computed and of the interior model are provided in Supporting Information S1. In terms of rheology, the elastic crust is sufficiently represented by Young's modulus and Poisson's ratio; we use values of 9.29 GPa and 0.33, respectively, appropriate for water ice and consistent with previous studies (e.g., Hoppa et al., 1999a; Rhoden et al., 2010).

In fracture mechanics, three different cracking modes are possible, depending on the local mechanisms of deformation (Tada et al., 2000). In this study, we limit ourselves to Mode-I fracturing. This type of propagation assumes that the crack flanks move apart from one another under stresses normal to the fracture plane. The current model is based on the calculation of the Mode-I stress intensity factor (also known as K_I) at the tip of an opening ice crevasse. Crack length, geometry, and external loads contribute to the stress intensity factor buildup. Eventually, variations of these parameters lead the stress intensity to reach the failure threshold of the material, called the material toughness, and crack propagation follows. LEFM is usually adopted to calculate the equilibrium length of cracks when this threshold is reached. Examples of LEFM applications to glaciers on Earth are Hulbe et al. (2010) and Van der Veen (1998a, 1998b); these works use the technique to compute propagation of surface and bottom crevasses on terrestrial ice sheets loaded with longitudinal stress, overburden ice pressure, and the hydrostatic pressure of water-filled crevasses.
On Europa, the stress intensity factor of a water-free crevasse consists of two main components: the first caused by tensile stress due to tides and the second by overburden ice pressure. If we restrict the simulation to cycloids at the surface, we can neglect the contribution of overburden pressure. Therefore, the stress intensity factor at the tip of a crevasse can be written as (Tada et al., 2000)

K_I = F(l) σ_n √(πl),   (1)

where σ_n is the tensile stress at the tip of the crack and l the crack length. F is an analytical function related to a centered-crack test specimen and based on interpolation of stress intensity factor curves derived in laboratories (e.g., Tada et al., 2000; the formula is provided in Supporting Information S1). The tensile stress at the tip, together with the length of the already existing crack, determines whether K_I reaches the material toughness of ice, leading to crack propagation. If the toughness is reached and the crevasse grows in time, the increment in length l influences the stress intensity factor at the new tip; if the background stress is time dependent, the stress state at the new tip will also differ from the previous one. In the current model, we fix the material toughness at 10 kPa m^(1/2), appropriate for water ice (Rist et al., 2002; Van der Veen, 1998a), and we calculate K_I for advancing tips along existing cycloids. Following Larour et al. (2004b), we adopt the displacement formulation of the K_I factor to compute jump-arrest propagation rates.

Besides stress intensity factors, LEFM provides tools to calculate the opening widths of cracks. For example, in the centered-crack test specimen under stress control, the opening width w at the crack center can be written in terms of the tensile stress, the crack length l, the Young's modulus E of the material (here, ice), and a geometrical function V(l) similar to F in equation (1) (Tada et al., 2000; equation (2); formulas are included in Supporting Information S1). We then differentiate equation (2) in time, in order to relate the crevasse propagation rate to the strain rate and the width derivative ∂w/∂t, which represents the opening rate (equation (3)). This can be rearranged into equation (4), where ∂l/∂t is the crevasse propagation rate and ε̇ the strain rate. Equation (4) implies that the propagation rate at the crack tip is a function of stress, strain rate, crack length, and opening rate; knowledge of these four factors at the tip of the crevasse allows the crack propagation rate to be estimated. This type of fracture growth is referred to as stick-slip propagation with jump-arrest increments of the crack (Larour et al., 2004b). The process is characterized by extremely rapid propagation events that follow phases of inactivity, when conditions are unfavorable to further growth. This approach was able to estimate both the nearly instantaneous cracking events that characterize the propagation of rifts on the Ronne Ice Shelf in Antarctica and the apparently slower growth rate actually measured by remote sensing (Larour et al., 2004a, 2004b). There are different ways to measure the opening of a crack in ice; for example, Larour et al. (2004a, 2004b) measure opening widths and opening rates of Antarctic rifts with InSAR data. Since observational constraints on surface deformation of Europa are lacking, we approximate the opening width of cycloids with the displacement at the center of the crack derived from LEFM theory, that is, equation (2). Furthermore, we approximate the opening rate in terms of the local strain rate ε̇ (equation (5)).
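A minimal sketch of the toughness criterion may clarify the jump condition. It uses the generic center-crack form of equation (1) with F ≡ 1 as a leading-order stand-in for the interpolated laboratory function, together with the toughness value fixed in the text; the stress and length in the example are hypothetical.

```python
import numpy as np

ICE_TOUGHNESS = 10e3  # Pa m^0.5, the toughness value fixed in the text

def stress_intensity(sigma_n, length, F=lambda l: 1.0):
    """Mode-I stress intensity: K_I = F(l) * sigma_n * sqrt(pi * l).
    F = 1 stands in for the interpolated function of Tada et al. (2000)."""
    return F(length) * sigma_n * np.sqrt(np.pi * length)

def tip_propagates(sigma_n, length):
    """A jump-arrest event occurs once K_I reaches the material toughness."""
    return sigma_n > 0.0 and stress_intensity(sigma_n, length) >= ICE_TOUGHNESS

# Hypothetical tip state: 40 kPa of tidal tension on a 1 m starter crack.
print(tip_propagates(sigma_n=40e3, length=1.0))   # -> True
```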
Since the deformation is elastic, strain and stress on a plane are proportionally related by the compliance matrix, a function of the Young's modulus and Poisson's ratio (Timoshenko & Goodier, 1970; equation (6)), where the coordinates x and y refer to the zonal and meridional directions and E and ν are Young's modulus and Poisson's ratio. Since the elastic compliance matrix is not time dependent, stress rate and strain rate are related by the same proportionality as stress and strain. Thus, time differentiation of the stress tensor of Jara-Orué and Vermeersen (2011) is sufficient to compute the strain rate for any location at any orbital position of Europa. However, equations (2) and (5) are first-order approximations of the crack width and opening rate. In the current model, opening rates reach a magnitude of millimeters per year, while the opening width is calculated as a few meters, which is lower than the geological measurements of Galileo (Coulter et al., 2009; Kattenhorn & Hurford, 2009).

We develop a local approach that computes a series of stress intensity factors along an existing lineament, in order to calculate the growth rate of cycloids and the dormant periods between successive propagation events. To achieve this, we map cycloidal arcs and cusps at their exact locations in the QGIS Geographic Information System software as a series of nodes separated by equidistant segments within each arc. The nodes that discretize the cycloids are obtained by using the equidistant cylindrical sphere projection of the U.S. Geological Survey global mosaic of Europa obtained from Voyager and Galileo data (courtesy of the Astrogeology Science Center, U.S.G.S., 2002). After we obtain the latitude and longitude of each node through this shaping process, the orientation and length of each segment are calculated with standard spherical geometry formulas (Wertz, 2001). We then compute the deformation at each node of the mapped features by rotating the stress tensors from the global geographic coordinate system (Jara-Orué & Vermeersen, 2011) to local referential components, using Mohr's circle (Timoshenko & Goodier, 1970):

σ_n = σ_x cos²θ + σ_y sin²θ + 2σ_xy sin θ cos θ,   (7)
σ_t = (σ_y − σ_x) sin θ cos θ + σ_xy (cos²θ − sin²θ),   (8)

where σ_n and σ_t are the tensile and tangential stresses and σ_x, σ_y, and σ_xy are the zonal, meridional, and shear stress components, obtained from Jara-Orué and Vermeersen (2011). The angle θ represents the orientation of each segment, which can be computed by spherical geometry routines from the coordinates of the nodes (Wertz, 2001). Figure 1a shows one of the observed equatorial cycloids, with the rotation of the stress tensor from geographical to local directions at a single node sketched in the inset. In the figure, the rotation angle characterizes the orientation of the different segments; the same angle is used in equations (7) and (8). By fixing the direction of propagation and the initial coordinates of the first node, the model computes stress intensity factors for advancing nodes along observed crevasses. Stick-slip propagation rates with jump-arrest increments of the crack are computed when the ice toughness is reached at the advancing node; in that case, the crack grows in length along the prescribed segment. Figure 1b shows a sketch of the model setup. The model calculates the intensity factor, opening width, and propagation rate at N−1 advancing nodes, which are related to equidistant segments. While the background stress is constant along a single segment, different segments (at different locations) are subjected to different stress states.
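The Mohr's-circle rotation of equations (7)-(8) is easy to sketch numerically; the stress values below are hypothetical.

```python
import numpy as np

def local_stress(sig_x, sig_y, sig_xy, theta):
    """Rotate zonal/meridional stresses to a segment's normal/tangential frame
    (Mohr's circle, Eqs. (7)-(8)); theta is the segment orientation in radians."""
    c, s = np.cos(theta), np.sin(theta)
    sig_n = sig_x * c**2 + sig_y * s**2 + 2.0 * sig_xy * s * c
    sig_t = (sig_y - sig_x) * s * c + sig_xy * (c**2 - s**2)
    return sig_n, sig_t

# Hypothetical tidal stress state (Pa) at one node, segment oriented at 30 deg.
print(local_stress(40e3, -10e3, 5e3, np.radians(30.0)))
```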
To determine whether the segments adjacent to the tip contribute to the length of the open crack, our model checks whether they are loaded in tension; if so, we assume that the segments behind the advancing node do contribute to the crevasse length. This process preserves the variation of stress across different locations on Europa and the different orientations of the segments. The model repeats the analysis until either the entire shape of the observed cycloid is reproduced or a node no longer experiences conditions that allow further propagation.

Results and Discussion
The cycloids observed on Europa represent an ideal case for the application of our fracture model to brittle ice. The extremely cold temperature at the surface and the consistency of the cycloid orientations with tensile stress justify our assumptions of elasticity and opening fractures. The new model is capable of reproducing the prescribed shapes of four cycloids observed on the surface of the moon: two features at low latitudes, which we name EQ1 and EQ2, and Delphi and Cilicia flexi at the south pole. EQ2, Delphi, and Cilicia are also reproduced in Rhoden et al. (2010). We discretize the four cycloids with 10,000 nodes and equidistant segments on the order of meters. Table 1 summarizes the results of our LEFM computations in terms of the maximum propagation properties of the four features. As already mentioned, our model reproduces a single cycloid through bursts of propagation events, happening at each node. In the table, V indicates the maximum propagation rate at which the cycloid evolves, according to the model; this speed represents the maximum rate of a single propagation event. ∂w/∂t is the maximum opening rate and w the maximum crack width. The table includes the time required for the complete evolution of the observed cycloid and the time of inactivity. Finally, the growth rate is calculated by dividing the total length of the cycloid by the time required to complete the feature.

The growth rate is orders of magnitude smaller than the propagation rate that characterizes the bursts of activity: propagation rates reach values of hundreds of meters per second, while the apparent growth rate is of tens of meters per day. This is because certain nodes require time for the stress field to rotate in order to reach the material toughness and allow propagation. For example, Figure 2 shows a detail of the model acting on the cusp. Extreme variations in the orientation of the segments across the cusp affect the length of the open crack and the orientation of the normal stress; this means that at the cusp the crack remains dormant, and propagation is initiated again once the stress intensity factor reaches the material toughness. Given the observed paths, we are able to reproduce the entire geometry of EQ2, Delphi, and Cilicia. The propagation of EQ1 arrests at node 4759, where the stress field prescribed by the current tidal setup does not permit further propagation. Our model predicts that the cycloids in the southern hemisphere behave similarly: there, the bursts of activity reach very high propagation rates with relatively slow growth rates. On the other hand, EQ2 grows faster but with slower propagation events. This discrepancy is mainly due to the difference in strain rate at the two locations on the surface, which is strictly prescribed by the stress forcing adopted in the model.
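A toy stick-slip loop illustrates why the apparent growth rate ends up orders of magnitude below the burst speed: almost all of the elapsed time is dormancy while the tidal stress field rotates. All numbers below (segment length, burst speed, dormancy draw) are illustrative stand-ins, not the model's actual outputs.

```python
import numpy as np

ORBIT_S = 3.55 * 86400.0     # Europa's orbital period in seconds
BURST_RATE = 300.0           # m/s, single-event propagation (hundreds of m/s)
SEGMENT_M = 100.0            # hypothetical node spacing
N_NODES = 1000               # hypothetical discretization (~100 km feature)

rng = np.random.default_rng(1)
t = 0.0
for _ in range(N_NODES):
    t += rng.uniform(0.0, 0.5) * ORBIT_S    # dormancy: wait for stress to realign
    t += SEGMENT_M / BURST_RATE             # near-instantaneous jump

apparent = N_NODES * SEGMENT_M / (t / 86400.0)
print(f"burst: {BURST_RATE * 86400:.0f} m/day, apparent: {apparent:.0f} m/day")
```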
As already mentioned, the same behavior that our model predicts for cycloids on Europa is observed in rifts in Antarctica. There, cracks grow at an apparent rate that is slower than the episodic propagation events (e.g., Banwell et al., 2017). On Earth, rifts can remain dormant for years and generally grow at rates on the order of meters per day (Walker et al., 2013; Walker & Gardner, 2019). Remote sensing observations suggest that rifts on terrestrial ice shelves experience variations in propagation rate that often have a strong seasonal dependence (Walker et al., 2015). Furthermore, several works propose that the mixture of iceberg fragments, sea ice, and snow (often called ice melange) that fills the crack can have a substantial influence on rift propagation (Khazendar et al., 2009; Larour et al., 2014). The filling of crevasses on Europa should also play a substantial role in determining how cracks grow. Unlike on Earth, however, key observations of crack behavior and physical characterizations of rifts and their filling are lacking for Europa. Future exploration missions will observe the moon in order to address prominent questions about the nature of the surface and interior and the prospect for material exchange between the two. For example, the National Aeronautics and Space Administration's Europa Clipper (Phillips & Pappalardo, 2014) is designed to perform about 45 flybys over the nominal mission lifetime of 3.5 years. This means roughly one flyby every month, with only a few opportunities for overlapping observations. The first science campaign of Europa Clipper will cover the anti-Jovian hemisphere with clusters of data overlaps during two subphases of about 6 months each (Lam et al., 2018); therefore, overlapping observations of the same areas should be less than 6 months apart. We estimate that crevasses evolve over an average time frame of around hundreds of days. This period is roughly similar to the total observation period of a single hemisphere of the moon, designed to last less than 1 year. The average opening width would be a few meters (value in Table 1). Therefore, we conclude that Europa Clipper should be able to detect the partial development of crevasses, provided that the imaging resolution of the spacecraft is of meters per pixel.

Unlike previous studies of cycloids, we model dormant periods between growth events. As a consequence, our model estimates that cycloids on Europa grow at rates 1 to 2 orders of magnitude lower than past studies (e.g., Hurford et al., 2007; Rhoden et al., 2010). By introducing periods of inactivity, the model can calculate the propagation rate at the exact location. This is a substantial difference from past models, which require a longitudinal shift in order to fit some cycloids, including Delphi and Cilicia (Hurford et al., 2007; Rhoden et al., 2010); this shift was explained by secular rotation of the crust, such as nonsynchronous rotation. Our model suggests that dormant periods between growth events could improve fits of lineaments at their exact locations on the moon. Finally, it could also alleviate the need for secular motion of the crust to explain the presence of cycloids.

Conclusions
Our model extends LEFM techniques to estimate horizontal propagation scenarios by using the observed shapes of cycloids on Europa.
We find that crevasses evolve in a relatively short amount of time through a series of nearly instantaneous fracturing events (on the order of hundreds of meters per second), similar to what is observed in rifts in Antarctica. We introduce the simulation of periods of inactivity of these cracks, a capability not considered by previous fracture models, which consequently calculate much faster growth rates. Besides improving fits of existing cycloids, the introduction of this critical feature could have important implications for our understanding of the rotation state of Europa: we argue that it might help explain the presence of cycloids even without invoking secular motion of the crust. Furthermore, we suggest that the Europa Clipper spacecraft should be able to detect propagation events, provided that the flyby repetition time over the same area is a few months. Our findings indicate that cracks should evolve over hundreds of days, which is roughly similar to the timescale over which the most recent trajectory baseline of the spacecraft covers the same area (less than 1 year). In addition, we estimate that the opening width of crevasses measures a few meters; hence, we suggest that an imaging resolution of meters per pixel would be required to capture the scale of these events.
Metric fluctuations and their evolution during inflation

We discuss the evolution of the fluctuations in a symmetric $\phi_c$-exponential potential which provides a power-law expansion during inflation, using both the gauge-invariant field $\Phi$ and the Sasaki-Mukhanov field.

I. INTRODUCTION AND MOTIVATION
It is now widely accepted that the dominant cause of structure in the Universe is a spatial perturbation. This perturbation is present on cosmological scales a few Hubble times before these scales enter the horizon, at which stage it is time-independent with an almost flat spectrum. One of the main objectives of theoretical cosmology is to understand its origin [1]. The usual assumption is that the curvature perturbation originates during inflation from the slowly rolling inflaton field. As cosmological scales leave the horizon, the quantum fluctuation is converted into a classical Gaussian perturbation with an almost flat spectrum, immediately generating the required curvature perturbation, which is constant until the approach of horizon entry [2]. This idea has the advantage that the prediction for the spectrum is independent of what goes on between the end of inflation and horizon entry. The spectrum depends only on the form of the potential and on the theory of gravity during inflation, providing, therefore, a direct probe of conditions during this era.

Stochastic inflation has played an important role in inflationary cosmology in the last two decades. It proposes to describe the dynamics of this quantum field on the basis of two pieces: the homogeneous and inhomogeneous components [3-10]. Usually the homogeneous one is interpreted as a classical field φ_c(t) that arises from the vacuum expectation value of the quantum field; the inhomogeneous component φ(x⃗, t) comprises the quantum fluctuations. The field that takes into account only the modes with wavelengths larger than the now observable universe is called the coarse-grained field, and its dynamics is described by a second-order stochastic equation [10, 11]. Since these perturbations are classical on super-Hubble scales, in this sector one can make a standard stochastic treatment of the coarse-grained matter field [10]. The IR sector is very important because spatial inhomogeneities on super-Hubble inflationary scales would explain the present-day observed matter structure in the universe.

In this work we consider gauge-invariant fluctuations of the metric in the early inflationary universe [2]. Metric fluctuations are here considered in the framework of linear perturbative corrections. The scalar metric perturbations are spin-zero projections of the graviton, which only exist in nonvacuum cosmologies. The issue of gauge invariance becomes critical when we attempt to analyze how the scalar metric perturbations produced in the early universe influence a background globally flat, isotropic, and homogeneous universe. This allows us to formulate the problem of the amplitude of the scalar metric perturbations on the evolution of the background Friedmann-Robertson-Walker (FRW) universe in a coordinate-independent manner at every moment in time. On the other hand, the Sasaki-Mukhanov (SM) field takes into account both metric and inflaton fluctuations [12]. One of the aims of this work is to study the evolution of the SM field during inflation and to compare it with the gauge-invariant metric fluctuations.

II. FLUCTUATIONS
Matter field fluctuations are responsible for metric fluctuations around the background FRW metric.
When these metric fluctuations do not depend on the gauge, the perturbed globally flat, isotropic and homogeneous universe is described by [2] where a is the scale factor of the universe and (ψ, Φ) are the gauge-invariant perturbations of the metric. In the particular case where the tensor T_{αβ} is diagonal, one obtains Φ = ψ [2]. We consider a semiclassical expansion for the inflaton field ϕ(x, t) = φ_c(t) + φ(x, t) [10], with expectation values ⟨0|ϕ|0⟩ = φ_c(t) and ⟨0|φ|0⟩ = 0. Here, |0⟩ is the vacuum state. Since ⟨0|Φ|0⟩ = 0, the expectation value of the metric (1) gives the background metric that describes a flat FRW spacetime. After linearizing the Einstein equations in terms of φ and Φ, one obtains the system below, where β = 0, 1, 2, 3, a is the scale factor of the universe and the prime denotes the derivative with respect to φ_c. The dynamics of φ_c is given by the corresponding equations, and H = ȧ/a is the Hubble parameter. Furthermore, the scalar potential can be written in terms of the Hubble parameter. Equation (2) can be simplified by introducing a new field, which can be expanded in terms of modes, where α_k and α†_k are the annihilation and creation operators that comply with the usual commutation relations. The equation for the modes Q_k follows, where ω²_k = a⁻²(k² − k₀²) is the squared time-dependent frequency and k₀, which separates the infrared and ultraviolet sectors, is given accordingly. Since the field Q satisfies a Klein-Gordon-like equation on a FRW background metric ds² = dt² − a² dx², it also satisfies the corresponding commutation relationship. This implies that the modes Q_k are renormalized by the associated Wronskian expression. A. Particular solutions If the inflaton field oscillates around the minimum of the potential at the end of inflation, the particular solutions when φ̇_c = 0 and when φ_c = 0 are very important. At the points where φ̇_c = 0 we obtain Q_k = 0. However, the solutions for Φ_k are nonzero, where φ⁰_k is the initial amplitude of Φ_k for each wavenumber k. This means that the amplitude of each mode decreases exponentially with time. Another interesting particular solution is located at the points where φ_c = 0, when the field is at the minimum of the potential. At these points equation (11) adopts the form given below, where Φ_k = a^{−1/2} Q_k. B. The Sasaki-Mukhanov field Both metric and inflaton fluctuations can be studied by means of the SM field [12]: S = φ + (φ̇_c/H) Φ. The modes of this field obey the following equation, where the modes S_k comply with the renormalization condition. C. Power spectrum One can estimate the power spectrum of the fluctuations for the fields Φ and S. The spectrum of the fluctuations for Φ is given below, whilst the power spectrum for the SM field is the same as that of the inflaton field. It is well known from experimental data [13] that the universe has a scale-invariant power spectrum on cosmological scales. III. AN EXAMPLE: SYMMETRIC EXPONENTIAL φ_c-POTENTIAL: POWER-LAW INFLATION As a first example we consider a scalar potential given by V(φ_c) = V₀ e^{2α|φ_c|}, where α² = 4π/(M_p² p) gives the relationship between α and the power of the expansion p. This potential is related to a scale factor that evolves as a ∼ t^p (with constant power p), which corresponds to a Hubble parameter H(t) = p/t; this can be written in terms of the scalar field, and H_e = p/t_e is the value of the Hubble parameter at the end of inflation. The temporal evolution of |φ_c(t)| is given accordingly, where t ≥ t₀.
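For reference, integrating the relation φ̇_c = −sgn(φ_c)/(αt) quoted in the next paragraph gives one consistent form of this temporal evolution (the integration constant |φ_c(t₀)| is an assumption of this sketch):

\[
|\varphi_c(t)| \;=\; |\varphi_c(t_0)| \;-\; \frac{1}{\alpha}\,\ln\!\left(\frac{t}{t_0}\right), \qquad t \ge t_0 .
\]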
Since φ̇_c = −sgn(φ_c)/(αt) and φ̈_c = sgn(φ_c)/(αt²) (where sgn(φ_c) = ±1 for φ_c positive and negative, respectively), the equation that describes the evolution of Φ follows. After making the transformation Q = Φ e^{∫(p+2)t^{−1} dt}, we obtain the differential equation for Q. The general solution for the modes Q_k(t) is written in terms of (C₁, C₂), which are constants, and (H^{(1)}, H^{(2)}), the Hankel functions of the first and second kind, with x(t) = k t₀^p/[a₀(p−1)t^{p−1}] and ν₁ = (p+1)/[2(p−1)]. Using the renormalization condition Q̇*_k Q_k − Q̇_k Q*_k = i, we obtain the Bunch-Davies vacuum [14]. In the UV sector the function H^{(2)}_{ν₁}[x] adopts its asymptotic expression (i.e., for x ≫ 1), whilst in the IR sector (i.e., for x ≪ 1) it tends asymptotically to its small-argument form. The Φ-squared field fluctuations in the IR sector are ⟨Φ²⟩_IR = (1/2π²) ∫₀^{εk₀(t)} dk k² |Φ_k|², where ε = k^{(IR)}_max/k_p ≪ 1 is a dimensionless constant, k^{(IR)}_max = k₀(t*) at the moment t* of horizon entry, and k_p is the Planckian wavenumber (i.e., the scale we choose as a cut-off of the whole spectrum). The power spectrum in the IR sector is P_Φ|_IR ∼ k^{3−2ν₁}. Note that ⟨Φ²⟩_IR increases for p > 2, so that for the IR squared Φ-fluctuations to remain almost constant on cosmological scales we need p ≃ 2. We find that a power close to p = 2 gives us a scale-invariant power spectrum (i.e., with ν₁ ≃ 3/2) for ⟨Φ²⟩_IR. Furthermore, density fluctuations of the matter energy density are given by δρ/ρ = −2Φ, so that ⟨δρ²⟩^{1/2}/ρ ∼ ⟨Φ²⟩^{1/2}. On the other hand, in the UV sector these fluctuations are given by the analogous expression. IV. FINAL COMMENTS In this paper we have studied the evolution of the fluctuations in a symmetric φ_c-exponential potential which provides a power-law expansion, using both the gauge-invariant field Φ and the Sasaki-Mukhanov field. The latter takes into account simultaneously the inflaton and metric fluctuations. The results obtained for the evolution of ⟨Φ²⟩ and ⟨S²⟩ are different in the two treatments. The reason can be explained by the fact that the field S = φ + (φ̇_c/H)Φ is not gauge-invariant and hence does not correctly describe the fluctuations of φ and Φ. The fluctuations are well described by the field Φ, which is gauge-invariant and predicts a scale-invariant power spectrum in the IR sector for p → 2. Note that we have not considered back-reaction effects, which are related to second-order metric tensor fluctuations. This topic was considered by Abramo and Nambu, who investigated a renormalization-group method for an inflationary universe [15,16]. A different approach to describing the metric fluctuations was considered more recently by Lyth and Wands [17] (see also [18]), who suggested that the curvature perturbation could be generated by a light scalar field named the curvaton.
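Since the scale-invariance condition above reduces to simple arithmetic, a short numerical check may help; this sketch assumes only the quoted relations ν₁ = (p+1)/(2(p−1)) and P_Φ|_IR ∼ k^{3−2ν₁}:

```python
# Spectral slope of the IR power spectrum P_Phi ~ k^(3 - 2*nu1) for power-law
# inflation a ~ t^p, with nu1 = (p+1)/(2(p-1)) as quoted in the text.
def nu1(p):
    return (p + 1.0) / (2.0 * (p - 1.0))

for p in (1.5, 2.0, 3.0, 10.0):
    print(f"p = {p:5.1f}  nu1 = {nu1(p):.3f}  slope 3 - 2*nu1 = {3 - 2 * nu1(p):+.3f}")
# p = 2 gives nu1 = 3/2 and a flat (scale-invariant) spectrum, 3 - 2*nu1 = 0.
```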
2,454.2
2003-12-12T00:00:00.000
[ "Physics" ]
Bioenergy Production from Wastes by Microalgae as Sustainable Approach for Waste Management and to Reduce Resources Depletion The immense increase in world population has led to the search for alternative energy sources, and renewable sources have shown high potential for the production of biofuels. Bioenergy is a promising, sustainable source to combat the rising environmental, economic, and technological issues related to depleting fossil fuels. Algae are a more attractive feedstock for bioenergy than terrestrial crops and are well adapted to all ecosystems. Conversion of wastes to energy helps in resource conservation and environmental safety. Bioenergy production using wastewater, effluents, and food and agricultural wastes as growth substrates for microalgal cultivation, for both biomass and lipid production, is one such sustainable approach. This review presents the biomass and lipid production by microalgae cultivated on various wastes for effective waste management and to reduce the depletion of fossil fuel resources. Introduction Increasing global energy demand and depleting fossil fuel resources are a growing concern worldwide. Recovery of resources and energy from wastes has increased significantly in recent years as a means of reducing fossil fuel consumption and resource depletion. Conversion of wastes to energy helps in resource conservation and environmental safety on a sustainable basis. Production of bioenergy using sustainable sources has been studied extensively due to diminishing fossil fuel reserves. Organic wastes are ideal and inexpensive substrates for microbial oil production by oleaginous microorganisms. Microalgae are regarded as the most promising feedstocks for biofuel production due to their higher oil content, higher rate of photosynthesis, lack of direct competition for agricultural land, and easy cultivation. Use of organic wastes and wastewater as substrates for microalgae cultivation has the potential to produce microbial oils and to reduce nutrient concentrations in wastes. This review presents the biomass and lipid production by microalgae cultivated in various wastewaters, food wastes and agricultural wastes for effective waste management and to reduce the depletion of fossil fuel resources. Waste Water Wastewater contains large amounts of nutrients, and using microalgae to consume these nutrients in conjunction with wastewater treatment is economically sustainable. Wastewater could be utilized for large-scale microalgae-based biofuel production, substantially reducing the nutrient expenses and water resources required [1,2]. Wastewaters including municipal wastewater, domestic wastewater, urban wastewater and digested animal manure effluents have been used to cultivate microalgae for biomass and biofuel production. In a study by Wu involving industrial wastewater, Chlamydomonas sp. TAI-2 removed ammonium, nitrate and phosphorus, with a lipid content of 18.4% [3]. Good lipid accumulation (27.36% and 27.27%) was reported for Scenedesmus quadricauda SDEC-13 cultivated in wastewater [4]. FAME analysis showed palmitic acid as the predominant constituent, followed by oleic acid as the second major component. The results indicated that sewage water is a better substrate for biodiesel production than chemical medium. In another study, Scenedesmus acutus cultivated in municipal wastewater discharges resulted in biomass productivity and lipid accumulation of 79.9 mg L⁻¹ and 282 mg L⁻¹, respectively [5].
Investigations on Chlorella sorokiniana and Scenedesmus obliquus for nutrient removal and lipid production using sewage water were reported by Gupta [6]. S. obliquus demonstrated better lipid accumulation (23.26% w/w) than C. sorokiniana (22.74% w/w), thereby highlighting the importance of algal species selection for wastewater treatment and lipid production. Algae-based biodiesel production was integrated with wastewater treatment using Auxenochlorella protothecoides UMN280 [7]. The results of batch cultivation showed high biomass (269 mg L⁻¹ d⁻¹) and lipid (78 mg L⁻¹ d⁻¹) productivities. FAME analysis showed that the lipids were mainly composed of C16/C18 fatty acids, which are suitable for biodiesel production. An increase in triacylglycerol production by Scenedesmus sp. with increasing cultivation period using domestic wastewater was reported [8]. The lipid production increased from 32 mg L⁻¹ (21 days) to 148 mg L⁻¹ (45 days) in primary effluent. Chlorella sp. was shown to be capable of utilizing meat-processing wastewater, with a biomass yield of 0.675-1.538 g L⁻¹ [9] (Table 1). It was demonstrated that nutrients in anaerobically digested activated sludge effluent can be remediated through assimilation into algal biomass, with a biomass concentration of 2.43 g L⁻¹ and a lipid content of 29.76% [10]. Heterotrophic cultivation of C. protothecoides using digested chicken manure filtrate yielded a total lipid of 5.28 g L⁻¹ [11]. In another study, Chlorella sp. was cultivated in piggery wastewater in a 10-day batch culture, resulting in a specific growth rate of 0.839 d⁻¹ and a biomass productivity of 0.681 g L⁻¹ d⁻¹. The highest lipid content and lipid productivity were 29.3% and 0.155 g L⁻¹ d⁻¹, respectively, at 25% wastewater [12]. Hydrolysate obtained from ultrasonic pre-treatment of waste activated sludge was used as an alternative carbon source for the cultivation of Chlorella protothecoides. The lipid content of the culture was 21.5% with a biomass yield of 0.5 g L⁻¹ [13], indicating the feasibility of using activated sludge. The use of volatile fatty acids produced from a sewage sludge fermentation system as a carbon source for the cultivation of Chlorella vulgaris was investigated [14]. The cultivation resulted in a biomass productivity of 433 ± 11.9 mg L⁻¹ d⁻¹ and lipid contents ranging from 12.87-20.01%. Waste substrate from brewer fermentation and crude glycerol were used as carbon and nitrogen sources for the cultivation of C. protothecoides [15]. The lipid productivity of the microalgae was higher in the waste substrate medium than in the basal medium, highlighting such alternative substrates for biofuel production. In a study by Hongyang, C. pyrenoidosa attained a biomass productivity of 0.64 g L⁻¹ d⁻¹ and a lipid productivity of 0.40 g L⁻¹ d⁻¹ when cultivated in soybean processing wastewater [16]. Neochloris oleoabundans grown in anaerobically digested dairy manure yielded 10-30% fatty acid methyl esters on a dry weight basis [17]. Hu reported that Chlorella sp. grown on acidogenically digested manure could be used as a feedstock for high-quality biodiesel production [18]. Marjakangas examined the suitability of piggery wastewater as a nutrient source for lipid production from C. vulgaris [19]. At diluted concentrations of wastewater, the biomass production decreased and the lipid content increased. The highest lipid content of 54.7 wt% and the highest lipid productivity of 100.7 mg L⁻¹ d⁻¹ were obtained with 20× and 5× dilutions, respectively.
Immobilized cells of Nannochloropsis sp. grown in secondary effluent of a palm oil mill yielded biomass and lipid production of 1.3 g L⁻¹ and 0.356 g L⁻¹, respectively. Repeated-batch cultivation improved the biomass and lipid production, and scale-up in a 3 L fluidized-bed photobioreactor gave a maximum biomass of 3.28 g L⁻¹ and lipid production of 0.36 g L⁻¹ [20]. Food Waste Food waste generated from vegetables, fruits, cereals and meat mainly consists of carbohydrates, proteins, lipids, and traces of inorganic compounds. The carbon footprint of food waste is estimated to contribute approximately 3.3 billion tonnes of CO₂ to the atmosphere per year. Incineration of food waste hinders the recovery of nutrients and valuable chemical compounds, thus reducing the economic value of the substrate, and may cause severe health and environmental issues. Utilization of food waste as a nutrient source for cultivating microorganisms offers a way to recycle organic matter and produce value-added microbial products. However, food wastes require a pretreatment step to recover low-molecular-weight nutrients, which the microalgae can easily assimilate. Food waste hydrolysate was used as a culture medium for Schizochytrium mangrovei and Chlorella pyrenoidosa, in which 10-20 g of microalgal biomass was produced [21]. Enzymatic hydrolysates of food waste were used for the cultivation of Chlorella pyrenoidosa by Plesissner [22]. Under nutrient-sufficient batch culture, the cells of C. vulgaris contained 103.8 mg g⁻¹ lipids, whereas the content was three times higher in biomass cultured under phosphate/nitrogen-limited conditions. The conversion of nutrients derived from food waste by C. vulgaris, with a production of 200 mg g⁻¹ lipids, makes microalgae suitable for the recycling of food waste and fuel production [23]. Scenedesmus bijuga cultivated in food wastewater effluent produced lipid productivities in the range of 13.81-15.59 mg L⁻¹ d⁻¹, higher than that of cells grown in synthetic medium [24]. Similarly, biomass productivity was higher (39.4-50.75 mg L⁻¹ d⁻¹) when the microalgae were cultivated in diluted food wastewater effluent. Recent studies by Zhang demonstrated that anaerobically digested effluent from kitchen waste is a potential medium for the cultivation of Chlorella sorokiniana and Scenedesmus sp., with optimal biomass productions of 0.42 g L⁻¹ and 0.55 g L⁻¹ [25]. Compared to BG 11 medium, the lipid contents of C. sorokiniana and Scenedesmus grown in kitchen waste effluent were in the ranges of 30.27-41.69% and 35.97-47.39%, respectively [26]. Agricultural Waste Agricultural wastes can be utilized as substrates for the cultivation of microalgae due to their carbon and nitrogen content. Hydrolysates of sugarcane bagasse were used as a carbon source for the cultivation of Chlorella sp. [27], resulting in a biomass concentration of 5.8 g L⁻¹ and a lipid content of 34.0%. Enzymatic hydrolysates of sweet sorghum and rice straw were used for the heterotrophic cultivation of Chlorella vulgaris and Scenedesmus obliquus. Maximum biomass was achieved in the combined hydrolysates medium for C. vulgaris (4.8 g L⁻¹), followed by S. obliquus (4.3 g L⁻¹). The total lipid content ranged from 11.26-29.36% in Chlorella and from 15.43-27.24% in Scenedesmus.
The qualitative analysis of the fatty acids showed very high values of stearic acid (28.41 and 31.01%) and palmitic acid (23.54 and 26.21%) in both microalgae [28]. Hydrolysate of oil crop biomass residues was used to cultivate microalgae to accumulate higher lipid levels [29]. Cyperus esculentus waste was used as the carbon source for C. vulgaris [30]. Fed-batch culture produced a maximum biomass, lipid content and lipid productivity of 20.75 g L⁻¹, 36.52%, and 621.53 mg L⁻¹ d⁻¹, respectively [31]. Conclusion Energy security, rising oil prices and economic objectives are stimulating strong interest in the development of bioenergy. Biofuel production by microalgae using various wastewaters and food and agricultural wastes is an economically feasible and sustainable approach. Chlorella sp. has been reported to grow on a wide range of wastewaters and organic wastes for the production of biomass and lipid. However, considering the complexity and nutrient variations among different wastes, there is a need to conduct productivity and techno-economic analyses for sustainable bioenergy production using microalgae on a larger scale.
2,429
2018-07-17T00:00:00.000
[ "Engineering", "Environmental Science" ]
Construction of some algebras of logics by using intuitionistic fuzzy filters on hoops : In this paper, we define the notions of intuitionistic fuzzy filters and intuitionistic fuzzy implicative (positive implicative, fantastic) filters on hoops. Then we show that the intuitionistic fuzzy filters form a bounded distributive lattice. Also, by using intuitionistic fuzzy filters we introduce a relation on hoops, show that it is a congruence relation, and prove that the algebraic structure induced by it is a hoop. Finally, we investigate the conditions under which the quotient structure becomes one of several algebras of logic, such as a Brouwerian semilattice, a Heyting algebra or a Wajsberg hoop. Introduction Hoops are algebraic structures, introduced by B. Bosbach in [5,6], which are naturally ordered commutative residuated integral monoids. In recent years, mathematicians have studied hoop theory in different fields [1-3, 5-10, 17, 19]. Most of these results have a very deep relation with fuzzy logic. In particular, by using some theorems and notions of finite basic hoops, the authors of [1] found a short proof of the completeness theorem for propositional basic logic, which was introduced by Hájek in [11]. BL-algebras, the most familiar examples of hoops, are the algebraic structures corresponding to basic logic. In algebras of logic there are certain subalgebras that are very important and play a fundamental role in these algebraic structures. They are very similar to normal subgroups in group theory and to ideals in ring theory; we call them filters, and by using this notion we can introduce a congruence relation on algebraic structures and study the quotient structures made by them. Kondo, in [14], introduced different kinds of filters, such as implicative, positive implicative and fantastic filters of hoops, and investigated some of their properties. Borzooei and Aaly Kologani [7] investigated these filters deeply, studied the relations among them, and established several equivalent characterizations of these filters on hoops. Zadeh in [21] introduced the notion of fuzzy sets and different kinds of operations on them. After that, mathematicians studied them and applied them to diverse fields; in fact, fuzzy mathematics has developed through the study of fuzzy subsets and their applications to mathematical contexts. Nowadays, fuzzy algebra is an important branch of mathematics which is studied in different fields. For example, Rosenfeld studied fuzzy subgroups in 1971. After Rosenfeld applied the concept of fuzzy sets to group theory and defined the notion of fuzzy subgroups [18], the various fuzzy algebraic concepts grew very fast [12,13,16] and were applied to other algebraic structures such as lattices, semigroups, rings, ideals, modules and vector spaces. Moreover, concepts related to fuzzy sets have been used in various fields, including fuzzy graphs and applications in decision theory. Borzooei and Aaly Kologani [8] studied the notions of fuzzy filters of hoops and the relations among them, and characterized some of their properties. Also, they defined a congruence relation on hoops by a fuzzy filter and proved that the quotient structure of this relation is a hoop. Atanassov [4] first introduced the notion of an intuitionistic fuzzy set, which is an extended form of a fuzzy set.
These are sets whose elements carry degrees of membership and nonmembership. Intuitionistic fuzzy sets are more adaptable and more realistic in dealing with uncertainty and vagueness than conventional fuzzy sets. The most important property of intuitionistic fuzzy sets not shared by fuzzy sets is that modal operators can be defined over intuitionistic fuzzy sets. Intuitionistic fuzzy sets thus have essentially greater descriptive possibilities than fuzzy sets. Also, there are many applications of intuitionistic fuzzy sets in decision making, pattern recognition, medical diagnosis, neural models, image processing, market prediction, color region extraction, and others. In recent years, mathematicians have studied intuitionistic fuzzy sets in different fields [15,20]. In decision-making problems, the use of fuzzy approaches is ubiquitous. The purpose of these intuitionistic fuzzy sets is to provide a new approach with useful mathematical tools to address the fundamental problem of decision-making. The generality of the fuzzy set is given special importance, illustrating how many interesting decision-making problems can be formulated as problems about fuzzy sets. These applied contexts provide solid evidence of the wide applicability of the fuzzy-set approach to modelling and studying decision-making problems. In the following, we define the notions of intuitionistic fuzzy filters and intuitionistic fuzzy implicative (positive implicative, fantastic) filters on hoops. Then we show that the intuitionistic fuzzy filters form a bounded distributive lattice. Also, by using intuitionistic fuzzy filters we introduce a relation on hoops, show that it is a congruence relation, and prove that the algebraic structure induced by it is a hoop. Finally, we investigate the conditions under which the quotient structure becomes a Brouwerian semilattice, a Heyting algebra or a Wajsberg hoop. Preliminaries In this section, we recall the basic concepts and properties of hoops and fuzzy sets that we will use in the following sections. A hoop is an algebraic structure (H, *, ↠, 1) satisfying, for any d, s, q ∈ H, the standard hoop axioms. On a hoop H, we define the relation ⪯ by d ⪯ s iff d ↠ s = 1. Obviously, (H, ⪯) is a poset. A hoop H is said to be bounded if H has a least element 0; that is, 0 ⪯ d for all d ∈ H. If H is bounded, then we can introduce a unary operation ′ on H such that d′ = d ↠ 0, for any d ∈ H. A bounded hoop H is said to have the double negation property, (DNP) for short, if (d′)′ = d for any d ∈ H (see [1]). The next propositions provide some properties of hoops. Proposition 2.1. [5,6] Suppose (H, *, ↠, 1) is a hoop and d, s, q ∈ H. Then: Proposition 2.2. [5,6] Suppose H is a bounded hoop and d, s ∈ H. Then: Then for any d, s, q ∈ H, the next conditions are equivalent: [10,14] The family of all intuitionistic fuzzy sets (IF-sets) on H will be denoted by IFS(H). Let B = (ς_B, ϱ_B) and C = (ς_C, ϱ_C) be two IF-sets on H. Then, for any d ∈ H, we define a relation between them as follows: A fuzzy set ς on a hoop H is called a fuzzy filter of H if, for all d, s ∈ H, ς(d) ⪯ ς(1) and ς(d) ∧ ς(d ↠ s) ⪯ ς(s). Notation. In what follows, H denotes a hoop and ς and ϱ denote fuzzy sets. Moreover, the sets of all fuzzy filters of H and of all anti-fuzzy filters of H are denoted by FF(H) and AFF(H), respectively.
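For the reader's convenience, the standard axioms behind the definitions above can be summarized as follows; this is a hedged reconstruction following the usual literature conventions (cf. [5,6] for hoops and [4] for IF-sets), in the notation of this paper:

\[
\begin{aligned}
&\text{Hoop: } (H,*,1) \text{ is a commutative monoid and, for all } d,s,q \in H,\\
&\qquad d \twoheadrightarrow d = 1, \qquad
d * (d \twoheadrightarrow s) = s * (s \twoheadrightarrow d), \qquad
(d * s) \twoheadrightarrow q = d \twoheadrightarrow (s \twoheadrightarrow q);\\
&\text{IF-set: } B = (\varsigma_B, \varrho_B) \text{ with } 0 \le \varsigma_B(d) + \varrho_B(d) \le 1 \text{ for all } d \in H.
\end{aligned}
\]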
Construction of hoops by anti-fuzzy filters In this section, the concept of an anti-fuzzy filter on a hoop H is defined and some related results are investigated. The complement of ς is the fuzzy set ς^c defined by ς^c(d) = 1 − ς(d), for any d ∈ H. Proof. Since ϱ ∈ AFF(H), the defining inequalities hold for all d ∈ H. Moreover, since ϱ is an anti-fuzzy filter, by Propositions 2.1(x) and 3.5(iii) and Remark 3.3(2) we obtain d ≈_ϱ q, and this means that ≈_ϱ is transitive. Hence ≈_ϱ is an equivalence relation. Now we prove that ≈_ϱ is a congruence relation. For any e ∈ H, [e]_ϱ denotes the equivalence class of e with respect to ≈_ϱ; clearly H/≈_ϱ = {[e]_ϱ | e ∈ H}, with the operations ⊗ and → on H/≈_ϱ defined on representatives as follows. Since ≈_ϱ is a congruence relation on H, all of the above operations are well defined. Thus, by routine calculation, we can see that H/≈_ϱ is a hoop. Defining a binary relation ⪯ on H/≈_ϱ in the natural way, we can easily see that (H/≈_ϱ, ⪯) is a partially ordered monoid. □ Construction of hoops and distributive lattices by intuitionistic fuzzy filters In the following, we define the concept of an intuitionistic fuzzy filter on a hoop H and investigate some related results. The family of all intuitionistic fuzzy filters of H will be denoted by IFF(H). The pair constructed below is an intuitionistic fuzzy filter on H. Proof. By Definitions 3.1 and 4.1 and Proposition 2.5, the proof is clear. □ In the next proposition we prove that from a fuzzy filter (anti-fuzzy filter) on H we can construct an IF-filter. Let B = (ς_B, ϱ_B) be an IF-set on H and, for any r ∈ [0, 1], consider the level set B_r. Then B is an IF-filter of H iff, for any r ∈ [0, 1], every nonempty B_r is a filter of H. Proof. By Theorems 2.6 and 3.4 and Proposition 4.3, the proof is clear. □ Now we investigate the lattice of all IF-filters by introducing the notion of a tip-extended pair of IF-sets. Let {B_i : i ∈ I} be a set of IF-filters of H, with the fuzzy sets ⋀_{i∈I} ς_{B_i} and ⋀_{i∈I} ϱ_{B_i} on H defined pointwise as follows. Let L be an IF-set on H. The intersection of all IF-filters containing L is called the IF-filter generated by L, denoted ⟨L⟩. Then ≈_B is a congruence relation on H. Proof. The proof is similar to the proofs of Theorems 2.7 and 3.6. □ Construction of Brouwerian semilattices by intuitionistic fuzzy (positive) implicative filters Here we define the notions of intuitionistic fuzzy (positive) implicative filters on hoops, investigate some related results, and find some equivalent characterizations of them. Definition 5.1. An IF-set B = (ς_B, ϱ_B) on H is called an intuitionistic fuzzy implicative filter, or IF-implicative filter, of H if for all d, s, q ∈ H it satisfies the following conditions: Proof. Let B = (ς_B, ϱ_B) be an IF-implicative filter. By (IFIF1), condition (IFF1) obviously holds. Since d * s ⪯ d * s, by Proposition 2.1(ii) we have s ⪯ d ↠ (d * s). Then by (IFIF1), ϱ_B(d ↠ (d * s)) ⪯ ϱ_B(s). Also, since ϱ_B is an IF-implicative filter, it is enough to choose q = 1; a similar computation then applies to ς_B. In the next example we see that the converse of the previous theorem does not hold. Proof. In this proof we only prove the items for ϱ_B. Since B is an IF-filter of H, by Definition 4.1, ς_B ∈ FF(H) and ϱ_B ∈ AFF(H); hence ϱ_B(1) ⪯ ϱ_B(d) and ς_B(1) ⪰ ς_B(d), for any d ∈ H. (i) ⇒ (ii) It is enough to let d = 1 in (IFIF2). Hence B is an IF-implicative filter of H. Also, by Proposition 2.1(ii), d ⪯ s ↠ (d * s). Therefore, B is an IF-filter of H. □ The next example shows that the converse of the previous theorem does not hold. Example 5.11.
According to Example 5.2, define ς_B by ς_B(1) = r₃, ς_B(k) = ς_B(e) = r₂ and ς_B(0) = r₁, where 0 ⪯ r₁ < r₂ < r₃ ⪯ 1. Routine calculation shows that ς_B ∈ FF(H) but that B is not an IF-positive implicative filter. Theorem 5.12. Let B = (ς_B, ϱ_B) be an IF-filter of H. Then for any d, s, q ∈ H the following conditions are equivalent: Proof. In this proof we only prove the items for ϱ_B. Since B is an IF-filter of H, by Definition 4.1, ς_B ∈ FF(H) and ϱ_B ∈ AFF(H); hence ϱ_B(1) ⪯ ϱ_B(d) and ς_B(1) ⪰ ς_B(d), for any d ∈ H. As proved above, and in a similar way, ς_B(d ↠ d²) = ς_B(1). Hence, by Theorem 5.12(v), B is an IF-positive implicative filter. □ The next example shows that the converse of the previous theorem does not hold in general. Since H has (DNP), by (HP3) the corresponding identities hold, and similarly for their duals. Moreover, since B is an IF-positive implicative filter of H, by Theorem 5.12(iii), Proposition 2.2(ii) and (DNP), we get the required inequality, and similarly for its dual. Therefore, by Theorem 5.6(ii), B is an IF-implicative filter. Since B is an IF-implicative filter, by Theorem 5.4 it is an IF-filter, and thus Theorem 5.6(ii) applies. Since B is an IF-positive implicative filter, Theorem 5.12(ii) applies as well, and the claimed relation follows by assumption. On the other hand, by Proposition 2.1(iv), s ⪯ d ↠ s, and by Proposition 2.1(xiii), (d ↠ s) ↠ d ⪯ s ↠ d. Since B is an IF-positive implicative filter, by Theorem 5.10 it is an IF-filter, and by Theorem 5.6(iii) the required inequality follows. Construction of Heyting semilattices and Wajsberg hoops by intuitionistic fuzzy fantastic filters In the following, the concept of an intuitionistic fuzzy fantastic filter on hoops is defined and some related results are investigated. Example 6.2. According to Example 5.5, routine calculation shows that B is an IF-fantastic filter. In a similar way, we can see that ς_B(d) ∧ ς_B(d ↠ s) ⪯ ς_B(s). Therefore, B is an IF-filter of H. □ In the next example we show that the converse of the previous theorem does not hold in general. Example 6.5. According to Example 5.5, B is an IF-filter but not an IF-fantastic filter. Theorem 6.6. Let B = (ς_B, ϱ_B) be an IF-filter of H. Then the following statements are equivalent, for all d, s ∈ H: (i) B is an IF-fantastic filter of H. Proof. In this proof we only prove the items for ϱ_B. Since B is an IF-filter of H, by Definition 4.1, ς_B ∈ FF(H) and ϱ_B ∈ AFF(H); hence ϱ_B(1) ⪯ ϱ_B(d) and ς_B(1) ⪰ ς_B(d), for any d ∈ H. (ii) ⇒ (i) Let d, s, q ∈ H. Since B is an IF-filter, we get ϱ_B(s ↠ d) ⪯ ϱ_B(q) ∨ ϱ_B(q ↠ (s ↠ d)), and the conclusion follows from (ii). Now, since B is an IF-fantastic filter, we may apply (HP3); moreover, since B is an IF-implicative filter, Theorem 6.6(ii) yields that B is an IF-fantastic filter. □ The next example shows that the converse of the previous theorem may not be true in general. Since B is an IF-positive implicative filter, by Theorem 5.10 it is an IF-filter of H; also, by Theorem 5.12(ii), the corresponding identity holds. On the other side, by Proposition 2.1(iv) and (xiii), and since B is an IF-fantastic filter, Theorem 6.6(ii) applies. Since B is an IF-filter, it follows, by Theorem 5.6(ii), that B is an IF-implicative filter. □ Conclusions In decision problems, the use of fuzzy approaches is ubiquitous.
Given the importance of fuzzy concepts in solving decision problems, we decided to use these concepts, in the form of intuitionistic fuzzy sets, in a specific logical algebra in order to provide a new approach with useful mathematical tools for addressing fundamental decision problems. In this paper, the concept of an anti-fuzzy filter of hoops is defined; the concepts of intuitionistic fuzzy filters, intuitionistic fuzzy (positive) implicative filters and intuitionistic fuzzy fantastic filters of hoops are introduced; and their properties and equivalent characterizations are discussed. Moreover, it is proved that the intuitionistic fuzzy filters form a bounded distributive lattice. The relations between the different kinds of intuitionistic fuzzy filters are investigated, together with the conditions under which they are equivalent. Also, a congruence relation on hoops is defined by means of an intuitionistic fuzzy filter, and it is proved that the resulting quotient structure is a hoop. Finally, the conditions under which the quotient structure is a Brouwerian semilattice, a Heyting algebra or a Wajsberg hoop are investigated.
3,809.4
2021-01-01T00:00:00.000
[ "Mathematics" ]
Performance of GaN-Based LEDs with Nanopatterned Indium Tin Oxide Electrode 1School of Electronic and Information, Guangdong Polytechnic Normal University, Guangzhou 510665, China 2State Key Laboratory of Optoelectronic Materials and Technologies, School of Materials Science and Engineering, Sun Yat-Sen University, Guangzhou 510275, China 3College of Electromechanical Engineering, Guangdong Polytechnic Normal University, Guangzhou 510635, China 4School of Electronics and Information Technology, Sun Yat-Sen University, Guangzhou 510275, China Compared to other techniques, NSL has the advantages of low cost and high throughput, which makes it very suitable for surface patterning. In previous works, a two-dimensional photonic crystal structure was fabricated on an InGaN/GaN multiple quantum well structure by silica nanosphere lithography, and a several-fold enhancement in photoluminescence intensity was observed [21,22]. However, there was no research on the electroluminescence. In contrast to silica nanospheres, the size of polystyrene (PS) nanospheres decreases with increasing etching time when they are used as an etching mask. Through the PS NSL method, nanopillars with different diameters can therefore be obtained, and the LEE of the LEDs can be optimized [23]. In addition, the nanopillar structure acquires a tapered profile with a small top and a large bottom, which helps the light to escape due to the resulting gradient in refractive index. In order to reduce the influence of etching on the electrical characteristics of the LED device, the nanostructures should be prepared on the ITO transparent electrode layer. However, such research has rarely been reported. In this work, GaN-based blue LEDs with a surface-patterned ITO electrode were fabricated by PS NSL technology, and the optical and electrical performances of the LEDs with nanopatterned ITO electrodes were analyzed and discussed. The electroluminescence intensity of the ITO-patterned LEDs is increased by 17% at 100 mA injection current compared to that of conventional LEDs. Finally, the light output enhancements are simulated with the three-dimensional finite-difference time-domain (3D-FDTD) method to verify the experimental results. Experimental Methods The GaN-based (λ = 465 nm) epitaxial wafer was grown on a 2-inch sapphire (Al₂O₃) substrate using metal-organic chemical vapor deposition. After the growth of a 2 μm undoped GaN (u-GaN) buffer layer and a 3 μm n-GaN layer, an active layer of five-period InGaN/GaN MQWs and a 150 nm p-GaN layer were deposited. The device fabrication process was as follows. A transparent ITO electrode with a thickness of about 400 nm was first deposited on the p-GaN surface. Then, LED chips with dimensions of 300 μm × 300 μm were formed by mesa-etching to the exposed n-type GaN via standard lithography, ITO wet etching, and subsequent inductively coupled plasma (ICP) etching. Cr/Pt/Au (200/400/2000 nm) was finally deposited on top of the ITO surface as well as on the exposed n-GaN layer as a contact metal for both the p- and n-GaN layers.
After the formation of the contact metal electrodes on the p- and n-GaN layers, the nanopatterned ITO layer was fabricated through NSL; the process flow is shown in Figure 1. First, a hexagonal close-packed monolayer of PS nanospheres with a 450 nm diameter was formed on the ITO layer, as shown in Figure 1(a). Second, the ITO layer with the monolayer mask of PS spheres was etched in an ICP etching machine using a gas flow of BCl₃/Cl₂/Ar, as shown in Figure 1(b). Finally, the PS spheres were removed with trichloromethane solvent under sonication, and periodic ITO nanopillar arrays were obtained, as shown in Figure 1(c). Figure 1(d) presents the schematic of the modified LED structure with a nanopillar-patterned ITO electrode. Results and Discussion By the method described above, we could fabricate LEDs with periodic ITO nanopillar arrays. When the ITO was etched by ICP, the PS sphere mask was also etched, and the size of the PS spheres was altered. Therefore, various heights and diameters of the top part of the ITO nanopillars could be obtained by changing the ICP etching time. In order to explore the effects of various ITO nanopillar structures on the LEE of the LED, three nanopatterned ITO samples were fabricated with different ICP etching times. The three samples with the nanopillar ITO layer etched by ICP for 60 s, 80 s, and 100 s were labelled samples A, B, and C, respectively. The three nanopatterned ITO samples and the conventional LEDs were fabricated from the same InGaN/GaN LED wafer to eliminate differences in the device characteristics. Figure 3(a) shows the light output intensity (LOI) as a function of injection current for the three nanopatterned LEDs and a conventional LED. At the same injection current, the light output intensity of the three nanopatterned LEDs is higher than that of the conventional LED. At an operating current of 100 mA, the light output intensities of samples A, B, and C are approximately 3%, 14%, and 19% higher than that of the conventional LED, respectively. Figure 3(b) shows the forward current-voltage (I-V) characteristics of the four samples. It is clear that the nanopatterned LEDs exhibit nearly the same I-V characteristics as the conventional LED. The forward voltages at 100 mA are 4.42, 4.46, and 4.46 V for samples A, B, and C, and 4.37 V for the conventional LED, which indicates an acceptable electrical performance for the nanopatterned LEDs. In order to eliminate differences in the input electric power (IEP) of the samples, the IEP-LOI curves were calculated from the curves in Figures 3(a) and 3(b); the LOI of the nanopatterned LEDs remains higher than that of the conventional LED over the whole IEP range. At an operating current of 100 mA, the LOI of samples A, B, and C is approximately 3%, 12%, and 17% higher than that of the conventional LED, respectively. Therefore, we believe that the proposed technology is an effective method to improve the LEE of GaN-based LEDs.
The results showed that the LEE of the samples with nanopillar-patterned structures was enhanced. With increasing etching depth, the nanopillars became smaller and the light extraction efficiency of the sample became relatively higher. With the nanopillar-patterned ITO surface, photons experience multiple scattering at the sample surface and can escape from the device more easily, as shown in Figure 4. For LEDs with a flat ITO surface, only the emitted light with an incident angle smaller than the critical angle can be extracted, as shown in Figure 4(a). According to Snell's law, the critical angle of total internal reflection satisfies sin θ_c = n₂/n₁, where n₁ = 1.9 and n₂ = 1 are the refractive indices of ITO and air, respectively. The critical angle of total reflection at the air/ITO interface is therefore around 31.8°. Thus, the majority of photons are reflected back at the interface of conventional ITO LEDs [17]. The nanopillar-patterned LEDs have a higher LEE, which can be explained from various points of view. Firstly, the nanopillar array plays the role of surface roughening. Secondly, the periodic nanopillar array serves as a two-dimensional grating. Such Bragg scattering converts waveguide modes into radiation modes, so the photons have more opportunities to escape from the ITO surface into the air, as shown in Figure 4(b). The transmitted and reflected light can be expressed by the grating relations of [24], given at the end of this paper. This brings photons that were originally emitted outside the escape cone back into the escape cone and improves the LEE of the LEDs. In order to verify the experiments, we also performed 3D-FDTD simulations of the light extraction of the LEDs with nanostructures. The simulated LED structure is shown in Figure 5(a); it consists of a 1000 nm sapphire substrate, a 5200 nm GaN layer (including the u-GaN, MQW, and p-GaN layers), and a 400 nm ITO layer. In the simulation, the lateral extent of the simulation region is 8000 nm × 8000 nm, and there are about 18 × 20 nanopillars in the simulation region. To explore the effect of the diameters and heights of the nanopillars on the LEE, eight samples were simulated. For samples 1, 2, and 3, the diameters are set to 400 nm, 390 nm, and 380 nm, and the heights are set to 60 nm, 100 nm, and 140 nm, respectively. For samples 4-8, the diameters of the nanopillars are set to 380 nm and the heights are set to 180 nm, 220 nm, 260 nm, and 300 nm, respectively. The light extraction of the LEDs with different nanostructures is simulated by the 3D-FDTD method. In the simulation, the wavelength of the incident light is 465 nm, which corresponds to the center wavelength of the emission spectrum. The refractive indices of GaN and ITO are approximately 2.49 and 1.9 at a wavelength of 465 nm. The simulation results are shown in Figure 5(b). The horizontal coordinate is the index of the simulated sample, and the vertical coordinate is the increase ratio of the LEE. The simulation results show that the LEE of samples 1-3 is increased, which is consistent with the experimental results. In addition, we also see that simulated sample 3 (height of 140 nm) is a local optimum, and that another optimum lies near sample 6 (height of 260 nm). This is consistent with the rough calculation of [25], in which λ is the incident light wavelength and n_ITO-pc is the effective refractive index of the ITO nanostructure layer.
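As a quick numerical illustration of the escape-cone argument above, here is a minimal sketch assuming only the quoted indices n₁ = 1.9 for ITO and n₂ = 1 for air, together with isotropic emission into the upper half-space:

```python
import numpy as np

# Critical angle at the ITO/air interface from Snell's law: sin(theta_c) = n2/n1.
n1, n2 = 1.9, 1.0
theta_c = np.arcsin(n2 / n1)                 # radians

# Solid-angle fraction of the escape cone relative to the upper hemisphere:
# Omega(cone) / Omega(hemisphere) = 2*pi*(1 - cos(theta_c)) / (2*pi).
escape_fraction = 1.0 - np.cos(theta_c)

print(f"critical angle ~ {np.degrees(theta_c):.1f} deg")   # ~31.8 deg
print(f"escape-cone fraction ~ {escape_fraction:.1%}")     # ~15%: most photons reflected
```

The ~15% figure makes concrete why breaking total internal reflection with the nanopillar grating yields a measurable gain in light output.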
Conclusions Benefiting from their excellent electrical conductivity and light transmittance, ITO transparent electrodes have replaced nickel-gold alloy as the transparent electrode of LEDs. However, the refractive index of ITO is 1.9, much higher than the refractive index of air, which limits the light escaping from LEDs. In this paper, we fabricated LEDs with a nanopillar-patterned ITO layer via nanosphere lithography. The optical and electrical performances of the LEDs with nanopatterned ITO electrodes were investigated. The results show that the LEE was enhanced with increasing etching depth. The electroluminescence intensity of the ITO-patterned LEDs was increased by 17% at 100 mA injection current compared to that of conventional LEDs. The enhancement of the LEE can be ascribed to the fact that the total reflection at the ITO surface is broken by the periodic nanopillar structure. The LEE may be further improved by optimizing the nanopillar structure. Therefore, this is a promising method for realizing high-efficiency LEDs. Figure 1: Schematic fabrication process for the nanopatterned ITO electrode. (a) Deposition of PS spheres on the ITO electrode surface. (b) ICP etching of the nanopatterned ITO electrode. (c) Removal of the PS nanospheres. (d) Schematic illustration of the GaN-based blue LEDs with nanopatterned ITO electrode. Figure 2: SEM images of the nanopillar-patterned ITO layer. Scale bars are 400 nm. (a) The original surface of the ITO layer. (b)-(d) 30° tilt SEM images of samples A, B, and C, respectively. (e)-(f) Cross-sectional images of samples B and C. The ITO layers of samples A, B, and C were etched by ICP for 60 s, 80 s, and 100 s, respectively. Figure 2 shows scanning electron microscope (SEM) images of the ITO layer of samples A, B, and C and the conventional LED. As shown in Figures 2(b)-2(d), the longer the ICP etching time, the more strongly the ITO surface is etched and the higher the nanopillars become. The diameters of the nanopillars for samples A, B, and C are found from the SEM images to be about 400 nm, 390 nm, and 380 nm, respectively. Figures 2(e) and 2(f) show cross-sectional images of samples B and C. The heights of the nanopillars of samples A, B, and C are about 60 nm, 100 nm, and 140 nm, respectively. Figure 3: Electroluminescence curves of three nanopillar-patterned LEDs and a conventional LED. (a) The optical output power versus injection current (O-I) characteristics. (b) The current versus voltage (I-V) characteristics. (c) The optical output power versus input power (O-P) characteristics. Figure 4: Light emission of (a) conventional ITO LEDs and (b) LEDs with nanopatterned ITO electrode. Figure 5: (a) Schematic illustration of the LED simulation model. (b) The 3D-FDTD simulation result, in which the x-axis is the serial number of the simulated samples and the y-axis is the increase ratio of the LEE compared with the reference sample. The grating relations referred to above take the form n₂ sin θ_t (cos φ_t, sin φ_t) = n₁ sin θ_i (cos φ_i, sin φ_i) + (mλ/Λ_x, nλ/Λ_y), and analogously n₁ sin θ_r (cos φ_r, sin φ_r) = n₁ sin θ_i (cos φ_i, sin φ_i) + (mλ/Λ_x, nλ/Λ_y), where n₁ and n₂ are the refractive indices of ITO and air, respectively; λ is the wavelength of light in vacuum; Λ_x and Λ_y are the periods of the lattice in the x and y directions; m and n are integers indicating the diffraction orders; θ_i, θ_t, and θ_r are the incident, transmitted, and reflected angles, respectively; and φ_i, φ_t, and φ_r are the azimuthal angles of the incident, transmitted, and reflected light, respectively. In this way, the waveguide modes can be coupled to the radiation modes.
2,853.2
2016-09-01T00:00:00.000
[ "Materials Science" ]
Motion of a Self-Propelling Micro-Organism in a Channel Under Peristalsis: Effects of Viscosity Variation Abstract. The motion of a self-propelling micro-organism symmetrically located in a rectangular channel containing viscous fluid has been studied by considering the peristaltic and longitudinal waves travelling along the walls of the channel. The expressions for the velocity of the micro-organism and the time-averaged flux have been obtained under the long-wavelength approximation, taking into account the viscosity variation of the fluid across the channel. Particular cases for constant viscosity, and for viscosity represented by a step function, have been discussed. It has been observed that the velocity of the micro-organism decreases as the viscosity of the peripheral layer increases and its thickness decreases. Introduction The study of self-propelling micro-organisms in a biofluid was initiated by Sir Taylor [1], who modelled the organism as a two-dimensional sheet of zero thickness with a sinusoidal wave travelling down its length. Since then, many researchers have studied the problem under various conditions: Hancock [2], Shack et al. [3], Shukla et al. [4]. It may be noted that the motion of a micro-organism is affected by the nature of the biofluid, the dynamical interaction of the duct walls, and the cilia motion, if any, in the lumen of the duct. In some physiological situations, e.g. the mid-period of the menstrual cycle, James et al. [5], it is known that the biofluid viscosity varies across the channel cross-section. Further, in situations such as the oviduct, it is observed that the muscular activity of the walls and the action of cilia generate peristaltic and longitudinal waves on the duct walls, Guha et al. [6], Blake et al. [7], Shukla et al. [8]. Philip and Chandra [9] analyzed the self-propulsion of spermatozoa through mucus filling a channel with flexible boundaries. Keeping this in view, a mathematical model is presented here to study the effect of the viscosity variation of the biofluid on the motion of a micro-organism. Further, peristaltic and longitudinal wave motion travelling along the channel walls is considered to account for the dynamical interaction of the walls. The micro-organism is taken as a two-dimensional sheet of finite thickness with a transverse wave motion along its surface. The expressions for the propulsion velocity of the micro-organism and the time-averaged flow flux have been obtained under the long-wavelength approximation. Mathematical model Consider the swimming motion of a thin flexible sheet of finite thickness 2δ in a Newtonian incompressible fluid flowing through a two-dimensional channel having flexible boundaries. It is assumed that the fluid filling the channel has varying viscosity and that the sheet, while swimming, sends lateral waves of finite amplitude down its length. Further, peristaltic waves of finite amplitude are imposed along the flexible walls of the channel in the direction opposite to the motion of the sheet. The sheet is considered to be swimming with a propulsive velocity V_p in the negative axial direction (Fig.
1). It is assumed that the waves travelling along the channel walls and along the sheet are synchronized in the steady state and thus have the same wave speed c (along the positive axial direction) and the same wavelength λ. In a fixed frame of reference (X, Y, t), the wall of the channel (Y = ±H) and the sheet (Y = ±H₁) at an instant t are given by the corresponding wave profiles. As the sheet is self-propelling, the forces exerted by the fluid on its surface must balance its motion. This force equilibrium condition on the surface of the organism can be written, in the symmetrical situation, in integral form, where T is the resultant of the forces acting on the surface of the micro-organism, Δp is the pressure rise over a wavelength and S is the surface of the micro-organism. The Reynolds number of the flow in such situations is of order 10⁻³, and hence the inertia terms can be neglected. Further, in a frame moving with velocity c − V_p in the positive axial direction, the boundaries of the channel and the micro-organism appear stationary. Thus, transforming the various quantities from the stationary frame (X, Y, t) to the corresponding quantities in the moving frame and using a non-dimensionalization scheme, the equations of motion under the long-wavelength approximation reduce to a simple form. The boundary conditions for u in the moving frame can be written accordingly, where g(x) = c₁ sin 2πx gives the longitudinal wall motion due to the presence of cilia in the lumen of the channel, and the force equilibrium condition simplifies correspondingly. Using the symmetry, we solve these equations for the region y ≥ 0 only. Analysis Equation (5) can be solved using the boundary conditions (7). The flux q in the moving frame is constant; using the expression for u, one obtains ∂p/∂x, and integrating (11) determines the pressure rise. The force equilibrium condition, on using (9) and (10), gives two further relations, and the fluxes in the stationary and wave frames are related, so the time-averaged flux Q in the stationary frame can be obtained. By eliminating q from equations (12)-(14), the expressions for the propulsive velocity V_p and the time-averaged flux Q in the stationary frame can be obtained in terms of Δp and μ(y). It may be noted that for the case of a thin sheet (ε₁ = 0, δ = 0) the expressions for V_p and Q can be obtained by putting h₁ = 0 in equations (12)-(14). Results and discussion To see the effects of the various parameters explicitly, we consider here two particular cases: (i) constant viscosity, μ(y) = 1, and (ii) a step-function viscosity in which μ(y) = 1 in the core region 0 ≤ y ≤ α and μ(y) = μ̄ in the peripheral region α < y ≤ 1, where (1 − α) gives the thickness of the peripheral layer near the wall of the channel. The various integrals occurring in equations (12) and (13) are numerically evaluated for these two cases, and the values of V_p and Q are calculated for different values of Δp, ε₁ and ε₂. For the case of constant viscosity, the effects of the various parameters on V_p and Q are shown in Figs. 2-7. We notice from Figs. 2, 3 and 4 that, for given Δp, V_p increases as |ε₁| increases. This implies that the lateral peristaltic waves on the surface of the micro-organism increase the speed of the micro-organism. The effects of ε and Δp on V_p are shown in Fig. 2 for c₁ = 0. It is observed that V_p decreases as Δp decreases, and even becomes negative for negative Δp, i.e. the motion of the micro-organism can be reversed by a suitable pressure drop. Also, V_p decreases as the magnitude of ε increases, i.e. a peristaltic wave on the channel wall does not facilitate the motion of the micro-organism. Fig.
3 shows that, for ε = 0, V_p increases as c₁ increases for ε₁ > 0, while the reverse trend is observed for ε₁ < 0. It may also be seen that V_p decreases as Δp decreases for all values of c₁. Figs. 2 and 4 show that the behaviour of V_p with respect to ε depends upon the values of Δp, c₁ and ε₁. From Fig. 5, it is observed that V_p increases as δ decreases (Δp = 0, ε = 0, c₁ = −0.2). The effects of c₁, ε and δ on Q are shown in Figs. 6 and 7 for Δp = 0. Fig. 6 shows that, for ε < 0, Q decreases as either c₁ or ε₁ increases. However, the opposite behaviour is observed for ε > 0. A decrease in the thickness δ of the sheet decreases Q (Fig. 7). The effect of Δp on Q has been found to be opposite to its effect on V_p. For the case of step-function viscosity, the effects of the peripheral layer thickness (α) and the viscosity step μ̄ on V_p and Q are shown in Figs. 8-11, taking c₁ = −0.2, ε = 0.2 and δ = 0.2. Figs. 8 and 9 show that, for Δp = −0.5, V_p decreases and Q increases with an increase in the peripheral layer thickness (1 − α) as well as with a decrease in μ̄. This effect is observed for all values of ε₁. Further, the effect of Δp on V_p is similar to the case μ̄ = 1 (Fig. 10). However, for Δp > 0, V_p decreases as μ̄ increases, while the reverse trend is observed for Δp < 0. Fig. 11 shows that Q decreases with ε₁ and that Q increases as Δp decreases. The effect of μ̄ on Q depends upon the sign of Δp. Using this data, the propulsion velocity V_p is calculated as 0.044 mm/s, which differs from the observed value by about 12%. Further, the flux Q is calculated as 0.006 ml/s, whereas the observed value is 0.007 ml/s. Conclusion A mathematical model to study the effect of viscosity variation across the cross-section on the swimming of a micro-organism has been presented. It has been shown that the velocity of the micro-organism decreases as the viscosity of the peripheral layer increases and its thickness decreases. Further, the motion of the micro-organism can be reversed by applying a peristaltic wave on the channel wall.
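A quick arithmetic check of the quoted comparison with observation can be sketched as follows; note that the observed velocity below is inferred from the stated 12% deviation, not quoted in the text:

```python
# Consistency check of the quoted comparison with observation.
# Assumption (not stated in the paper): "differs by about 12%" is measured
# relative to the observed value, so V_obs is inferred, not quoted.
V_calc = 0.044                      # mm/s, calculated propulsion velocity
V_obs = V_calc / (1.0 - 0.12)       # inferred observed value ~ 0.05 mm/s
Q_calc, Q_obs = 0.006, 0.007        # ml/s, calculated vs observed flux

print(f"inferred V_obs ~ {V_obs:.3f} mm/s")
print(f"flux deviation ~ {abs(Q_obs - Q_calc) / Q_obs:.1%}")   # ~14%
```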
2,177
2007-07-25T00:00:00.000
[ "Physics", "Engineering" ]
A Monte Carlo Approach to the Worldline Formalism in Curved Space We present a numerical method to evaluate worldline (WL) path integrals defined on a curved Euclidean space, sampled with Monte Carlo (MC) techniques. In particular, we adopt an algorithm known as YLOOPS, with a slight modification due to the introduction of a quadratic term which serves to stabilize and speed up the convergence. Our method, like its perturbative counterparts, treats the non-trivial measure and the deviation of the kinetic term from flatness as interaction terms. Moreover, the numerical discretization adopted in the present WLMC is realized with respect to the proper time of the associated bosonic point-particle, hence the procedure may be seen as an analogue of the time-slicing (TS) discretization already introduced to construct quantum path integrals in curved space. As a result, a TS counter-term is taken into account during the computation. The method is tested against existing analytic calculations of the heat kernel for a free bosonic point-particle in a D-dimensional maximally symmetric space. Introduction The worldline formalism applied to Quantum Field Theory is a powerful tool to compute relevant physical quantities. The guidelines of the method were already introduced in the 1950s by Feynman, who proposed a quantum mechanical path integral model representing the dressed scalar propagator in scalar QED [1]. In the 1980s the method started to be systematically used as an alternative to standard second-quantized approaches to QFT, in particular for the computation of anomalies [2][3][4][5][6], effective actions and Feynman diagrams [7][8][9]. The formalism was then extended to various quantum field theories in curved spacetime (see Ref. [10] for a review), and several applications have so far been considered, such as one-loop effective actions [11][12][13], the one-loop graviton-photon mixing in an electromagnetic field [14], gravitational corrections to Euler-Heisenberg Lagrangians [15], worldline representations of quantum gravity [16,17], supergravity [18] and higher spin field theories [19,20], and simplified methods for anomaly computations [21,22], just to name a few. The effort to move towards non-perturbative worldline calculations led to the development of numerical methods for the evaluation of quantum mechanical path integrals in flat space. The need for such a generalization lies in the several interesting applications that rely on non-perturbative physics, such as chiral symmetry breaking. The main issue is the numerical generation of the worldlines in Euclidean space according to their relative probability distribution. For this purpose, Gies and Langfeld [23] proposed a first numerical algorithm, adopting a Monte Carlo sampling of the coordinate space as an improvement of a previous work by Nieuwenhuis and Tjon [24]. Such methods are now known as Worldline Monte Carlo (WLMC) and have been used for several applications in QFT, such as the Casimir effect [25], Schwinger pair production in inhomogeneous fields [26], quantum effective actions [27] and strongly-coupled, large-N fermion models [28]. In these works, all the proposed algorithms are based on a Monte Carlo sampling which models free Brownian motion. Thus, instead of using the full (kinetic + potential) action as a weight to select the points on the worldline, only the kinetic part is used. As explained in the recent work by Edwards et al.
[29], the reason for such a choice is universality, which then allows the application of the method to a variety of different cases, regardless of the specific potential involved. Moreover, therein they propose an efficient algorithm, called YLOOPS, to generate closed worldlines around a point, which is a modified version of the algorithm VLOOPS originally designed in [25]. Below, due to its efficiency and ease of application, we find it convenient to adopt the YLOOPS algorithm for our calculation: more precisely, we present a slight modification of this algorithm, where a fictitious quadratic term is inserted in the kinetic term, in order to make the evaluation of the numerical path integral faster and more stable. The main goal of the present manuscript is to extend WLMC techniques to curved spaces, i.e. to non-linear sigma models representing the propagation of a scalar particle in curved space. This non-perturbative numerical method would presumably help in tackling a large class of problems in quantum field theory in curved space such as, for example, the Casimir effect and the Schwinger effect on a non-trivial background, and the computation of effective actions and anomalies in curved space. In order to test our construction we restrict ourselves to maximally symmetric spaces. Indeed, in such a context it was recently shown that the heat kernel expansion for a scalar particle [30,31] (later generalized to a spinor particle [32]) can be efficiently reproduced using an effective model with a flat kinetic term, where the effects of the curvatures are taken into account through a suitable effective potential. Therefore, as we show, such a model can be easily simulated with the conventional (flat space) WLMC methods mentioned above. Moreover, this constitutes a perfect benchmark to verify our extension of the WLMC techniques to a genuine non-linear sigma model representing a particle in curved space. The paper is organized as follows. In Section 2 we review the Worldline Monte Carlo theory in flat space, providing an example of the calculation of the propagator associated to a 4-dimensional harmonic oscillator, to get familiar with the method. In Section 3 we build the WLMC setup in curved space and propose our strategy to compute numerical path integrals on non-trivial backgrounds. In Section 4 we describe the theoretical basis of the case study which we will use to test our method, i.e. the diagonal part of the heat kernel of a free scalar point particle moving on a D-sphere and a D-hyperboloid, i.e. maximally symmetric spaces. The worldline realization of this quantity is given in terms of a quantum mechanical path integral with periodic boundary conditions (defining closed trajectories or loops). After that, we report our numerical simulations, studying their convergence with respect to the discretization parameters and the curvature of the sphere. A possible extension to the case of open trajectories is discussed in Section 5. In Section 6 we sum up our conclusions, with a possible outlook for future applications. Finally, in Appendix A we report the details of the YLOOPS algorithm generalized to the case of a non-vanishing quadratic term, whereas in Appendix B we show the effect of inserting such a mass term in the WL sampling discussed in Section 2. Worldline Monte Carlo in flat space In order to introduce our computation, let us first review the main ingredients of Worldline Monte Carlo in flat space.
Let us consider a D-dimensional flat-space heat kernel computed through a Euclidean quantum mechanical path integral, K(x, x'; β) = ∫ Dx e^{−S[x]} (1), over paths with boundary conditions x(0) = x', x(β) = x (2), and with Euclidean action S[x] = ∫_0^β dτ [ (1/2) ẋ²(τ) + V(x(τ)) ] (3). Expression (1) can be seen as the heat kernel of a unit-mass non-relativistic point-particle x(τ) in D-dimensional Euclidean space, or conversely as the (Schwinger integrand of the) propagator of a bosonic relativistic particle, where the affine parameter τ is the proper time. Its Hamiltonian reads H = p²/2 + V(x) (4). The formal expression (1) involves the sum over all the possible worldlines joining x' and x in time β, weighted by the action (3). To provide a numerical realization of the worldline path integral (1), some comments are needed. First of all, it is not possible to span numerically the entire configuration space, hence a selection of the worldlines must be made: let us denote by N_WL the number of worldlines which are considered. Secondly, each worldline has to be discretized: this is usually done with respect to the affine parameter, taking a number N of points per worldline. As pointed out in [29], here the discretization is not realized on the points x(τ) of the manifold, but directly on the affine parameter τ. To fulfil the above two requirements, what is usually done in WLMC approaches [23,25-27,29] is to compute worldline averages like ⟨e^{−S_POT}⟩ = ∫_{x(0)=x(β)=x} Dx e^{−S_KIN[x]} e^{−S_POT[x]} / ∫_{x(0)=x(β)=x} Dx e^{−S_KIN[x]} (5), with S_KIN[x] and S_POT[x] being the kinetic and potential terms in (3), respectively. In this way, the exponential e^{−S_KIN} characterizing each worldline can be used as a weight function to build the worldlines used for the calculation. To be more precise, worldlines are constructed point by point using Monte Carlo algorithms: each trajectory produced then satisfies a Monte Carlo selection criterion, which allows one to consider all the trajectories obtained as a set of almost equally meaningful trajectories to simulate the quantum propagation of the point-particle in spacetime. The aforementioned choice of the WL weight function is the most popular in the literature, as it has an evident feature of universality: in such a case the selection of the worldlines is independent of the model considered. Once the configuration space is sampled with the points x_k^(s) of the worldlines, where all the worldlines start and end at x, expression (5) can be approximated by the arithmetic average ⟨e^{−S_POT}⟩ ≈ (1/N_WL) Σ_{s=1}^{N_WL} e^{−S_POT[x^(s)]} (7), for which, clearly, a good estimate of the path integral is obtained when N and N_WL are large enough. Expression (7) is the main prescription for our WLMC computations. Note that, unlike (5), the latter does not occur in the form of a weighted average. However, the weighting procedure takes place beforehand, upon the selection of the different worldlines present in the ensemble. As mentioned before, different algorithms have been developed in order to generate worldlines. Historically, the most commonly used algorithm is VLOOPS, developed by Gies [23]. Here we find it convenient to use the YLOOPS routine developed by Edwards et al. [29], with a slight modification due to the inclusion of a regularizing quadratic contribution in the kinetic term, which will be taken into account for the computations in curved space. The algorithm is designed to diagonalize the discretized version of the flat kinetic term in (3), with vanishing Dirichlet boundary conditions x_1 = x_N = 0. The details are discussed in Appendix A. A simple flat-space test In order to conclude this preliminary section, let us review a simple example where the WLMC setup can be applied, i.e.
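To make the prescription (7) concrete, the following minimal Python sketch computes a WLMC average. It is a sketch under stated assumptions: the Brownian-bridge loop generator is a generic stand-in, not the YLOOPS routine of [29], and the quartic potential is an arbitrary illustration.

```python
# A minimal WLMC sketch of Eq. (7). Assumptions: the loop generator below is
# a plain Brownian bridge (a stand-in, NOT the YLOOPS algorithm of Ref. [29]),
# and the quartic potential is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge_loop(n_points, beta, dim):
    """One closed free worldline with x(0) = x(beta) = 0, discretized in proper time."""
    dtau = beta / n_points
    steps = rng.normal(0.0, np.sqrt(dtau), size=(n_points, dim))
    path = np.cumsum(steps, axis=0)
    tau = np.arange(1, n_points + 1)[:, None] / n_points
    return path - tau * path[-1]  # subtract the linear drift so the loop closes

def wlmc_average(potential, n_wl, n_points, beta, dim):
    """Arithmetic average of exp(-S_POT) over an ensemble of free loops, as in (7)."""
    dtau = beta / n_points
    weights = [np.exp(-dtau * np.sum(potential(brownian_bridge_loop(n_points, beta, dim))))
               for _ in range(n_wl)]
    return np.mean(weights)

V = lambda x: np.sum(x**2, axis=1) ** 2  # illustrative quartic potential |x|^4
print(wlmc_average(V, n_wl=1000, n_points=100, beta=1.0, dim=4))
```

Swapping the bridge sampler for the actual YLOOPS construction would only change how the ensemble is generated; the average (7) itself is computed exactly as above.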
the propagator of a bosonic harmonic oscillator with null endpoints in a D-dimensional Euclidean space. Such a quantity can be written as K(0, 0; β) = ∫_{x(0)=x(β)=0} Dx e^{−∫_0^β dτ [ẋ²/2 + x²/2]} (8), where we have assumed ω = 1 = m. The analytical result is well known and reads K(0, 0; β) = (2π sinh β)^{−D/2} (9). On the other hand, the propagator can be written as K(0, 0; β) = ⟨e^{−S_POT}⟩ ∫_{x(0)=x(β)=0} Dx e^{−S_KIN[x]} (10), where we have multiplied and divided by the free path integral, which provides the Feynman factor ∫_{x(0)=x(β)=0} Dx e^{−S_KIN[x]} = (2πβ)^{−D/2}. Expression (10) thus reads K(0, 0; β) = (2πβ)^{−D/2} ⟨e^{−S_POT}⟩ (11) and, combining (11) with (9), we obtain G(β) ≡ ⟨e^{−S_POT}⟩ = (β/sinh β)^{D/2} (12), which is accessible to WLMC calculation. Indeed, in Fig. 1 we report a WLMC calculation of G(β) in four Euclidean dimensions, with N = 1000 points per loop and N_WL = 1000 worldlines. It shows satisfactory agreement with the analytical result in (12). In order to remove correlations between the data, this calculation - as well as those presented henceforth - has been performed using a different set of worldlines for each β-value [29]. As we have seen, WLMC in flat space is relatively easy to implement and, even for not very large values of N and N_WL, it faithfully reproduces the theoretical curve. We will show that the flat-space WLMC setup can also be fruitfully exploited for numerical applications in curved-space problems: the price to pay is the introduction of further potential-like terms that incorporate the curvature effects. Worldline Monte Carlo in curved space Similarly to what we did in the previous section for the flat-space case, let us consider the classical Hamiltonian of a D-dimensional bosonic scalar point-particle in Euclidean curved space, H = (1/2) g^{μν}(x) p_μ p_ν (13). At the quantum level, the Einstein-invariant Hamiltonian operator reads Ĥ = −(1/2) g^{−1/2}(x) ∂_μ [g^{1/2}(x) g^{μν}(x) ∂_ν] (14) (see [10] for a complete treatment of particle path integrals in curved space), with g(x) = det g_μν(x), and the heat kernel associated to (14) can be expressed as the following particle path integral: K(x, x'; β) = ∫ Dx e^{−S[x]} (15), where the Einstein-invariant formal measure reads Dx = ∏_τ √(g(x(τ))) d^D x(τ) (16) and the resulting particle action, S[x] = ∫_0^β dτ [ (1/2) g_μν(x) ẋ^μ ẋ^ν + V_CT(x) ] (17), is a one-dimensional non-linear sigma model. With respect to the flat-space case, there are three main differences: the root factor in (16), the spacetime-dependent metric tensor g_μν(x), and the counter-term V_CT(x) appearing in (17). A few comments are in order. In perturbative computations, one needs a regularization scheme to treat divergent Feynman diagrams, and V_CT(x) is the suitable, finite counterterm, which depends on the adopted regularization scheme. On the other hand, in non-perturbative computations no divergences are expected to occur, since these Feynman path integrals represent quantum mechanical transition amplitudes, which are thus finite. However, in curved spaces there are ordering ambiguities, as the kinetic Hamiltonian mixes coordinates and momenta. Yet, a well-known scheme that allows one to obtain Feynman path integrals from the associated quantum mechanical transition amplitudes exists: it is the Time Slicing approach, which requires one to Weyl-order the Einstein-invariant Hamiltonian in order to employ the so-called "mid-point rule" in the construction of the path integral. Such reordering produces an order potential, V_CT(x), which is the same potential needed at the perturbative level, since Time Slicing also encodes a regularization procedure. For historical reasons we will refer to this potential as the "counterterm potential". Finally, the root factor in (16) renders the formal measure Einstein invariant: for perturbative calculations it is often customary to exponentiate the √g(x) in terms of a particle path integral over Lee-Yang ghost fields.
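As a hedged illustration of the flat-space test just described, the sketch below estimates ⟨e^{−S_POT}⟩ for the oscillator potential and compares it with the closed form (β/sinh β)^{D/2} of (12); the loop generator is again a generic Brownian bridge rather than YLOOPS.

```python
# Sketch of the harmonic-oscillator benchmark: WLMC estimate of <exp(-S_POT)>
# with V = x^2/2 (omega = m = 1), compared against (beta/sinh beta)^(D/2).
# The Brownian-bridge sampler is a generic stand-in for YLOOPS.
import numpy as np

rng = np.random.default_rng(1)
D, N, N_WL = 4, 1000, 1000

def loop(beta):
    dtau = beta / N
    steps = rng.normal(0.0, np.sqrt(dtau), size=(N, D))
    path = np.cumsum(steps, axis=0)
    tau = np.arange(1, N + 1)[:, None] / N
    return path - tau * path[-1]  # closed loop, x(0) = x(beta) = 0

for beta in (0.5, 1.0, 2.0):
    dtau = beta / N
    est = np.mean([np.exp(-dtau * 0.5 * np.sum(loop(beta) ** 2)) for _ in range(N_WL)])
    exact = (beta / np.sinh(beta)) ** (D / 2)
    print(f"beta = {beta}: WLMC = {est:.4f}, analytic = {exact:.4f}")
```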
However, for our non-perturbative, numerical computation we find it more convenient to keep it as an external factor, evaluated at each point of the various worldlines involved in the sums. Now, the key idea needed to perform a numerical WLMC average in curved space (like the one in (5)) is to extract the flat-space kinetic term from the curved one and use it to sample the worldlines, treating the remaining part as a potential contribution, so that it is possible to bring the curved-space problem back to a flat-space one - provided one takes into account the suitable counterterm and measure factors. Namely, after a flat-space Monte Carlo sampling of the worldlines, the path-integral average is given by the discretized expressions (19)-(21), where x̄_k = (x_k + x_{k−1})/2 is the midpoint between adjacent points of the worldline s. In particular, the first potential term appearing in (20) is the one arising from (18): it is built from the deviation of the metric from the flat one, contracted with discrete derivatives of the worldline, and reads as in (22). It is easy to verify that, as discussed in Appendix A, definitions (20) and (22) respect the backward convention for the first derivative. Focusing on the counter-term potential V_CT, it ought to be observed that the proper-time discretization of the path integral which we performed is equivalent to the Time Slicing (TS) procedure adopted to regularize particle path integrals in curved space from first principles [10]. The discretized counter-term (23) follows accordingly. Now we have all the ingredients to test our setup. In the following section we consider a case study which comes in particularly handy for our purpose: the study of the free heat kernel of a scalar point particle constrained on a D-sphere. Recently, in Ref. [30] (see also [31]), it was shown that, using Riemann normal coordinates (RNC), it is possible to map the problem of computing the heat kernel on spheres to the evaluation of a flat-space heat kernel with a suitable potential which takes the curvature effects into account. Moreover, as found in [21], for a maximally symmetric geometry the metric tensor in RNC can be neatly written in closed form as the flat-space metric plus a curvature-dependent contribution. Hence, our numerical problem can be studied both from a purely flat perspective, by mapping it into the effective flat-space model of [30] - thus using the conventional flat-space Worldline Monte Carlo reviewed in Section 2 - and using the curved-space WLMC setup discussed in Section 3. In other words, we use the effective flat-space model as a benchmark test for our curved construction. 4 Case study: free scalar heat kernel on maximally symmetric spaces Let us briefly review the model [30] which we are going to use to implement our numerical WLMC method in curved space. The Hamiltonian operator of a free scalar point-particle in curved space can be used to build its heat kernel, satisfying the operatorial equation ∂_β K̂ = −Ĥ K̂. The matrix elements of the kernel operator satisfy the heat equation (28). Defining a rescaled kernel K̃ and using RNC, the curved-space heat equation (28) can be turned into the flat-space heat equation (30) if the curved space is maximally symmetric (a D-sphere for definiteness). Moreover, the effective potential appearing in (30) can be expressed in closed form, Eq. (33), and can be used in the associated path-integral representation of the problem, Eq. (32). Hence, the heat kernel for a particle on a maximally symmetric space can be obtained through a linear sigma model with an effective potential that encodes the curvature effects. We use this effective representation of the heat kernel to test our numerical method.
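To illustrate how such a curved-space reweighting can be organized in code, here is a schematic evaluation of the extra weight carried by one sampled worldline, using backward differences and midpoints in the spirit of (19)-(22). The diagonal toy metric is our own placeholder, not the RNC metric of [21,30], and the counter-term (23) is deliberately omitted.

```python
# Schematic curved-space reweighting of one worldline: the deviation of the
# kinetic term from flat is evaluated with backward differences at midpoints,
# and the sqrt(g) measure factor is accumulated pointwise. The metric below is
# a placeholder toy metric (NOT the RNC metric of the paper); the counter-term
# V_CT of Eq. (23) and any overall normalization are omitted.
import numpy as np

def metric(x):
    """Toy diagonal metric g_ij(x) = (1 + |x|^2) delta_ij, for illustration only."""
    return (1.0 + np.sum(x**2)) * np.eye(len(x))

def curved_weight(path, beta):
    """exp(-S_extra + log-measure) for one discretized worldline (flat kinetic part excluded)."""
    n = len(path)
    dtau = beta / n
    s_extra, log_measure = 0.0, 0.0
    for k in range(1, n):
        xdot = (path[k] - path[k - 1]) / dtau   # backward difference
        xmid = 0.5 * (path[k] + path[k - 1])    # midpoint rule
        delta_g = metric(xmid) - np.eye(path.shape[1])
        s_extra += 0.5 * dtau * xdot @ delta_g @ xdot  # kinetic deviation from flat
        log_measure += 0.5 * np.log(np.linalg.det(metric(path[k])))  # sqrt(g) factor
    return np.exp(-s_extra + log_measure)
```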
Specifically, we summarize in Table 1 the details of the two numerical calculations that we have performed. The first one, which we refer to as the "Effective Potential Method" (EPM), makes use of the potential given in Eq. (33) to effectively reproduce the heat kernel of a scalar particle on a maximally symmetric space through a linear sigma model (flat metric). In the second calculation, we compute the same heat kernel making use of a "genuine" non-linear sigma model action, which amounts to considering the two potentials (22) and (23) within the WLMC technique - we refer to this case as the "Non-linear Sigma Model Method" (NSMM). Note that for the EPM computation we have adopted the original YLOOPS algorithm, developed in [29], whereas for the NSMM we found it convenient to use a generalized version of YLOOPS, where a tiny quadratic term is included in the kinetic part of the action used for the sampling. Let us stress that such a mass term is used solely at the stage of the construction of the worldline ensemble, to improve the convergence, and does not contribute to the potential term, which is left untouched. The details of the generalized YLOOPS algorithm are discussed in Appendix A, whereas its numerical outcomes are shown in Appendix B. Figure 2: Calculations of the heat kernel of a free scalar particle on a D = 4 sphere with endpoints x = x' = 0 for various parameters (N_WL, N), both using the effective potential method (green data) with the known potential and using the non-linear sigma model method (blue data). The red curve is the analytical, perturbative computation for small β performed in [30]. In Figure 2, we report the results of our calculations for a D = 4 sphere with M = 1. Figures 2(a) and 2(b) show a comparison between the data of the Non-linear Sigma Model Method (NSMM, blue) and the Effective Potential Method (EPM, green), together with the perturbative, analytical computation performed in [30], for pairs of values (N_WL, N) = (100, 100) and (3000, 3000), respectively. As we can see, the EPM is considerably more precise than the NSMM, presumably because of the form of the associated potential: in the first case it is expressed in the regular form (33), whilst in the second case by means of numerical derivatives (22), and this reduces precision. Thus, we take the green points as a benchmark to test the blue ones. We notice that the agreement between the two calculations gets better as the discretized approximation improves, i.e. when N_WL and N become large. In fact, the absolute value of the maximum relative error passes from ∼ 35% with (N_WL, N) = (100, 100) (Fig. 2(a)) to ∼ 4% with (N_WL, N) = (3000, 3000) (Fig. 2(b)). It has been noticed that WLMC calculations generally lose accuracy at large time values due to the numerical phenomenon of undersampling of worldlines [29]. A priori, it would seem that our results confirm this feature. However, in the present case the loss of accuracy is to be ascribed to a lower efficiency in representing the derivative interactions. In the attempt to increase the efficiency of the code, one may think of inserting a higher-order discrete derivative into (22), such as the two-point method or, say, the five-point method. However, the specific form of the counter-term (23) is compatible only with a first-order backward derivative, as it was derived in this way [10], and numerical simulations confirmed this.
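For concreteness, the discrete-derivative stencils just mentioned read as follows; the function names are ours, we read the "two-point method" as the symmetric central stencil, and, as stated above, only the backward difference is compatible with the counter-term (23).

```python
# Discrete derivatives on a uniformly discretized worldline (step dtau). Only
# the first-order backward difference is consistent with the Time-Slicing
# counter-term, as discussed in the text; the others are shown for comparison.
import numpy as np

def backward(x, dtau):
    """First-order backward difference, the convention used in the WLMC setup."""
    return (x[1:] - x[:-1]) / dtau

def central_2pt(x, dtau):
    """Second-order symmetric (two-point) stencil on interior points."""
    return (x[2:] - x[:-2]) / (2.0 * dtau)

def central_5pt(x, dtau):
    """Fourth-order five-point stencil on interior points."""
    return (-x[4:] + 8.0 * x[3:-1] - 8.0 * x[1:-3] + x[:-4]) / (12.0 * dtau)
```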
What we observe is that, locally, a decrease of the curvature improves the agreement between NSMM and EPM - the precision gains roughly two orders of magnitude (the full-scale error changes from 4% to 0.04%) in passing from M = 1 to M = 0.1. This is due to the fact that, magnifying the radius of the sphere, the space in the vicinity of x = x' = 0 gets closer to flat, reducing the stronger effects which affect the precision of the NSMM. In general, the different degree of precision between the two methods considered can be ascribed to the fact that - as already observed in perturbative computations [30] - the flat effective model (EPM) takes the curvature effects into account in a much more efficient way than the non-linear sigma model, as in the former the curvature effects are all encoded in the effective potential. In fact, in the non-linear sigma model, derivative interactions do appear, which are less efficiently represented in the heat-kernel expansion. This can be understood by rewriting the action (17) in terms of a rescaled time s = τ/β, which yields an action in which the potentials appear at an order β² higher than the derivative terms. Thus, in a perturbative expansion about a fixed point, one needs higher-order terms from the Taylor expansion of g_μν to match the corresponding orders in the potential. On the other hand, the flat effective model - unlike the non-linear sigma model - has, so far, been shown to work only in maximally symmetric backgrounds. Finally, we present the calculation of the free-particle heat kernel constrained on a D = 4 maximally symmetric Anti-de Sitter space, i.e. a 4-hyperboloid. Following [30], the sectional curvature is negative, M² < 0, and, defining |M| = √(−M²), the metric retains the same form with a correspondingly modified function f̃. All other quantities appearing in (32) are defined in the same way as before, whereas in the potential (33) we have the replacement M → iM. Figure 5 shows a satisfactory agreement between the curved method and the flat one for (N_WL, N) = (1000, 1000), for a range of β-values which is now extended to 10. The reason for such agreement at large β may reasonably be ascribed to the exponential suppression of the heat kernel on this scale, implying a considerable reduction of the Monte Carlo fluctuations too. Figure 5: Calculations of the heat kernel for the free scalar particle on a constant-curvature AdS space with D = 4 and |M| = 1. An extension to open worldlines So far we have seen an efficient way to numerically compute particle path integrals in curved space, with a direct application to the case of the heat kernel of a free scalar particle on a maximally symmetric space with Dirichlet boundary conditions at the origin, x(0) = x(β) = 0. In this section we present a possible way to easily extend the above computation to the case of non-zero boundary conditions. First of all, let us denote the endpoints by x(0) = y and x(β) = z for the particle x(τ) propagating from time τ = 0 to time τ = β. The path integral we are interested in, I(y, z; β), is defined by Eq. (41), whose potential Ṽ(x) includes the counter-term (23) and a possible external potential. Now, we perform a splitting of the particle path into the classical one, x_cl(τ) (i.e. a straight line, being the solution of the classical equation of motion for the free particle), and quantum fluctuations q(τ): x(τ) = x_cl(τ) + q(τ). In particular, the fluctuations satisfy q(0) = q(β) = 0.
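Before assembling the full average, here is a minimal sketch of this splitting, with hypothetical helper names: the classical free path is the straight line from y to z, sampled null-boundary fluctuations are shifted by it before any potential is evaluated, and the exponential of the free classical action, e^{−(z−y)²/2β} for unit mass, is the endpoint-dependent factor that appears in front of the average below.

```python
# Sketch of the splitting x(tau) = x_cl(tau) + q(tau) for open worldlines.
# Helper names are hypothetical illustrations, not the paper's code.
import numpy as np

def classical_path(y, z, n_points, beta):
    """Straight-line solution of the free equation of motion, x(0) = y, x(beta) = z."""
    tau = np.linspace(0.0, beta, n_points)[:, None]
    y, z = np.asarray(y, float), np.asarray(z, float)
    return y + (z - y) * tau / beta

def displaced_potential_action(q, y, z, beta, potential):
    """S_POT evaluated on x_cl + q, where q has null boundary values q(0) = q(beta) = 0."""
    n = len(q)
    dtau = beta / n
    x = classical_path(y, z, n, beta) + q   # displace fluctuations by the classical path
    return dtau * np.sum(potential(x))

def free_action_factor(y, z, beta):
    """exp(-S_cl), with S_cl = (z - y)^2 / (2 beta) the free classical action (unit mass)."""
    diff = np.asarray(z, float) - np.asarray(y, float)
    return np.exp(-np.sum(diff**2) / (2.0 * beta))
```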
Hence, the average of the path integral (41) takes the form of Eqs. (44)-(47). As these equations show, it is possible to calculate the averaged path integral I(y, z; β) with non-vanishing endpoints in terms of another one with null Dirichlet boundary conditions (like those computed in Section 4), provided a few replacements/adjustments are made, namely: • a factor which depends upon the squared difference of the endpoints - corresponding to the free flat action - appears in front of the average (44); • the sampled points of the worldlines around zero (which here are denoted by q) must be displaced by the corresponding points of the classical path x_cl when we evaluate the potentials (46) and (47). In Figure 6 we report the calculation of the heat kernel of a free scalar particle propagating on a maximally symmetric space with D = 4, from the point y = (0, 0, 0, 0) to the point z = (5, 5, 5, 5), done using both numerical methods introduced above and showing very good agreement. Conclusions and Outlook In the past couple of decades the numerical Monte Carlo approach to the worldline formalism in flat space has seen a variety of applications. In the meantime, worldline techniques have been extended to QFT in curved spaces, but restricted to perturbative, analytic computations. The present manuscript is intended as a first attempt to link these two avenues, in order to extend the worldline approach to QFT in curved space beyond analytic perturbation theory. Applications can be numerous. Firstly, the study of quantum field theory effective actions can be extended beyond the regime where the inclusion of gravity is treated perturbatively, as for instance in the derivation of gravitational corrections to Euler-Heisenberg lagrangians [15]. On the other hand, the effect of curvatures in strongly-coupled fermionic models, such as the Gross-Neveu model, which describes the low-energy limit of several physical systems, can be studied with great efficiency with the present setup. In fact, in flat space, the Worldline Monte Carlo approach to large-N fermionic models was already successfully considered [28,33], whereas, at the perturbative level, the heat-kernel expansion of the Gross-Neveu model in 3d curved space with constant curvature was studied, for example, in Ref. [34]. The results that we have presented here clearly show the possibility of performing worldline path integrals associated to scalar point particles moving in curved space by means of Monte Carlo numerical procedures. In the present manuscript, we have considered maximally symmetric geometries. By using a recently studied effective flat-space model - which encodes the curvature effects inside a suitable potential - as a benchmark, we have described a possible extension of the Worldline Monte Carlo to curved spaces with maximal symmetry. We have found it convenient to add a mass term in the kinetic action that serves to construct the worldline ensemble. Let us stress that, although the computation presented above focuses on maximally symmetric spaces, in the numerical NSMM approach there is a priori no particular constraint on the curved-space background, nor is there any limitation on the chosen set of coordinates. However, as in other numerical methods, the main drawback of the present approach is the necessity of having a specific background, dependent on a finite set of real parameters.
The construction described in the present manuscript focuses on the computation of the diagonal part of a particle heat kernel, which is linked to the computation of one-loop effective actions. However, a possible strategy to simulate open worldlines has been presented, which might also be used to study dressed particle propagators, for instance. Finally, the inclusion of spinorial degrees of freedom would certainly be a welcome generalization. This problem can be tackled either with the inclusion of suitable matrix-valued potentials (spin factors) or in terms of spinning particle models, which involve Grassmann-odd coordinates along with the geometric, Grassmann-even coordinates. Numerical approaches in the presence of Grassmann-odd variables were already considered in [35], and it would be helpful to apply such a construction to the present worldline Monte Carlo method. A Modified YLOOPS algorithm To diagonalize the flat kinetic term (3), we first define the discretized derivative as the backward difference ẋ_k = (x_k − x_{k−1})/Δτ. The index k parameterizes the discretized x-points in the same way as the continuous parameter τ parameterizes the continuous x-points. The full kinetic term is then proportional to the summation Σ_k (x_k − x_{k−1})², which we generalize by adding the quadratic α-term. Following the same steps as in [29], one obtains modified diagonalization coefficients C_k^(α), in terms of which the loop is constructed. It is easy to see that the coefficients C_k^(α) reproduce those of [29] for vanishing α. B WLMC convergence with a tiny mass term Adding a tiny mass term to the sampling algorithm of the worldlines (cf. Eq. (50)) has an important effect on WLMC computations. In Figure 7, we report two calculations of the heat kernel introduced in Section 4. Both calculations have been performed with the same parameters, namely N_WL = 1000, N = 1000, D = 4 and M = 1; however, for the one on the left a tiny fictitious mass has been introduced in the sampling, whilst for the other it has not. It can easily be noticed that 7(a) follows the expected behaviour with good accuracy, while the data in 7(b) are not only much more scattered but also qualitatively deviate from the green reference points. In this context, the addition of a mass term plays the role of a regulator for the sampling of the worldlines. We stress that this fictitious mass has not been taken into account during the evaluation of the potential; rather, its only role is to modify the diagonalization coefficients (53) appearing in (51) and (52). Finally, we investigated a possible qualitative dependence of the fictitious mass which minimizes the error between the non-linear sigma model method and the effective potential method on the curvature of the maximally symmetric space. We considered the hyperboloid case with |M| ranging from 1 to 0.01. However, nothing would forbid considering the positive-curvature case, and the results are expected to hold unchanged. In Figure 8 we report the cases |M| = 1 and |M| = 0.01 (intermediate curvatures exhibit the same behaviour). Each point of the plot represents the average error between a simulation with the non-linear sigma model method and the associated effective potential method for different values of the α parameter. It can be seen that the optimal value α_min does not change over two orders of magnitude of the curvature of the space. Hence we conclude that the fictitious mass parameter α is substantially curvature-independent.
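Since the explicit coefficients C_k^(α) are not reproduced here, the sketch below generates the same ensemble in a different but distributionally equivalent way, assuming the discretized Gaussian action described above: it Cholesky-factorizes the precision matrix of the kinetic-plus-mass action with Dirichlet conditions x_0 = x_N = 0. It is a stand-in for, not a transcription of, the modified YLOOPS routine.

```python
# Hedged stand-in for the modified YLOOPS step: exact Gaussian sampling of
# exp(-sum_k (x_k - x_{k-1})^2 / (2 dtau) - alpha^2 dtau sum_k x_k^2 / 2)
# with Dirichlet conditions x_0 = x_N = 0, via a Cholesky factor of the
# precision matrix. Equivalent in distribution to, but not literally, YLOOPS.
import numpy as np

def sample_dirichlet_path(n, beta, alpha, dim, rng):
    dtau = beta / n
    # Precision matrix of the interior points x_1 .. x_{n-1}:
    # (1/dtau) * tridiag(-1, 2, -1) + alpha^2 * dtau * identity
    main = (2.0 / dtau + alpha**2 * dtau) * np.ones(n - 1)
    off = (-1.0 / dtau) * np.ones(n - 2)
    M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    L = np.linalg.cholesky(M)
    z = rng.normal(size=(n - 1, dim))
    interior = np.linalg.solve(L.T, z)  # x = L^{-T} z  =>  x ~ N(0, M^{-1})
    return np.vstack([np.zeros(dim), interior, np.zeros(dim)])

rng = np.random.default_rng(2)
path = sample_dirichlet_path(n=100, beta=1.0, alpha=0.1, dim=4, rng=rng)
```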
Figure 8: Comparison between heat kernel calculations for a free scalar particle on the hyperboloid for |M| = 1, 0.01. Each point denotes a single simulation: on the horizontal axis we report the value of the fictitious mass parameter α adopted, whereas on the vertical axis we report an arithmetic average of the distance between the data of the non-linear sigma model method and those of the effective potential method (assumed as a benchmark). The α-value which minimizes the error does not qualitatively depend on the curvature of the space.
7,291.2
2020-06-04T00:00:00.000
[ "Physics" ]
Dynamical Aspects of Knowledge Evolution Undeniably, in this rapidly changing world some knowledge granules change over time. Tracking these changes seems to be one of the most crucial processes in knowledge management. Every potential change is a result of knowledge adoption and application to solve a given problem or task in particular domains. However, there is a lack of a model that provides an event-driven framework, along with the core adoption process explicitly expressed with related factors, which together serve as an efficient tool to adopt and reuse knowledge on one hand, while on the other, to measure and evaluate the various aspects of knowledge quality and usefulness. This paper aims to fill this gap by introducing a knowledge adoption process and an ontology-aided knowledge encapsulation (OAKE) model. While the former breaks down the tacit adoption process into two explicit sub-processes and measurable factors, the latter exposes knowledge evolution over time by a sequence of recorded events. Introduction "Tacit, complex knowledge, developed and internalized by the knower over a long period of time, is almost impossible to reproduce in a document or database. Such knowledge incorporates so much accrued and embedded learning that its rules may be impossible to separate from how an individual acts" [1]. Moreover, considering the issue of knowledge reproduction, one could ask: how does knowledge change over time, and what are the origins and reasons of the occurring changes? Similarly, advanced-in-age knowledge has accumulated so many changes as a result of the cognition of its nature, and has been so enriched by learning, that its evolution may be impossible to reconstruct. Knowledge is a wide and abstract term, which has been the subject of an epistemological discussion of Western philosophers since the times of ancient Greece. Since the second half of the 20th century, knowledge has been widely studied in numerous research papers, uncovering many definitions, contexts and phenomena, and in the end leading to a legitimate new scientific discipline, defined as knowledge management [2,3,4]. For an organization, knowledge has become the most powerful leverage to achieve a competitive advantage; therefore, it is crucial to effectively manage its own resources [5,6,7]. These days, people and machines produce countless volumes of data and information, consciously and intentionally transformed into knowledge. All of the aforementioned are important assets in knowledge-driven environments, and the last is by far the most labour- and time-consuming [8,9,10]. In consequence, some employees spend the majority of their working hours doing manual, highly demanding intellectual work, supported by computers processing and manipulating large amounts of data as an input, and producing information or even knowledge as an output [11,12,13]. As a result, a new concept of an employee was coined: "a knowledge worker", whose job primarily involves the creation, distribution and application of knowledge [14]. Peter Drucker is credited by many as the first to use this term, in his 1959 book "Landmarks of Tomorrow" [15,16].
Data sets encoded in a computer memory differ in format, size and type. In general use, there are two primary data formats, binary and text, and four primary data types: text, drawing, movie and voice. Ordered sequences of characters, images and spoken words are perceived as explicit and unique information objects. Here, we can point out objects that are in everyday use, such as documents, presentations and spreadsheets, email, voice and video messages, and weblogs, forums and pages. Each object processed and interpreted by an individual human mind, applicable and legitimate in a specified environment, where the consequences of the application are known or can be predicted, is considered to be a knowledge object. All of these objects, gathered and redacted, cleaned and reprocessed, organized and integrated in one consistent repository, along with a user interface that facilitates SCRUD operations (an acronym for search, create, read, update and delete), constitute a unified platform for knowledge workers. However, knowledge workers are still looking for a comprehensive solution to manage knowledge in such a manner that it will not only serve as pure technology but also provide interaction with other humans and available resources [17,18,19]. At present, in the development of knowledge management (KM), to the best of our knowledge, there is a lack of a consensual framework, or generic process model, for tracking knowledge evolution; instead, to some extent, each organization follows its own set of principles, design criteria and practices in this area. Most existing frameworks and tools broadly touch the area of KM, and only a few are targeted specifically at tracking knowledge evolution. This paper aims to fill this gap by proposing an ontology-aided knowledge encapsulation (OAKE) model, along with a knowledge cognition model. The rest of the paper is organized as follows. The literature review is given in Section 2. In Section 3, first the knowledge cognition model is introduced, followed by the OAKE model. Final conclusions are included in Section 4. Literature review The recent interest in knowledge management (KM), observed both in business and science, is nothing new. Therefore, it is no secret that nowadays, information and communication technologies (ICT) are the basic means to efficiently support every phase of the KM process. However, diverse technologies, such as knowledge management systems, knowledge discovery systems and knowledge-based systems, are currently working with different types of knowledge [20,21,22]. In our paper, knowledge management is a term for any operations focused on knowledge granules, embracing all the typical phases: discovery, registration, transformation and utilization. In other words, KM is a discipline that covers ideas and concepts from a variety of other disciplines, including artificial intelligence, data mining, distributed databases, information systems, intellectual capital and innovation [23,24,25].
Knowledge management is the process of continually managing knowledge of all kinds to meet existing and emerging needs, to identify and exploit existing and acquired knowledge assets and to develop new opportunities [26]. From a practical business perspective, it is a deliberate, systematic business optimization strategy that selects, distills, stores, organizes, packages, and communicates information essential to the business of a company in a manner that improves employee performance and corporate competitiveness [27]. In a narrow sense, it can be defined as a set of principles, processes, and techniques leading to the creation, organization, distribution, use and exploitation of knowledge [28,29]. Crucial for the topic of this paper is the consideration of the phases directly connected with knowledge dynamics. In the next sections these selected KM operations will be discussed. Knowledge transformation There are two basic forms of knowledge: tacit and explicit [30,31]. The former refers to that which is unarticulated, undocumented and held in people's heads, while the latter is expressed, structured, codified and accessible to those other than the individuals originating it [32]. Thus, knowledge exists on a spectrum between these extremes, and its transformation means moving from one extreme to another [33,34,35]. There are many reasons to engage in knowledge transformation. The same or very similar problems do not need to be solved again - the particular pieces of knowledge can be reused. Effective reuse is apparently related to the effectiveness of the organization [36], and is an even more frequent concern when compared to knowledge creation, being viewed as somehow more important and difficult to manage [37]. In the theory of knowledge reusability, Markus [38] emphasizes the role of knowledge management systems and knowledge repositories, often called organizational memory systems, in the efficient preservation of "intellectual capital" [39,40,41]. Basically, the knowledge transformation process can be identified with the modification of existing knowledge, or even with its creation, applying, for example, process-driven approaches [42,43,44]. Knowledge codification The codification of knowledge is the process of converting knowledge into a form in which it can be handled by a particular technology to store, transfer and share it [45]. In addition, it makes knowledge visible, accessible and usable in a form and a structure meaningful to the user [46]. Note that the knowledge code used during implementation (moving it to computer memory) is crucial to evaluating its usefulness and appropriateness. Coded knowledge should have a unique identity and an adequate form of representation, such as a rule, a decision table or tree, a model for problem solving and case-based reasoning, or a knowledge map. To store and disseminate knowledge across an organization, various IT technologies, such as databases [47], intranets [48,49] and business intelligence tools, are usually put into action [50,51,52]. In such a context, the codification phase can be considered a supporting operation of knowledge transformation, one stressing its more technological nature [53,54,55].
Knowledge adoption and reuse Knowledge adoption concerns an internalization phase of organizational knowledge transfer [56], in which explicit information is transformed into internalized knowledge and meaning [57]. In general, adoption usually begins with the recognition of the need for information, then moves to searching in possessed repositories, next to the initial decision to accept the received information, followed by validation in practice, and ending with absorption. On the other hand, knowledge provides the means to analyze and understand data and information [58,59,60], delivering the circumstances for an internal agreement between what we know and what we want to know. The process of knowledge reuse consists of the following phases: capturing, packaging, distributing and reusing [38,61]. In the human mind, the latter involves both recall and recognition, while the former concerns information attributes such as the author, the date of creation, the representation form, and eventually the storage location. Moreover, the latter tries to determine the relevance degree of the incoming information, and possibly append it to pending knowledge to be applied again [62]. Retained and reused knowledge can improve project management capabilities [63], support managers in the decision-making process [64,65,66] and guide product design [67]. To be innovative and develop novel products and services, organizations need to gain knowledge of both external and internal worlds [68,69]. To achieve these ends, the principal goal should be to focus on tracking changes occurring in internal bodies of knowledge. Ontology-Aided Knowledge Encapsulation Model All the mentioned knowledge phases are crucial in the created models supporting knowledge dynamics. In order to define a concept useful in modelling the dynamical aspects of knowledge, important assumptions should be declared. The name of the elaborated model comes from a conscious merger of the major concepts involved. While the first term, ontology-aided, may be unquestionable, the term encapsulation needs a brief explanation. By definition, data and any appropriate operations should be grouped together, i.e. encapsulated, and the implementation details of both should be hidden from the users [70]. A similar assumption was made in the elaborated model, where an operation is represented by an event. To implement the TBox part of the ontology, i.e. terminological knowledge declared as axioms and defined by a set of concepts and roles (the global axioms and core taxonomy), the Cognitum Ontorion system was used, with its built-in capability of English semi-natural language support [55,71]. This section begins with a description of the former model, which provides the operational foundation for the latter one. The knowledge adoption model Knowledge adoption has been defined in many different ways. Beesley and Cooper [72] defined it as "identification of new products, services, markets, or processes", while Brown [73] defined it as "the means through which policy-makers digest, accept then 'take on board' research finding". However, to the best of our knowledge, none of them reflects the general idea lying behind the nature of the process. Thus, for us, knowledge adoption means the acceptance of a state about the way things are and how they work, followed by the confirmation and judgment of its significance and value within the frame of the present context and individual beliefs [31,74].
These two actions we have previously defined as verification and evaluation, respectively, both included in the more universal term: validation [12,75]. For others, the former refers to reaching an agreement over the meaning of a term [76], involving concept matching and relation comparison [77,78], while the latter refers to the evaluation of quality and usefulness [79,80]. In terms of knowledge verification, three factors have been distinguished: adequacy, completeness and consistency [81]. The first factor corresponds to the degree of applicability or relevance to a given problem or task, the second refers to the degree to which the knowledge for completing a task or making a decision is passable and available, and the third refers to the degree of a logical match between the object and the content. In terms of knowledge evaluation, two factors have been identified: reliability and effectiveness. Both factors concern some kind of knowledge assessment; while the former reflects a degree of agreement with self-beliefs and experience, the latter refers to the outcomes of the applied knowledge. These general factors can be expanded and elucidated in the form of interpretable numeric, logic or fuzzy metrics, to an extent appropriate to the context and the size of the knowledge object. If some errors, obstacles or constraints are observed, a need for change in a body of knowledge occurs. The OAKE model The objective is neither to introduce a model which outlines all possible phases, tasks or relationships underlying the knowledge evolution process, nor to set up a strict list of guidelines to follow which positively affect organizational performance. Instead, the model highlights a few major factors that can expose the origins of and reasons for the changes occurring in particular bodies of knowledge over time (Fig. 2). The aim of building the model is to capture changes in such a way that allows us to query and infer from the gathered knowledge. It is based on observations collected from a requirements elicitation project for virtual on-line agents, where different groups of stakeholders, during the development of the knowledge base, reported heterogeneous requests to include itemized changes, often comparable or self-conflicting. Each change is represented by a unique event, performed by a knowledge worker on the knowledge object. The notion of a single event is structured and formalized in the form of an ontology that provides a common understanding of performed operations and perceived observations. Each single event object has a unique identifier and occurrence date, both automatically generated by the system. A knowledge worker inputs the subject, which should generally reflect the idea lying behind the event. Next, the type of operation performed on the knowledge object is selected, where a set of five options is available in multiple choice (apply, modify, read, run, print). The degrees of priority, applicability and relevance are assigned, where each can be defined as low, medium or high. Next, the knowledge worker indicates to what degree (void, partially, complete) they found a solution to a problem or a task in the particular knowledge object and, if necessary, can also add a comment and attach a file. The event object is connected through two separate relations with the knowledge object and the knowledge worker.
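A minimal sketch of the event object just described, with the field names and value sets taken from the text; the Python class and identifier conventions are hypothetical illustrations, not part of the Ontorion implementation.

```python
# Hypothetical encoding of the OAKE event object described above; field names
# and enumerations follow the text, everything else is an assumption.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional
import uuid

class Operation(Enum):
    APPLY = "apply"
    MODIFY = "modify"
    READ = "read"
    RUN = "run"
    PRINT = "print"

class Level(Enum):  # used for priority, applicability and relevance
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class SolutionDegree(Enum):  # degree to which a solution was found
    VOID = "void"
    PARTIALLY = "partially"
    COMPLETE = "complete"

@dataclass
class Event:
    subject: str                      # the idea lying behind the event
    operation: Operation              # operation performed on the knowledge object
    priority: Level
    applicability: Level
    relevance: Level
    solution_degree: SolutionDegree
    knowledge_object_id: str          # relation to the knowledge object
    knowledge_worker_id: str          # relation to the knowledge worker
    comment: Optional[str] = None
    attachment: Optional[str] = None  # path to an optionally attached file
    event_id: str = field(default_factory=lambda: uuid.uuid4().hex)  # system-generated
    occurred_on: date = field(default_factory=date.today)            # system-generated
```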
The knowledge object has a name which, specified by a user, should reflect its content in general terms, as well as an accurate type in which the information is encoded for storage in a computer file. The creation date is a predefined property that corresponds to the date of the first version, while the last-modified property shows the date when the last changes were made. A built-in mechanism provides unique version numbers for unique states of knowledge objects, assigned in increasing order to new developments. The knowledge worker is identified by their first and second name and may play two different roles: an author (a creator) of the knowledge object, responsible for the quality of its content by admitting and including incoming changes, or a user who simply utilizes available knowledge objects in decision-making processes. The history of changes is not visible in the knowledge object; however, the changes are stored in the ABox part of the ontology. This mirror of knowledge evolution over time facilitates various evaluations, which contribute to the refinement of existing knowledge and to the production of new knowledge. A concept of the implementation of changes is demonstrated in the next section. Knowledge dynamics The capture of ontology changes is triggered by either [82]: • changes in the domain, • changes in the shared conceptualization, • changes in the ontology specification. The first type is known from the area of database schema versioning. Domain evolution, reflected and described by the changes, concerns seven different facets [83]: • heterogeneous instances: over time, different occurrences of the same value have different meanings in a domain extension; for instance, if the organization merges or splits departments, then the preserved naming represents a different set of resources (e.g. employees, faculties); • cardinality changes: in particular, cardinality relationships between domains might also change over time; in other words, the number of occurrences in one entity which are associated with the number of occurrences in another is not always constant; for example, a 1-to-n relationship between departments and faculties may be changed to m-to-n as a result of new legal regulations; • granularity transition: values from an existing population, having different granularity, might be added to a domain extension; for instance, the numeration of rooms or buildings might be changed due to a merger or acquisition [84,85]; • encoding changes: particular values might also have an encoded meaning, which is neither known nor provided elsewhere; for example, the naming of projects successfully delivered is eventually different from the others (failed, cancelled, etc.; see [86]); • time zone and unit differences: organization sites use local time zones and units which differ globally; thus directly comparing such values may be misleading; • identifier changes: the organization's needs change over time; as a consequence, the indexing strategies may also change over time, leading to parallel or overlapping naming schemas; for instance, the codes of the products, previously 4-digit numbers, now having six additional zeros, are different for both the users and IT systems; • field recycling: in some systems it is difficult or even infeasible to alter certain database properties; in this case there might be a need to shrink the database or even implement a new instance with a different naming schema, replacing the existing one; for example, a company might shift from a hierarchical to a matrix structure,
remodeling data structures [87,88,89]. The second type of source of ontology changes concerns the assumption of the static nature of the shared conceptualization. Nowadays, it is at least naive to define a specification in terms of fixed settings, undeniably constant over time. On the contrary, many studies describe ontologies as dynamic networks of meaning [90,91,92]. Finally, the third type is associated with ontology encoding, which may vary in types and formats. Along with ontology evolution, engineers are currently also facing the issue of merging ontologies [93,94,95]. In order to track the changes in the ontology presented in the OAKE model, we reduce the list of objects (Fig. 3). Conclusions This paper introduces two models which bring a contribution to the discipline of knowledge management. Both are the result of broadly conducted research of our own and participation in other research projects, supported by a critical analysis of the literature, narrowed down to major concepts and those highly related to the discussed subject. The OAKE model, presented here, incorporates events with knowledge objects and workers, and exposes knowledge evolution over time on one hand, while on the other hand, it is a baseline to measure and evaluate the various aspects of knowledge quality and usefulness. The knowledge cognition model breaks down the tacit cognition process into two explicit sub-processes and measurable factors. It is, ipso facto, an attempt to unambiguously generalize the spectrum of cognitive processes inherently processed in an individual human mind. A retrospective view of the elaborated models gives the impression that each of them can be independently adopted to any extent and in any application domain. However, both only embody general concepts with a high degree of abstraction, not biased at any level, and can be further extended and attributed, eventually providing a framework to adopt and reuse knowledge with support for event-based tracking of changes. Fig. 1. The model of the knowledge adoption process (KAP). Fig. 3. Initial stage of the partially incorporated OAKE model. The most important objects are placed in the figure; the previously defined relationships remain in effect: Knowledge, Domains and Solutions. Assuming changes in the Domains, the discussed ontology is presented in Fig. 4. Fig. 4. Final stage of the partially incorporated OAKE model. The list of defined categories has been extended as a result of knowledge dynamics; new specimens appeared in the case of Domains and Solutions, and particular relationships mapped these new items. Comparing the gradually appearing versions of the ontology, we are able to monitor knowledge dynamics, supporting new solutions and taking new domains into account.
4,678.8
2017-08-20T00:00:00.000
[ "Computer Science" ]
An overview of polarized neutron instruments and techniques in Asia Pacific Polarized neutron scattering is an indispensable tool for exploring a vast range of scientific phenomena. With its dynamic scientific community and significant governmental support, as well as rapid economic growth, the Asia-Pacific region has become a key player in the worldwide neutron scattering arena. From traditional research reactors to cutting-edge spallation neutron sources, this region is home to a myriad of advanced instruments offering a wide range of polarized neutron capabilities. This review aims to provide a comprehensive overview of the development and current status of polarized neutron instruments and techniques in the Asia-Pacific region, emphasizing the region's important role in shaping the landscape of global polarized neutron scattering development. Introduction Not too long after the discovery of the neutron by James Chadwick [1], physicists realized that neutrons could be a very useful tool to study condensed matter, because the wavelength of slow neutrons is on the order of interatomic distances and the energy is comparable to many excitations in condensed matter. Therefore, neutron scattering can provide abundant information on the chemical structure and the dynamics of atoms. The neutron has no charge, so it can penetrate deep into matter and directly interact with nuclei, whereas X-rays mainly interact with orbital electrons in atoms. After the world's first nuclear reactor, Chicago Pile-1, built under the leadership of Enrico Fermi, reached criticality in 1942, the nuclear age started. A year later, in 1943, the Graphite Reactor at the Oak Ridge National Laboratory (ORNL) went critical. Physicists Ernest Wollan and Clifford Shull quickly realized the great potential of the neutrons produced by the Graphite Reactor and embarked on a series of neutron diffraction experiments, including the diffraction experiments showing direct evidence of antiferromagnetism in MnO below its Néel temperature [2] and confirming the ferrimagnetic model for Fe3O4 [3]. These pioneering works opened the gate to a new era in neutron scattering. Between the 1950s and 1970s, a great number of research reactors were built and put into use across the world, some of which are still running today. Table 1 lists the major neutron research reactors built in this time frame. Nuclear reactors provided a reliable way of obtaining high-flux neutrons, which greatly advanced the development of neutron scattering, in both technique and instrumentation, beyond diffraction. Bertram Brockhouse developed neutron spectroscopy to study the dynamics of a material by building the world's first triple-axis spectrometer at the Chalk River research reactor in Canada. Both Shull and Brockhouse were awarded the Nobel Prize in Physics in 1994 for their significant contributions to neutron scattering. The construction of research reactors has slowed down or even stopped in most parts of the world since 1980. Meanwhile, accelerator-based spallation neutron sources have gained popularity in the neutron community. All major spallation neutron sources built since the 1980s are listed in Table 2.
Unlike reactor sources, which produce a continuous and constant neutron flux, spallation neutron sources usually send out neutrons in pulses, with typical frequencies between 10 and 60 Hz. The time-averaged flux of today's pulsed neutron sources is still lower than that of a high-flux reactor source, but the peak flux is often much higher. For example, the neutron brightness at 1 Å of the High Flux Isotope Reactor (HFIR) at ORNL is about 200 times higher than the time-averaged brightness of the Spallation Neutron Source (SNS), but the SNS's peak brightness at the same wavelength is about 10 times that of HFIR. By taking advantage of the time-of-flight (TOF) technique and optimized instrumentation, a pulsed neutron beamline can provide higher wavelength resolution, access a broader (Q, ω) space, and generally have a lower background. Over the last 70 years or so, neutron scattering has made huge progress in every aspect, including sources, instrumentation, techniques, and applications. Today, neutron scattering has become an indispensable tool in many disciplines of science and technology, including physics, biology, chemistry, materials science, engineering, and many interdisciplinary fields. Compared to X-rays, a unique feature of the neutron is that it has a magnetic moment, which allows the neutron to interact with other magnetic moments and thus serve as an ideal probe of magnetic properties in magnetic materials. The fact that neutrons can be polarized further enhances the capability of neutron scattering in studying magnetism. The first polarized neutron experiment was performed in 1959 by Nathans et al. to study magnetic scattering by iron and nickel, in which the incident neutron beam was polarized [4]. The method is now often referred to as the "half-polarized" or "flipping ratio" method, which greatly increases the sensitivity of probing small magnetic scattering amplitudes. In 1969, Moon et al.
pioneered the polarization analysis method by adding a neutron polarization analyzer after the sample [5]. This method is now called longitudinal polarization analysis (LPA) because the scattered-beam polarization is measured only along the same direction as the incident polarization. LPA provides a convenient way to separate the nuclear, magnetic, and spin-incoherent scattering components, which are otherwise hard to decouple. With the advances in neutron optics over the last 50 years, LPA has become the most widely used polarized neutron technique in the world. In the 1970s, Mezei developed the neutron spin echo (NSE) technique, based upon the Larmor precession of the neutron spin in magnetic fields [6]. NSE encodes the neutron energy transfer in the Larmor precession angle of the neutron polarization to achieve the highest energy resolution in neutron spectroscopy, and it is thus ideal for studying systems with slow dynamics. In the 1980s and 1990s, the polarization analysis method was extended to three-dimensional polarimetry by Tasset [7,8], now known as spherical neutron polarimetry (SNP). Compared to LPA, SNP exploits the vectorial nature of the neutron polarization and measures the full polarization change in the scattering process, which has found use in determining complex magnetic structures that are otherwise hard to determine unambiguously using other methods [9][10][11][12]. There are also many other notable developments of polarized neutron techniques, including but not limited to XYZ polarization analysis [13], neutron resonance spin echo [14][15][16], Larmor diffraction [17][18][19], polarized neutron imaging [20][21][22], and dynamic nuclear polarization (DNP) [23,24]. The diverse applications of polarized neutrons in today's neutron scattering highlight the importance of developing polarized neutron capabilities in modern neutron facilities. Neutron scattering has a long history in the Asia-Pacific region as well, although it started slightly later than in Europe and North America. Japan emerged as a major player in neutron research in the region in the early 1960s, following the completion of the Japan Research Reactor No. 2 (JRR-2) and the Japan Research Reactor No. 3 (JRR-3). Japan also commissioned the world's first pulsed neutron facility, KENS, in 1981. With the increasing demand for higher neutron fluxes in the user community, JRR-3 was upgraded to run at 20 MW in 1990, and the new 1-MW Japan Spallation Neutron Source (JSNS) was built to replace KENS, starting operation in 2008 at the Japan Proton Accelerator Research Complex (J-PARC). Other countries in the region, like Australia, China, India, and Korea, have also made significant strides in the development and application of neutron scattering. Today, the Asia-Pacific region has developed a robust neutron scattering community, marked by advanced facilities and active international collaborations. Because of the unique power of polarized neutrons, the development of polarized neutron capabilities is also an integral part of the major neutron facilities in the region. In this review, we will survey polarized neutron development in the Asia-Pacific region, highlighting advancements and progress in polarized neutron techniques and instrumentation at the major neutron sources in the region.
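For reference, the idealized textbook form of the flipping-ratio relation behind the half-polarized method mentioned above can be written as follows; this is a schematic, fully polarized, collinear case with real structure factors, given here as standard background rather than an equation taken from this review.

```latex
% Idealized flipping ratio for a reflection with nuclear structure factor F_N
% and magnetic interaction amplitude F_M (both taken real here):
\[
  R = \frac{I^{+}}{I^{-}}
    = \frac{|F_N + F_M|^{2}}{|F_N - F_M|^{2}}
    = \frac{F_N^{2} + 2\,F_N F_M + F_M^{2}}{F_N^{2} - 2\,F_N F_M + F_M^{2}} ,
\]
% so a weak magnetic amplitude enters through the linear cross term 2 F_N F_M,
% which is why the method is sensitive to small magnetic scattering amplitudes.
```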
Polarized neutron instrumentation at major neutron facilities in the Asia-Pacific
Over the last 60 years, the Asia-Pacific region has experienced a remarkable surge in the advancement of neutron scattering instrumentation and techniques, reflecting the growing prominence of this region within the global neutron scattering community. This progress can be ascribed to the establishment of world-class research facilities and the development of cutting-edge neutron sources throughout the region. Figure 1 shows the major neutron sources in the region. Collaborative efforts among researchers, institutions, industries, and nations in the Asia-Pacific have fostered a vibrant scientific community, leading to groundbreaking discoveries and advancements in diverse areas. Almost every neutron user facility has invested a significant amount of resources in the development of polarized neutron capabilities, owing to the unique advantages and insights that polarized neutron techniques offer across various fields of research.

Australia
Australia has a rich history in the field of neutron scattering. A major milestone was the construction of the 10-MW High Flux Australian Reactor (HIFAR) in 1958, which was used for neutron scattering research from the late 1960s until it was finally shut down in 2007 [25]. In response to the need for a new, state-of-the-art neutron scattering facility, the Australian government initiated the construction of the Open-Pool Australian Lightwater (OPAL) research reactor at the Australian Nuclear Science and Technology Organisation (ANSTO), which became operational in 2007. The OPAL reactor is a modern, 20-MW multipurpose research reactor that provides a reliable and powerful neutron source for a diverse array of neutron scattering instruments. Currently, a total of 15 neutron instruments are available to users, six of which can perform polarized neutron experiments.
• Platypus: As the first instrument at OPAL to offer polarized neutron capabilities, Platypus is a versatile TOF neutron reflectometer providing both unpolarized and polarized modes to cater to a wide variety of experiments [26,27]. For polarized neutron reflectometry (PNR), Platypus employs two SwissNeutronics Fe/Si supermirrors (m = 3.8) as the polarizer and analyzer, respectively. The wavelength band for PNR ranges from 2.5 to 13 Å, which is narrower than the unpolarized band (1-21 Å) due to the limitations of the supermirrors [27]. Two RF gradient neutron spin flippers are placed before and after the sample position to realize polarization analysis over the whole polarized wavelength band. PNR has become an integral part of the beamline, enabling researchers to explore the magnetic properties of various materials [28][29][30][31][32][33].
• Taipan: Taipan is a versatile thermal triple-axis spectrometer with a high-flux thermal neutron beam [34][35][36]. Both inelastic and diffraction experiments can be performed on this instrument owing to its high flux and flexible configurations. In recent years, polarization analysis capability has been added to the instrument using ex situ polarized 3He neutron spin filters as both the polarizer and analyzer [37]. Users have started to take advantage of this new capability for experiments [38][39][40].
• Pelican: As a direct geometry TOF cold neutron spectrometer with a wide detector bank, Pelican was designed with polarization analysis in mind from the very beginning [41,42]. The planned polarized mode combines a compact solid-state supermirror bender as the polarizer with a wide-angle 3He neutron spin filter as the analyzer. The wide-angle 3He system is currently under development. Once completed, Pelican is expected to operate in polarized mode for a significant fraction of its beam time (https://www.ansto.gov.au/our-facilities/australian-centre-for-neutron-scattering/neutron-scattering-instruments/pelican-time).

Other instruments, including QUOKKA (a SANS instrument), SIKA (a cold neutron triple-axis spectrometer), and WOMBAT (a neutron diffractometer), are also equipped with polarized neutron capabilities [37,43,44]. Much of the polarized neutron instrumentation at ANSTO focuses on polarized 3He neutron spin filters. A metastability-exchange optical pumping (MEOP) station, developed by the Institut Laue-Langevin (ILL), is responsible for producing highly polarized 3He for the instruments [37]. The MEOP station provides a fast method for producing large volumes of polarized 3He gas and is thus key to the successful deployment of 3He spin filters at ANSTO. As instrument development continues, polarized neutron scattering is expected to play an increasingly significant role at ANSTO.
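To see why the 3He polarization and cell parameters matter, here is a minimal sketch of the standard spin-filter estimates; the opacity prefactor (about 0.0732 per bar·cm·Å) and the empty-cell transmission value are typical figures used for illustration, not numbers from the facilities discussed here:

```python
import math

def he3_filter(p_bar, l_cm, lam_A, p_he, t_empty=0.9):
    """Estimate neutron polarization and transmission of a 3He spin filter:
    P_n = tanh(O * P_He), T_n = t_empty * exp(-O) * cosh(O * P_He),
    with opacity O ~= 0.0732 * pressure(bar) * length(cm) * wavelength(A).
    t_empty (empty-cell transmission) is an assumed typical value."""
    opacity = 0.0732 * p_bar * l_cm * lam_A
    p_n = math.tanh(opacity * p_he)
    t_n = t_empty * math.exp(-opacity) * math.cosh(opacity * p_he)
    return p_n, t_n

# e.g., a 1-bar, 10-cm cell at 4 A with 70% 3He polarization:
pn, tn = he3_filter(1.0, 10.0, 4.0, 0.70)
print(f"P_n = {pn:.2f}, T_n = {tn:.2f}")   # roughly P_n = 0.97, T_n = 0.19
```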
China
China's research into neutron scattering dates back to the 1950s. In 1958, the 7-MW Heavy Water Research Reactor (HWRR), the first nuclear reactor in China, reached criticality. Soon after, Chinese researchers constructed a neutron diffractometer at the HWRR [45,46]. In 1960, they observed, and later reported, the effects of piezoelectric oscillation, which resulted in an enhancement of neutron scattering from quartz single crystals [47]. The HWRR underwent several upgrades, reaching 15 MW in the 1980s, and was finally decommissioned in 2007 after 47 years of operation. China's neutron scattering research has made remarkable progress over the last two decades, and the country has rapidly emerged as a significant force in the global neutron scattering community. Three sources have been built during the last 15 years: the China Advanced Research Reactor (CARR), the China Mianyang Research Reactor (CMRR), and the China Spallation Neutron Source (CSNS). The CARR is a 60-MW research reactor located in Beijing [48][49][50], which went critical in 2010. The CMRR is a 20-MW research reactor in Mianyang, Sichuan province, and has been in operation since 2014 [51,52]. The CSNS, based in Dongguan, Guangdong province, is the first spallation neutron source in China and the second in the Asia-Pacific region [53][54][55][56]; it has been operating since 2018 and is currently running with a 100-kW proton beam power. The establishment of these three neutron sources is a testament to the country's commitment to fostering innovation and collaboration in neutron scattering. In addition to these large-scale neutron sources, China has also constructed a small accelerator-based source, the Compact Pulsed Hadron Source (CPHS), at Tsinghua University [57]. Beyond its research capabilities, the CPHS is a dedicated platform for the education and training of next-generation neutron scattering users, making it an ideal incubator for fostering future talent in the field.
The CMRR is a high-performance, multipurpose research reactor with dedicated halls for thermal and cold neutrons, supporting a diverse range of scientific investigations through various experimental facilities and instruments, including neutron radiography, radiopharmaceuticals, neutron activation analysis, and neutron scattering. Currently, eight instruments have been built (four in the reactor hall and four in the cold scattering hall) and are open to users [52]. Among these instruments is a polarized TOF neutron reflectometer, Diting, in the cold scattering hall [58]. The TOF mode of the reflectometer allows the instrument flux and resolution to be optimized individually for each experiment. Diting takes advantage of a high-efficiency transmission supermirror (m = 2.7) to polarize the incident neutron beam and another transmission supermirror (m = 3.85) to analyze the beam. Two adiabatic fast passage RF spin flippers are implemented to flip the upstream and downstream neutron polarizations and thus enable polarization analysis. The polarized mode works over a wide wavelength band from 2.5 to 12.5 Å and covers a Q range from 0 to 0.5 Å-1 [51,52,58], and users have started to perform polarized neutron reflectometry (PNR) experiments at the instrument to study various science cases [59][60][61][62]. Meanwhile, the CMRR has established a dedicated polarized 3He team, which has played a significant role in advancing polarized neutron capabilities in China by developing polarized 3He systems for neutron instruments [63][64][65]. The use of polarized 3He enables the rapid deployment of polarized neutron capabilities on typically unpolarized neutron instruments. The team has also performed fundamental studies using related techniques, making precision measurements in the search for exotic spin-dependent interactions mediated by axion-like particles [66][67][68]. The CMRR is still growing fast. New neutron instruments are under construction and will join the user program soon. A polarized SANS (PSANS) instrument [52] and two neutron spin echo instruments, a longitudinal neutron resonance spin echo (LNRSE) spectrometer [69] and a spin echo SANS (SESANS) instrument [70], will be the latest additions to polarized neutron scattering at the CMRR.
The CSNS is a large accelerator-based pulsed neutron facility operating with a 1.6-GeV proton beam and 25-Hz proton pulses [55,56]. It represents a significant leap forward in China's dedication to neutron scattering. The proton beam currently runs at 100 kW and can be upgraded to 500 kW in the future. The facility can accommodate up to 20 neutron instruments, with four beamlines already in the user program and more under construction or being planned (http://english.ihep.cas.cn/csns/fa/in/). The multipurpose reflectometer (MR) is one of the three day-1 instruments at the CSNS and is equipped with polarized neutron reflectometry (PNR) capability [71]. This reflectometer utilizes two transmission Fe/Si supermirrors (m = 4.4) as the polarizer and analyzer in PNR experiments, along with a pair of RF spin flippers positioned before and after the sample. Since its commissioning, the MR instrument has become one of the most productive instruments at the CSNS, with PNR experiments being routinely conducted [72][73][74][75][76][77][78]. Additionally, the CSNS has a dedicated neutron polarization group and a development beamline (BL-20) contributing to the advancement of polarized neutron devices and techniques [79]. The group has developed both ex situ and in situ polarized 3He neutron spin filters [80][81][82], built and tested flippers [83], and realized TOF polarized neutron imaging [84]. The development beamline BL-20, equipped with a V-cavity polarizing supermirror to provide a highly polarized incident neutron beam, has become the go-to place for testing neutron polarization devices and exploring new concepts. For the future very small angle neutron scattering (VSANS) instrument, efforts are being made to develop a magnetic sextupole lens to focus the incident polarized cold neutron beam onto the sample position [85,86].

The build-out of the three neutron sources is far from complete. Despite the rapid progress and achievements in neutron scattering research in China, there is still considerable potential for further development and expansion. As these facilities continue to expand, more instruments with polarized neutron capabilities will be added to the current instrument suite. The ongoing advancements in polarized neutron scattering technology will enable breakthroughs across a wide range of disciplines. With a steadfast commitment to innovation and collaboration, China is poised to become a leading player in neutron scattering research, pushing the boundaries of scientific exploration and discovery.

India
India constructed Asia's first research reactor, Apsara, in 1956 at the Bhabha Atomic Research Centre (BARC) in Mumbai. Following Apsara's successful operation, Indian researchers began to explore the potential of neutron scattering in various fields of study. The commissioning of the second, more powerful research reactor, Cirus, in 1962 further accelerated the growth of neutron scattering research in India. The demand for higher neutron flux and better instruments propelled India to build Dhruva, a 100-MW reactor that went critical in 1985 and was designated the National Facility for Neutron Beam Research (NFNBR) (https://www.barc.gov.in/reactor#nav-4). Currently, Dhruva is home to 12 neutron instruments, two of which have polarized neutron capabilities [87].
• Polarized neutron spectrometer: This instrument is housed in the reactor hall of Dhruva and is a polarized neutron workhorse. It uses Heusler crystals as both the monochromator and polarizer to provide a polarized thermal neutron beam at 1.2 Å, and a Co0.92Fe0.08 crystal as the polarization analyzer [88,89]. Two RF spin flippers enable polarization analysis at this beamline. Notably, this instrument has been extensively utilized for experiments employing the neutron depolarization technique, a well-established method for studying ferromagnetic materials [90][91][92][93]. The one-dimensional neutron depolarization technique of the instrument provides a way to investigate domain magnetization and magnetic inhomogeneity on a mesoscopic scale in the sample and serves as a useful complement to conventional neutron diffraction [88,94-101].
• Polarized neutron reflectometer: As at other neutron facilities, PNR has become an indispensable tool for studying magnetic thin films. Situated in the cold guide laboratory next to the reactor hall, the reflectometer at Dhruva can switch between unpolarized and polarized modes and delivers an incident neutron beam with a wavelength of 2.5 Å [102]. In the polarized mode, a polarizing supermirror polarizes the incident beam and a Mezei flipper flips the incident neutron polarization. Although a supermirror analyzer can be inserted to perform polarization analysis, this option is generally not used because of the relatively low neutron flux of the instrument [102]. Nevertheless, the reflectometer remains productive, applying PNR to a diverse range of samples [103][104][105][106][107][108].

Neutron scattering in India continues to evolve. With the ever-growing demand for better resolution, higher neutron flux, and more modern neutron instruments, a new spallation neutron source has been proposed for India [109][110][111]. Polarized neutron capabilities will no doubt be a significant part of the new instruments at the spallation neutron source. With the continued growth of the neutron scattering community and enhanced international collaborations, the future of neutron scattering in India looks even brighter.

Japan
Japan has played a significant role in the development and advancement of neutron scattering research since the 1960s and has been a front-runner in developing and utilizing polarized neutron techniques. Currently, two major neutron facilities in Japan, JRR-3 and the JSNS, provide neutron beams for users from across the world. The two sources are located within walking distance of each other in Tokai, Japan, which helps users take advantage of the complementary capabilities of both facilities to perform more comprehensive and diverse experiments. The research reactor JRR-3 first went critical in 1962 with a power of 10 MW and was later upgraded to 20 MW in 1990. The JSNS at J-PARC is a 1-MW accelerator-based pulsed neutron source that debuted in 2008. Both facilities serve as hubs for regional and international collaboration in neutron scattering research, attracting users from all over the world.

Currently, JRR-3 has a total of 31 neutron instruments running (https://jrr3.jaea.go.jp/jrr3e/2/21.htm),
many of which have polarized neutron capabilities. Here, we highlight several notable polarized neutron instruments:
• TAS-1: This instrument is a conventional thermal triple-axis spectrometer with unpolarized and polarized modes. The spectrometer utilizes doubly focusing Heusler (Cu2MnAl) crystals in polarized mode to polarize the neutrons and analyze their polarization, thereby enabling longitudinal polarization analysis (LPA) [112]. Moreover, TAS-1 has been equipped with an advanced spherical neutron polarimetry (SNP) device, CRYOPAD, developed at the Institut Laue-Langevin (ILL) [113]. The addition of CRYOPAD, along with a versatile sample environment, has enabled the instrument to perform more complex experiments [114][115][116].
• PONTA: PONTA is another thermal triple-axis spectrometer at JRR-3 with polarization analysis capability. As at TAS-1, Heusler crystals are used as the polarizer and analyzer [117]. In addition to LPA, PONTA has also tested thermal neutron spin echo spectroscopy [118][119][120]. Compared to classical triple-axis spectroscopy, the spin echo option provides a unique way to achieve higher energy resolution without sacrificing neutron flux. Recently, PONTA has added an option to use a V-cavity supermirror as the polarizer, which, in combination with a pyrolytic graphite monochromator, can provide higher flux and higher incident neutron polarization.
• SUIREN: SUIREN is a magnetic reflectometer dedicated to studying magnetic films and solid-liquid interfaces [121,122]. SUIREN can run in either unpolarized or polarized mode. The polarized mode enables polarized neutron reflectometry (PNR) using one Fe/Ge reflection supermirror as the polarizer and another as the analyzer [121,123]. Together with two Mezei flippers, one before and one after the sample, four cross sections (++, +-, -+, and --) can be measured, where the first sign represents the incident neutron polarization direction and the second sign denotes the analyzed neutron polarization direction.
• iNSE: This is a conventional neutron spin echo spectrometer designed mainly to study dynamics in soft matter [124][125][126][127]. The two specially designed symmetric main precession coils responsible for the spin echo provide homogeneous field integrals as well as strong magnetic fields [124]. The neutron beam is polarized by a polarizing supermirror bender guide and analyzed by a multichannel supermirror bender, both manufactured by SwissNeutronics and working well for neutrons from 4 to 15 Å [126].
• SANS-J-II: This is a 20-m-long small angle neutron scattering (SANS) instrument capable of polarized neutron experiments [128,129]. The uniqueness of this beamline lies in its focusing lenses, which achieve an accessible minimum scattering vector Qmin on the order of 10-4 Å-1 and thus enable ultra-small angle scattering [129,130]. Polarization analysis is available at high Q with a supermirror analyzer and a high-angle detector, mainly used to separate the coherent and incoherent signals [131] (see the sketch after this list).

In addition, a dynamic nuclear polarization (DNP) device has been developed for SANS-J-II to polarize the sample nuclei [132], providing an increased signal-to-noise ratio, especially in neutron crystallography of proteins.
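As referenced in the SANS-J-II item, the coherent/spin-incoherent separation follows from the standard longitudinal polarization analysis rules for a non-magnetic sample; a minimal, idealized sketch (ignoring finite flipping ratio and other corrections):

```python
def separate_coh_inc(i_nsf: float, i_sf: float) -> tuple:
    """Separate nuclear-coherent and spin-incoherent intensities from
    non-spin-flip (NSF) and spin-flip (SF) measurements, assuming an
    ideal polarized setup and a non-magnetic sample:
        I_NSF = I_coh + (1/3) I_inc,   I_SF = (2/3) I_inc."""
    i_inc = 1.5 * i_sf           # invert the spin-flip relation
    i_coh = i_nsf - 0.5 * i_sf   # subtract the incoherent NSF share
    return i_coh, i_inc

print(separate_coh_inc(i_nsf=120.0, i_sf=30.0))  # -> (105.0, 45.0)
```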
Some other instruments at JRR-3 also have polarized neutron capability or have tested it. For example, TOPAN is another triple-axis spectrometer with polarization analysis capability, and the powder diffractometer HERMES has also tried polarized neutron diffraction using a polarized 3He polarizer [133]. The neutron optics beamline NOP has served as a test bed for the development of polarized 3He neutron spin filters [134][135][136] as well as other neutron optics such as magnetic neutron lenses [137,138].

The JSNS is located at the Materials and Life Science Experimental Facility (MLF) at J-PARC, which provides high peak neutron brightness at 25 Hz to the 21 currently installed neutron instruments [139]. Since its inception in 2008, several state-of-the-art instruments with polarized neutron capabilities have been constructed, offering unique capabilities to users:
• SHARAKU: SHARAKU is a TOF polarized neutron reflectometer. Compared to its counterpart at JRR-3, SHARAKU enables the measurement of reflectivity profiles over a wide range of scattering vector Q. Polarizing Fe/Si supermirrors are installed as the neutron polarizer and analyzer. A Drabkin two-coil spin flipper flips the incident neutron polarization [140], and a Mezei flipper flips the downstream neutron polarization [141,142]. Additionally, an in situ polarized 3He system has also been tested at the instrument to serve as an analyzer for off-specular scattering [143,144].
• VIN ROSE [145][146][147][148]: NRSE and MIEZE are two variations of the neutron spin echo technique. In NRSE, high-frequency spin flippers replace the long, large, and strong magnetic precession coils of the conventional spin echo setup [14,16], making NRSE instruments more compact than conventional NSE ones. MIEZE is a single-arm NRSE technique in which the polarization analyzer is placed upstream of the sample, so that no polarization manipulation is needed after the sample and the modulated signal is not affected by a depolarizing sample or high magnetic fields around the sample [149][150][151]. At VIN ROSE, the TOF MIEZE beamline is already in the user program [152][153][154], while the NRSE beamline is still being tuned.
• TAIKAN: This is a TOF SANS instrument that covers a wide Q range (0.0008-17 Å-1) for unpolarized neutrons in a single configuration [155,156].
For polarized neutrons, TAIKAN has a V-cavity transmission supermirror installed as the polarizer [157,158]. Polarization analysis is enabled by adding a polarized 3He analyzer or a supermirror analyzer to separate coherent and incoherent scattering for organic samples [158,159] or to study magnetic phenomena in magnetic materials [160][161][162]. Half-polarized diffraction experiments without polarization analysis have also been performed on magnetic materials [163][164][165]. Moreover, DNP has also been tested on the instrument to provide spin contrast variation for the sample [166].
• POLANO: POLANO is a dedicated direct geometry polarized neutron spectrometer with a wide detector bank, aiming to perform polarization analysis for neutrons up to 100 meV [145,167-169]. An in situ polarized 3He neutron spin filter has been developed as the neutron polarizer [170,171], which can also work as a neutron spin flipper. A wide-angle supermirror array has been built to serve as the polarization analyzer for neutrons up to 40 meV with an angular coverage of up to 40° [172]. In addition to the supermirror analyzer, POLANO also plans to use a wide-angle polarized 3He analyzer to reach higher neutron energies, covering more science cases [173].

In addition to these instruments dedicated to polarized neutrons, several other instruments at the JSNS can also be used for polarized neutron experiments. The neutron imaging beamline RADEN has an option for polarized neutron imaging to visualize magnetization distributions [174][175][176][177]. Polarized neutron imaging can also be performed at NOBORU [178][179][180], a development and test beamline for new techniques and devices. A 7-T DNP apparatus has been successfully tested at the diffractometer iMATERIA, achieving high proton polarizations, and is now available to industrial users [181,182]. The neutron optics and fundamental physics beamline NOP has a polarized neutron branch using a polarizing supermirror [183,184], which has been used to measure the neutron lifetime [185] and to test other neutron polarization devices [144,186]. The neutron-nucleus reaction measurement instrument ANNRI has also utilized polarized neutrons for nuclear physics [187,188].

Japan has a long history of excellence in the field of neutron scattering, contributing to groundbreaking research across various disciplines. The development of polarized neutron scattering instrumentation and techniques has further expanded the scope of research conducted at Japanese facilities. To date, the neutron facilities in Japan have enabled polarized neutron capabilities in almost every category, covering hard matter, soft matter, and fundamental physics. Japan has expert teams dedicated to developing new polarized techniques and instrumentation. For example, the 3He team has developed both in situ and ex situ polarized 3He systems for various instruments [136,144], and the supermirror team can fabricate high-performance supermirrors [123,189]. As facilities like the JSNS and JRR-3 continue to invest in new polarized neutron instruments and methodologies, Japan's role as a leading player in neutron scattering research is set to strengthen further.

Others
In addition to the aforementioned neutron facilities, the 30-MW High-flux Advanced Neutron Application Reactor (HANARO) in South Korea [190,191] and the 30-MW Reaktor Serba Guna G.A.
Siwabessy (RSG-GAS) in Indonesia [192,193] are two other major neutron sources in the region. At HANARO, polarized neutron experiments can be performed on the polarized neutron reflectometer REF-V [191,194] and the 40-m SANS instrument [195]. Polarized 3He neutron spin filters have also been developed at HANARO [196,197] and will be applied to the triple-axis spectrometers and the neutron imaging beamline in the future [197,198]. New instruments are still being commissioned or constructed at HANARO, which will expand the scope of applications for polarized neutrons.

The neutron scattering program at RSG-GAS in Indonesia is relatively small compared to the other large neutron facilities in the Asia-Pacific, but it has still managed to equip its triple-axis spectrometer with polarized neutron capability [193]. Despite the challenges often faced by developing countries, such as limited funding and a shortage of experienced researchers in neutron scattering, Indonesia has shown a strong commitment to improving its research capabilities in this area, which bodes well for its future advancement in neutron scattering.

Conclusions
Over the years, polarized neutron scattering has emerged as a powerful tool across a variety of scientific domains. Fields ranging from physics, chemistry, and materials science to earth science and biology have leveraged the capabilities of polarized neutron scattering to gain significant insights into their respective areas of study. For instance, in the field of magnetism, polarized neutron scattering has been invaluable in the study of magnetic structures, spin dynamics, and magnetic phase transitions, while in soft matter, neutron spin echo has been instrumental in studying the dynamics of polymers and proteins.

The Asia-Pacific region, with its vibrant economic growth and scientific advancement, has seen rapid development and utilization of polarized neutron techniques. Though a relatively late entrant, the region has become an active participant in the global neutron scattering community. The extensive neutron facilities across the region collectively demonstrate broad engagement in neutron scattering research. The region has embraced almost all polarized neutron techniques, including longitudinal polarization analysis, spherical neutron polarimetry, conventional and resonance neutron spin echo, dynamic nuclear polarization, magnetic lenses, and polarized neutron imaging. This breadth of technique adoption showcases the region's adaptability and eagerness to keep pace with global trends.

Government support within the region has been vital to these advancements, with significant investment in the construction of new facilities and the upgrading of existing ones. For example, over the past two decades, the region has seen the construction and successful operation of two new large state-of-the-art spallation neutron sources, which account for half of the spallation neutron sources currently operating globally. The increasing accessibility of these facilities, in turn, has opened up many opportunities for collaboration both within the region and with international partners.

In the coming years, as more new facilities and modern instruments come online, and as new polarized techniques become more available, the Asia-Pacific region is expected to play a leading role globally in the applications of polarized neutron scattering.
Fig. 1 Major neutron sources in the Asia-Pacific region
Table 1 Major reactor neutron sources built between the 1950s and 1970s
Table 2 Major spallation neutron sources around the world
7,399
2023-10-02T00:00:00.000
[ "Physics" ]
Network Interconnection Security Buffer Technology for Power Monitoring System
In recent years, the risk of malicious attacks on power monitoring systems has increased, and there have been many attacks on power systems around the world. Aiming at the network interconnection security problem of the core control system, the concept of a "security buffer" is introduced, and a network security buffer method for power monitoring systems is proposed. It is composed of three parts: paradigm check, behavior analysis, and dynamic conversion, which jointly realize multilevel security inspection of interconnection requests. Experimental verification shows that the proposed method protects the power monitoring system against malicious attacks.

Introduction
In recent years, an increasing number of security incidents have affected industrial control systems, especially power monitoring systems. In 2010, Stuxnet invaded the Iranian nuclear facility, disabling 20 percent of its centrifuges and severely impeding the implementation of Iran's nuclear power plan. Stuxnet was a destructive worm specifically targeting industrial control systems. It aimed to attack Siemens PLCs and data acquisition and supervisory systems, steal system permissions, and then maliciously change control parameters. In 2015, the BlackEnergy attack left large parts of Ukraine without power. In 2018, TSMC's production equipment was hit by ransomware, which disrupted its chip production. Frequent industrial control security incidents have attracted extensive attention at home and abroad, and China, Europe, and the United States have incorporated industrial control systems into their national strategies [1][2][3].

A number of studies have addressed the network security of industrial control systems. They fall into two categories. On the one hand, learning algorithms are used to train models and detect attack behavior based on extracted data features or traffic characteristics. The literature [4,5] used SVMs to model flow intervals and data packet lengths in the network traffic of industrial control systems and designed an intrusion detection system; Zhao Guicheng [6] proposed building a behavior model based on the function code and start address in the Modbus protocol and applying an SVM algorithm to the analysis of abnormal behavior; Zhu et al. [7] designed and implemented a multiclass SVM algorithm for intrusion detection from the perspectives of function codes and behavior characteristics; Li Wei et al. [8] proposed a SCADA system intrusion detection approach, which defines intrusion detection rules via a whitelist based on the analysis of behavior protocols; Parvania et al. [9] presented a behavior-based intrusion detection system for the communication behaviors and protocol specifications of smart grid systems by means of statistical analysis of traditional network features and specification-based detection. However, as attacks have shifted toward slow penetration, network flow statistics alone can no longer satisfy detection demands. Some scholars have therefore proposed adding relevant parameters (such as control commands) and semantic descriptions (such as trusted measured values) to the detected characteristics, so as to detect attacks such as wrong-command injection and message tampering. On the other hand, protocols are given a uniform description through protocol analysis in order to detect noncompliant packets. Suda et al.
[10] put forth an intrusion detection algorithm based on time-series features, which effectively extracts the temporal features of a series using a recurrent neural network (RNN); exploiting the loop structure of RNNs and the temporal dependence of samples, Yan Binghao et al. [11] proposed an intrusion detection model based on a deep recurrent neural network (DRNN) and a region-adaptive synthetic oversampling algorithm. However, these works give little consideration to the behavioral interdependence among control commands, and the protocol descriptions, which are either too complicated to popularize or not expressive enough to describe complex protocols, and which suffer from slow protocol analysis, are unsuitable for power monitoring system scenarios.

Therefore, this paper introduces the concept of a "security buffer" and proposes a network security buffer method for power monitoring systems. A security buffer is analogous to a memory buffer used between input/output devices and the CPU to store data safely: it lets low-speed input/output devices and a high-speed CPU work in coordination, preventing the low-speed devices from occupying the CPU and freeing it to work efficiently. The method is composed of three parts: paradigm check, behavior analysis, and dynamic conversion, which jointly realize a multilevel security check on network requests. The paradigm check module examines the message format and filters data packets that fail to meet the standard message specification; the behavior analysis module analyzes sequences of packets and blocks request sequences exhibiting abnormal multi-packet behavior; the dynamic conversion module uses format conversion or obfuscation to change data structures and offload attack patterns such as buffer overflow attacks.

Compared with existing work, the main contributions and innovations of this paper are as follows:
(1) A unified description language is given for defining the interconnection protocol packets of power monitoring systems, which supports the description and parsing of complex heterogeneous protocols and provides a basis for subsequent unified analysis.
(2) The idea of redundant heterogeneity is introduced and a dynamic conversion function is added to the security buffer; by dynamically adjusting the conversion strategy, attackers are prevented from inferring the normal working mode and then carrying out precise attacks.
(3) Experimental evaluation shows that the proposed method achieves a high accuracy rate in detecting multi-packet anomalous behavior and that the proposed dynamic conversion strategy is effective for offloading buffer overflow attacks.

The rest of this paper is organized as follows: Section 1 introduces the problematic scenes this method targets; Section 2 presents the detailed design of the method; Section 3 evaluates the proposal experimentally; finally, Section 4 concludes the paper and discusses future work.

Problematic Scenes
With computers, communication equipment, and measurement and control units as basic tools, the power monitoring system provides a basic platform for real-time data acquisition, switch status detection, and remote control of power generation, transmission, transformation, and distribution systems. The system, together with detection and control equipment, can make up any complex supervisory system.
The network interconnection of the power monitoring system is mainly exposed to the following security risks, as shown in Figure 1:
(1) The network architecture of current power monitoring systems is relatively simple. Equipment and the core control system are directly accessible through network protocols by operators and at data acquisition points, which gives attackers a springboard for exploiting vulnerabilities of the core control system and then undermining power security. As a primary technique, attackers exploit vulnerabilities of application protocols in the power monitoring system and carefully craft attack payloads that trigger buffer overflow vulnerabilities; they then inject payloads such as viruses and Trojan horses into the core control system, undermining its security.
(2) The current power monitoring system and other systems on the main network side are mutually accessible. Attackers can first break through the other systems and use them as a springboard to scan for vulnerabilities in the core control system, operating system, and middleware; they then exploit these vulnerabilities to launch brute force attacks and remote code injection, and finally compromise the security of the core control system.
Directly exposing the core control system of the power monitoring system to operators, data acquisition points, or other business systems therefore poses a great risk. For this reason, this paper places a security buffer before the core control system to offload attacks directed at it and keep it secure.

How It Works. Based on the above analysis, this paper presents a network interconnection security buffer technology for the core control system of the power monitoring system. The method adds a security buffer between the core monitoring system and other systems or operating terminals to defend against malicious attacks. The method architecture is shown in Figure 2. The security buffer deploys three main functional modules: paradigm check, behavior analysis, and dynamic conversion.
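To make the data flow through these three modules concrete, here is a minimal, self-contained sketch; all names and the toy checks are illustrative assumptions, not the paper's implementation:

```python
# Minimal, self-contained sketch of the three-stage buffer pipeline.
# All names and the toy checks are illustrative, not the paper's code.

SPEC = {"func_code": {0x64, 0x65}, "max_len": 255}        # toy specification

def paradigm_check(pkt: dict) -> bool:
    """Stage 1: reject packets that violate the protocol specification."""
    return pkt["func_code"] in SPEC["func_code"] and pkt["len"] <= SPEC["max_len"]

def behavior_ok(window: list) -> bool:
    """Stage 2 stand-in: the paper scores the control-behavior sequence
    with a trained OCSVM; here a toy rule flags long repetitive runs."""
    return len(window) < 10 or len({p["func_code"] for p in window[-10:]}) > 1

def dynamic_convert(payload: bytes, pivot: int) -> bytes:
    """Stage 3: swap the segments before/after a pivot so a crafted
    attack layout no longer lands in the core system as intended."""
    return payload[pivot:] + payload[:pivot]

window = []
pkt = {"func_code": 0x64, "len": 18, "payload": bytes(range(18))}
if paradigm_check(pkt):
    window.append(pkt)
    if behavior_ok(window):
        forwarded = dynamic_convert(pkt["payload"], pivot=4)
```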
Paradigm Check. The paradigm check examines the protocol specifications of the network interconnection packets of the power monitoring system. This section offers a packet paradigm that supports uniform descriptions of the network interconnection protocol packets of IEC 104 and IEC 101 power monitoring systems. The paradigm can be used to define specifications for Internet protocol data, i.e., rules for analyzing and checking request packets. A data packet is forwarded only after the request packet has been parsed and checked to match the protocol data specification.

XML-Based Uniform Description of Internet Protocols. To support checks of more Internet protocol data specifications, this paper presents a multiprotocol packet paradigm based on the Extensible Markup Language (XML), which is applied to the uniform description of the various Internet protocol data specifications of the power system. The XML-based Internet protocol packet paradigm for the power system is described in Table 1.

Packet Analysis and Format Check. Analysis of the network layer: mainly checks the source IP address (SourIP) and the destination IP address (DesIP) on the network layer of the packet. The identity information of visitors can be acquired through IP address detection, which provides support for access control and intrusion detection. The information below is saved:
<filed name="SourIP"></filed> <filed name="DesIP"></filed> <filed name="time"></filed>
Check of the transport layer: mainly examines the source port number (SourPort) and the destination port number (DesPort). Different applications usually use different ports for communication, so the port check may reveal an application's connections and accesses to the target application's resources. The following information is saved after analysis:
<filed name="SourPort"></filed> <filed name="DesPort"></filed>
Analysis and check of the application layer: this is the focus of the request packet paradigm check, which mainly examines protocol information on the application layer, including the function codes and field values that represent the control behavior.

The paradigm check algorithm for request packets is shown in Table 3. The Internet protocol data specification defined with the paradigm of Section 3.2.1 (e.g., IEC104.xml) is first parsed to construct the set S = {S1, S2, S3, ..., Sm}, where each Sj (j = 1, ..., m) represents a field of the protocol in the form of a name-value pair, i.e., Sj = (name, value); a rule set R = {R1, R2, R3, ..., Rn} is generated as well, where each Ri (i = 1, ..., n) is a specification on the protocol data, representing the requirements on a particular field. The captured request packets are then formatted according to S for unification, and finally S is checked against R to see, for example, whether the function code is compliant and whether a data value is out of range. For example, the master station sends the master call request "68 0E 00 00 00 00 64 01 06 00 01 00 00 00 00 00 00 14" to the slave station, which is checked by the algorithm before being output in the format shown in Table 4. Compliant request packets that pass the application layer analysis and check, together with the information fields extracted from the transport and network layers, are saved based on the XML paradigm shown in Table 2; noncompliant packets are directly discarded.

Table 3: Paradigm check algorithm for request packets.
Input: request data, protocol name.XML file
Output: 1 and the compliant packet, or 0 and discard the noncompliant packet ("1" indicates the request packet meets the protocol data specification; "0" means noncompliance)
1 input RequestData
2 parse protocol name.XML and generate the data structure S and rule set R
3 parse RequestData and unify its format according to the structure of S
4 for i = 1 to n            /* traverse the rule set R */
5   for j = 1 to m          /* traverse the set S and find the corresponding field by the field name in the rule */
6     if S[j].name == R[i].name      /* the field names match */
7       if the field conforms to the protocol rule defined by the user
8         i++
9       else discard the data frame and return 0    /* discard the data in case of nonconformity */
10      end if
11    else j++
12    end if
13  end for
14 end for
15 output the parsed data structure S in the format defined in Table 2
16 return 1

Extraction of Behavior Sequence. The indexes of the control behavior sequence mainly focus on the control operation interaction processes between every two devices on the network of the power monitoring system. For real-time monitoring and calculation, it is necessary to rely on the analysis results of request packets over a period of time. As stated in Section 3.2, a compliant request packet that has passed the paradigm check corresponds to an XML file, whose format is shown in Table 4. Through a time window of a given span, we capture the packet analysis results corresponding to this period of time and extract a phased behavior sequence. When extracting a behavior sequence, it is necessary to extract the source IP (SourIP), destination IP (DesIP), source port (SourPort), destination port (DesPort), application layer protocol (Proto), the function code that represents the control behavior (Control), and the capture time (time). The IP addresses and port numbers at both ends of the control behavior sequence and the protocol type are used as identifiers to distinguish control behavior sequences. The packets in the time window are grouped according to the quintuple of identification fields (<SourIP>, <DesIP>, <SourPort>, <DesPort>, <Proto>), and all control behaviors are sorted by time to obtain the behavior sequence [<Control_k>] ranked by control operations, thus yielding the characteristic data of the control behavior sequence (a sketch of this grouping follows below). When the time window strategy is used to capture packets, to avoid mis-segmenting the multiple single control operations that belong to one continuous, related control behavior, the extraction accuracy of the control behavior sequence can be improved by using a partition length T together with an incremental window of length ΔT. The principle of the incremental window extraction mechanism is shown in Figure 3.

Abnormal Behavior Recognition. The information acquired in power monitoring systems may suffer from problems such as inconspicuous data labels and noisy samples. Given that the One-Class Support Vector Machine (OCSVM) algorithm requires no abnormal samples for modeling and is robust to noisy samples during training, this paper introduces OCSVM, which has significant advantages over other unsupervised learning methods, to identify anomalous network interconnection behavior in power monitoring systems. With the Lagrangian function and the Gaussian kernel function introduced, the dual of the objective quadratic programming problem can be obtained as

  min over α:  (1/2) Σ_{i,j} α_i α_j K(x_i, x_j),   subject to 0 ≤ α_i ≤ 1/(νl) and Σ_i α_i = 1,

where K(x_i, x_j) is the kernel function and the vectors that satisfy 0 ≤ α_i ≤ 1/(νl) are called support vectors; the final decision function, Formula (6), is

  f(x) = sgn( Σ_{i=1}^{N_SV} α_i K(x_i, x) - ρ ),

in which N_SV is the number of support vectors. The process of building an abnormal behavior recognition model based on OCSVM is shown in Figure 4. First, the extracted behavior sequence feature data are classified according to the quintuple, and the behavior sequences s_i (that is, the [<Control_k>] sequences above) obtained by time window partition form the data set S.
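A minimal executable sketch of this check, with a toy specification standing in for a parsed IEC104.xml (field names and allowed values are assumptions):

```python
# Toy version of the Table 3 paradigm check. The spec below stands in for
# a parsed IEC104.xml; field names and ranges are illustrative only.
SPEC_RULES = {                       # rule set "R": name -> allowed values
    "func_code": {0x64, 0x65, 0x67},
    "qualifier": range(0, 32),
}

def paradigm_check(request: dict):
    """Return the unified structure S if every rule holds, else None (discard)."""
    s = {name: request.get(name) for name in SPEC_RULES}   # unify format into "S"
    for name, allowed in SPEC_RULES.items():               # traverse "R"
        if s[name] not in allowed:                         # nonconforming field
            return None                                    # discard the frame
    return s                                               # compliant packet

# Example: a (toy) master call request with function code 0x64.
print(paradigm_check({"func_code": 0x64, "qualifier": 20}))  # -> parsed dict
print(paradigm_check({"func_code": 0xFF, "qualifier": 20}))  # -> None
```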
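The quintuple grouping and time-window partition just described might look as follows (hypothetical field names matching the saved XML fields above; the paper's own implementation is not published):

```python
# Sketch of behavior-sequence extraction: group parsed packets by the
# (SourIP, DesIP, SourPort, DesPort, Proto) quintuple inside a time
# window, then sort each group's control codes by capture time.
from collections import defaultdict

def extract_sequences(packets, t_start, span):
    """packets: list of dicts produced by the paradigm check."""
    groups = defaultdict(list)
    for p in packets:
        if t_start <= p["time"] < t_start + span:          # window [T, T+span)
            key = (p["SourIP"], p["DesIP"], p["SourPort"], p["DesPort"], p["Proto"])
            groups[key].append((p["time"], p["Control"]))
    return {k: [c for _, c in sorted(v)] for k, v in groups.items()}

pkts = [
    {"SourIP": "10.0.0.1", "DesIP": "10.0.0.9", "SourPort": 1025,
     "DesPort": 2404, "Proto": "IEC104", "Control": 0x64, "time": 1.0},
    {"SourIP": "10.0.0.1", "DesIP": "10.0.0.9", "SourPort": 1025,
     "DesPort": 2404, "Proto": "IEC104", "Control": 0x65, "time": 2.5},
]
print(extract_sequences(pkts, t_start=0.0, span=10.0))
```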
The sequences s_i in S are vectorized and transformed into feature vectors x_k of a specified dimension k to generate a training sample set X. The OCSVM model is obtained by training on X. When a behavior sequence s' of unknown type is obtained, it is vectorized to generate x', and the resulting feature vector is fed into the trained model to check whether x' corresponds to normal behavior; thus, the recognition of abnormal behavior sequences is achieved. The specific algorithm is shown in Table 5. Upon receiving the detection result, the security buffer decides whether the current control behavior is allowed or blocked; if blocked, the packet is discarded.

Dynamic Conversion. After behavior analysis, the data carried by the packet is subjected to dynamic conversion. Attacks are generally patterns carefully designed by an attacker, after becoming familiar with the system, to make the attack succeed. Therefore, this paper designs a dynamic conversion module in the security buffer to convert transmitted data according to a predefined policy, so that the attack pattern is changed and the data entering the system no longer mounts an attack on it, which amounts to an effective defense against the corresponding attack. Drawing on the idea of "redundant heterogeneity," which refers to the parallel use of multiple functionally or performance-equivalent heterogeneous components, multiple conversion policies are designed in the dynamic conversion module. A policy is selected randomly each time, and the policies are updated from time to time so that the dynamic conversion module itself remains effective.

Format Conversion of Protocol Data. After a parsed request packet is obtained, a number is selected at random, and the positions of the fields before and after this "random number" position are swapped to transform the protocol data format. The random number ensures security and prevents tampering attacks during data transmission. The specific algorithm is shown in Table 6 (a sketch follows below). Taking the data in Table 4 as an example, if the random number is 04, the converted application layer data format is shown in Table 7; the corresponding packet changes to "00 00 64 01 06 00 01 00 00 00 00 14 00 68 0E 00 05".

Data Obfuscation. An attacker may modify the control fields in the application layer protocol to achieve illegal control of a device or host. Protection can therefore be provided by adding redundant bits of data. For example, the type identifier of a behavior in IEC 104 is a 1-byte, 8-bit number; it can be converted into 32 bits. The specific algorithm is shown in Table 8.

Table 5: Abnormal behavior recognition algorithm.
Input: a training set of normal behavior sequences S and an unknown behavior sequence s'
Output: 1/0, where 1 indicates that s' is normal behavior and 0 indicates abnormal behavior
1 read the training set of normal behavior sequences S
2 with the constructed vector model, transform each sequence into a k-dimensional feature vector x_k to generate the training sample set X
3 train the OCSVM model on the training sample set
4 vectorize the unknown behavior sequence s' to obtain the feature vector x'
5 substitute x' into the OCSVM model and check whether x' is normal behavior
6 if the model deems it normal behavior
7   output 1
8 else output 0

Table 6: Format conversion algorithm of protocol data.
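A minimal sketch of the format-conversion swap summarized above (the pivot choice and hex field list are illustrative; Table 6 gives the authors' steps):

```python
import random

def format_convert(fields: list) -> tuple:
    """Swap the field groups before and after a random pivot position,
    as in the format-conversion policy. Returns the pivot so that a
    trusted receiver can undo the transformation."""
    pivot = random.randrange(1, len(fields))      # the "random number"
    converted = fields[pivot:] + fields[:pivot]   # swap front/back segments
    return pivot, converted

def format_restore(pivot: int, converted: list) -> list:
    """Inverse transform applied on the trusted side of the buffer."""
    cut = len(converted) - pivot
    return converted[cut:] + converted[:cut]

fields = ["68", "0E", "00", "00", "00", "00", "64", "01", "06", "00", "01",
          "00", "00", "00", "00", "00", "00", "14"]
pivot, conv = format_convert(fields)
assert format_restore(pivot, conv) == fields
```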
Table 8: Data obfuscation algorithm.
Input: data output from the paradigm check
Output: OutData after redundancy is added to the control domain
1 input data, and extract the control field action from the data
2 action is an eight-bit number whose bit positions are 76543210 from low to high; the eight bits are divided into four parts, i.e., 76, 54, 32, and 10, which become, respectively, the first, second, third, and fourth bytes of the 32-bit number
3 generate a random 32-bit number ActionPro, each bit of which is random
4 take the two highest bits of each byte and splice them together in sequence to form a new value data
5 z = data mod 7
6 then, starting from the z-th bit of each byte (from left to right), store the bit pairs 76, 54, 32, and 10 of action sequentially, generating ActionPro
7 write ActionPro back into the data to generate OutData
8 output OutData

Still taking the data in Table 4 as an example, with the random 32-bit number 56 82 C2 32, obfuscating the control domain yields the converted application layer data format shown in Table 9; the corresponding packet changes to "68 0E 54 80 C0 30 54 80 C0 30 54 80 C0 30 54 80 C0 30 64 01 06 00 01 00 00 00 00 14". It should be noted that applying conversion policies affects the performance of the power system to a certain extent; the policies can therefore be dynamically added or removed depending on the specific scenario and the security protection requirements.

Experimental Environment. We performed simulation experiments on three computers with Windows 10, an Intel Core i7-9700F 3.0-GHz CPU, and 32 GB of memory: one served as the security buffer computer, implementing and deploying the paradigm check, behavior analysis, and dynamic conversion modules; one ran an Ubuntu 16.04 virtual machine to simulate the attacked core control center; and one acted as a remote device launching attacks on the target computer. The experimental topology is shown in Figure 5.

Effect of Behavior Analysis. In this experiment, we used OCSVM as the learning algorithm for security buffer behavior analysis. Based on the UNSW-NB15 data set, we simulated control behaviors under various scenarios in the power monitoring system, including remote control, remote signaling, remote regulation, and telemetry, for a total of 1500 control behavior sequences under normal operating conditions, to train the normal sequence model. Meanwhile, for the common attack types against core control systems, and considering the difficulty of obtaining abnormal sequences, abnormal control behavior sequences representing several attack types, such as random operations, repetitive instructions, inversion of time series, and unknown commands, were simulated by constructing, clipping, swapping, and falsifying normal behavior sequences. The abnormal behavior sequences and some normal behavior sequences were selected to generate a test set. The experiment adopted precision and recall as indexes to evaluate the behavior analysis method proposed in this paper; they are computed as [15,16]:

  precision = TP / (TP + FP) × 100%,   recall = TP / (TP + FN) × 100%,

where TP, FP, and FN are the numbers of true positives, false positives, and false negatives, respectively. According to the experimental results, the behavior analysis method proposed in this paper performs well on precision, staying above 90% as the value of gamma changes, but the recall remains to be improved.
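As a concrete illustration of this train/evaluate flow, here is a minimal sketch using scikit-learn's OneClassSVM; the library choice and the count-based feature encoding are assumptions, with gamma and nu set to the values used in the comparison below:

```python
# Minimal OCSVM flow for behavior sequences, assuming scikit-learn.
# Feature encoding (counts of each function code) is an illustrative choice.
import numpy as np
from sklearn.svm import OneClassSVM

CODES = [0x64, 0x65, 0x67, 0x2D]          # hypothetical control-code alphabet

def vectorize(seq):
    """Map a control-behavior sequence to a fixed k-dimensional vector."""
    return [seq.count(c) for c in CODES]

normal_seqs = [[0x64, 0x65], [0x64, 0x65, 0x67], [0x64]] * 50   # toy training data
X = np.array([vectorize(s) for s in normal_seqs])

model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X)     # values from Sec. 4

unknown = [0x2D] * 30                       # a repetitive-instruction style sequence
pred = model.predict([vectorize(unknown)])  # +1 = normal, -1 = abnormal
print("normal" if pred[0] == 1 else "abnormal -> block and discard packet")
```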
Considering the difficulty of obtaining and labeling malicious samples in an actual power monitoring system, abnormal behavior identification based on OCSVM remains an effective and feasible solution. To further test the effectiveness of OCSVM, it was compared with the unsupervised learning methods K-means clustering [19] and PCA [20], with gamma = 0.5 and nu = 0.1, as shown in Table 10. As can be seen from Table 10, OCSVM has significant advantages over the other methods in terms of precision and recall and is suitable for the security protection of power monitoring systems.

Effect of Dynamic Conversion. This experiment demonstrates a buffer overflow attack that grants a common user root privileges on the core control system, and it shows how the dynamic conversion strategy offloads the attack. For a clearer demonstration, address randomization was turned off on the simulated core control center virtual machine, the StackGuard protection scheme was disabled at compile time, and non-executable stacks were turned off. In the experiment, we first compiled stack.c as a program on the core control center virtual machine. The program creates a 24-byte memory buffer and then copies data into the buffer via the strcpy() function; since strcpy() does not check bounds, there is a buffer overflow vulnerability. Next, we compiled the program exploit.c, which exploits stack.c. Its main function places a piece of shellcode[] (see Table 11; more than 24 bytes) in memory, computes its address, and then works out the return address of stack.c in the program call stack. Through data transmission, exploit.c is sent to the core control center virtual machine. When stack.c is executed on the virtual machine, shellcode[] is written into the buffer; due to the overflow, the address of shellcode[] overwrites the return address, and the code in shellcode[] is executed instead. A normal user on the core control center virtual machine can obtain root privilege on the control host by executing exploit.c and then stack.c, at which point # appears on the command line, as shown in Figure 8. The dynamic conversion strategy used in the experiment swaps the contents before and after a certain position in the array. As shown in Table 12, the content of shellcode[] changes: the contents before and after "50" have their positions swapped. The code execution result after dynamic conversion is shown in Figure 9. When the stack.c program is executed once again on the control center host, "returned properly" appears on the command line, indicating that stack.c executed successfully and the return address no longer jumps into other memory space, which shows that the conversion strategy takes effect.

Conclusion. This paper puts forward a network interconnection security buffer method for the core control system of the power monitoring system to address network interconnection security. The method adds a security buffer between the core control system and other systems or operating terminals. Three functional modules, paradigm check, behavior analysis, and dynamic conversion, are deployed in the security buffer to perform multilevel security inspection of interconnection request packets.
Among them, a unified description language is given for defining the interconnection protocol packets of power monitoring systems, which supports the description and parsing of complex heterogeneous protocols; OCSVM-based behavior analysis has significant advantages over other unsupervised learning methods and can be effectively adapted to the power monitoring system environment; and by introducing the idea of redundant heterogeneity and adding a dynamic conversion function to the security buffer, the conversion policy can be dynamically adjusted to prevent attackers from inferring the normal working mode and then carrying out precise attacks. The method can offload attacks against the core control system and secure the system. The added security buffer can, however, affect the real-time performance of the power monitoring system and may produce a few erroneous judgments when identifying malicious behaviors. In the future, we will study more effective behavior analysis algorithms to guarantee the real-time performance of the power monitoring system and further improve its security protection capability.

Data Availability. The labeled data set used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest. The author declares no competing interests.
6,256.6
2022-05-30T00:00:00.000
[ "Engineering", "Computer Science" ]
BOULDER ATMOSPHERIC OBSERVATORY: 1977–2016. The End of an Era and Lessons Learned. Capsule: This essay summarizes the nearly 40-year history of the Boulder Atmospheric Observatory and its contributions to the research community. Ground-breaking experiments are discussed along with lessons learned. Research at the BAO ranged from fundamental boundary layer studies to greenhouse gas baseline measurements used as part of global networks. The following article is a brief description of the BAO and its nearly 40-yr history, including some of the key studies and their findings that might not have been possible if not for this unique facility. Lessons learned from this research facility can be used to help guide and facilitate the evolution of both existing and future research observatories. MOTIVATION AND EARLY OBSTACLES. Initially accepted from the contractor in October 1977, planning for the BAO started back in February 1974 with a workshop held in Boulder, Colorado. The BAO was proposed as a joint meteorological observing facility, and the workshop was attended by representatives from NOAA, the National Center for Atmospheric Research (NCAR), and the Cooperative Institute for Research in Environmental Sciences (CIRES). Why build such a facility? Even though Boulder and the Colorado Front Range were then, as now, a hub for atmospheric research, this was not the primary motivation for the BAO. The former WPL was tasked with the mission of developing instruments to remotely sense and monitor the atmosphere. For 10 years prior to construction of the BAO tower, that research was carried out on a 150-m-tall tower near Haswell, Colorado, 267 km southeast of Boulder. The importance of this work and the limitations of the Haswell site, being far from Boulder and with a tower too short to sample the full height of the PBL, prompted WPL to redefine its needs for an improved facility. Moving closer to Boulder was not without its own challenges. Scientifically, the tower had to be placed sufficiently east of the Front Range, so that the PBL development would be relatively unaffected by the Rocky Mountains, and in a region with minimal residential and commercial development that could impact low-level atmospheric flow. Nonscientific requirements included adherence to Federal Aviation Administration (FAA) aircraft safety restrictions and acquiring a suitable expanse of land large enough and geologically sound enough for the necessary 17-m-deep foundation and the guy wire anchors that extended horizontally 244 m from the tower base. The scope and cost of this project [$1.5 million (in 1977 U.S. dollars) for the tower along with $1.5 million for instrumentation] naturally led to media interest and questions as to the need for such a facility. Approval by local governments took time and met with opposition. In today's political climate, a similar project would undoubtedly be met with scrutiny and opposition without clearly articulating the societal benefits relative to the costs. To better inform the public and local decision-makers, public forums were held and news articles were published explaining the goals and virtues of such a facility. Dr. C. Gordon Little, then director of the WPL, stated in a news article that without the BAO, WPL could not develop "instrumentation that can see the weather and instruct computers to devise three-dimensional pictures of the weather patterns that may someday be used to warn citizens of severe storms and prevent airplane crashes" (Denver Post, 7 November 1976). Hall (1977) described the BAO tower during its final stages of construction and explained the site selection process, including some of the scientific goals. He also stated the tower would be treated as a national facility open to government and private agencies and universities. Remote sensing within WPL was in its infancy. In addition to hosting fundamental atmospheric studies, Dr. William H. Hooke, one of the original advocates for the facility and head of the WPL Atmospheric Studies Division, stated, "One long-range objective of the BAO research program is to provide a kind of womb for the growth and development of various remote-sensing devices that can ultimately be sent off on their own as mobile atmospheric-probing packages, no longer requiring verifications by in-situ sensors" (WPL Annual Report 1978). Such discussions and justifications were required 40 years ago, and we must continue this conversation today and leverage the success of the BAO and similar observatories to promote the critical need for continuous, long-term atmospheric observations. In addition to the development and testing of novel remote sensing instrumentation, WPL also realized the need for the transfer of associated technologies, not only of the instruments but of the products they produced. This effort focused on developing what was called a Prototype Regional Observing and Forecasting Service, later renamed the Project for Regional Observing and Forecasting Service (PROFS; Schlatter et al. 1985), a forerunner of the work now conducted in ESRL's Global Systems Division with the Advanced Weather Interactive Processing System (AWIPS). Thus, the ground-based observations made at the BAO played a critical role in the development of today's weather forecasting technology, which ultimately saves lives and money. TECHNICAL DETAILS. The centerpiece of the facility was a 300-m-tall triangular tower (designed for 500 m should the need arise and the necessary funding become available), initially instrumented at eight levels with both fast- and slow-response air temperature, relative humidity, and wind sensors. The site, near Erie, Colorado, was located in relatively flat terrain, initially surrounded by agricultural and rangeland fields. Today, urban development has encroached quite near to the site. The tower and its instrumentation were the focal points of the BAO, but its two elevator systems made it one of the most unique and user-friendly research towers in the world. At the time of its construction, it was one of the tallest meteorological towers. Although over the years many taller towers have been erected, few if any had the capabilities of the BAO. With a three-passenger inside elevator, scientists could easily transport and mount instrumentation at any of the fixed levels or on the many other platforms throughout the height of the tower. The outside instrument carriage (IC) took the research capabilities of the BAO to another level, making it possible to profile the atmosphere from the surface to 300 m, something that had not been possible before. At 300 m, the tower extended above the nocturnal boundary layer and was able to track the evolution of the daytime convective boundary layer until late morning on most days. Under conditions of strong subsidence or downsloping winds, a capping inversion associated with air pollution events in the Denver area remained within the height of the tower.
Several remote sensing systems were part of the facility, along with near-real-time processing and publicly available display capabilities, greatly enhancing its usefulness to scientists and the public. As part of the design concept, data were collected 24-7 with a near-real-time output every 20 min. Data were also archived on nine-track tapes. Figure 2 shows Dr. Chandran Kaimal annotating a strip chart recorder in the computer systems trailer and the instrumentation mounted on one of the eight instrumentation booms. Figure 3 shows an example of a 20-min computer output that included turbulent flux parameters from the fast-response sensors (10 Hz), mean data from the slow-response sensors (1 Hz), and output from the microbarograph array (Hooke 1979; Bedard and Georges 2000) and optical triangle (Hooke 1979; Lawrence et al. 1972; Tsay et al. 1980). The initial on-site data collection system used a Digital Equipment Corporation (DEC) PDP-11/34 minicomputer. The data were sent in real time to Boulder, where they could be visualized and analyzed by NOAA and visiting scientists. Some of the instrumentation, the design concepts, and the data collection system for the BAO tower came from the Air Force Cambridge Research Laboratories under the direction of Dr. Kaimal (Kaimal et al. 1974) and had been a part of the groundbreaking 1968 Kansas and 1973 Minnesota boundary layer field studies (Izumi 1971; Izumi and Caughey 1976). Included were redesigned sonic anemometers with fast-response platinum-wire thermometers. KEY EXPERIMENTS AND CONTRIBUTIONS FROM THE BAO. The first two major experiments at the BAO took place in the spring and fall of 1978 and required the wiring and instrumenting of the tower to be completed in just six months (October 1977–March 1978) under some very challenging conditions. The first experiment was a site evaluation to help understand the effects of the uneven terrain on the temporal and spatial characteristics of the PBL. The results from this evaluation showed an atmospheric structure similar to flat terrain (Kaimal et al. 1982a,b; Haugen 1978; Haugen and Schwiesow 1979). Measurements of convective plumes (Wilczak and Tillman 1980) resulted in a better understanding of plume structure and transport. Using the continuous data to examine wind flow patterns, Hahn (1981) found diurnal wind oscillations within the PBL for days with southerly geostrophic winds. The second experiment, Project Phoenix (Hooke 1979), was again a study of the PBL, looking at its growth and decay, with the goal of evaluating and comparing many of the remote sensing systems being developed by WPL as well as creating a complete dataset of the convective boundary layer and many of its underlying atmospheric processes. Both experiments used research aircraft and surface observations from the Portable Automated Mesonet (PAM; Brock and Govind 1977). The fall experiment brought together a variety of remote sensors from the Boulder research community (radars, lidars, sodars, aircraft, microwave radiometers, optical wind sensors). This initial experiment, the so-called christening of the BAO facility, led to several more important studies and research efforts over the next 10 years. These again included boundary layer studies (nocturnal and diffusion) and various instrument comparisons, such as sodars, and an Environmental Protection Agency (EPA) study to examine the performance characteristics of various wind sensors measuring atmospheric turbulence.
Because the BAO operated 24-7, a large dataset covering a wide range of meteorological conditions was available to researchers for analysis. Over the years, the BAO hosted several large national and international experiments and numerous smaller ones. One of the largest was the Boulder Low-Level Intercomparison Experiment (BLIE), a World Meteorological Organization (WMO)-sponsored workshop/experiment (Baynton et al. 1981; Kaimal et al. 1980a) with 11 WMO member nations participating. A total of 23 different measurement systems/sensors were deployed, including radars, sodars, tethered and free-flying balloon packages, a short instrumented tower, and some of the first instrumented unmanned aerial vehicles (UAVs; Fig. 4). The objective of these intercomparisons was to better understand the different low-level atmospheric sounding techniques. The first use of the instrument carriage was during BLIE, during which radiosonde and tethered balloon instrument packages were transported up and down the tower both for intercomparison and for comparison with the tower sensors. This resulted in the refinements that appear in today's radiosondes, tethersondes, and dropsondes. In 1982 the Boulder Upslope Cloud Observation Experiment (BUCOE; Gossard 1982; Gossard et al. 1982) was conducted. After the carriage was instrumented with a suite of fast-response sensors (Fig. 5; Kaimal and Gaynor 1983), it could either profile the atmosphere or be placed at an altitude between the fixed tower levels (10, 22, 50, 100, 150, 200, 250, and 300 m), recording data as turbulent features of interest oscillated up and down the height of the tower. In addition to increasing the understanding of the fundamental properties of the lower atmosphere, the BAO played a key role in atmospheric chemistry and air pollution studies. In 1988-89 two large air pollution studies were conducted along the Front Range. The Denver Brown Cloud Studies (Neff 1997), a joint effort among NOAA, local agencies, and universities, set out to collect meteorological, aerosol, and air chemistry data to help better understand the air pollution issues in the Denver Front Range region. Results from these studies helped shape some of the early plans to improve regional air quality. The BAO was one of the major sampling sites and a major contributor to understanding the meteorology along the Front Range. BAO AS A NUCLEUS FOR OTHER PROJECTS. As mentioned earlier, two nontower sensors were a part of the initial BAO configuration. The microbarograph array (Hooke 1979; Bedard and Georges 2000) consisted of four ground sites, one located near the base of the tower and three others located approximately 128 m away, capable of measuring the speed and direction of various infrasound signals. In the early days, arrays similar to this were used to detect underground nuclear tests by the pressure waves generated as the Earth's crust was deformed. Over the years the research focused on naturally occurring infrasound as clues to avalanches, meteors, tornadoes, volcanoes, severe-weather systems, and turbulence (Bedard 2005). One famous event observed in the data from the BAO infrasound array was the May 1980 eruption of Mount Saint Helens. The optical triangle (Hooke 1979; Lawrence et al. 1972; Tsay et al. 1980) consisted of three short towers located 244 m from the main-tower base, forming an equilateral triangle 450 m on a side. On top of each tower was a laser source and receiver that could measure the average wind along a light beam using scintillation techniques.
Wind flowing perpendicular to each leg of this triangle represented either convergence or divergence into and out of the triangle. These data have been used to study gravity waves and were also compared to microbarograph data. In 1985 ESRL's Global Monitoring Division (GMD; formerly the Climate Monitoring and Diagnostics Laboratory) first began using the BAO for some of its research and installed a suite of radiometers to measure various components of incoming and outgoing solar radiation (Dutton 1990). In 1992 these measurements were expanded (diffuse and direct measurements) and incorporated into the World Climate Research Programme (WCRP) Baseline Surface Radiation Network (BSRN; König-Langlo et al. 2013), one of 68 sites worldwide. The BSRN is a worldwide network of monitoring sites designed to provide high-quality short- and longwave surface radiation measurements for validation of satellite-based estimates and comparisons to global climate model (GCM) simulations. In 2007 GMD added the BAO to its tall-tower network (Andrews et al. 2014). Three levels (22, 100, and 300 m) on the tower were sampled continuously and provided regionally representative measurements of carbon dioxide (CO2). These data were then used to help identify long-term trends, seasonal variability, and the spatial distribution of carbon cycle gases as part of the Global Greenhouse Gas Reference Network. Also at 300 m, a weekly flask sample was taken and analyzed for CO2, methane (CH4), carbon monoxide (CO), hydrogen (H2), nitrous oxide (N2O), and sulfur hexafluoride (SF6), and by the University of Colorado (CU) Institute of Arctic and Alpine Research (INSTAAR) for the stable isotopes of CO2 and CH4 and for many volatile organic compounds (VOCs). These measurements made the BAO one of only six such global sites. Many studies made use of previously collected BAO tower data. One such study was in support of the Department of Energy's (DOE) wind energy research efforts (Kaimal et al. 1980b). At about the same time the BAO went operational, the world's first wind farm also came online in New Hampshire. This endeavor was not totally successful because of the lack of understanding of wind variability and turbulence effects on wind turbines. In cooperation with the DOE, turbulence data from the BAO tower during two high-wind events were used to validate gust models and better understand the stresses a wind turbine might experience. These results were then applied to the design of future wind generators and farms. In the nearly 40 years since this study, remote sensors (sodars and lidars) have become extremely useful for mapping the winds of potential or existing wind farms (Banta et al. 2013, 2015; Wilczak et al. 2015; Lundquist et al. 2017). UNIQUE APPLICATIONS. Many interesting uses of the tower and its data have taken place throughout its history. In March of 1982, the instrumentation running on the tower and the then-operational PROFS mesonet (Brock and Govind 1977; Pratt and Clark 1983) captured the passage of a very intense cold front. Shapiro (1984) used these data to analyze the microscale structure of a density current, which is known to be a triggering mechanism for mesoscale convective systems. The high-temporal-resolution data revealed that the horizontal gradients of the front were concentrated within a narrow 200-m horizontal distance.
One of the more unique uses was a study whose purpose was to discredit the results of earlier tower measurements that seemed to show that Newton's 1/r² gravity law was wrong for "short" distances, where r is the distance between two objects (Cruz et al. 1991). Another nonstandard dataset collected at the BAO was from a web camera. Installed at 300 m and facing approximately south, it had a field of view from 34° to 334°. The camera was programmed to capture images at a variety of different angles and time resolutions. Two of the more interesting sets of images were a zoom looking at downtown Denver every 10 min and an hourly panorama with six 50° panels. The view of downtown Denver showed the evolution of many pollution events and the impact of a trapping inversion. ESRL's Global Systems Division devised a method to simulate clouds and aerosols (Fig. 6) using the 300° panorama data in conjunction with the Local Analysis and Prediction System (LAPS) to validate existing algorithms and to assimilate observed data in numerical models (Fig. 7; Jiang et al. 2015). The web camera also captured many interesting events, including the Fourmile Canyon Fire in 2010, showing the smoke plume flowing eastward toward the plains. Stone et al. (2011) used this event to analyze the impacts of smoke on incoming solar radiation at the BAO along with other sites impacted by the plume. Not to be forgotten is the opportunity to drop objects off the top of a very high tower without the danger of hitting someone or something below. This capability proved useful for studying different parachute designs for dropsondes (Fig. 8) and for testing how well newly designed unmanned aerial vehicles can glide to a "soft" landing. LATER YEARS AND DECOMMISSIONING. The site itself and its surroundings changed over the years. Commercial and private development, especially in the last five years, had crept closer and closer. The pros and cons of the scientific impact of these changes have been discussed. Boundary layer meteorologists and some climatologists felt this encroachment had a negative impact. Others felt the BAO could capture important anthropogenic changes to the local environment. Development also meant increases in pollution and particulates. The BAO happened to be located in the middle of the Denver-Julesburg Basin (first discovered in 1901), which underlies the Denver metropolitan area and the eastern side of the Rocky Mountains, where there has been major oil and gas exploration. Around 2005, ESRL's Chemical Sciences Division started planning experiments collocated at the BAO to study the effects of increased oil and gas activity. Figure 9 is a timeline of some of the more important events and field campaigns throughout the history of the tower. In the earlier years of the BAO, the focus was on the boundary layer and instrumentation, spanning both remote sensing and in situ studies. In the latter years, the focus changed more to atmospheric chemistry and aerosol research, with some intercomparisons of remote sensors (lidars) whose technology had been transferred to the private sector. In 2011 and 2014, a portable instrument shelter with amenities (PISA; Fig. 10) was placed on the instrument carriage, filled with various air chemistry and aerosol measuring instruments. Nearly 300 vertical profiles were made over a 26-day period during the Nitrogen, Aerosol Composition, and Halogens on a Tall Tower (NACHTT) experiment (Brown et al. 2013). Even though the BAO has been decommissioned, one can still access some of its data.
In 2007 a new data collection system was implemented, with continuous measurements at three levels (10, 100, and 300 m). One-minute-average temperature, relative humidity, and wind speed and direction, along with surface pressure and precipitation, are available. At about the same time, a laser ceilometer and a monostatic sodar were acquired and run continuously. These data can be accessed through the BAO data browser described in the "BAO Data Browser" appendix. Data from special experiments, such as FRAPPE (Pfister et al. 2017) and XPIA (Lundquist et al. 2017), are also available, including periods of fast-response sonic data at multiple levels on the tower. These periods are described in the "Tower Data" appendix. Data from other instrumentation operated at the tower are also often available during these periods, including wind profiler, microwave radiometer, laser, sodar, disdrometer, and aerosol and chemical measurements. As mentioned above, the "BAO Data Browser" appendix gives a brief description of the BAO data browser that is still available for online access to BAO data. CONCLUSIONS AND LESSONS LEARNED. Even though the BAO is no longer operational, its legacy lives on through the research, the development of new remote sensing instrumentation and technology, its data archive, and the more than 400 journal articles citing the BAO over the last 40 years. How might the BAO and its resources have been better used, and could the BAO have somehow survived? Although the lack of sustained support was the primary cause of its decommissioning, there were other reasons for sunsetting this facility, including infrastructure age. Though the tower itself was structurally sound, components such as the passenger and instrument elevators were nearing the end of their life cycle and would have required replacement at considerable cost. The boom in housing and commercial development was impacting low-level atmospheric flow, making boundary layer measurements as originally envisioned no longer possible. Not all measurements would have been negatively impacted by the encroachment. For example, changes in atmospheric chemistry and changes in boundary layer meteorology with the transition from rural to urban land cover could have been quantified. In addition, continuous air sampling was one way to track the impact of the growing oil and gas exploration throughout the region. Similarly, ongoing long-term continuous surface radiation measurements, as pointed out by Dutton (1990), were deemed important to understanding climatologically significant changes in cloudiness (global dimming and brightening) that occur over decadal scales. Many avenues were pursued to promote the BAO during its lifetime, including asking the research community for support and drafting a document to be sent to the National Science Foundation (NSF) requesting that the BAO be considered a National Facility, as proposed by Hall (1977). Support for the BAO throughout the research community was very strong, but only in terms of acknowledging its importance as a research facility. Funding by these same supporters was not available. Probably the biggest lesson learned was that no matter how useful the BAO was seen to be, it required more than just small individual research projects for support. In the early years, many large, often international, research projects were conducted at the BAO. That pattern changed, with a greater number of smaller projects replacing larger efforts and a corresponding reduction in core support for the facility.
Did the type of research that could be conducted at the BAO lead to its decommissioning? There is no evidence to suggest that this was the case; in fact, the ways in which the BAO was used continued to diversify. Even as the future of the BAO was being discussed, research continued, with the BAO serving as the centerpiece of the FRAPPE (Pfister et al. 2017) and XPIA (Lundquist et al. 2017) field programs. In hindsight, everything possible was done to try to prolong the life of the BAO, but the resources needed to support it as a community facility were simply not available. ACKNOWLEDGMENTS. We thank the many people who made the Boulder Atmospheric Observatory a world-class research facility. A great debt of gratitude is owed to the late Dr. C. Gordon Little, without whose vision and support the BAO would never have been constructed. Dr. Chandran Kaimal and his group from the Air Force Cambridge Research Laboratories, who brought their boundary layer expertise to NOAA and Boulder, Colorado, were instrumental in launching the initial groundbreaking studies. Finally, to the engineers, computer specialists, administrative support staff, and scientists who supported the BAO and its science over the years, we are deeply grateful.
5,621
2018-02-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Milestones in Chronic Lymphocytic Leukemia. Summary: The past 10 years have been an exciting ride for Chronic Lymphocytic Leukemia (CLL) aficionados. An overview of changes in management paradigms in CLL, ranging from insights into biology, via chemotherapy and chemoimmunotherapy, to maintenance and novel drugs, will be presented. The path to chemoimmunotherapy. The 10-year anniversary of a journal is not only a reason for celebration but also an opportunity to assess progress in the field over a longer period of time. In our clinical world of incremental improvements, this is a relatively rare opportunity to reflect on what has happened on a larger scale than usual and determine how happy or hopeful that leaves us. About 10 years ago, the CLL field had just left the stage in which, for decades, CLL was treated with chlorambucil (CLB) monotherapy or lymphoma polychemotherapy, with limited success for the aggressive variants of CLL. Resurrecting CLL from its reputation as the "boring" leukemia, the field developed very relevant dynamics ([1]; Fig. 1). At the turn of the millennium, important biological insights were gained, such as the recognition of recurrent FISH cytogenetic lesions [2], which had a very relevant prognostic impact, or the definition of two biologically distinct groups of CLL, i.e., those with mutated and unmutated IgVH status [3,4], with different clinical behavior. Also, more effective chemotherapy backbones were proposed, and a combination of fludarabine and cyclophosphamide (FC) had been shown to produce better responses and longer PFS in a number of randomized trials [5,6]. However, none of these trials produced an overall survival (OS) benefit. In these trials, it also became clear that the proposed FISH cytogenetic groups were able to define subgroups with much worse performance after chemotherapy induction. Thus, del17p and del11q were defined as risk factors with predictive properties for treatment outcome. At that time, MD Anderson pioneered the use of rituximab in combination with FC in their typical phase 2 cohort design [7,8]. Indeed, this combination produced superior outcomes in indirect comparison with historical controls from MD Anderson itself. A little while into the decade reviewed here, these monocentric data received randomized support from the German CLL 8 trial [9]. This trial not only showed the expected increase in responses and progression-free survival (PFS) for adding the antibody to FC, but was also able to define an overall survival benefit for the first time in CLL, thus changing the standard of care for patients fit enough to tolerate this intensive regimen. An additional exciting observation from the trial was that patients with del11q (a high-risk group for early failure with fludarabine monotherapy) benefitted substantially from the FCR combination [10]. The hunt for remission depth. Around the same time, measurement of residual tumor mass, derived either by PCR-based or flow-cytometric methods [11], showed that patients who had achieved high-quality remissions, as defined by these measurements, had very long remission durations. Initial observations of such minimal residual disease (MRD)-negative states were available from alemtuzumab-treated patients [12]. This very effective antibody was an exciting player at the start of our memo decade, but has since fallen by the wayside due to crippling toxicities and management decisions for the drug.
However, it remained unclear from phase 2 trials whether the achievement of MRD negativity was merely a sign of "good" risk disease, and thus associated with superior response, or whether it might develop into an independent treatment goal that one could strive to achieve. The concept of MRD as a treatment goal has, indeed, had more staying power than alemtuzumab. In fact, the already mentioned CLL 8 trial had extremely interesting results in this respect [13]: as expected, patients achieving MRD negativity (i.e., <10⁻⁴ CLL cells) as measured by flow cytometry had significantly longer PFS. This turned out to be independent of the treatment arm, but it became clear that the addition of rituximab roughly doubled the number of patients achieving MRD negativity, thus suggesting that remission depth could be an independent treatment goal. Consequently, a number of trials tested whether one could devise more effective treatments producing higher rates of deep remissions by intensifying therapy. The addition of further chemotherapy agents (e.g., mitoxantrone) [14] or antibodies (e.g., alemtuzumab) [15] to an FCR backbone was thus tested. The results were, however, relatively disappointing: in the highest-risk groups, results could not be improved significantly, generating some sort of response plateau. Furthermore, and equally importantly, the trials demonstrated significantly increased toxicity. Given that FCR was targeted at very fit CLL patients and was already too toxic an option for a majority of patients with CLL, these escalation trials were thus not producing a benefit for a relevant group of patients. Thus, the hunt for MRD negativity took a hit. Tailoring treatment intensity and duration. Indeed, a parallel approach to preserve the efficacy of the antibody effects while reducing the toxicity of the chemotherapy backbone led to the definition of therapeutic options with less intensity than FCR. Combining rituximab with bendamustine (RB) showed encouraging efficacy in phase 2 trials [16] while seeming much less toxic. A randomized comparison (the German CLL 10 trial) [17] confirmed the lower toxicity profile of the regimen in fit patients, but showed that the reduction in toxicity came at the price of somewhat reduced efficacy. However, the efficacy benefit of FCR was not detected in the older patients in the trial. In parallel, the toxicity difference was very prominent in the older but fit population. This suggested that for an older or possibly less fit population, RB may have a better risk/benefit ratio than FCR while maintaining relevant disease control. Parallel approaches in less fit patients showed that the addition of a CD20 antibody (rituximab, ofatumumab, or obinutuzumab) [18,19] improved responses and PFS and, in the case of obinutuzumab, also conferred an overall survival benefit. Thus, within a few years after the launch of memo, the landscape of treatment in CLL was transformed by the introduction of chemoimmunotherapy regimens that could be tailored to individual tolerability. Since then, a number of approaches to remission maintenance after chemoimmunotherapy induction have been investigated, and an improvement of PFS by maintenance with two different CD20 antibodies has been reported in full papers [20,21]. Similar improvements with lenalidomide maintenance strategies were recently presented at ASH 2016.
None of the trials have been able to report OS benefits for maintenance strategies, but this may be due to short follow-up and/or improved salvage therapies. In fact, we may not be able to fully resolve these questions in the future, given the changes in the field that dominated the last years of the memo decade. The development of novel treatment paradigms. All the developments described so far had left the highest-risk patients, those with del17p and p53 dysfunction, almost untouched. These patients had low response rates and short PFS even with intensive induction treatment options such as FCR. The only patients in this group able to achieve longer survival were the small subset of CLL patients who qualified for allogeneic transplant [22]. Some success in this group had been reported for alemtuzumab therapies (e.g., in combination with high-dose corticosteroids, and at a high price regarding infections), but that was swept away by things to come. In parallel to the increased interest in treating CLL, there was a surge in understanding of the disease biology. Importantly, it became clear that CLL was not merely genetically programmed to proliferate and survive, but relied extensively on microenvironmental interactions for these outcomes [23]. It thus became clear that there would be signaling pathways that provide essential signals for the development of CLL, and that these signaling pathways may serve as important targets for the development of treatments. The kinase inhibitors ibrutinib and idelalisib, targeting components of the B cell receptor signaling cascade, both showed a very interesting phenomenology of response [24,25]. In fact, both drugs were able to rapidly shrink lymph node masses even in massively pretreated patients (a feat that chemoimmunotherapy struggles with). However, the inhibitors apparently did not kill the CLL cells rapidly, but rather spilled them from the lymph nodes (and bone marrow) into the peripheral blood, where a sometimes dramatic increase in lymphocytosis could be observed. This lymphocytosis then decreases over time, and the drugs produce a very high rate of partial but durable remissions under continuing treatment. There is also a clear tendency for response quality to improve over time. Complete remissions, or even MRD negativity, however, remain rare. Thus, the currently used treatment paradigm for these drugs is to treat until progression. The tolerability of the drugs is good compared with chemoimmunotherapy, but both substances have a distinct set of rare side effects that mandate specific management. Excitingly, the control of clones with functional p53 deficiency by these drugs is much better than with standard treatment, giving this subgroup important avenues to improved survival [26,27]. However, with longer observation times, the impression is that pretreated and/or p53-deficient CLL has a steady rate of relapse even on these treatments [28]. Thus, alternatives are still needed. Most recently, direct targeting of the cell death machinery via the Bcl-2-specific "BH3-mimetic" venetoclax has entered the field. Venetoclax produced relevant disease control in pretreated patients and in patients with del17p, adding another option to the armamentarium [29]. Importantly, venetoclax was effective in patients previously treated with kinase inhibitors, giving those patients a salvage option (so far presented only in abstract form).
In initial experience, venetoclax led to a high rate of relevant tumor lysis syndromes, but since a slow ramp-up of the dose has been mandated, this has not proven to be a major problem. Outside the spectrum of currently licensed options, there are highly relevant developments in the areas of immunomodulation (e.g., lenalidomide) [30,31] and immunotherapy (e.g., CAR T cells) [32] that need to be mentioned but cannot be discussed within the brevity of this overview. We have thus arrived at an exciting moment in CLL history, with many options to develop in the future and quite a bit of uncertainty about the optimal treatment pathways of today and tomorrow. As combination approaches with novel drugs are in development, we face the tangible possibility of cure or very long-term control for our patients. This has clearly been a very exciting decade for those interested in CLL, leaving the community both happy and hopeful.
2,462.6
2017-02-01T00:00:00.000
[ "Biology" ]
Vacuum-Free and Highly Dense Nanoparticle Based Low-Band-Gap CuInSe2 Thin-Films Manufactured by Face-to-Face Annealing with Application of Uniaxial Mechanical Pressure. Copper indium gallium sulfo-selenide (CIGS) based solar cells show the highest conversion efficiencies among all thin-film photovoltaic competitors. However, absorber manufacturing is in most cases dependent on vacuum technology, such as sputtering and evaporation, and on the use of toxic and environmentally harmful substances such as H2Se. In this work, the goal of fabricating dense, coarse-grained CuInSe2 (CISe) thin-films by vacuum-free processing based on nanoparticle (NP) precursors was achieved. Bimetallic copper-indium, elemental selenium, and binary selenide (Cu2−xSe and In2Se3) NPs were synthesized by wet-chemical methods and dispersed in nontoxic solvents. Layer stacks from these inks were printed on molybdenum-coated float-glass substrates via doctor-blading. During the temperature treatment, a face-to-face technique and mechanically applied pressure were used to transform the precursor stacks into dense CuInSe2 films. By combining liquid phase sintering and pressure sintering, and later using a seeding layer, issues such as high porosity, oxidation, and selenium- and indium-depletion were overcome. There was no need for an external Se atmosphere or H2Se gas, as all of the Se was directly in the precursor and could not leave the face-to-face sandwich. All thin-films were characterized with scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), X-ray diffraction (XRD), and UV/vis spectroscopy. Dense CISe layers with a thickness of about 2–3 μm and low band gap energies of 0.93–0.97 eV were formed in this work, showing potential for use as a solar cell absorber. Introduction. Solar cells from copper indium gallium sulfo-selenide (CIGS) reach a certified efficiency of 23.35% [1] when manufactured in batches and by processes using vacuum technology, such as sputtering or evaporation, and toxic substances such as H2Se, or at least maintaining a Se atmosphere during annealing. Vacuum-free processing and improving the scalability of absorber fabrication would be very beneficial. CIGS nanoparticles (NPs) have successfully been used recently to achieve dense absorber layers [2,3] and high-efficiency solar cells with, for example, 13.8% [4] or 15% [5] efficiency. An alcohol-based molecular precursor yielded 14.4% efficiency [6] while using H2Se annealing. Ink-jet printing of CIGS nanoparticles [7] or molecular precursors [8,9], which sometimes uses hydrazine as a solvent, shows the possibility of low-cost solar cell absorbers with precise deposition techniques. Even an all-solution-processed CIGS device with an efficiency of 13.8% was fabricated while using H2Se annealing [10]. Figure 1. Schematic overview of the experimental processes, showing details of nanoparticle synthesis, nano-inks for process routes I and II, as well as a schematic of the face-to-face process. Route Ia uses only ink I from Cu-In and Se NPs, route Ib uses an additional sputtered Cu-In seeding layer, and route II uses ink II from In2Se3 and Cu2−xSe NPs and additional Se NPs. SEM, scanning electron microscopy; EDX, energy dispersive X-ray spectroscopy; XRD, X-ray diffraction; EDTA, ethylenediaminetetraacetic acid; TEG, tetra ethylene glycol; PVP, polyvinylpyrrolidone.
Synthesis of Nanoparticles. All chemicals mentioned below were used as received without further purification, and the syntheses were performed in ambient conditions if not described otherwise. Cu-In NP: Bimetallic, slightly indium-rich copper-indium nanoparticles were synthesized according to the works of [12-14]. CuCl2 (1 mmol) and InCl3 (1.12 mmol) were dissolved in 150 °C hot tetra ethylene glycol (TEG), while a reducing agent solution was prepared separately by dissolving NaBH4 (14.84 mmol) in 10 °C pre-cooled TEG. Both solutions were then dropped together into a third beaker at room temperature under vigorous stirring using a Reglo Digital MS-4 peristaltic pump (Ismatec, Wertheim, Germany) and immediately formed a black precipitate. To terminate the bubbling alcoholysis and remove excess BH4−, acetone was added to the beaker 10 min after the reaction was finished. The copper-poor, indium-rich nanoparticles with a Cu/In ratio of 0.89 were isolated by centrifugation, washed several times with isopropanol (IPA), and finally dispersed in IPA. Se NP: Elemental selenium nanoparticles were synthesized according to the works of [15-17]. The Se NPs were fabricated from a 0.127 mol/L Na2SeSO3 solution, prepared by dissolving Na2SO3 and Se powder in 80 °C hot deionized (DI) water and aging for more than 24 h. To stabilize the nanoparticles, a 1.35 mmol/L polyvinylpyrrolidone (PVP) solution containing trisodium citrate (2.38 mmol) as an acidic buffer was used. Then, 25 mL of the Na2SeSO3 precursor solution was mixed into the stabilizing solution and 15 mL HCl was added at room temperature. The solution immediately turned from clear to an orange-red murky color. After 1 h of stirring, the nanoparticles were isolated by centrifugation and washed several times with pre-cooled deionized water. For storage, the particles were concentrated and dispersed in IPA.
Cu2−xSe NP: Binary copper selenide nanoparticles were synthesized by a redox reaction in deionized water according to the works of [18,19]. First, 158 mg Se was reduced to Na2Se and Na2Sex with 18 g NaOH in 40 mL DI-H2O. Na2Se as well as Na2Sex form Cu2−xSe in a reaction with 350 mg CuCl2 that was coordination-complex-stabilized with 877 mg ethylenediaminetetraacetic acid (EDTA) in 15 mL DI-H2O. If no EDTA is used, Cu3Se2 NPs are formed instead. The particles were isolated via centrifugation, washed several times with deionized H2O, and then dispersed in IPA. In2Se3 NP: Binary indium selenide nanoparticles were synthesized by a redox reaction in deionized water according to the works of [19,20]. First, 205 mg Se was reduced to Se2− with 149 mg NaBH4 in 32.5 mL DI-H2O under an N2 atmosphere, followed by a precipitation reaction with a solution of 431 mg InCl3 in 6 mL DI-H2O. After the reaction was finished, 2.5 mL acetone was added to the flask to remove any remaining BH4−. The nanoparticles were isolated via centrifugation, washed with deionized H2O, and dispersed in IPA. Processing of Absorber Layer. The substrates for all samples were 25 mm × 25 mm float glass slides coated with 500 nm molybdenum. No diffusion barrier for Na was used, so Na from the glass was provided to the CISe film. All nano-inks were mixed from the respective NPs with a Cu/In ratio of 0.89 and printed on the substrates by doctor blading until no visible holes remained in the film, usually requiring 10-30 coatings. Typically, for each layer, 40 µL of a nano-ink was injected into the film applicator (Erichsen Testing Equipment, Hemer, Germany) with a blade-to-substrate distance of 200 µm, and the blade was moved with a velocity of 10-12 mm/s. In route I, the Cu-In and Se nanoparticles were used as the precursor without (a) and with (b) a sputtered indium-rich Cu-In seeding layer, to study the effects of a seeding layer on the CISe films. To even out the excess indium and enhance diffusion processes, Cu2−xSe nanoparticles were used in route Ib as well. In route II, the binary selenide nanoparticles were used as the precursor, while only a single Se NP layer was deposited on Mo first to form the MoSe2 phase during heat treatment. The annealing was performed on a hot plate or in a tube furnace (Heraeus, Hanau, Germany) in an N2 atmosphere at 550 °C for 5 min. All hot-plate samples were face-to-face sealed according to the work of [21], using a second uncoated float glass slide with which an external uniaxial mechanical pressure of 46.1 kN/m² was applied to the sandwiched nanoparticle films during the annealing procedure. No additional Se atmosphere or H2Se gas was needed for the face-to-face sealed samples, and, using the weights, a combination of liquid phase sintering and pressure sintering could be achieved during the temperature treatment.
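As a quick plausibility check (our own back-of-the-envelope arithmetic, not a calculation from the paper), the quoted pressure of 46.1 kN/m² over a 25 mm × 25 mm substrate corresponds to a modest dead weight:

# Back-of-the-envelope check of the applied uniaxial pressure.
# Assumption (ours): the pressure acts over the full substrate area.
pressure = 46.1e3        # Pa (= 46.1 kN/m^2, from the paper)
area = 0.025 * 0.025     # m^2 (25 mm x 25 mm substrate)
force = pressure * area  # ~28.8 N
mass = force / 9.81      # ~2.9 kg equivalent dead weight
print(f"force = {force:.1f} N, mass = {mass:.2f} kg")

A weight of roughly 3 kg on the sandwich would thus suffice, in line with the description of applying the pressure simply by placing weights on the face-to-face stack.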
Characterization. The morphology of the processed thin-film layers and their elemental composition (via energy dispersive X-ray spectroscopy, EDX) were examined using a JSM-7610F field-emission scanning electron microscope (JEOL, Tokyo, Japan) at an acceleration voltage of 15 kV. The respective film thickness was measured from cross-section images over a large area. Phase analysis was performed using an Empyrean X-ray diffractometer (XRD; Panalytical Inc., Etten Leur, Netherlands) with Cu-Kα radiation (wavelength of 0.15406 nm) and Bragg-Brentano geometry, together with the accompanying software HighScore Plus (version 4.1) outfitted with the database of the International Centre for Diffraction Data (ICDD). The band gap energy of the manufactured CISe films was determined from UV/vis measurements with a Lambda 950 (Perkin Elmer, Waltham, MA, USA) in reflection mode. The measured %-reflectivity was used to calculate the Kubelka-Munk function, F(hν) = [1 − R(hν)]² / [2R(hν)], where R is the reflectivity and hν is the photon energy. A Tauc plot was then used to determine the band gap energy by extrapolating the slope of the linear part of the curve.
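A minimal numerical sketch of this band gap extraction is given below; it is our own illustration rather than the authors' analysis code, assumes a direct allowed transition (Tauc exponent 2), uses a hand-picked fit window, and runs on synthetic reflectivity data in place of measured spectra:

import numpy as np

# Hypothetical photon energies (eV) and reflectivity R(hv) in (0, 1);
# in practice these come from the UV/vis measurement in reflection mode.
hv = np.linspace(0.7, 1.4, 200)
R = 0.6 - 0.45 / (1.0 + np.exp(-(hv - 0.95) / 0.03))  # synthetic edge

# Kubelka-Munk function F(hv) = (1 - R)^2 / (2 R)
F = (1.0 - R) ** 2 / (2.0 * R)

# Tauc quantity for a direct allowed transition: (F * hv)^2
tauc = (F * hv) ** 2

# Fit the steep, linear part of the edge and extrapolate to zero;
# the x-axis intercept is the band gap estimate in eV.
mask = (hv > 0.95) & (hv < 1.05)
slope, intercept = np.polyfit(hv[mask], tauc[mask], 1)
print(f"estimated band gap: {-intercept / slope:.2f} eV")

For the films reported in this work, such extrapolations gave band gaps of 0.93-0.98 eV.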
Route I. First, the results without the seeding layers are shown. The XRD spectrum in Figure 2 features strong and sharp CISe reflections with a full width at half maximum (FWHM) of 0.14°, shows the Mo back contact at 40.5°, and indicates the formation of the MoSe2 phase at 32°. Left-over Se can also be identified, as it was used in excess to remove the possibility of Se-lacking CISe, and the XRD measurement was performed before any KCN etching. No indication of oxides or binary selenides can be seen in the spectrum, but as CuSe reflections overlap with those of Se, especially at 56°, their presence cannot be completely excluded. The surface morphology, as seen in Figure 3a, is a mostly closed layer with little surface roughness, and no damage from the face-to-face glass is visible. However, there is also a pin-hole that reaches down to the back contact. At higher magnifications (Figure 3b), the CISe looks smooth and very well sintered together, while only very few 100 nm grains remain, connected to the µm-sized grains. A more detailed image series of this sample is shown in Figure A1. In the pin-hole, MoSe2 can be identified with EDX (Mo: 27.0 at %; Se: 51.3 at %; O: 12.5 at %; Cu: 5.5 at %; In: 3.7 at %), thus confirming that the hole reaches the back contact; most likely a shunt will be formed when the buffer layer and front electrode are fabricated. The cross-section morphology, as shown in Figure 3c, features a ca. 3 µm thick (as measured from the SEM image) and dense CISe layer. On the right, one of the pin-holes can also be seen, which reaches the back contact. In Figure A2 (Appendix A), the Tauc plot derived from UV/vis reflectivity measurements shows a band gap energy of 0.93 eV that fits copper-poor CISe. EDX analysis shows a Cu/In ratio of 0.93 (Cu: 10.3 at %; In: 11.1 at %; Se: 72.1 at %), confirming the copper-poor composition and showing the Se excess that was also visible in XRD. In the second part of route I, the sputter-deposited Cu-In seeding layer was included in the process to study the effect of a seeding layer and an additional liquid In phase on the film formation. The thickness of 45 nm Cu and 190 nm In was chosen to result in a 200 nm CISe layer during selenization, and the excess indium was equalized by coating Cu2−xSe nanoparticles before more Cu-In and Se NPs were deposited as further precursor material.
Figure 4 shows the XRD spectrum of the CISe layer from this hybrid precursor after face-to-face annealing. The CISe reflections are sharp, with an FWHM of 0.13°, and very strong compared with the Mo signal; the MoSe2 phase is formed as well, and there are no indications of oxides or indium selenides. However, a small amount of copper selenide can be identified, most likely owing to a slight excess when balancing the In amount. The surface morphology, as seen in Figure 5a, is comparable to the films without seeding layers, but the cross-section morphology (Figure 5b) changes a lot. In this case, on top of the back contact, a ca. 1 µm completely dense CISe film is formed from the sputtered seeding layer, which far exceeds the calculated thickness of 200 nm. Then, the morphology changes to a 2.5 µm coarse-grained film. It is also to be noted that the 1 µm dense layer is found all along the cross section, and no pin-holes to the back contact are present. Figure A2 (Appendix A) shows the Tauc plot used to determine the band gap of 0.97 eV for this CISe layer, again in the low-band-gap region, but with a Cu/In ratio closer to 1. EDX shows a copper-rich (Cu/In > 1) elemental composition of Cu: 24.7 at %; In: 22.8 at %; Se: 46.7 at %; O: 3.9 at %, but as already seen in XRD, the copper excess can be explained by the Cu2Se remaining in the film. An EDX mapping of this sample is shown in Appendix A, Figure A3. It can be seen that the Cu and Se distributions resemble the surface morphology, whereas In is evenly distributed over the examined area. Another EDX result from a route Ib CISe film matches the Cu/In ratio of 0.88 from the precursor: Cu: 15.1 at %; In: 17.2 at %; Se: 37.9 at %; O: 22.8 at %. Route II. In this section, the results from the binary selenide precursors after temperature treatment are shown. The chosen selenides, Cu2−xSe and In2Se3, are direct predecessors of CuInSe2 along its formation pathway, and CISe formation was studied in situ [19]. The XRD spectrum in Figure 6 shows sharp CISe reflections with an FWHM of 0.13° and the Mo back contact, but also huge amounts of In2O3.
In Figure 7, the surface and cross-section morphologies of CISe films from selenide precursors after different temperature treatments are shown. As can be seen, the face-to-face annealing with application of pressure (a) does not work as well as for the route I precursors. Drying cracks are visible in all of the respective layers, but they are most prominent in the sample processed by face-to-face annealing (a). Temperature treatment without any pressure (b) worked better in terms of surface area coverage, but the benefit of face-to-face annealing can still be seen in the cross-section images. In the spots where grains were formed, they are better sintered together and less porous than in the sample from the tube furnace. In both samples, the drying cracks reach the back contact and will result in shunts once the buffer and window layers are applied. An improvement in surface coverage and cross-section morphology was achieved by repeating the coating and face-to-face annealing process two times (c). The cracks no longer reach the back contact, but large, inconsistently distributed uncovered areas remain. Another possibility for improvement was the use of an ethanol-based nano-ink instead of isopropanol. This makes use of the better wetting, coating, and drying properties of ethanol owing to its significantly lower viscosity, slightly lower surface tension, and slightly lower boiling point, while having almost the same density as IPA. In this sample (d), the surface coverage is the best out of all the variants, and in the cross section, the layer looks densely sintered together even without application of pressure.
The band gap energy of a route II CISe film was found to be 0.98 eV; the corresponding Tauc plot is shown in Figure A2 (Appendix A). An EDX mapping (Figure A4, Appendix A) shows an evenly distributed elemental composition and a smooth surface of the CISe film. An EDX mapping of another sample from the same series, but with a Cu/In ratio of 0.88, is shown in Figure A5 (Appendix A): Cu: 15.5 at %; In: 17.6 at %; Se: 32.1 at %; O: 7.7 at %. The molybdenum is clearly seen where no CISe film covers the back electrode. Cu and Se again resemble the surface morphology, and In is evenly distributed in the CISe film. Discussion In route Ia, by annealing the nanoparticle precursors in the N2 atmosphere, and thus excluding oxygen from air, nanoparticle oxidation can be prevented, as verified by XRD. In a regular atmosphere, oxidation happens easily at elevated temperatures, especially for indium. Oxide layers surrounding the nanoparticles would hinder the reactions and diffusion processes and promote a more porous morphology with more nano-sized grains and remaining selenides. Additionally, the applied pressure helps to condense the precursor film, even more so once the Se melts. Capillary forces rearrange the nanoparticles, and thus the CISe film quality is enhanced by liquid-phase and pressure sintering. Complete transformation of the nanoparticle precursor into a dense CISe layer in 5 min, as well as the absence of oxides and selenides, shows the potential of face-to-face annealing with external pressure for solar cell manufacturing, especially if the formation of pin-holes can be prevented and the surface smoothness can be enhanced. Also, there is no need for an external Se atmosphere or H2Se gas, as all the Se is directly in the precursor and does not evaporate and leave the face-to-face sandwich. Should an excess of Se or copper selenides be present in the film, it can easily be removed by KCN or, even better, by ammonium sulfide ((NH4)2S) treatment [22]. Absorbers with band gap energies below 1 eV have been used in tandem solar cells [11]; films like these could serve that purpose once the surface morphology is optimized in terms of smoothness and porosity. When using the sputter-deposited seeding layer in route Ib, it can be concluded that, during the face-to-face annealing with applied pressure, two liquid phases of In and Se exist one after the other in the temperature regions of their respective melting points, and the rearrangement of nanoparticles can occur twice. Aided by pressure sintering, the Cu-In and Se precursor continues to grow in the dense morphology provided by the sputtered seeding layer, as long as both liquid phases and the Cu2−xSe phase are present. Here, the 200 nm of dense CISe calculated from the sputtered Cu grows to almost five times that thickness.
In the top region of the precursor, only Se can contribute to the liquid-phase sintering and no Cu2−xSe is present at the start of the annealing process, so the resulting morphology resembles the commonly known coarse-grained CISe. To achieve a completely vacuum-free route Ib CISe layer, elemental Cu and In NPs could be used, as well as very indium-rich, bimetallic Cu-In NPs, to form the seeding layer and provide the liquid indium. The below-1 eV band gap energies of these samples again allow for application in tandem solar cells [11] once the morphology is optimized. Route II showed several hindering mechanisms when compared with route I. The remaining In2O3 most likely stems from the partial oxidation of the indium selenide nanoparticles before the coating process, which was observed as a change in dispersion color and with EDX measurements in SEM. The oxide hinders diffusion processes and chemical reactions, and a complete transformation of the precursor was thus prevented. This could be overcome by an excess of Se NPs, as In2O3 has been transformed back to the selenide in earlier work [23,24], which can continue the reaction to CISe and consume the possibly left-over Cu2−xSe. The morphology of the final film needs even more optimization compared with route I in terms of surface coverage and porosity. This is mostly because of the lack of a liquid phase during the reaction: there is no beneficial particle rearrangement by capillary forces to densify the film. Even though a densification of certain areas was observed with the application of external pressure, the porosity accumulates at the drying cracks and thus widens them. Improvement was achieved by repeating the coating and annealing processes. In this way, the nanoparticles from the second coating filled up the drying cracks left in the layer after the first annealing, and the film quality in terms of surface coverage was enhanced. The best quality, however, was found in a sample derived from ethanol-based nano-inks, which exploited the better wetting behavior of ethanol compared with isopropanol owing to its significantly lower viscosity, slightly lower surface tension, and slightly lower boiling point. The dispersion stability of the nanoparticles did not vary significantly between IPA and ethanol. In this sample, the surface coverage is the best out of all the variants and, in the cross section, the layer looks densely sintered together even without application of pressure or the presence of a liquid phase, suggesting that the packing of the precursor nanoparticles is much better compared with the IPA ink. Route II, however, needs more improvement before the CISe layers reach absorber quality, although the band gap energy of the slightly copper-poor film is 0.98 eV and again shows potential for low-band-gap applications. Conclusions In this work, two different routes were examined to manufacture dense and homogeneous low-band-gap CISe thin films using nanoparticle-based precursors. In route I, the face-to-face method with application of external pressure was used at a temperature of 550 °C to form 3 µm homogeneous and dense CISe layers from bimetallic Cu-In and elemental Se nanoparticles. A 200 nm dense CISe film was prepared from a sputtered Cu and In seeding layer; with the addition of nano-ink, it continued to grow up to 1 µm, completely dense and closed, before transitioning into a coarse-grained morphology. A vacuum-free seeding layer could be achieved with elemental Cu and In NPs or indium-rich Cu-In bimetallic NPs.
Both CISe layers from route I show low band gap energies of 0.93-0.97 eV and might be used as solar cell absorbers if the film quality in terms of porosity and smoothness is enhanced. This shows the prospect of a simple continuous coating process on infinite substrates and roll-to-roll machines for the temperature treatment with application of pressure. Route II, which uses binary Cu2−xSe and In2Se3 nanoparticles, results in partly closed layers with drying cracks and large amounts of remaining In2O3. Using ethanol instead of isopropanol for nano-ink formulation or repeating the coating and annealing process improves the layer quality in terms of both surface and cross-section morphology. Nevertheless, for route II, more optimization is necessary before any use as a solar cell absorber is feasible.
7,536.2
2019-07-31T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
The Analysis of the Maintenance Process of the Military Aircraft The organization of the maintenance process of the military aircraft The maintenance of technical objects is defined as a set of intentional organizational and economic operations performed by people on technical objects, and the relationships between them, from the beginning of the object's lifecycle up to the end of the lifecycle and the object's disposal. The recognition of these relationships and the identification of the operations which appear between the subjects are based on the knowledge and experience of the designers, developers, and engineers of the technical objects. The maintenance compliance and utility of a product depend mainly on the professional competence of the engineering and design crew; however, the design presumptions can be altered many times during the object's lifecycle. These operations are performed to decrease the maintenance "waste effect" and to maximize the "utility effect". The modern military aircraft, which is the basic technical object in the Polish Air Force organizational structure, is a complex product combining various constructional, technological, engineering, and organizational concepts. The design of such a sophisticated product is based on tactical and technical military requirements created after analysis of the modern battlefield. The aircraft construction is based on a modular structure (Fig. 1), which allows the specified tasks to be divided between separate functional blocks. This solution improves the maintenance process and facilitates service and operational use of the aircraft. The conditions in which the aircraft are operated are so specific that they impose strict requirements on the reliability, durability, effectiveness, and safety of the airborne technology. The required levels of these parameters are provided by determining a specified functional structure of devices and a specified level of redundancy. Due to the specific character of aircraft operations, aircraft maintenance can be performed only within a specified system which provides the conditions indispensable for correct aircraft operation. This system is called the Air System (AS) and comprises the airframe, the people who participate in the maintenance process, and the devices which ensure the permanence of the process in a functional way (Fig. 1). The primary target of the military aircraft maintenance process during peacetime is maintaining both the technical equipment and the personnel at the specified reliability and training levels; this is required to provide a high level of efficacy and effectiveness during wartime. Due to the many external factors which negatively influence the technical elements of the Air System, it can be claimed that during the operating process these elements get "used up". Therefore, to maintain the Air System in the appropriate reliability condition, technical service is required. This action comprises the adjustment, tuning, and replacement of particular devices or whole aggregates, in order to slow down the "using-up" process. In practice, there are three aircraft maintenance strategies (Fig. 2): 1. a maintenance system containing a schedule of preventive services (recurring maintenance); 2.
an operational maintenance system; 3. a preventive/predictive maintenance system. The strategies differ in the rule defining the state of the aircraft in the maintenance process and in the rule specifying the range and kind of service (Fig. 2). The organization and scheme of the recurring maintenance strategy for military aircraft is presented in Fig. 3. The basis of this maintenance strategy is the measurement of the amount of labor executed by the airframe; for an aircraft, the amount of labor is defined as the number of flight hours. One of the maintenance states in the recurring maintenance process is the indirect airworthiness state; Fig. 3 also distinguishes temporary unavailability and the states entered after emergency damage (including trivial damage). The aircraft in the indirect airworthiness state is mostly working correctly but has lost its flying ability owing to the circumstances specified in Fig. 3. After executing the specified amount of labor (flight hours), the aircraft lifecycle should either be terminated or the aircraft directed to professional service to determine the new amount of labor possible to execute. As far as the operational maintenance strategy is concerned, the rule is that the aircraft remains in operation as long as the levels of specified parameters do not exceed the specified error limits. The knowledge about the maintenance state of a device is determined by the external and internal diagnostic equipment. The service operations under this maintenance strategy are executed according to the levels of the measured diagnostic parameters. The proper control of the operational maintenance strategy, even for a considerable fleet of aircraft, requires controlling every aircraft separately. The preventive/predictive maintenance strategy defines reliability as a designed characteristic: the level (value) of reliability must be provided in the device design and manufacturing process and is maintained during the device lifecycle. A maintenance schedule based on the preventive/predictive strategy provides the desired or defined levels of both reliability and flight safety. All of the described maintenance strategies are followed in real-condition fleet maintenance. Due to the development of diagnostic systems, military aircraft on-board systems include diagnostic procedures enabling the assessment of the current technical state of a given system. The procedure of assessing a given system is performed before an air operation, and its results provide information on the technical state of the military aircraft. Based on this information, a pilot decides either to perform a task or to withdraw from performing it. Apart from the integrated diagnostic systems installed on board, there are a number of devices whose technical state is examined via monitoring and measuring equipment after their disassembly from the board of the modern military aircraft (MMA). During maintenance works, the diagnostic parameters of the examined devices are recorded and compared with the range of permissible changes. Any deviation beyond the assumed tolerance limits leads to the implementation of either appropriate maintenance procedures aimed at reducing the resultant deviation or appropriate corrections eliminating the deviation. The ability to predict the service life of the MMA before diagnostic parameter tolerances are exceeded would enable the appropriate management of the MMA maintenance system. Thus, it is possible to optimize the time during which the MMA is under maintenance works and is not combat ready.
The influence of destructive factors on the technical state of devices used on the military aircraft During the operation process of a military aircraft, we can observe changes in the technical parameters of selected devices along with their time of operation. These changes cause the deterioration of the working conditions of a system and the loss of the rated values of technical parameters. Factors influencing the above-mentioned changes include: − changes of temperature and air pressure, − g-forces, − vibrations, − ageing processes, etc. The construction of technical systems is based on the assumption that a device fulfils its role when its operational/diagnostic parameters are within acceptable error limits. This assumption depends on the accuracy of work of particular system elements. Thus, in order to assure the faultless functioning of a military aircraft, we cannot allow operational parameters to exceed the acceptable error limits, which can be done in two ways: by frequent checks of the operational parameter values of a device/system and switching it off when the parameters are close to the fixed limit, or by determining the time after which the operational parameters exceed the acceptable error values. The first way is onerous with regard to its organization, and it is also time consuming and costly. Besides, the time spent on checking excludes a military aircraft from use in combat tasks, which consequently leads to a temporary decrease in the fighting efficiency of the air forces. The second way is based on the use of a particular mathematical method enabling the description of the value changes of the operational parameters of a device/system and the evaluation of the time during which a device/system remains in the operational state. As stated above, the operational parameter values of particular devices in the avionics system of a military aircraft change during exploitation. These changes cause the operational parameter values to approach the fixed acceptable limit. When the parameter values equal or exceed the limit value, an adjustment must be made in order to restore the nominal conditions of device/system operation, or the operation must be stopped. Figure 4 presents a theoretical course of the changes of diagnostic parameter values. The model of diagnostic parameter changes in the aspect of the occurrence of destructive factors In the figure, the current value of the parameter is marked as "z". If z < z_d, then the element is fit for use, but if z ≥ z_d, the element loses its operational state. The change of diagnostic parameter values is of a random character because of the specific character of the MA operation process and the influence of destructive processes. So, let us consider "the wear of a device" of the avionics system as a random process occurring during the operation of an aircraft. Proceeding to the analytical description of the diagram in Figure 4 and the determination of the density function of the changes of diagnostic parameter values, the following assumptions were accepted: 1. The technical condition of an element is described by one diagnostic parameter, which is marked as "z". 2. The change of the value of the parameter "z" happens only during the operation of a device, i.e., during the flight of an aircraft. 3. The parameter "z" is non-decreasing. 4.
The change of the diagnostic parameter "z" is described by equation (1), z = cN, where c is a random variable which depends on the operational conditions of an element and N is the number of flights of an aircraft. If z < z_d, the element is fit for use; otherwise, the element is considered unfit for use. The intensity of flights of an aircraft is described by dependence (2). The time interval of length Δt shall be selected in such a way as to fulfil inequality (3). The intensity of flights λ enables the determination of the number of flights of an aircraft up to the moment t from formula (4), N(t) = λt. Using formula (4), equation (1) can be written in the form (5), z(t) = cλt. The dynamics of the changes of a diagnostic parameter can be described by the difference equation (6), U_{z,t+Δt} = (1 − λΔt)·U_{z,t} + λΔt·U_{z−Δz,t}, where U_{z,t} is the probability that at the moment t the value of the diagnostic parameter equals z, and Δz is the increment of the diagnostic parameter z during one flight of an aircraft. The functional notation of equation (6) has the form (7), u(z, t+Δt) = (1 − λΔt)·u(z, t) + λΔt·u(z − Δz, t), where u(z, t) is the probability density function of the diagnostic parameter value z at the moment t, (1 − λΔt) is the probability that no flight is performed in a time interval of length Δt, and λΔt is the probability that a flight is performed in a time interval of length Δt. Equation (7) was transformed by substitution into the differential equation (8). Due to the fact that c is a random variable, the mean value (9) was introduced, where f(c) is the density function of the random variable c and c_g, c_u are the limits of variation of c. Taking dependence (9) into consideration, the differential equation (8) can be written in the form (10). Under the adopted assumptions, the density function (11) takes the form (14), which is the probabilistic characterization of the increase of the wear as a function of the flying time. However, it is also important to know the distribution of the time (the flying time) at which the parameter z exceeds the acceptable error value. The probability that the current value of the diagnostic parameter "z" exceeds the acceptable value, and the density function of the distribution of the time of exceeding the acceptable state z_d, follow directly; after calculating the derivative, we obtain dependence (19). The original function with regard to the integrand of dependence (19) has the form (20). Thus, dependence (21) determines the density function of the time of the first transition of the current value of the parameter "z" through the acceptable state. Having the above-mentioned results, we can determine the durability of a device with respect to the change of the value of the parameter z. For this purpose, the formula for the reliability of a device is written as (22), with the corresponding density function under the integral. The unreliability of a device can be determined from dependence (23). The integral (23) has to be simplified: it can be observed that the integrand can be written in the form (24). We make a substitution in the above-mentioned integral; after the substitution, the integral (25) takes the form (29). Then we make a second substitution; taking the above-mentioned dependencies into consideration, the integral (29) can be written in a simpler form. We make one more substitution.
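The closed-form expressions of equations (1)-(21) were lost in extraction, but the surviving definitions (wear z = cN, flight intensity λ, a random increment c per flight) are enough to illustrate the model numerically. The following Monte Carlo sketch is an illustration under exactly those stated assumptions, not the chapter's derivation; the distribution of c and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(z_d, lam, c_mean, c_std, n_runs=10000):
    """Monte Carlo sketch of the wear model: flights arrive as a Poisson
    process with intensity lam; each flight increments the diagnostic
    parameter z by a random amount c (truncated at zero, since z is
    non-decreasing). Returns the flying time at which z first exceeds
    the admissible value z_d, for each simulated aircraft."""
    times = np.empty(n_runs)
    for i in range(n_runs):
        t, z = 0.0, 0.0
        while z < z_d:
            t += rng.exponential(1.0 / lam)           # time to the next flight
            z += max(rng.normal(c_mean, c_std), 0.0)  # wear added by one flight
        times[i] = t
    return times

# hypothetical example: z_d = 10, 0.5 flights per unit time, c ~ N(0.1, 0.03)
lifetimes = first_passage_times(z_d=10.0, lam=0.5, c_mean=0.1, c_std=0.03)
print(lifetimes.mean(), np.quantile(lifetimes, 0.01))  # mean life, 1% quantile
```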
Thus, we obtain the integral in a standard form. Substituting the results into formula (22) and keeping the appropriate notation of the integration limits, we obtain the formula for the reliability. The distribution function of the standard normal distribution has the form (36), Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−u²/2} du. Finally, the formula for the reliability of a system has the form of dependence (37), where b* and a* are coefficients estimated on the basis of data obtained from the exploitation of military aircraft. Thus, the risk of device damage can be determined from dependence (38) as the complement of the reliability. Assuming a specified level of damage risk, we can find the corresponding quantile ε by reading values from the tables of the normal distribution. Knowing the value of ε, we can determine the durability (i.e., t) from dependence (39). For this purpose, dependence (39) was transformed into the square equation (40), which yields the durability (41). A computational example The efficiency of the chosen system is determined with the help of diagnostic parameters describing the technical condition of particular devices of the system. An aiming head (a navigation and aiming device) is an important device of the avionics system. On the basis of analyzing the results of checks of a particular population of aiming heads, it was established that, as the time of operation goes by and as a result of the influence of destructive factors, the values of its diagnostic parameters undergo changes. Based on the corresponding formulas, the values of the density function coefficients for both diagnostic parameters were determined. Assuming the required level of reliability, the value of the parameter ε = 2.32 was read from the tables of the normal distribution. The parameter z_d was determined on the basis of the technical documentation which is used for service works and includes information on the acceptable values of the deviations of the diagnostic parameters. The values of the parameters a, b, ε, and z_d were substituted into equation (41), and the time after which the values of the diagnostic parameter deviations exceed the limit state was calculated, measured since the last check of the diagnostic parameters. The resulting values (44) can be used in technical service depending on the adopted service strategy. Summing up, we can state that the above-presented method seems to be correct and enables the analysis of a device/system technical condition with respect to the character of the changes of the values of the diagnostic parameters. The above-presented calculation example enabled the verification of the developed model and showed the application qualities of the method. This method can be useful in future work on the improvement of both the operation process and the way of using aircraft with avionics systems, because it enables the determination of the time during which a device is fit for use. Moreover, due to its universal character, the method can be used to determine the residual life of any technical object whose technical condition is determined by analyzing the values of the diagnostic parameters. The influence of destructive factors on the course of the process of operating the military aircraft The use of military aircraft concerns mainly the performance of a particular combat task, which often involves the use of aerial combat means.
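Dependencies (37)-(41) themselves were lost in extraction. A common diffusion-type first-passage approximation that is consistent with the surviving text (a reliability of the form R(t) = Φ((z_d − a·t)/(b·√t)), which becomes a quadratic in √t when its argument is fixed at the quantile ε) is assumed in the sketch below; treat both the formula and the example numbers as hypothetical reconstructions rather than the chapter's exact equations.

```python
import numpy as np

def durability(z_d, a, b, eps):
    """Solve a*t + eps*b*sqrt(t) - z_d = 0 for t, i.e. the flying time at
    which the assumed reliability R(t) = Phi((z_d - a*t)/(b*sqrt(t)))
    drops to the level fixed by the normal quantile eps (a plausible
    reading of dependencies (37)-(41), not a verified one)."""
    s = (-eps * b + np.sqrt((eps * b) ** 2 + 4 * a * z_d)) / (2 * a)
    return s ** 2        # s is the positive root in sqrt(t)

# hypothetical example, using eps = 2.32 as read from normal tables above:
t = durability(z_d=5.0, a=0.01, b=0.1, eps=2.32)
print(f"estimated durability: {t:.1f} flying hours since the last check")
```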
As far as the airborne function of a military aircraft is concerned, the main stages of its operation comprise the take-off, the stay in the air, and the landing. On the other hand, when analyzing the process of the operation of the on-board armament system, we can assume that the operational effect is the sum of the partial effects gained during the flight phase in relation to: target detection; the execution of the aiming process; and the execution of the process of attacking. The level of the effect of munitions on a target is the most commonly assumed rate characterizing the operational effect obtained during the execution of a combat task involving the use of aerial combat means. As regards the on-board armament system, the obtained effect comes down to the determination of the difference between the target coordinate values and the coordinate values of the drop point of the combat armament. Based on the structural diagram (Fig. 1) and the functions of the on-board armament system, we can assume that the Armament Control System (ACS) is the basic element that affects the value of the operational effect. Both at the stage of maintenance and of operation, the ACS provides information that is essential for the accurate functioning of the on-board armament system (OAS). In turn, the most crucial element of the ACS is the navigation and aiming system (NAS). Its basic task comprises the realization of a set of algorithms whose solution enables, in the maintenance system, the reconstruction of the nominal values of particular initial parameters and, in the operation system, the proper usage of combat means (the intended use). The latter system is the subject of further discussion. The analysis of the operational effect can be performed on the basis of the assessment of the conditions in which the NAS is used and the determination of the causes that have a negative impact on the final value of the obtained effect. As regards the NAS, during the execution of a combat task, the operational effect is the total angular correction represented as an aiming indicator in the pilot's field of view. The process of aiming and attacking is executed on the basis of the total angular correction. Thus, we can assume that the assessment of the operational effect involves the determination of the accuracy in defining and reproducing the position of a moving aiming indicator. The next aspect concerns the use of the aiming correction by the pilot. When the correction is defined and displayed, the task comes down to the determination of the flight conditions in which the aiming indicator coincides with the target at the moment of using the combat means. Based on the conducted analysis, we can assume that the execution of a combat task under real conditions is not an easy process. The causes of the errors affecting the value of the operational effect connected with the execution of the aiming process can be represented by the equation for the pooled error of the aiming process execution, Δ_Σ = Δ_M + Δ_K + Δ_I + Δ_A + Δ_C + Δ_W + Δ_R + Δ_O + Δ_N. The error of the method for solving the aiming-related equations, Δ_M, characterizes two groups of causes: 1. those connected with the relative uncertainty resulting from the processing of the initial data concerning the aiming process by the NAS functional elements, and 2. those concerning the error function of the equations for aiming. The system configuration error Δ_K is connected with entering invalid control signals (characterizing the combat task being performed) into the NAS.
The instrumental error Δ_I is connected with the accuracy of determining the operational parameters of the NAS by particular information transmitters; this error concerns mainly the measurement error. The reconstruction error Δ_A characterizes the adequacy of the physical combat situation taking place during the execution of the aiming process to the assumed attack diagram which was used to determine the aiming equations. The causes of the variance between the aiming indicator position and the target, Δ_C, result from an incorrect approach of the aircraft to the attack path. The causes of the failure to maintain the required conditions for aiming and attacking, Δ_W, are connected with the failure to keep the required angle of diving, flight speed, bank angle, etc., i.e., the exceeding of the nominal values of particular parameters describing a combat task. The effect of the weapon position, Δ_R, on the pooled error value Δ_Σ concerns mainly the process of aiming during the execution of the process of attacking with the use of aerial combat means (applied in a time series of particular length). Environmental conditions, determining the value of the error Δ_O, significantly influence the execution of the aiming process. Due to the fact that an aircraft moves at high speed in a heterogeneous space, it may encounter various conditions prevailing in space layers or areas, which directly translates into the perturbation of flight-related parameter values. The general error Δ_N concerns causes which are not included in the presented classification and are the resultant of the lack of possibility to learn or describe them in an analytical way at the present state of knowledge. All the above-mentioned errors can be of two kinds: determined errors (systematic errors) and probabilistic errors (random errors), so their accumulated form Δ_Σ will be burdened with both types of errors. The phenomenon of the random error occurrence is not precisely determined, which is why an attempt to evaluate its value is fully justified. The random character of the compound errors causes the operational effect of the MMA application to be burdened with the random error, too; an illustrative aggregation is sketched below.
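As an illustration of how such an error budget might be pooled (the chapter's own aggregation formula is not reproduced here), the sketch below assumes that the systematic parts add linearly and that independent random parts add in quadrature; all numerical values are invented for the example.

```python
import numpy as np

# per-source aiming errors (Delta_M, Delta_K, ..., Delta_N):
# (systematic part, standard deviation of random part) - illustrative values
sources = {
    "method": (0.4, 0.2), "configuration": (0.1, 0.1), "instrumental": (0.2, 0.3),
    "reconstruction": (0.1, 0.2), "indicator": (0.3, 0.4), "conditions": (0.2, 0.3),
    "weapon": (0.1, 0.1), "environment": (0.0, 0.5), "general": (0.0, 0.2),
}

systematic = sum(s for s, _ in sources.values())           # biases add linearly
sigma = np.sqrt(sum(r ** 2 for _, r in sources.values()))  # independent random parts
print(f"pooled error: {systematic:.2f} +/- {sigma:.2f} (1-sigma)")
```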
The model of the assessment of the execution of a combat mission by the military aircraft The execution of the aiming process generally comes down to the process of making an aiming indicator coincide with a target. Significant elements of this process include the parameters that determine the aiming indicator position and a set of actions aimed at pointing the indicator at a target. Based on these elements, we can consider the process of aiming as the construction of the aiming triangle by: the pilot (the system operator), the aiming indicator (the quantity describing the appropriate spatial orientation of the aircraft), and the target (the basic point in the execution of the aiming process). The aim of the process is to align these three elements. The aiming correction is obtained by recording particular parameters (necessary to solve the aiming equations) and processing them in the NAS. The aiming correction value is represented as the central point of a moving aiming indicator which is displayed on the reflector of the sight head. Due to the effect of various constraints, the aiming indicator can adopt different positions in the assumed flat coordinate system (Fig. 6) placed on the plane of the sight head reflector. The indicator can either move in one of four directions or move back to the previously occupied position. For the issue discussed above, the difference equation (47) applies, where: h is the deviation value along the specified axes; P00 is the probability that the deviation value will not change; P10 is the probability that the deviation value along the OZ axis will change by −h at the time Δt; P20 is the probability that the deviation value along the OZ axis will change by h at the time Δt; P01 is the probability that the deviation value along the OY axis will change by −h at the time Δt; and P02 is the probability that the deviation value along the OY axis will change by h at the time Δt. When we use the expressions obtained from the expansion of the function U(z, y, t) in the Taylor series in the surrounding of the point (z, y) and the time t, together with the fact that P00 + P10 + P20 + P01 + P02 = 1, equation (47) takes the form (48). By adding and subtracting U in equation (48), multiplying the appropriate expressions in the brackets, and taking the parameter U outside the brackets, the result (49) was obtained. Using the assumption that the sum of all the probabilities describing the weapon angular position equals one, and after grouping the quantities, equation (51) was obtained. After dividing both sides of equation (51) and introducing suitable denotations, the solution of the resulting equation is the function (55). Assuming that the probabilities P10 and P20 are of the same order, i.e., P10 = P20, we can write that the coefficient b1 = 0; similarly, we can assume that the probabilities P01 and P02 are also of the same order, so the coefficient b2 = 0. Given these assumptions, equation (55) reduces to equation (57), whose solution is the density function (58). The logarithm of the likelihood function L then allows the parameters a1 and a2 to be defined from the corresponding set of equations. When analyzing the notation of the function (58), it can be assumed that, in order to determine the variance characterizing the distribution of the indicator central point, the parameters a1 and a2 must be multiplied by time, i.e., the variances grow linearly with time. The determination of the parameters of the function (58) allows the probability density function of the correct position of the indicator central point to be defined. In the case described, it is assumed that the probability of the occurrence of deviations in any direction of the assumed coordinate axes is the same. Such a situation takes place when the process of aiming is performed correctly, i.e., when, at the beginning of the aiming process, the aiming indicator coincides with the target and any dislocation of the indicator is compensated by resetting it on the target. A real process of aiming often involves the dislocation of the indicator relative to the target. The occurrence of such a dislocation causes the probability of the indicator moving in a specified direction to be higher than that of moving in the opposite direction. Thus, the values of the parameters b1 and b2 are not 0. Therefore, the differential equation describing the aiming process takes the form of equation (55), and its solution is the density function (56). The parameters b1, b2, a1, and a2 need to be determined for this function.
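Equations (47)-(65) were lost in extraction, but the verbal description determines the process: the indicator centre performs a random walk on the sight plane with step h and probabilities P00, P10, P20, P01, P02, and in the symmetric (drift-free) case the variances grow linearly with time (σ² ≈ a·t). The simulation below illustrates this behaviour and a simple moment-based estimate of a1 and a2; it is a sketch under those assumptions, not the chapter's estimator, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_indicator(n_steps, h, p):
    """Random walk of the aiming-indicator centre on the sight plane.
    p = (P00, P10, P20, P01, P02): stay, -h/+h along OZ, -h/+h along OY."""
    moves = np.array([[0, 0], [-h, 0], [h, 0], [0, -h], [0, h]], float)
    steps = moves[rng.choice(5, size=n_steps, p=p)]
    return np.cumsum(steps, axis=0)            # (z, y) track over time

track = simulate_indicator(2000, h=0.5, p=[0.2, 0.2, 0.2, 0.2, 0.2])

# with symmetric probabilities the drift terms b1, b2 vanish and the
# variances grow linearly with time: sigma_z^2 ~ a1*t, sigma_y^2 ~ a2*t
t = np.arange(1, len(track) + 1)
a1 = np.mean(track[:, 0] ** 2 / t)             # moment estimate of a1
a2 = np.mean(track[:, 1] ** 2 / t)             # moment estimate of a2
```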
Using the above-described technique, the likelihood function (65) was determined and used to estimate the sought parameters. The above relationships can be used to describe the process of aiming under real-life conditions. A computational example The execution of a combat task with the use of aerial combat means is characterized by the fact that the possibility of their use is determined by conditions that constitute a set of various factors enabling the performance of a combat task at the required level and with consideration of the current tactical, navigational, meteorological, and radio-technical situation. The basic determinants of these conditions are the combat capabilities of the aircraft and the level of competence of the aircrew members. The essence of the aiming process comes down to controlling the aircraft in such a way that it reaches the point in space from which the applied weapon will hit the target. This procedure is performed in the NAS environment on the basis of the following data: the motion parameters of the aircraft executing the attack, of the target, and of the medium in which the aircraft motion is executed; the required coordinates of the target; the actual coordinates of the target; and the comparison between the actual and required coordinates of the target. A common method for analyzing the aiming process during an attack is the analysis of recorded material (using either the film placed in a photo-control apparatus located in front of the sight head or a camera recording the tactical situation in front of the MMA). Based on the recorded material, it is possible to determine the mutual position of the aiming indicator and the target at the moment of weapon use. Having the material registered by the photo-control devices (Fig. 6) and using the above-mentioned method, it is possible to define the coordinates of the mutual position of the target and the indicator in successive moments of the attacking process. Fig. 6. Photos taken with a photo-control apparatus during the realization of the attacking process with the use of non-guided missiles. Based on the obtained data, it was possible to determine the aiming indicator path relative to the target; Figure 7 depicts this path. When analyzing the position of the central point of the aiming indicator, we can assume that the position adopting the chaotic motion of the indicator was the proper position that completely reflects the nature of the real process. The variance values determined for the data presented in Fig. 7 are σ₁² = 14.24 and σ₂² = 22.80. By substituting these values into the function (58) and on the basis of the recorded data, it was possible to determine a graphical form of the probability density function (Fig. 8) characterizing the concurrence of the aiming indicator with the target during the execution of the aiming process. Summary Works carried out during the maintenance process aim to ensure the required level of safety of the aircraft engineering and to maintain it in good working condition. This is achieved by carrying out planned works and systematic checks of the diagnostic parameter values. Apart from identification, diagnostic testing includes two more aspects, concerning the genesis and the prediction of the technical state. That is why, for safety and reliability reasons, it is important to develop methods enabling the prediction of the technical state of devices on the basis of information obtained during the maintenance process.
The 4th chapter comprises the presentation of the probabilistic method for the determination of the residual durability of devices on the basis of their diagnostic parameter changes registered during the maintenance process. The application of the above-mentioned method may facilitate the military aircraft maintenance process by limiting the number of stoppages through the indication of the time of the next maintenance works for a specified device/system. It shall be emphasized that the presented method is universal, as it can be applied to the modernization of the maintenance process not only in respect of aircraft engineering but also in respect of any field where device/system diagnostic parameters are registered. The process of operating is inevitably connected with "an operational effect", which results from the completion of a particular combat mission. Depending on the combat mission, this effect will concern, for example, hitting the target, intercepting an enemy, identifying the target to attack, etc. The operational effect is always obtained during flight. Due to the flying conditions of the military aircraft, we can list a number of destructive factors reducing the value of the obtained operational effect. Analyzing the process of operating, we can state that one of the most significant "cells" in this process is the flying military personnel: the pilot. His task involves the appropriate configuration of the military aircraft systems and the performance of the aiming process, which generally comes down to the process of making an aiming indicator coincide with a target. The method presented in the 5th chapter enables the quantitative assessment of the aiming process quality. The results obtained in this way, supported by parameters describing the conditions in which a combat task was conducted, may constitute the basis for the evaluation of the realization of both the current combat task and the progress in training (considering a series of tasks of a given type in a specified time interval).
7,516.6
2012-02-24T00:00:00.000
[ "Computer Science" ]
SVM-Based Spectral Analysis for Heart Rate from Multi-Channel WPPG Sensor Signals Although wrist-type photoplethysmographic (hereafter referred to as WPPG) sensor signals can measure heart rate quite conveniently, the subjects' hand movements can cause strong motion artifacts, which heavily contaminate the WPPG signals. Hence, it is challenging to accurately estimate heart rate from WPPG signals during intense physical activities. The WPPG method has attracted more attention thanks to the popularity of wrist-worn wearable devices. In this paper, a mixed approach called Mix-SVM is proposed; it can use multi-channel WPPG sensor signals and simultaneous acceleration signals to measure heart rate. Firstly, we combine principal component analysis and an adaptive filter to remove a part of the motion artifacts. Due to the strong correlation between the motion artifacts and the acceleration signals, the further denoising problem is regarded as a sparse signal reconstruction problem. Then, we use a spectrum subtraction method to eliminate motion artifacts effectively. Finally, the spectral peak corresponding to heart rate is sought by an SVM-based spectral analysis method. Through the public PPG database of the 2015 IEEE Signal Processing Cup, we acquired the experimental results: the average absolute error was 1.01 beats per minute and the Pearson correlation was 0.9972. These results confirm that the proposed Mix-SVM approach has potential for multi-channel WPPG-based heart rate estimation in the presence of intense physical exercise. Introduction Heart rate estimation can provide useful information for users who are engaged in rehabilitation or physical exercise and anyone who wants to routinely keep track of their cardiac status. Traditional heart rate estimation mainly relies on the electrocardiogram (ECG), but ECG requires the presence of ground and reference sensors that must be attached to the body. That is to say, its application area is limited because of the high hardware complexity and low user comfort. Photoplethysmography (PPG) [1,2] is widely used for measuring blood volume changes in tissue due to its non-invasive nature and low cost. Unfortunately, the quality of PPG sensor signals can be easily affected by motion artifacts during intensive physical exercise. Therefore, the motion artifacts must be removed to measure heart rate accurately [3][4][5]. The pulse oximeters and the accelerometer were embedded in a wristband, which was comfortably worn; meanwhile, wet ECG sensors on the chest were used for acquiring the ground truth of heart rate. The data sets were gathered while 12 healthy male participants walked or ran on a treadmill. They first went from rest to high speed, i.e., 1-2 km/h for 30 s, 6-8 km/h for 60 s, and 12-15 km/h for another 60 s. Next, the same cycle was repeated, i.e., starting at a speed of 6-8 km/h for 60 s, followed by 12-15 km/h for 60 s. Finally, the subjects slowed down to 1-2 km/h for 30 s. Methodology As mentioned earlier, our proposed method is comprised of five main stages: preprocessing, initial motion artifact reduction, a sparse signal reconstruction model, spectrum subtraction, and SVM-based spectral analysis. Figure 1 shows the flowchart of the developed system. The blocks used in the proposed method are introduced in the following subsections. Preprocessing Heart rate is estimated in a time window as the simultaneous multi-channel WPPG sensor signals and acceleration signals slide through the window. In our experiments, the time window was set to 8 s, and two successive time windows overlap by 6 s.
At the beginning of the preprocessing, the multi-channel WPPG sensor signals and the acceleration signals have a sampling frequency of 125 Hz; their sampling frequency becomes 25 Hz by downsampling, which facilitates the subsequent processing. Generally, healthy adults' heart rates vary from 0.5 Hz to 3 Hz. Therefore, the multi-channel WPPG sensor signals and the acceleration signals are all filtered with a second-order Butterworth band-pass filter (0.4 Hz-4 Hz). A majority of the noise outside this frequency band is thereby removed by the filtering procedure.
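A minimal sketch of this preprocessing stage, assuming SciPy is available; the function names are ours, but the numbers (125 Hz downsampled to 25 Hz, second-order Butterworth 0.4-4 Hz, 8 s windows overlapping by 6 s) follow the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

FS_RAW, FS = 125, 25            # raw and downsampled rates (Hz)
WIN_S, STEP_S = 8, 2            # 8 s windows, 6 s overlap -> 2 s step

def preprocess(sig):
    """Downsample from 125 Hz to 25 Hz and band-pass to 0.4-4 Hz."""
    sig = decimate(sig, FS_RAW // FS)                  # anti-aliased downsampling
    b, a = butter(2, [0.4 / (FS / 2), 4.0 / (FS / 2)], btype="band")
    return filtfilt(b, a, sig)                         # zero-phase filtering

def windows(sig):
    """Yield successive 8 s windows overlapping by 6 s."""
    n, step = WIN_S * FS, STEP_S * FS
    for start in range(0, len(sig) - n + 1, step):
        yield sig[start:start + n]
```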
Initial Motion Artifacts Reduction In this subsection, prior to further denoising, we apply an adaptive filter to remove a part of the motion artifacts in the multi-channel WPPG sensor signals. Although adaptive noise cancellation was already utilized in [27,28] to remove motion artifacts, there the reference motion artifact signals were extracted from the PPG signal itself. This approach may work sufficiently well in low-motion-artifact scenarios, but for intensive physical exercise, the reference motion artifact signal components needed for the adaptive filter are extracted from the simultaneous acceleration signals by principal component analysis (PCA) [29]. The two substeps are explained in the following. Reference Motion Artifacts Generation Using PCA: The simultaneous acceleration signals along the three axes include footprints of the motion artifact. However, the acceleration data are convoluted noisy signals, themselves composed of different periodic components. If we utilized them directly as the motion artifact reference, this would strongly hamper the convergence of the adaptive filter and degrade the overall performance. Hence, we use PCA to analyze the acceleration signals and then choose the first principal component, which contains the most "information", as the reference motion artifact signal component. In this technique, each acceleration signal goes through the following steps: first, the acceleration signals are standardized; second, we calculate the correlation coefficient matrix; third, we use the Jacobi method to compute the eigenvalues of the correlation coefficient matrix; fourth, according to the contribution rate, we select the important principal components; finally, we calculate the scores of these principal components and acquire the reference motion artifact signal components. Adaptive Filter for Motion Artifacts: During this procedure, all the reference motion artifact components are used in an adaptive least mean squares (LMS) filter. We continually update the weights of the filter according to the criterion of mean square error minimization; then a part of the motion artifacts in the multi-channel WPPG sensor signals can be eliminated. Suppose the observed WPPG signal is y(l) = y₀(l) + m(l), where y₀(l) is the cleansed WPPG signal and m(l) is the motion artifact signal. The difference value e(l) and the weight update are expressed as e(l) = y(l) − m̂(l) = y(l) − wᵀ(l)a(l) and w(l+1) = w(l) + µ·e(l)·a(l), where m̂(l) denotes the estimated motion artifact, a(l) is the vector of reference motion artifact signal components described above, and µ is a parameter which determines the stability and convergence of the iteration. Sparse Signal Reconstruction Model At first, the sparse spectrum of a raw PPG signal can be estimated via a sparse signal reconstruction model [21,22]. The basic model is y = Φx + v, where y ∈ R^{M×1} is an observed signal, x ∈ C^{N×1} is an unknown vector assumed to be sparse, Φ with entries Φ_{m,n} = e^{j2πmn/N} ∈ C^{M×N} (M < N) is the redundant discrete Fourier transform (DFT) basis, and v ∈ R^{M×1} represents the modeling errors. Based on this model, the i-th spectral coefficient sᵢ of the PPG signal can be estimated. Then, the MMV model [24] can use multiple measurement vectors to estimate a solution jointly, taking the form Y = ΦX + V, where Y ∈ R^{M×H} is the measurement matrix, Φ again is the redundant DFT basis, X ∈ C^{N×H} is the sparse matrix, and V ∈ R^{M×H} represents the model errors. Through the above analysis, this paper proposes a sparse signal reconstruction model for further denoising, which is based on compressive sensing theory.
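The following sketch implements the two substeps as described: the first principal component of the standardized 3-axis acceleration serves as the artifact reference (computed here via SVD rather than the Jacobi eigenvalue method named in the text), followed by a tapped-delay LMS canceller with the update e(l) = y(l) − wᵀa(l), w ← w + µ·e·a. The filter order 25 and µ = 0.005 are taken from the Parameter Settings section; the rest is an illustrative implementation, not the authors' code.

```python
import numpy as np

def pca_reference(acc):
    """First principal component of the 3-axis acceleration (N x 3 array),
    used as the reference motion-artifact signal."""
    acc = (acc - acc.mean(0)) / acc.std(0)        # standardize each axis
    _, _, vt = np.linalg.svd(acc, full_matrices=False)
    return acc @ vt[0]                            # scores of PC1

def lms_cancel(ppg, ref, order=25, mu=0.005):
    """LMS adaptive filter: estimate the motion artifact from the
    reference and subtract it from the WPPG signal sample by sample."""
    w = np.zeros(order)
    out = np.zeros_like(ppg)
    for l in range(order, len(ppg)):
        a = ref[l - order:l][::-1]                # tapped-delay reference vector
        e = ppg[l] - w @ a                        # error = cleansed sample
        w = w + mu * e * a                        # weight update
        out[l] = e
    return out
```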
Owing to the strong correlation between the motion artifacts in the multi-channel WPPG sensor signals and the simultaneous acceleration signals, we can exploit the sparse characteristics of the rows of the spectral matrix. Hereby, we can write the procedure as the optimization min_X ||Y − ΦX||_F² + τ·Σᵢ (Σⱼ |x_{i,j}|²)^{1/2}, where the second term constrains the row sparsity of the spectral matrix, x_{i,j} denotes the (i, j)-th entry of X, and τ is a weight. In addition, the multi-channel WPPG sensor signals are contained in the measurement matrix Y; these signals have already had a part of the motion artifacts removed via the adaptive filter. In the simulation experiments, we merely adopt two WPPG sensor channel signals (namely, WPPG1 and WPPG2). It is well known that this model can be solved by many existing algorithms [30,31]. However, the adjacent columns of the matrix Φ are always highly coherent, so not every algorithm is suitable. In this paper, the solution of this model is estimated by utilizing the Regularized M-FOCUSS algorithm [32]. The M-FOCUSS algorithm is an extension of the FOCUSS class of algorithms. It converges quickly when large coefficients are observed [33]. Hence, even if a highly correlated matrix Φ exists, the algorithm still exhibits fast and reliable performance. Spectrum Subtraction Because the spectral peak positions of the motion artifacts in the multi-channel WPPG spectra align with the spectral peak positions of the acceleration spectra, we can use the acceleration spectra to subtract the motion artifacts from the multi-channel WPPG spectra. The operation of the spectrum subtraction is similar to the process proposed in [23]. Here, we are given two WPPG spectra and three acceleration spectra. Step 1: We seek the maximum value Acc_l of the spectral coefficients of the acceleration spectra at each frequency bin f_l (l = 1, 2, ..., N). Step 2: At each frequency bin f_l (l = 1, 2, ..., N), Acc_l is subtracted from the values of the spectral coefficients of the two WPPG spectra. Within 0 ≤ f_l ≤ 199, the maximum values of all coefficients are denoted p_max1 and p_max2 in the two WPPG spectra, respectively. Step 3: Within 0 ≤ f_l ≤ 199, we set to zero all spectral coefficients with values less than p_max1/5 in the WPPG1 spectrum, and all spectral coefficients with values less than p_max2/5 are also set to zero in the WPPG2 spectrum. Finally, we obtain two cleansed WPPG spectra. Because heart rate values are less than 180 beats per minute (BPM) in most conditions (including intense exercise), and the maximum recorded heart rate value is 230 BPM, we only analyze the WPPG spectra within 0 ≤ f_l ≤ 199. Moreover, the WPPG spectra and the acceleration spectra should be normalized to have the same energy prior to the spectrum subtraction. SVM-Based Spectral Analysis After the aforementioned stages, the motion artifacts are mostly removed from the multi-channel WPPG sensor signals. The next step is finding the spectral peak related to heart rate. Here, we put forward a spectral analysis approach based on the Support Vector Machine (SVM) [32]. In this approach, spectral peak tracking is regarded as a two-category classification task. In addition, we also fully consider the statistical properties of the multi-channel WPPG sensor signals. The procedure of the spectral analysis has two steps: spectral peak discovery and spectral peak selection. In the different WPPG spectra, the possible peaks are found by spectral peak discovery.
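A compact sketch of steps 1-3, assuming the spectra are supplied as non-negative magnitude vectors on a common frequency grid; the energy normalization and the p_max/5 thresholding follow the text, while clipping negative differences to zero is our assumption.

```python
import numpy as np

def spectrum_subtract(ppg_spec, acc_specs, n_bins=200):
    """Subtract the per-bin maximum of the (energy-normalized)
    acceleration spectra from one WPPG spectrum, then zero all
    coefficients below one fifth of the remaining maximum."""
    norm = lambda s: s / np.linalg.norm(s)            # equal-energy normalization
    ppg = norm(ppg_spec[:n_bins]).copy()
    acc = np.max([norm(s[:n_bins]) for s in acc_specs], axis=0)  # step 1
    cleaned = np.clip(ppg - acc, 0.0, None)           # step 2: subtraction
    cleaned[cleaned < cleaned.max() / 5] = 0.0        # step 3: thresholding
    return cleaned
```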
SVM-Based Spectral Analysis

After the aforementioned stages, motion artifacts have mainly been removed from the multi-channel WPPG sensor signals. The next step is finding the spectral peak related to heart rate. Here, we put forward a spectral analysis approach based on the support vector machine (SVM) [32]. In this approach, spectral peak tracking is regarded as a two-category classification task, and the statistical properties of the multi-channel WPPG sensor signals are fully considered. The procedure of spectral analysis then has two steps: spectral peak discovery and spectral peak selection. In the different WPPG spectra, the possible peaks are found by spectral peak discovery. That is, an adaptive threshold κ is automatically set to κ = ξ · max{x}, where x denotes the spectrum of each denoised WPPG channel, ξ is a parameter, and max{·} is the operation that extracts the maximum value. A candidate spectral peak set is then formed: the frequency of any spectral peak whose coefficient is larger than the threshold κ is included in the set. The goal of spectral peak selection is to determine the most reliable spectral peak corresponding to heart rate from the candidate spectral peak set. By construction, there is only one true spectral peak in each time window, and many features differ between the true spectral peak and false spectral peaks. Researchers have investigated the statistical properties of the true peaks, and the following effective features were reported [34]: (1) about 75% of spectral peaks that have the largest coefficient in their corresponding time windows are true spectral peaks; (2) about 84% of spectral peaks that have the shortest distance from the previous true spectral peak are true spectral peaks; (3) about 96% of true spectral peaks have both the largest coefficient and the shortest distance. Accordingly, we choose two features to quantify the differences among the candidate spectral peaks: the peak coefficient ratio and the peak-to-peak distance. Suppose the candidate spectral peak set in the current time window has L candidate peaks. The coefficient ratio of the l-th candidate spectral peak is defined as r_l = coe_l/coe_max, where coe_l is the peak coefficient of the l-th candidate spectral peak and coe_max is the maximum peak coefficient in the candidate spectral peak set. The peak-to-peak distance of the l-th candidate spectral peak is defined as d_l = |f_l − f_prev|, where f_l is the frequency of the l-th candidate spectral peak and f_prev is the frequency of the true estimated peak in the previous time window. In view of its accuracy and robustness to noise, the SVM is often used for classification and regression problems; it also has high generalization performance owing to the special properties of its decision surface. Therefore, we adopt an SVM to separate true spectral peaks from false ones. In the training phase, we first extract the two features described above from all candidate spectral peaks. The true spectral peaks are labeled "1" and the false spectral peaks are labeled "0". The SVM then trains on the labeled features, finds the support vectors among them, and determines a decision boundary based on those support vectors. In the test phase, we collect the two features as before and form feature vectors, and the trained SVM classifier examines whether they correspond to true spectral peaks. The specific selection rules are as follows: (1) if exactly one spectral peak is classified as true, we select it, and its frequency is denoted f_HR; (2) if the classifier outputs more than one true spectral peak, we select the peak closest to f_prev, and its frequency is denoted f_HR; (3) if there is no true spectral peak, we consider that the SVM classifier cannot find a reliable spectral peak because of serious motion artifacts in the current time window. For this case a prediction mechanism is proposed, which can be expressed as f_HR = f_prev + h, where h = predict − predict_pre, predict is the current predicted frequency, and predict_pre is the previous predicted frequency.
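As a hedged sketch of the peak discovery step and the two features, the code below uses scikit-learn's SVC; the linear kernel and all function names are our own assumptions, since the text does not specify them.

```python
import numpy as np
from sklearn import svm

def candidate_peaks(x, xi=0.7):
    """Spectral peak discovery with the adaptive threshold kappa = xi * max{x}."""
    kappa = xi * np.max(x)
    return np.array([l for l in range(1, len(x) - 1)
                     if x[l] > kappa and x[l] >= x[l - 1] and x[l] >= x[l + 1]])

def peak_features(x, peaks, f_prev):
    """Per-candidate features: coefficient ratio coe_l/coe_max and
    peak-to-peak distance |f_l - f_prev| (frequencies as bin indices)."""
    coe = x[peaks]
    return np.column_stack([coe / coe.max(), np.abs(peaks - f_prev)])

# Training phase (labels: 1 = true peak, 0 = false peak):
#   clf = svm.SVC(kernel="linear").fit(train_features, train_labels)
# Test phase: clf.predict(peak_features(x, peaks, f_prev)) flags the true
# peaks; if several are flagged, the one closest to f_prev is chosen.
```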
Here, predict and predict_pre are computed by the Smoother algorithm [35], which operates on the frequencies of the 10 most recent estimated heart rate values. Once f_HR is determined, the current estimated heart rate BPM_est can be acquired from it.

Parameter Settings

We choose an adaptive LMS filter of order 25, with an optimized µ = 0.005, to reduce part of the motion artifacts. In the sparse signal reconstruction model, the weighting parameter is τ = 1. We set the regularization parameter λ = 10^−10 in the Regularized M-FOCUSS algorithm [33] and the spectrum grid number N = 1024. For the SVM-based spectral peak selection, the parameter in Equation (8) was set to ξ = 0.7, and the smoother parameter of the Smoother algorithm [36] was set to 20. Note that the SVM classifier was trained using five training recordings and five test recordings from the public datasets [26]. Meanwhile, we use all of the training data to appraise the performance of Mix-SVM.

Data Analysis and Statistics

To measure the performance of Mix-SVM, we adopt the average absolute error, the average absolute error percentage, the Bland-Altman plot, and the Pearson correlation to analyze the relationship between the estimates and the ground-truth values. Let the ground-truth value and the estimate in the l-th time window be BPM_true(l) and BPM_est(l), respectively, and let the total number of time windows be W. The average absolute error (AAE) is then calculated as AAE = (1/W) Σ_{l=1..W} |BPM_est(l) − BPM_true(l)|. In the same way, the average absolute error percentage (AAEP) is computed as AAEP = (1/W) Σ_{l=1..W} |BPM_est(l) − BPM_true(l)| / BPM_true(l), reported as a percentage. The agreement between the ground-truth values and the estimates is directly reflected by the Bland-Altman plot [35], whose limit of agreement is expressed as LOA = [u − 1.96σ, u + 1.96σ], where u and σ are the mean and standard deviation of the differences between estimates and ground truth. Since classification accuracy is the most important indicator for evaluating classifier performance, we used 10-fold cross-validation in our experiment: the public datasets were divided into 10 subsets, each subset in turn served as the validation set with the remaining subsets as the training set, yielding 10 classification models, and the mean classification accuracy of the 10 models was used as the measurement indicator.

Results Analysis

For direct comparison, Tables 1 and 2 give the AAE and AAEP of Mix-SVM, TROIKA [21,22], JOSS [23], and MC-SMD [24] on all datasets. The results show that the performance of Mix-SVM was entirely superior to TROIKA over the 12 datasets, and Mix-SVM also performed better than JOSS and MC-SMD on most of the datasets. Furthermore, we computed the averages of AAE and AAEP for the four algorithms across the 12 datasets: for Mix-SVM, AAE = 1.01 BPM and AAEP = 0.72%; for TROIKA, AAE = 2.42 BPM and AAEP = 1.82%; for JOSS, AAE = 1.28 BPM and AAEP = 1.01%; and for MC-SMD, AAE = 1.11 BPM and AAEP = 0.80%. Over the 12 datasets, Figure 2 depicts the Bland-Altman plot, where LOA = [−3.46, 3.83] BPM. To observe the relationship between the ground-truth heart rate values and the associated estimates, we give the scatter plot (Figure 3); the fitted line was Y = 0.994X + 0.957, where X denotes the ground-truth heart rate and Y the estimated heart rate, and the Pearson coefficient was 0.9972. For a closer look at the ability of Mix-SVM, Figure 4 presents an example of the estimates of Mix-SVM on subject 8 (randomly chosen); the estimates were quite close to the ground-truth values, as expected.
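The evaluation metrics defined above reduce to a few lines; a minimal sketch, with function names of our own choosing:

```python
import numpy as np

def aae(bpm_true, bpm_est):
    """Average absolute error over the W time windows, in BPM."""
    return np.mean(np.abs(bpm_est - bpm_true))

def aaep(bpm_true, bpm_est):
    """Average absolute error percentage."""
    return np.mean(np.abs(bpm_est - bpm_true) / bpm_true) * 100.0

def bland_altman_loa(bpm_true, bpm_est):
    """Limit of agreement LOA = [u - 1.96*sigma, u + 1.96*sigma], where u and
    sigma are the mean and standard deviation of the differences."""
    diff = bpm_est - bpm_true
    return diff.mean() - 1.96 * diff.std(), diff.mean() + 1.96 * diff.std()
```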
To show the superiority of the proposed algorithm, the estimated heart rate traces of Mix-SVM, JOSS, and MC-SMD on subject 4 (randomly chosen) are shown in Figure 5. Mix-SVM had the best performance among the three methods: its estimated heart rate trace almost overlaps the ground-truth trace, whereas JOSS and MC-SMD sometimes erred. For example, the estimates of JOSS deviated gravely from the ground-truth heart rate from 0 to 50 s, and from 110 to 115 s, 140 to 145 s, 210 to 225 s, and 265 to 270 s the estimates of MC-SMD did not overlap the ground-truth values, while the estimates of Mix-SVM remained nearly equal to the ground truth. Because the performance of the proposed method was entirely superior to TROIKA on every subject, the estimated heart rate traces of TROIKA are omitted from Figure 5.

Conclusions

A novel mixed approach termed Mix-SVM was developed for the estimation of heart rate from multi-channel WPPG signals with various types of motion artifacts. In this approach, we used a fast denoising algorithm and a reconstruction algorithm to deal with the serious motion artifacts caused by users' fast activities, and then utilized the SVM-based method to analyze the spectra. The simulation experiments on the 12 datasets verified the accuracy and efficacy of Mix-SVM, whose estimates were close to the ground-truth heart rate. Furthermore, the SVM can seek the optimal solution from finite sample information, and its theoretical basis ensures that the final solution is the global optimum rather than a local minimum. These features give the SVM good generalization ability for unknown samples; hence, Mix-SVM may greatly improve its own generalization ability.
5,787.6
2017-03-01T00:00:00.000
[ "Computer Science", "Engineering", "Medicine" ]
Rethinking Language in Irigaray's Mimesis Applied in David Mamet's Oleanna Jacques Derrida (1930-2004), the Algerian-born critical thinker, in one of his most outstanding works, Spectres de Marx, uses deconstructive philosophy to examine the permutations of 'spectre' as well as its accompaniments 'haunting' and 'spirit' (Lechte 129). Within the boundaries of Western patriarchal culture, such elements have been used and exploited to establish binary systems of symbolic representation. In such systems, women as 'the other' can exist only by providing a negative counterpart. When they cannot, the 'woman' is lost and forgotten and hence becomes a "specter". As Narges Raoufzadeh (2019), in her article entitled "Analysis of Love, Death, Rebirth and Patriarchy in Two Contemporary Poetess Forough Farrokhzad and Sylvia Plath's Selected Poems", has mentioned: "According to almost all feminist scholars, patriarchy refers to the rule of the father in a male dominated society as a social and ideological construct which regards men as superior to women. They are of the opinion that men's domination over female sexuality is central to women's subordination. In fact, man is the head of the family who controls women's sexuality, labor, production, reproduction and mobility. Moreover, the effect of patriarchy can be traced in politics, public life and economy as well as in all aspects of social, personal, psychological and sexual existence" (60). Furthermore, Derrida maintains that phallocentrism, which is advocated in most Western societies, privileges a masculine and highly individualistic point of view; thus, in such a society, women cannot exist unless they are defined in binary opposition to the 'male'. In such a case, the value of a woman always escapes recognition, and her value is that of being the product of a man's labor. Consequently, the intersection of these two notions, phallocentrism and the existence of woman as a specter, gives birth to a notion with which woman can challenge and resist the impositions of a phallocentric society. Basirizadeh (2019), in her article entitled "A Comparative Study of the Psychoanalytical Portrayal of the Women Characters by Virginia Woolf and Zoya Pirzad", mentions that "In de Beauvoir's view if women really want a status, they should deconstruct the structures of the masculine society and present their own definition of feminity. This definition would be the proof of woman's presence and existence counter-intuitive to masculine canon of knowledge in power" (2).

II. Research Methods

Luce Irigaray (1930-), the Belgian-born French feminist, has taken up women's issues from here and has introduced the idea of mimesis. This literary term is basically defined as imitation, mimicry, the act of resembling; recently, it has also come to be used for the notion of expressing and presenting oneself. Irigaray has enhanced this meaning, stating that in order to escape the phantom-like position allocated to women in society and to defy its dominant phallocentric order, it is necessary to employ mimesis as a form of resistance with which women can imitate stereotypes so as to place them in the limelight and undermine stereotypicality. She suggests that the notion of mimesis should be used as a critical tool in reading texts, especially ones in which women stand in direct opposition to men in a binary relationship.
In texts in which women are presented as foils or antagonists, mimesis is the process of resubmitting to the stereotypical positions imposed on them, with the purpose of questioning those views. This point can be illustrated with a simple example. If women are viewed as illogical, women should try their best to challenge the matter quite logically: the juxtaposition of logical versus illogical undermines the claim that women are illogical. Or, if women's bodies are viewed as multiple and dispersed, women should speak from that position in a playful way so as to show that this view stems from a masculine economy that values identity and unity and excludes women as the 'other'. Irigaray, moreover, suggests that women should not fear criticism and challenge, since negative views can only be overcome when they are exposed and demystified. Mimesis repeats a negative perspective and ridicules it in such a manner that it is eventually annulled and in the end discarded. The deformed female form of subjectivity that accompanies the male form, which dictates master/subject/male versus slave/other/female, should be curtailed and offered no instance of repeating itself. Fortunately, the view on subjectivity has altered; male dominance, however, has not. The 'other', the female, should not be neglected if we hope for a paradigm shift. Irigaray emphasizes mimesis as a result of her belief that a second sex can and should exist in its own right, as opposed to being considered a deformed version of male identity, so that we can challenge and pass back through the oppressive formulation of sexual differences. Irigaray challenges the phallocentric model in which the feminine is reduced to the inverse, or indeed the underside, of the masculine. Using mimesis as a powerful weapon, women can dispute and displace male-centered structures of language and thought through a challenging practice toward a feminine discourse that would reduce the strength and dominance of phallocentrism. Women who resort to mimetic speech and behavior try to recover their true place in a sexual hegemony, and so they try to regain their place before being exploited by discourse and repressed by male dominance. According to Gertrude Postl, "Women are able to sustain their exchange value only if they stand by the 'phantom-like reality' of their existence which is to say 'mimetic expression of masculine values'". She further maintains, "Remaining on the market is a question of survival and for this survival, woman is willing to mime man, to pretend to have a phallus which entails a willingness to accept the standard accessories of femininity".

Mimesis in Practice, Act I

In David Mamet's Oleanna (1992), instances of mimetic expression abound. All throughout Act I, John speaks condescendingly to Carol because in a hegemonic order he stands in a higher position and, as Carol's instructor, supersedes her in rank. Carol lives in a male-centered society, a patriarchal society that sees women as a means to fulfill its ideals. Narges Raoufzadeh (2020), in her article entitled "A Foucauldian Reading: Power in Awakening by Kate Chopin", states: "She lives in a society that encompasses traditional and patriarchal patterns in their most restricting sense. The community in which she dwells is a male-centered one in which the norms have described the female as the one who is dependent on the male, and is domestic and emotional.
She should be "an angel in the house" taking care of her children and her husband. The female model of perfection is the sacrificial mother-woman who effaces herself for her family's welfare. On the other hand, the male figure is described as independent and traditional. He should be in control of his family and especially his wife. He bears the responsibility of economic matters of his family. While his wife controls the domestic affairs, he is responsible for social matters. She is surrounded by controlling men and mostly conventional women. In such a society, her efforts to become independent from her husband and male dominance is a great opposition and threat to the dominating social and cultural structures (161). In Oleanna, Mamet beautifully portrays this situation. John's superior position and its imposition on Carol drive her to an inferior position, and she resorts to mimetic behavior and speech in order to rescue herself from the undesirable situation she is in. John intends to secure his dominant position, and he does so by humbling Carol with his awkward propositions and his insincere show of affinity. He jocosely embraces Carol in order to sympathize with her difficult situation, but quite obviously he only intends to take liberties with her. He decides to bribe her into visiting his office more than required and treats her as if she were a commodity produced for the sake of his libidinous satisfaction.

Mimesis in Practice, Act II

Intent on reversing her position from an inferior one to a superior one, from the beginning of Act II Carol resorts to mimetic behavior and speech. She proffers a report she has been working on for a long time about John's sexist conduct in the academic atmosphere. Ironically, as her instructor, John would expect Carol to prepare various kinds of reports, not ones that would jeopardize his career at the university. Perhaps the most vivid example of mimesis, as Irigaray defines it, is observed in this section of the play. After John reads the report that Carol has prepared, he tries, in a futile attempt, to make light of the situation and overtly claims that no one will believe her. He is sure because he is a member of the tenured faculty and belongs to that "group". Carol is well-prepared to respond and retorts that she is not alone in this: there is a group which has prepared the report, and she is a member of that particular group. In another scene, John's anger gets the better of him, and he professes that a college education is not for women and that they are only making vain attempts to achieve positions for themselves. In a mimetic effort, Carol responds that her academic knowledge is exactly what has enabled her to distinguish the truth of matters and oppose John. "Migration and diaspora are concerns of postcolonialism" (Qtd. in Zaheri Birgani 12). She and the group she represents will have no fear of questioning the educational system, and this bravery comes from the fact that they themselves are part of the system and know all the details. At the end of Act II, John tries to impose himself on Carol by making physical contact with her, and from the same position she is trapped in, she lets out a scream which does double duty. On the one hand, it informs John of his new position, the one in which he can no longer feel he has the upper hand. It also gives him insight into the 'new' Carol, who is no longer a pathetic creature satisfied with a subordinate position in confronting him.
On the other hand, this act of mimetic behavior deconstructs what Carol formerly believed to be true about herself: the notion that, as a female student, she constantly has to submit to an inferior position. She has rediscovered a new aspect of herself, in essence, which can reverse her position and give her a different stronghold. Carol finds she has the power to assert herself and oppose what she thinks wrong or inappropriate; not only John's faulty estimation of her potential but also the academic curriculum, which is rife with shortcomings of all sorts.

Mimesis in Practice, Act III

In Act III, the audience finds John in despair because Carol's reports have been effective and John is suspended. In a final attempt at mimesis, Carol proves that she, and women in general, can resort to law and order to attain what they are entitled to. The final scene is significantly meaningful in that, emerging from her spectral marginalization, Carol becomes John's instructor in her mission to depose him from his position of dominance and to educate him on the extent to which his views are corrupt and selective. Carol's final proposition is her last attempt at resubmitting the stereotypical view both John and the audience have of the 'female'. John can compromise by agreeing to ban a list of books, which he unhesitatingly refuses. Perhaps the reversal of positions, losing both face and ground at this rate, has proven too much for him, and he has not had enough time to come to terms with the change of events. Leaving her 'ghost-like' existence behind, Carol becomes audaciously bold in employing, or perhaps misusing, her new-found power. She corrects John when he refers to his wife as 'baby' and condescendingly prohibits him from repeating this. John, who cannot adapt himself to his new position in this hegemony and is determined to oppose the reversed hegemonic order, lifts a chair in order to attack Carol physically. She cowers in a corner of the room and covers her head (the organ within which the logic of all her decisions lies) with her arms, to prove once more that the phallic symbol which John had tried unsuccessfully to represent can no longer intimidate her and that she is completely capable of protecting herself. In this way, she defies John and renders his efforts at subverting her futile.

IV. Conclusion

For some feminist writers, a patriarchal system which valorizes masculinity and, therefore, most males is the predominant outcome of Lacan's Freudian anthropology. No doubt, Lacan has only reinforced this impression in the eyes of many women with his provocative aphorisms, 'woman does not exist' and 'woman is not all'. The first statement is meant to indicate that there is no essence of femininity, and this is why sexuality is always a play of masks and disguises. This may apparently paint a very dark picture of women in that it places them in an inferior position in a hegemonic order. However, according to Luce Irigaray, this is not where everything ends; it is just the beginning of the story. If language for Lacan is irreducibly phallic, the only way women can speak or communicate at all is by appropriating the masculine instrument. In order to speak clearly, to communicate and to forge links with others, that is, to be social, the woman must speak like a man. If women are to have an identity of their own, the phallic version of the symbolic to which they have been subjected for so long must be subverted. The symbolic has been the source of women's oppression (192).
Luce Irigaray picks up the story from here and introduces the notion of mimesis, the weapon women possess in order to show resistance and undermine the characteristics stereotypically attributed to them. Women, in a process of mimesis, use language and behavior to subvert their inferior position and to confront, or perhaps overcome, the inferior rank allocated to them in the hegemonic order. Various instances in David Mamet's play Oleanna depict how mimesis works out.
3,326.4
2020-08-13T00:00:00.000
[ "Linguistics" ]
Transient Temperature Field Model of Wear Land on the Flank of End Mills: A Focus on Time-Varying Heat Intensity and Time-Varying Heat Distribution Ratio

Modelling methods for the transient temperature field of the wear land on the flank of end mills are proposed to address the challenge of inaccurate temperature field prediction for end mills during the high-speed peripheral milling of the titanium alloy Ti6Al4V. A transient temperature rise model of the wear land on the flank of end mills was constructed under the influence of heat sources in the primary shear zone (PSZ), the rake-chip zone (RCZ) and the flank-workpiece zone (FWZ), together with a dissipating heat source; on this basis, the transient temperature field model of the wear land on the flank of end mills was constructed. Comparison of the simulation results with experimental data verified the accuracy of the model. In sum, the proposed model may provide temperature model support for future studies of flank wear rate in end mill modeling.

Introduction

Titanium alloys have been widely used in the aerospace, energy and military fields due to their excellent specific strength, specific stiffness, heat resistance, corrosion resistance, etc. In order to meet the advanced requirements for processing efficiency and workpiece surface quality in the aforementioned fields, it is particularly important to adopt high-speed precision milling. However, severe tool wear may occur in high-speed milling, which reduces processing efficiency and affects machined surface integrity [1]. During milling, the temperature of the milling cutter [2,3], the lubrication mode [4-6] and the anti-friction performance of the tool [7] are important factors that affect tool wear. In order to meet the specific requirements of green cutting, increasing research attention has been directed toward the peripheral milling of titanium alloy workpieces with integral carbide end mills, and especially toward tool temperatures. The main factors influencing the temperature field distribution in the cutting process are the heat source intensity, the geometric characteristics of the heat source, the processing parameters, the constitutive model of the workpiece material, the tool holder, the undeformed chip thickness, the number of cutting edges involved in cutting, etc. Many studies have focused on these factors. The time-varying load between tool and chip may directly affect the heat intensity in the RCZ. Hence, Islam et al. [8] used a finite difference method with implicit time discretization to solve the partial differential equations of the heat-mass transfer models of the tool, and then established a temperature field for milling with time-varying chip loads. The complex geometric characteristics and asymmetric heat source may directly influence the distribution of the cutting temperature field. For this reason, Klocke et al. [9] investigated the effect of cutting-edge geometries and cutting-edge radius on the cutting heat source and constructed an analytical model of cutting temperature based on the panel method from the field of fluid mechanics. To study milling temperature more specifically, Wu et al. [10] divided the temperature change period of end mills into temperature increase and decrease phases.
Considering the real friction state between chip and tool in the temperature increase phase, the heat flux and the tool-chip contact length are obtained via finite element simulation, whereas for the temperature decrease phase a one-dimensional disc heat convection model is applied. Processing parameters are among the most important factors that affect cutting temperature. Therefore, Sivasakthivel et al. [11] developed an end-milling temperature rise model based on the response surface method and assessed the influence of milling parameters on this model. They used a genetic algorithm to optimize the processing parameters to obtain a low temperature rise and found that the helix angle is the milling parameter that most strongly affects the peak temperature rise. The constitutive model of the workpiece material may affect the flow deformation in the first deformation zone. For this reason, Yang et al. [12] established a new constitutive model of the workpiece material under the milling conditions of large strain, high strain rate and high temperature, which can improve the accuracy of the cutting temperature obtained by finite element simulation when double-helix edge end mills are used to process Ti6Al4V. Heat conduction during the cutting process may be affected by the tool holder; thus, Carvalho et al. [13] calculated the tool-chip interface temperature using an inverse heat conduction method that accounts for the tool and tool holder. In the end milling process, the time variation of the tool-chip contact length may directly affect the undeformed chip thickness and the tool temperature distribution. To overcome these challenges, Sato et al. [14] considered the time variation of the tool-chip contact length in end milling and constructed a temperature distribution model of the rake face of an indexable milling cutter through Green's functions. When an integral carbide end mill is milling a plane, both its circumferential edge and its bottom edge may be involved in the milling process and thus generate milling heat. Consequently, Lazoglu et al. [15] established a new end milling heat model using a semi-analytical method, considering the effects of the circumferential edge and the bottom edge on cutting temperature. Heat intensity and heat distribution ratio are two important physical parameters for establishing a cutting temperature field model. The heat intensity in the machining process may directly affect the amount of heat generated by cutting and can affect process precision owing to thermo-elastic deformations. Processing technology, heat transfer mode and processing conditions are the main factors that affect heat intensity, and many studies have addressed them. First, for orthogonal continuous cutting, Yvonnet et al. [16] derived the heat intensity of the rake face based on numerical simulation and an inverse method algorithm. Furthermore, for the interrupted cutting of indexable milling cutters, Jiang et al. [17] calculated the heat intensity with its time-varying characteristics by using an inverse heat conduction method. Similarly, based on the inverse heat conduction method, Han et al. [18] obtained the heat intensity with an embedded thermocouple method. Because the milling process is unsteady, Putz et al. [19] calculated the unsteady heat intensity of an indexable milling insert through finite element simulation of the interrupted chip formation process. Considering the influence of processing conditions on heat intensity, Pabst et al.
[20] established mathematical expressions for the heat intensity of end milling based on a polynomial method in feed per tooth, cutting speed, axial cutting depth, cutting width, edge radius, and rake angle. Taken together, in studies of the milling temperature of integral flat end mills, there is a lack of prior research on the time-varying and non-uniform characteristics of heat intensity in the RCZ as well as the non-uniform characteristics of heat intensity in the FWZ. The heat distribution ratio in the machining process may directly affect the tool temperature distribution. Its main influencing factors are the processing technology; the thermal number, a parameter defining how much of the heat accumulated in the primary shear zone is distributed by heat convection in comparison with thermal conduction; the tool coating; and the workpiece material. Numerous studies have been carried out on these factors. The thermal number of the workpiece material determines the amount of heat accumulated through heat convection in the shear zone, so some studies have investigated the relationship between the thermal number and the heat distribution ratio. Heat value parameters were applied by Putz et al. [21] to establish a calculation method for the heat distribution ratio under continuous cutting conditions, and Putz et al. [22] extended the same parameters to the calculation of the heat distribution ratio in interrupted cutting. Tool coatings of different materials can have a significant impact on the generation and distribution of cutting heat within coated tools. Therefore, using the tool-chip contact characteristics and the types of workpiece materials and tool matrix/coating materials, the heat distribution ratio of multi-coated tools can be accurately predicted; see Grzesik et al. [23]. Zhang et al. [24] examined the heat distribution ratio of coated cutting tools based on the convective heat transfer principle. Akbar et al. [25] developed a two-dimensional thermo-mechanically coupled finite element model to estimate the tool-chip heat distribution ratio; this model was modified based on deformed chip thickness and cutting force, and its sensitivity in measuring the heat distribution ratio was evaluated. Considering lubrication conditions, Rech et al. [26] established a fast heat distribution ratio model based on a special tribometer. In the field of machining carbon fiber composites, by considering the direction of the fibers and predicting the heat distribution ratio based on classical Hertzian contact theory, Wang et al. [27] found that the fiber direction had a greater impact on the heat distribution ratio than the cutting parameters. Overall, the time-varying characteristics of the heat distribution ratio from the RCZ to the rake face and from the FWZ to the wear land on the flank of end mills have yet to be comprehensively investigated, especially for the milling temperature of end mills. As noted above, research has been lacking on temperature modeling of the wear land on the flank of an integral end mill, especially with respect to the processes of temperature rise and temperature drop during milling, the time-varying characteristics of heat intensity and heat distribution ratio in the RCZ and FWZ, and rake angle factors.
Considering the roles of time-varying heat intensity, time-varying heat distribution ratio and rake angle factors in the RCZ and FWZ, as well as the impacts of the heat sources in the PSZ, RCZ and FWZ and the dissipating heat source, we aimed to establish a transient temperature field model of the wear land on the flank of end mills based on the "moving heat source method".

Transient Temperature Field Model of the Wear Band on the Flank of End Mills

During the peripheral milling process, each tooth of the helical end mill is discretized into many slices along the axis (Z direction). The helix angle turns each slice of the end mill into an independent oblique cutting edge, and thus the peripheral milling process is transformed into an oblique cutting process (Figure 1).
Because milling involves intermittent tool-workpiece contact, when the edge is in contact with the workpiece, the heat sources that produce the temperature rise are generated primarily in three different deformation zones during metal cutting. The PSZ generates a high temperature due to plastic deformation on the shear surface, which softens the workpiece material and leads to greater workpiece deformation; this is a process of mutual coupling between heat and deformation. The heat generated in the RCZ is attributed to chip deformation and to bonding and sliding friction between the tool and the chip. The third deformation zone contains the heat generated by sliding friction and extrusion between the machined workpiece surface and the wear band on the flank face. When the edge is out of contact with the workpiece, a dissipating heat source that produces the temperature drop acts primarily on the wear land on the flank of the end mill. Among these zones, the PSZ and RCZ are mainly affected by the cutting conditions, whereas the FWZ is largely affected by the wear of the flank face; the dissipating heat source is mainly affected by the cooling medium at the wear land on the flank of the end mill. In this study, the flank of the end mill carries a wear land, and thus the influences of the PSZ, RCZ, FWZ and dissipating heat source on the transient temperature field of the flank wear band were comprehensively considered. The following six assumptions were made for the transient temperature field model of the wear band on the flank of end mills. 1. The heat flow generation and temperature distribution are stable. 2. Deformation energy refers to the deformation energy within the shear zone, the deformation energy at the rake-chip interface due to friction and extrusion, and the deformation energy at the flank-workpiece interface due to friction and extrusion. All deformation energy involved in the mechanical processes is converted into cutting heat, apart from a negligible part stored in the deformed metal as potential energy. The heat losses along the contact surfaces as well as the tool, chip and workpiece surfaces are neglected. 3. The heat sources in the RCZ are not affected by crater wear on the rake face. 4. The cutting-edge radius is zero. 5. The temperature of the end mill is not related to the milling depth. 6. The yield strength of Ti6Al4V is affected by its temperature, strain rate and stress state; to simplify the calculation, only the effect of temperature on the yield strength is considered. As shown in Figure 2, three different coordinate systems were established, considering the influence of the four heat sources on the temperature field of the wear band on the flank of end mills.
Figure 2. Four heat sources and heat distribution systems.

To calculate the temperature rises caused by the heat sources in the PSZ, RCZ and FWZ and the dissipating heat source, the coordinate systems of the heat sources in the PSZ and RCZ were converted into the coordinate system of the heat source in the FWZ (or of the dissipating heat source), with β_0 = 90° − γ_n, as shown in Equation (1).
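Since the exact form of Equation (1) is not reproduced here, the conversion can only be sketched. The snippet below performs a plain two-dimensional change of frame through the angle β_0 = 90° − γ_n and is an illustrative assumption of our own, not the paper's equation.

```python
import numpy as np

def to_fwz_frame(x, z, gamma_n_deg):
    """Rotate a point (x, z) from a rake-face-aligned frame into the
    flank-aligned frame.  The two frames are assumed to share an origin at
    the cutting edge and to be separated by beta_0 = 90 deg - gamma_n,
    where gamma_n is the normal rake angle."""
    beta0 = np.deg2rad(90.0 - gamma_n_deg)
    x_f = x * np.cos(beta0) + z * np.sin(beta0)
    z_f = -x * np.sin(beta0) + z * np.cos(beta0)
    return x_f, z_f
```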
Temperature Rise Model of the Wear Band Affected by the Heat Source in the PSZ

During the peripheral milling process, the heat source in the PSZ contacts the tool indirectly through the workpiece, so the transient temperature field of the wear land on the flank of end mills will inevitably be affected by the heat source in the PSZ. Following Komanduri et al. [28], the temperature rise at an arbitrary point on the workpiece can be calculated with the heat source model of the PSZ. This paper applies that heat source model to evaluate the temperature rise at any point of the wear land on the flank of end mills; the model is based on the native heat source in the PSZ and its mirror heat source, both of which exhibit the same heat source intensity (Figure 3). According to a previous study [29], the temperature rise ∆T_P at any point P in space caused by a heat source point dl on an obliquely moving band heat source is given by Equation (2). In Equation (2), R_V is the length of the projection, in the direction of motion of the band heat source, from the heat source point to the arbitrary point, and V_h is the velocity of the band heat source. Figure 3 shows the relationship between the maximum length of the band heat source in the first deformation zone and the undeformed chip thickness, as given in Equation (3). Further, the length of the projection from the heat source point to any point in the direction of motion of the band heat source is calculated by Equation (4). According to Equations (3) and (4), the maximum length of the band heat source in the PSZ and the projection length are time-varying because of the time-varying undeformed chip thickness, which ultimately makes the temperature rise of the transient temperature field in the flank wear zone, as affected by the heat source in the PSZ, time-varying as well.
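A numerical sketch of this band-source temperature rise is given below, assuming the standard Jaeger/Komanduri form dT ∝ exp(−R_V V_h/(2α)) K_0(R V_h/(2α)) suggested by the text. The projection R_V is taken along x for illustration, since the exact geometry of Equation (4) is not reproduced here, and the field point should lie off the source line to keep K_0 finite.

```python
import numpy as np
from scipy.special import k0  # zero-order modified Bessel function, 2nd kind

def band_source_rise(x, z, q, L_band, V_h, lam, alpha, n_pts=400):
    """Temperature rise at (x, 0, z) due to a moving band heat source of
    intensity q and length L_band moving at speed V_h; lam and alpha are
    the thermal conductivity and diffusivity.  Integrates Equation (2)-style
    point contributions over the band (Equations (5)-(7) superpose such
    terms for the native and mirror sources)."""
    l = np.linspace(0.0, L_band, n_pts)        # source points dl on the band
    R = np.sqrt((x - l) ** 2 + z ** 2)         # distance to the field point
    R_V = x - l                                # projection on motion direction
    dT = np.exp(-R_V * V_h / (2.0 * alpha)) * k0(R * V_h / (2.0 * alpha))
    return q / (2.0 * np.pi * lam) * np.trapz(dT, l)
```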
Because the heat source in the PSZ is a band heat source, the temperature rise of the transient temperature field in the wear land on the flank of end mills affected by this heat source is regarded as the integral superposition of a finite number of heat source points. According to the coordinate system established in Figure 3, the temperature rise at any point along the X direction can be calculated by Equations (5)-(7). In Equations (5)-(7), ∆T_I(x, 0, z) and ∆T_I'(x, 0, z) are the temperature rises caused by the native heat source and the mirror heat source in the PSZ, respectively. The distance from any point P_I(x, 0, z) along the X direction to the points of the native heat source dl and mirror heat source dl' in the PSZ is given by Equation (8). Also in Equations (5)-(7), λ_t is the thermal conductivity of the tool (cemented carbide); h(θ) is the undeformed chip thickness; φ_n is the normal shear angle; η_c is the chip outflow angle; α_w is the thermal diffusivity of the Ti6Al4V material; and K_0 is the zero-order modified Bessel function of the second kind. The calculation of the heat source intensity in the PSZ is presented in Equation (9). The milling process is accompanied by large plastic deformation at high temperatures, pressures and strain rates. In the present study, a Johnson-Cook flow stress model was used to characterize the yield stress (σ_ABCD) on the shear surface (ABCD) of the workpiece during milling. The shear slip on the shear surface requires the ultimate shear stress (τ_ABCD-max); the relationship between them is given by Equation (10), and the calculation of the ultimate shear stress of the shear plane (ABCD) is shown in Equation (11). The Johnson-Cook constitutive parameters for Ti6Al4V have been reported by Wu et al. [30]. In Equation (11), A is the initial yield stress at the reference strain rate and temperature; B is the strain hardening modulus of Ti6Al4V; ε_ABCD-P is the effective plastic strain at the shear plane; n is the strain-hardening exponent of Ti6Al4V; C is the strain rate-hardening parameter of Ti6Al4V; ε̇_ABCD-P is the effective plastic strain rate at the shear plane; ε̇_0 is the reference strain rate; T is the current temperature; T_r is the reference temperature; T_m is the melting temperature of Ti6Al4V; and m is the thermal softening index of Ti6Al4V.
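Equation (11) is the standard Johnson-Cook form, which can be written out directly. In the sketch below the Ti6Al4V constants are illustrative placeholders only; the paper takes its parameter set from Wu et al. [30]. A natural reading of Equation (10), by analogy with the later τ_max ≈ σ_s-w/√3, is τ_ABCD-max = σ_ABCD/√3.

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, T,
                        A=862e6, B=331e6, n=0.34, C=0.012, m=0.8,
                        eps_rate0=1.0, T_r=25.0, T_m=1650.0):
    """Johnson-Cook flow stress (Pa), with T in deg C and T_r <= T < T_m:
    sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate0))
                          * (1 - ((T - T_r)/(T_m - T_r))^m).
    All material constants here are illustrative placeholders."""
    strain_term = A + B * eps_p ** n
    rate_term = 1.0 + C * np.log(eps_rate / eps_rate0)
    thermal_term = 1.0 - ((T - T_r) / (T_m - T_r)) ** m
    return strain_term * rate_term * thermal_term
```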
Temperature Rise Model of the Wear Band Affected by the Heat Source in the RCZ

During the peripheral milling process, the heat source in the RCZ contacts the rake face of the end mill, and the heat it generates is transmitted through the interior of the end mill to the wear land on the flank, so the transient temperature field of the wear land on the flank of end mills will inevitably be affected by the heat source in the RCZ. The temperature field model of the heat source in the RCZ is established according to Jaeger's moving heat source theory [31]. The contact surface between the chip and the rake face formed in peripheral milling is a parallelogram; a rectangular heat source in the RCZ is adopted to simplify the problem. Huang et al. [32] considered that the effect of the rake angle on the rake face temperature could be ignored. However, Puls et al. [33] found by finite element simulation that chip plastic deformation decreases as the rake angle increases, which ultimately reduces the temperature rise of the rake face (Figure 4).

Figure 4. Effect of rake angle on tool temperature field [33].

Therefore, the rake angle should be considered in establishing the heat source model of the RCZ. The heat source in the RCZ is composed of native and mirror heat sources, and the heat intensities of the two are taken to be the same (Figure 5). In Equations (12)-(14), ∆T_II(x', 0, z') and ∆T_II'(x', 0, z') are the temperature rises caused by the native heat source and the mirror heat source in the RCZ, respectively. The distance from any point P_II(x', 0, z') along the X' direction to the points of the native and mirror heat sources in the RCZ is given by Equation (15). Under dry cutting conditions and with a new tool, the flank is generally considered adiabatic, so n_r = 1; when the wear land on the flank of end mills contacts the machined surface, 0 < n_r < 1. The value adopted in this study was 0.5, for which the simulation results of the transient temperature field model of the wear land on the flank of end mills are closest to the experimental results.
Under dry cutting conditions with a new tool, the flank is generally considered adiabatic, so n_r = 1; when the wear land on the flank of the end mill contacts the machined surface, 0 < n_r < 1. The value used in this study was n_r = 0.5, for which the simulated transient temperature field of the wear land on the flank is closest to the experimental results. During peripheral milling, the tool–chip contact area exhibits time-varying characteristics, which makes the heat source strength in the second deformation zone time-varying as well. In addition, the non-uniform friction distribution between tool and chip contributes to the non-uniformity of the heat intensity in the RCZ. Based on these assumptions, the heat intensity of the RCZ, q_II'(θ, x'), is calculated using Equation (16). Huang et al. [32] attribute the heat source intensity to shear stress; in actual high-speed milling, however, the heat generated at the tool–chip interface depends largely on contact friction. Therefore, given the non-uniform friction distribution between the chip and the rake face, the bonding zone and the slip zone are each assumed to account for half of the tool–chip contact length. The friction force in the bonding zone equals the ultimate shear stress of the workpiece material, τ_max ≈ σ_s−w/√3, where σ_s−w is the yield stress of Ti6Al4V. The yield strength of Ti6Al4V is affected by its temperature, strain rate, and stress state; to simplify the calculation, only the effect of temperature is considered. According to Sun et al. [34], the relationship between workpiece temperature and milling parameters for peripheral milling of Ti6Al4V with cemented carbide tools is given by Equation (17). In this paper, the milling parameters are V = 80 m/min, f_z = 0.1 mm/z, A_e = 0.7 mm, and A_p = 16 mm, so T_workpiece = 318.73 °C. From Boyer et al. [35], the relationship between the yield strength and temperature of Ti6Al4V can be obtained; the corresponding yield strength is 600 MPa. The slip zone obeys the Coulomb friction law, so the friction force on the tool–chip interface is given by Equation (18). According to previous studies [36–39], the normal stress distribution on the rake face of the end mill is expressed through the time-varying tool–chip contact length, as described in Equation (19). The normal stress near the tool tip on the rake face is calculated following Moufki et al. [39], as shown in Equation (20). According to Moufki et al. [39], the relationship between the unknown chip speed and the given cutting speed can be calculated using Equation (21). As illustrated in Figure 6, the contact area between tool and chip is time-varying, which follows from the time-varying tool–chip contact length derived in Equation (22). In Equation (22), the rake–chip contact length is a critical quantity for predicting the temperature and stress distributions on the rake face; Equation (23) from Moufki et al. [39] is used here, and the derivation of the undeformed chip thickness involved in Equation (23) is given in Section 2.6.1.
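The following Python sketch illustrates the half-bonding/half-sliding friction model described above. The power-law normal stress decay σ_n(x') = σ_0 (1 − x'/l_c)^ζ is an assumption often paired with models of the Moufki type, and σ_0, ζ, µ, and the contact length l_c are placeholders, not values from Equations (18)–(20).

```python
import numpy as np

def rake_friction_stress(x, l_c, sigma_0, mu=0.5, zeta=3.0, sigma_y=600.0):
    """Shear (friction) stress on the rake face at distance x from the tool tip.
    Bonding zone (inner half of contact): stress sticks at tau_max = sigma_y/sqrt(3).
    Sliding zone (outer half): Coulomb friction tau = mu * sigma_n(x), with the
    assumed power-law normal stress sigma_n(x) = sigma_0 * (1 - x/l_c)**zeta.
    Stresses in MPa, lengths in mm."""
    tau_max = sigma_y / np.sqrt(3.0)            # yield of Ti6Al4V at ~318.7 C
    sigma_n = sigma_0 * (1.0 - x / l_c)**zeta   # assumed normal-stress decay
    return np.where(x <= 0.5 * l_c, tau_max, mu * sigma_n)

x = np.linspace(0.0, 0.2, 5)                    # positions along a 0.2 mm contact
print(rake_friction_stress(x, l_c=0.2, sigma_0=1200.0))
```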
According to Shaw et al. [40], the heat distribution ratio in the cutting process can be calculated by Equation (24). Because the rake–chip contact length during peripheral milling is time-varying, the heat distribution ratio of the heat source in the RCZ, B_II-rake(θ), is also time-varying; Equation (23) is substituted into Equation (24) to calculate it.
Temperature Rise Model of the Wear Band Affected by the Heat Source in the FWZ
During the peripheral milling process, the heat source in the FWZ contacts the tool, so the transient temperature field of the wear land on the flank of end mills is inevitably affected by the heat source in the FWZ. Following Huang et al. [32], the temperature rise at any point of the tool flank is calculated by the heat source model in the FWZ, but that model does not consider the effect of the tool rake angle on the temperature rise. As mentioned in Section 2.2, this paper considers the influence of the rake angle on the cutting temperature; the heat source in the FWZ is composed of both native and mirror heat sources, whose heat intensities are similar (Figure 7). The temperature rise ∆T_flank-III(x″, 0, 0) at any point P_III(x″, 0, 0) along the X″ direction can be calculated by Equations (25)–(27). In Equations (25)–(27), ∆T_III(x″, 0, 0) and ∆T_III'(x″, 0, 0) are the temperature rises caused by the native and mirror heat sources in the FWZ, respectively. The distance from any point P_III(x″, 0, 0) along the X″ direction to the points of the native and mirror heat sources in the FWZ is given by Equation (28).
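A hedged numerical sketch of the native-plus-mirror superposition behind Equations (25)–(28) is given below. The point-source kernel is a steady two-dimensional −ln(r) line-source form chosen purely for illustration, and the mirror geometry (a reflected source at −l) is likewise an assumption; the paper's exact kernels, including the moving-source Bessel form of Equations (5)–(7), should be used in practice.

```python
import numpy as np
from scipy.integrate import quad

def band_source_rise(x, q, lam, L):
    """Temperature rise at flank point x'' from a band heat source on (0, L),
    computed as the integral of an assumed point-source kernel, with the native
    source and its mirror image superposed so the tool surface acts adiabatic.
    q: heat flux entering the tool (W/m^2); lam: tool conductivity (W/(m K))."""
    def kernel(l):
        eps = 1e-9                                # guards the log singularity
        return (-np.log(max(abs(x - l), eps))     # native source point at +l
                - np.log(max(abs(x + l), eps)))   # mirrored source point at -l
    val, _ = quad(kernel, 0.0, L, points=[x], limit=200)
    return q / (2.0 * np.pi * lam) * val

# Illustrative values: 0.15 mm band, cemented-carbide-like conductivity.
print(band_source_rise(x=0.05e-3, q=5e7, lam=46.0, L=0.15e-3))
```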
As previously described in Section 2.2, n_f = 0.5. Because of the non-uniform normal stress distribution in the wear band during peripheral milling, the heat intensity in the FWZ is non-uniform. According to the physical definition of heat intensity, the heat intensity of the FWZ, q_III(x″), is calculated using Equation (29). The friction between the flank and the machined surface is mostly of the sliding type; simplifying it to the Coulomb type, the friction on the tool–workpiece interface is obtained using Equation (30). In Equation (30), µ_f is the average friction coefficient on the tool–workpiece interface, which is related to the average interface temperature; it is well accepted [41] that the two satisfy an empirical formula. It has been reported [42] that T_f ≈ 500 °C, and thus µ_f ≈ 0.828. σ_n−f(x″) is the normal stress distribution over the wear band on the flank. According to a previous study [43], when the width of the flank wear band is small, the contact between the machined surface and the wear band is elastic, while plastic flow can occur when the wear band is wide; this may be due to the high temperatures and pressures near the cutting edge. Following previous research [44] and the coordinate transformation shown in Figure 8, the distribution of the normal stress σ_n−f(x″) over the wear band on the flank is determined piecewise over x″ ∈ (VB_CR, VB_1) and x″ ∈ (0, VB_CR), as shown in Figure 8 and Equation (31). In Equation (31), VB_CR is the critical point between the plastic flow area and the elastic contact area, and it also represents the width of the elastic contact area. The relationship between the width of the plastic flow area and the width of the wear band on the flank is fitted to previously published experimental data [45]. According to prior research [46], when VB_1 > VB_CR, the normal stress σ_tip on the tip of the microelement can be calculated as shown in Equation (32), where η_w = 0.5 cos⁻¹(m_w). In Equation (32), K is the ratio of the shear stress on the cutting edge to the shear flow stress of the workpiece, which is equal to the friction coefficient µ_c of the cutting edge near the flank. Because the contact between the wear band on the flank and the machined surface is of a bonding nature, a previous study [47] has suggested that µ_c is uniform. m_w is the slip-line field angle of the wear band on the flank; its numerical value is equal to the friction coefficient µ_c of the cutting edge near the flank. For the Ti6Al4V workpiece in the current study, m_w = µ_c = 0.85. According to previous research [46], if the proportion of undeformed chip thickness to cutting width exceeds 5%, ρ = 0° is assumed; the value obtained here is 0.8°.
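The Python sketch below evaluates a piecewise normal-stress profile and the resulting FWZ heat intensity q_III(x″) = µ_f σ_n−f(x″) V (Equations (29)–(30)). The split of the band into a decaying region over (0, VB_CR) and a constant level over (VB_CR, VB_1), the quadratic decay shape, and the stress magnitudes are all assumptions chosen to mimic the simulated profile later reported in Figure 26; the exact piecewise expressions are those of Equation (31).

```python
import numpy as np

def flank_normal_stress(x, VB_CR, VB_1, sigma_tip, sigma_e):
    """Assumed piecewise normal stress over the flank wear band (cf. Eq. (31)):
    a quadratic decay from sigma_tip over x'' in (0, VB_CR) and a constant
    level sigma_e over (VB_CR, VB_1). Stresses in Pa, lengths in m."""
    decay = sigma_e + (sigma_tip - sigma_e) * (1.0 - x / VB_CR) ** 2
    return np.where(x < VB_CR, decay, sigma_e)

def flank_heat_intensity(x, V, mu_f=0.828, **stress_kw):
    """Heat intensity of the FWZ, q_III(x'') = mu_f * sigma_n-f(x'') * V
    (cf. Eqs. (29)-(30)): friction stress times sliding speed, assuming all
    friction work converts to heat. V is the cutting speed in m/s."""
    return mu_f * flank_normal_stress(x, **stress_kw) * V

x = np.linspace(0.0, 0.15e-3, 6)   # positions across a 0.15 mm wear band
q = flank_heat_intensity(x, V=80 / 60.0, VB_CR=0.09e-3, VB_1=0.15e-3,
                         sigma_tip=1.5e9, sigma_e=4.0e8)
print(q)   # W/m^2
```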
According to Lee and Shaffer's principle (φ + λ_r − γ_0 = 45°), the direction of the principal stress and the direction of the maximum shear stress are set at 45° in order to determine the shear angle φ; the resulting formula for φ is given in Equation (33). Based on prior research [48], the heat distribution ratio in the cutting process can be calculated by Equation (34). Because the undeformed chip thickness during peripheral milling and the flank wear length VB_1 are both time-varying, the heat distribution ratio of the heat source in the FWZ, B_III-flank(θ), is also time-varying; Equation (44) is substituted into Equation (34) to calculate it.
Temperature Drop Model of the Wear Band Affected by the Dissipating Heat Source
Tool–workpiece contact is intermittent during peripheral milling. When the tool is out of contact with the workpiece, the heat sources in the PSZ, RCZ, and FWZ disappear, and dissipating heat sources appear. Because the machining in this paper is dry milling, the dissipating heat source arises from the natural convection cooling of the wear land on the flank of the end mill by air. Since air lies between the wear land on the flank and the machined surface, the wear land is regarded as a dissipating heat source. The dissipating heat source is fixed relative to any point on the flank face, so the temperature drop model is analogous to the temperature rise model of the tool/workpiece heat source established by Huang et al. [32]. The length of the dissipating heat source is taken as the length of the wear band on the flank face and its width as the cutting width, as shown in Figure 9. The temperature drop in the wear land on the flank of end mills is affected by both the native and the mirror dissipating heat sources. According to Newton's law of cooling, the dissipating heat intensity is calculated by Equation (39). In Equation (39), h is the average convective heat transfer coefficient of air, T_flank(x″, 0, 0) is the temperature at any point along the X″ direction of the flank wear band of end mills, and T_e is the ambient temperature.
Transient Temperature Field in the Wear Band on the Flank of End Mills
(1) When the edge is in contact with the workpiece, the heat sources in the PSZ, RCZ, and FWZ raise the transient temperature field of the wear band on the flank of end mills. Considering the effects of all three heat sources, the temperature T_in-flank(x″, 0, 0) at any point P(x″, 0, 0) along the X″ direction can be calculated using Equation (40). Further, the temperature at any point P(x″, 0, 0) along the X″ direction of the wear band on the flank can be obtained by coordinate transformation and calculated using Equation (41).
(2) When the edge is out of contact with the workpiece, the heat sources in the PSZ, RCZ, and FWZ disappear, the dissipating heat sources appear, and the transient temperature field of the wear band on the flank drops. Considering the effect of the dissipating source, the temperature T_out-flank(x″, 0, 0) at any point P(x″, 0, 0) along the X″ direction can be calculated using Equation (42).
Based on these two tool–workpiece contact states, the transient temperature T_flank(x″, 0, 0) at any point P(x″, 0, 0) along the X″ direction of the wear band on the flank can be calculated using Equation (43). The temperature field models of Sections 2.1–2.4 were simulated and superimposed in MATLAB (R2014a, The MathWorks, Natick, MA, USA), yielding the predicted transient temperature field of the wear band on the flank of end mills. The modeling process is summarized in Figure 10.
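The following Python sketch mirrors the two-state logic of Equations (40)–(43): a rise by superposition of the three heat-source contributions while the edge is engaged, and a convective drop via Newton's law (Equation (39)) while it is disengaged. The pulse shape, time step, convective coefficient, and the lumped areal heat capacity C_th are illustrative placeholders.

```python
import numpy as np

def flank_temperature_history(dT_rise, in_contact, h=25.0, T_e=20.0,
                              T0=20.0, dt=1e-4, C_th=500.0):
    """Minimal sketch of Eqs. (40)-(43): while engaged, the point temperature is
    ambient plus the superposed PSZ + RCZ + FWZ rises (dT_rise); while
    disengaged, it cools under q = h*(T - T_e) (Eq. (39)). C_th is an assumed
    lumped heat capacity per unit area (J/(m^2 K))."""
    T, T_now = np.empty(len(in_contact)), T0
    for i, engaged in enumerate(in_contact):
        if engaged:
            T_now = T0 + dT_rise[i]                   # Eq. (40): superposed rise
        else:
            T_now -= h * (T_now - T_e) * dt / C_th    # Eq. (42): convective drop
        T[i] = T_now
    return T

# Illustrative engagement pattern: each tooth engaged for 25% of its cycle.
n = 400
engaged = (np.arange(n) % 100) < 25
dT = 600.0 * np.sin(np.pi * (np.arange(n) % 100) / 25.0) ** 2 * engaged
print(flank_temperature_history(dT, engaged)[:5])
```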
The following assumptions were made in deriving the undeformed chip thickness: 1. There is no deformation or vibration between tool and workpiece. 2. The cutting edge cuts from the machined surface by A_e during each milling pass. As shown in Figure 11 and Equation (44), the relationship among the undeformed chip thickness, the instantaneous contact angle, the peripheral milling width, and the feed per tooth is deduced from the geometric relationship.
Calculation of the Instantaneous Contact Angle θ
The end mill is discretized along the direction of axial cutting depth into m circular discs of thickness w = dz = A_p/m (Figure 12). If m is large enough, the helical line BD can be treated as a straight line; therefore, ABC is regarded as a right triangle, giving the corresponding relationship in Equation (45). When the instantaneous contact angle is θ, the chord AB satisfies the relationship in the sector given by Equation (46). As shown in Equation (47), the relationship among the instantaneous contact angle θ, the milling depth A_p, and the number of discs m is obtained by combining Equations (45) and (46).
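A short Python sketch of these geometric relations follows. The classical approximation h(θ) ≈ f_z sin θ stands in for Equation (44) (which refines it using the milling width A_e), and the per-disc angular lag dψ = 2 tan(β) dz / D from the straight-line helix assumption stands in for Equations (45)–(47); D and β are the tool diameter and helix angle.

```python
import numpy as np

def chip_thickness(theta, f_z):
    """Classical approximation to Eq. (44): h(theta) ~= f_z * sin(theta)."""
    return f_z * np.sin(theta)

def disc_contact_angles(theta_tip, A_p, m, D, beta):
    """Eqs. (45)-(47) sketch: slice the end mill into m discs of thickness
    dz = A_p/m; treating the helix segment BD as straight, each disc lags the
    tip by dpsi = 2*tan(beta)*dz/D, so disc j engages at theta_tip - j*dpsi."""
    dz = A_p / m
    dpsi = 2.0 * np.tan(beta) * dz / D
    return theta_tip - np.arange(m) * dpsi

# Paper's parameters: f_z = 0.1 mm/z, A_p = 16 mm, D = 20 mm, helix 38 deg.
theta = disc_contact_angles(theta_tip=np.deg2rad(93), A_p=16e-3, m=100,
                            D=20e-3, beta=np.deg2rad(38))
h = chip_thickness(np.clip(theta, 0.0, None), f_z=0.1e-3)  # theta<0: not engaged
print(h[:3])
```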
Verification of the Temperature Field Model
The experimental temperature data of Sun et al. [34] reflect the temperature variation in the cutting area during milling, that is, the temperature change near the cutting edge of the end mill, which includes both the rake face near the cutting edge and the wear band on the flank. Therefore, the accuracy of the established transient temperature field model of the wear band on the flank of end mills was verified against the experimental data of Sun et al. [34]. The processing site of the milling experiment is shown in Figure 13. A non-standard semi-artificial thermocouple consisting of a titanium alloy strip and a constantan strip was used to measure the temperature changes during milling (Figure 14). The experiment used a DAEWOOACE-V500 NC machining center for the dry milling of the titanium alloy. The specific tool parameters are listed in Table 1.
Table 1. Tool parameters for the experiment [34].
Number of teeth | Rake angle | Diameter | Flank angle | Helix angle
4 | 10° | 20 mm | 14° | 38°
Prediction Results for the Transient Temperature Field Model of the Wear Band on the Flank of End Mills
The temperature of the end mill varies periodically during the peripheral milling process, and the changes in milling temperature are positively correlated with the changes in undeformed chip thickness, as shown in Figure 15a.
Comparison of Simulation and Experimental Results for Milling Temperature
As shown in Figure 16, the simulated peak temperature of the wear band in the temperature field model is compared with the experimental values reported by Sun et al. [34]. The comparison shows that the simulated values are consistent with the experimental values in both magnitude and trend, further verifying the accuracy of the temperature field model. However, the relative error between them ranged from 9.28% to 15.22%. Such errors can be attributed to the following reasons: 1. The established transient temperature field model of the wear band on the flank of end mills assumes that all the mechanical work in the PSZ, RCZ, and FWZ is converted into heat.
Indeed, part of the mechanical work in the three deformation zones may be stored as potential energy, which has no effect on the milling temperature. 2. The proposed temperature field model considers the wear band on the flank of end mills, whereas the end mills used by Sun et al. [34] exhibited no flank wear, so the tool temperature measured in the experiment is lower than its actual value. 3. The temperature measurement is based on the semi-artificial thermocouple approach of Sun et al. [34], which can only measure the temperature at certain distances from the wear band on the flank rather than directly on it. 4. Thermocouple junctions have a finite volume and mass, so their temperature response lags behind the milling process, owing to its interrupted cutting characteristics.
The Influence of Milling Parameters on the Heat Distribution Ratio in the RCZ
The changes in the heat distribution ratio of the heat source in the RCZ to the rake face over a period of varying undeformed chip thickness were simulated in MATLAB (R2014a, The MathWorks, Natick, MA, USA). As shown in Figure 17, the heat distribution ratio between the heat source in the RCZ and the rake face varied from 0.282 to 0.2874 over the period of varying undeformed chip thickness.
The results indicate that the heat distribution ratio is positively correlated with the changes in undeformed chip thickness, which may be due to the effect of the undeformed chip thickness on the milling force. Two milling parameters, milling speed and milling depth, were selected to assess their effects on the heat distribution ratio of the heat source in the RCZ to the rake face; the simulation results are shown in Figures 18 and 19. As shown in Figures 18 and 19, when either parameter is held constant and the other (milling speed or milling depth) is increased, the heat distribution ratio of the heat source in the RCZ to the rake face decreases, and the reduction rate is most significant before the undeformed chip thickness reaches its maximum value.
The Influence of Milling Parameters on the Heat Intensity in the RCZ
The changing trend of the heat intensity in the RCZ over a period of varying undeformed chip thickness was simulated in MATLAB (R2014a, The MathWorks, Natick, MA, USA). As shown in Figure 20, the variation of the heat intensity in the RCZ is modest, and the heat intensity is larger near the tool tip along the tool–chip interface. The heat intensity increases with increasing instantaneous contact angle, but the difference is small; when the instantaneous contact angle increases to about 80°, the change in heat intensity becomes significant and the intensity reaches its maximum, which may be attributed to the undeformed chip thickness peaking at that time. As shown in Figures 21 and 22, when the milling depth is held constant and the milling speed is increased, the heat intensity in the RCZ is enhanced, and the enhancement rate is most significant when the undeformed chip thickness reaches its maximum; at a contact angle of about 93°, both the undeformed chip thickness and the milling force are near their maxima. In contrast, when the milling speed is held constant and the milling depth is increased, the heat intensity in the second deformation zone is reduced; again, the weakening rate is most significant when the undeformed chip thickness reaches its maximum.
The Influence of Milling Parameters on the Heat Distribution Ratio in the FWZ
The changes in the heat distribution ratio of the heat source in the FWZ to the wear band on the flank of end mills over a period of varying undeformed chip thickness were simulated in MATLAB (R2014a, The MathWorks, Natick, MA, USA). As shown in Figure 23, the heat distribution ratio between the heat source in the FWZ and the wear band on the flank varied from 0.3726 to 0.378 over the period of varying undeformed chip thickness. The results show that the heat distribution ratio is positively correlated with the changes in undeformed chip thickness, which may be due to the effect of the undeformed chip thickness on the milling force.
Similarly, milling speed and milling depth were selected to evaluate their effects on the heat distribution ratio of the heat source in the FWZ to the wear band on the flank of end mills; the simulation results are shown in Figures 24 and 25. As shown in Figures 24 and 25, when either parameter is held constant and the other is increased, the heat distribution ratio of the heat source in the FWZ to the wear band on the flank decreases, and the reduction rate grows with increasing undeformed chip thickness.
The Influence of Milling Parameters on the Heat Intensity in the FWZ
The changing trend of the heat intensity in the FWZ over a period of varying undeformed chip thickness was simulated in MATLAB. Figure 26 shows that the heat intensity in the FWZ varies with position across the wear band on the flank: when the X″ coordinate of the wear band is within 0–0.09 mm, the heat intensity decreases with increasing X″; when the X″ coordinate is within 0.09–0.15 mm, the heat intensity remains unchanged. These changes are probably due to the two types of contact, plastic and elastic, between the wear band on the flank and the machined surface. Additionally, milling speed and milling depth were selected to determine their effects on the strength of the heat source in the FWZ; the simulation results are presented in Figures 27 and 28. As shown in Figures 27 and 28, when the milling depth is held constant and the milling speed is increased, the heat intensity in the FWZ is enhanced, and the enhancement rate is most significant when the X″ coordinate of the wear band is within 0.09–0.15 mm. When the milling speed is held constant and the milling depth is increased, the heat intensity in the FWZ is weakened, and the weakening rate is most significant when the X″ coordinate is within 0.09–0.15 mm.
Conclusions
In summary, the transient temperature field characteristics of the wear band on the flank of end mills during titanium alloy cutting were studied by theoretical modeling and experiment, with the following findings. Accounting for the temperature rise and drop over the milling cycle and for the non-uniform heat intensity, heat distribution ratio, and rake angle in the RCZ and FWZ, a transient temperature field model of the wear band on the flank of end mills was established based on the moving heat source method. The accuracy of the model was verified by comparing simulation and experimental results; the relative error ranged from 9.28% to 15.22%. When the milling depth is held constant and the milling speed is higher, the heat distribution ratio of the heat source in the RCZ to the rake face decreases, with the reduction most significant before the undeformed chip thickness reaches its maximum; the same holds when the milling speed is held constant and the milling depth is greater. When the milling depth is held constant and the milling speed is higher, the heat intensity in the RCZ is enhanced, with the enhancement most significant when the undeformed chip thickness reaches its maximum; likewise, when the milling speed is held constant and the milling depth is lower, the strength of the heat source in the RCZ is improved, with the enhancement most significant at the maximum undeformed chip thickness. When either milling speed or milling depth is increased with the other held constant, the heat distribution ratio of the heat source in the FWZ to the wear band on the flank is lowered, and the reduction rate decreases gradually with increasing undeformed chip thickness. When the milling depth is held constant and the milling speed is higher, the heat intensity in the FWZ is enhanced, most significantly when the X″ coordinate of the wear band on the flank is within 0.09–0.15 mm; when the milling speed is held constant and the milling depth is greater, the heat intensity in the FWZ is weakened, with the weakening most significant in the same 0.09–0.15 mm range.
Funding: This research was funded by National Natural Science Foundation International (Regional) Cooperation and Exchange Project, grant number 51720105009, and Outstanding Youth Project of Science and Technology Talents, grant number LGYC2018JQ015.
Machine-Learning-Based Indoor Localization under Shadowing Condition for P-NOMA VLC Systems
The localization of agents for collaborative tasks is crucial to maintaining the quality of the communication link for successful data transmission between the base station and agents. Power-domain Non-Orthogonal Multiple Access (P-NOMA) is an emerging multiplexing technique that enables the base station to accumulate signals for different agents using the same time-frequency channel. Environment information such as the distance from the base station is required at the base station to calculate communication channel gains and allocate suitable signal power to each agent. Accurately estimating position for P-NOMA power allocation in a dynamic environment is challenging due to the changing location of the end-agent and shadowing. In this paper, we take advantage of the two-way Visible Light Communication (VLC) link to (1) estimate the position of the end-agent in a real-time indoor environment from the signal power received at the base station using machine learning algorithms and (2) allocate resources using the Simplified Gain Ratio Power Allocation (S-GRPA) scheme with the look-up table method. In addition, we use the Euclidean Distance Matrix (EDM) to estimate the location of an end-agent whose signal was lost due to shadowing. The simulation results show that the machine learning algorithm is able to provide an accuracy of 0.19 m and allocate power to the agent.
Introduction
Visible Light Communication (VLC) systems have been the focus of research and development for many years, and numerous VLC products, such as Light Fidelity (LiFi), have been developed and are commercially available [1]. The basic components of VLC systems include Light Emitting Diodes (LEDs) and Laser Diodes (LDs) as transmitters and Photo-Detectors (PDs) as receivers [2,3]. At the transmitter end, the information bits are converted into electrical signals to drive the LEDs, while at the receiver end, the photons received by the PDs are converted back into information bits. VLC systems use the visible light spectrum instead of the Radio Frequency (RF) spectrum. The wide range of the visible light spectrum, 380–780 nm, is one of the main advantages of VLC systems over RF systems; VLC systems also provide a higher data rate [4,5]. VLC systems may not replace RF systems completely; however, they can be used in hybrid technologies to improve communication quality for different applications and environments [6]. VLC systems are used for both data communication and illumination, which leads to a wide range of applications [7]. Indoor environments are prime examples of the application of VLC systems due to the availability of pre-installed infrastructure for the transmitters [8]. In [9], the authors proposed an indoor VLC system based on On-Off Keying (OOK) using four devices equipped with 3600 LEDs (60 × 60), installed in a room of dimensions 5.0 m × 5.0 m × 3.0 m. The authors of [9] concluded that the system can satisfy communication requirements: a high data rate of 10 Gbps can be achieved using LEDs with OOK, and intersymbol interference depends on the data rate and the FOV of the receiver in the indoor environment. In [10], a data rate of about 200 Mbps was achieved using blue LEDs in an indoor environment.
The laser-based white-light-emitting Surface Mount Device (SMD) platform combined with blue LEDs is proposed in [11]; the proposed framework achieved a data rate of about 20 Gbps with 10–100 times more brightness than conventional light bulbs. Indoor positioning methods using VLC technologies have been reported in numerous studies. Features such as Received Signal Strength (RSS), Time of Arrival (ToA), and Angle of Arrival (AoA) are the most prominent when designing an indoor positioning system. The first high-precision VLC-based indoor positioning system was proposed in [12]; the proposed algorithm uses RSS measurements to obtain the user/object location with an accuracy of 0.4 m. In [13], AoA is used as a feature to find the user/object location with an accuracy of 0.1 m, using image sensors instead of PDs. A hybrid approach combining RSS and AoA for improved communication and localization is proposed in [14,15]. With respect to cost-effectiveness, Ref. [16] proposed a two-stage framework. First, they presented a dedicated analog sensor that can be plugged directly into the microphone input of a computer or a mobile device such as a smartphone; it can decode both the signal pattern and the signal strength of a beacon. Second, to decode the signal pattern, the use of rolling-shutter cameras is proposed, which offers a viable answer to the problem of localizing hand-held devices that contain cameras. Artificial Intelligence (AI) methods have been widely developed for indoor positioning, mainly due to their high accuracy and easy deployment [17,18]. Machine-learning-based classifiers for localization in an indoor environment are compared in [19], where the k-nearest neighbor (k-NN) algorithm performed the best among all classifiers. It has been reported in [20] that the support vector machine (SVM) outperformed logistic regression using the fingerprinting method for Bluetooth signals in an indoor environment. In [21], Principal Component Analysis (PCA) is used to improve the performance of SVM, k-NN, and random forest classifiers for indoor positioning. In [22], it is reported that the random forest classifier outperforms k-NN for WiFi-based indoor positioning. In [23], an enhanced random forest algorithm is proposed for indoor positioning in real-world settings. The proposed machine learning algorithms have proven to be efficient; however, a comprehensive analysis for a multi-agent system that also considers signal loss due to obstacles remains a missing link. Therefore, this paper first compares the performance of SVM and Random Forest Regression (RFR) for VLC-based indoor positioning and uses the Euclidean Distance Matrix (EDM) to find the location of an agent whose signal is lost due to obstacles. Second, we complete the framework with adaptive resource allocation for multi-agent systems based on the obtained positions. VLC technologies have been used for a variety of autonomous systems such as vehicles, autonomous ground robots, and fixed robotic industrial grippers. In [24], an in-hospital transportation robot called HOSPI is developed; VLC technology is used to improve navigation and localization, in addition to other navigation sensors, for better autonomous control of the robot. The study explores localization and autonomy using experimental results in an actual hospital.
In [25], a multi-frequency method with RSS is used to accomplish precise indoor positioning by measuring the distance of the robot from each LED installed above the robot, in a plane parallel to the plane of the robot base. In a multi-frequency technique, each LED transmits its location ID at a frequency distinct from the others. In [26], VLC technology together with the Extended Kalman filter is used for communication and localization in underwater robots for nuclear reactor inspection. VLC-based systems have proven to be efficient in positioning for multiple applications; however, the proposed studies often require high-cost, sophisticated setups for high accuracy. Therefore, this research describes a system that uses the modulated light signal both as a medium through which data can be sent and as a reference on which to base the positioning of a mobile robot/agent; both functions are carried out within the same framework. Moreover, the underlying modulation and multiplexing techniques for VLC systems have constantly evolved to meet the needs of more complicated and dynamic environments. Many different approaches, such as Orthogonal Frequency Division Multiple Access (OFDMA), Time Division Multiple Access (TDMA), and Code Division Multiple Access (CDMA), have been suggested in the literature for implementing multiple access in high-bit-rate VLC systems [27–34]. MIMO-OFDM is utilized for multiuser VLC systems in [35] by giving each user a unique carrier. In the same vein, CDMA is used in conjunction with OFDM to support multiple-user communication rather than sending a single carrier to each user. Similarly, Non-Orthogonal Multiple Access (NOMA) methods have been widely proposed in VLC systems to increase the number of users without compromising performance [36]. In power-domain NOMA, the data of different agents are accumulated together using a different power factor for each agent, and Successive Interference Cancellation (SIC) is used to retrieve the data at the receiver end. However, resource allocation for power-domain NOMA is highly affected by the channel and environment, and it has been a focus of research for many years [37]. In [38], energy-efficient user scheduling and power optimization in NOMA wireless links are investigated to understand the trade-off between data rate effectiveness and the energy consumed by NOMA; for the downlink NOMA heterogeneous network, energy-efficient user scheduling and power distribution techniques are presented for both perfect and imperfect Channel State Information (CSI). In [39], a neural-network-based resource allocation method is proposed for mobile users with mutual interference management; this work also provides priority- and rate-demand-based user scheduling methods to coordinate the access of heterogeneous users with limited radio resources. In [40], a low-complexity power allocation scheme for NOMA-based indoor VLC systems, called the Simplified Gain Ratio Power Allocation (S-GRPA) scheme, is proposed; the CSI used for power allocation in NOMA is obtained through the look-up table method rather than by calculation.
In that work [40], the location of the agent is received at the base station over a separate low-energy communication link, an approach that suffers from the loss of the signal carrying the position information. Inspired by this approach, we propose to use machine learning algorithms to find the location of the agent for CSI, enabling better dynamic resource allocation in NOMA for a collaborative indoor environment without the separate communication link. Additionally, to avoid the loss of agent locations due to shadowing or obstacles, we use the EDM to obtain the locations of agents from the mutual distances of the agents in the network. In this research work, we use machine learning algorithms such as RFR and SVM with the minimal features of RSS and AoA for an indoor positioning framework using VLC technologies. The positioning algorithm is combined with the S-GRPA scheme for adaptive resource allocation in a P-NOMA-based multi-agent communication network. Furthermore, we use the Euclidean Distance Matrix (EDM) to estimate the position of an agent whose signal is lost due to shadowing. The simulation results show that the RFR-based positioning algorithm is able to provide an accuracy of 0.19 m and allocate power to the agent. The rest of the article is structured as follows. Section 2.1 discusses the VLC channel, followed by a mathematical discussion of S-GRPA for NOMA in Section 2.2. Sections 2.3 and 2.4 discuss the Random Forest Regression (RFR) and Support Vector Machine (SVM) algorithms. Section 2.5 discusses the Euclidean Distance Matrix (EDM) used to obtain an agent's location in the event of signal loss due to shadowing and obstacles. Section 3 presents results for indoor positioning and the bit-error-rate (BER) of the NOMA-based VLC system. The article concludes with final remarks.
System Design
In this research work, a complete framework for power-domain NOMA resource allocation using S-GRPA in an indoor environment for collaborative tasks is proposed using visible light communication systems; the complete system block diagram is shown in Figure 1. For indoor positioning, we compare the RFR and SVM algorithms together with the Euclidean Distance Matrix (EDM). The next section discusses S-GRPA for NOMA along with the VLC channel model.
Visible Light Communication (VLC) Channel
LEDs, the transmitting devices in VLC, serve the dual tasks of providing both light and data transmission. As can be seen in Figure 2, the responsiveness of the VLC channel in an indoor setting depends heavily on the illumination intensity and the transmission power. The illumination intensity at a point in the Cartesian plane is given as follows [41]:
I = (I(0) / d²) cosᵐˡ(θ) cos(ψ),
where I(0) is the intensity of the central light source, m_l is the order of Lambertian emission, d is the separation distance between the LEDs and the PDs, and θ and ψ are the irradiance and incidence angles, respectively. The order of Lambertian emission, m_l, is given by
m_l = −ln 2 / ln(cos ϑ_{1/2}),
where ϑ_{1/2} is the angle at half illuminance of an LED. The signal power received at a particular PD is
P_r = P_t H(0),
where T_s(ψ) is the filter transmission, g(ψ) is the concentrator gain, and ψ_con is the field of view of the PD; P_t is the transmitted signal power, which fades through the Line-of-Sight (LoS) channel gain. The DC gain of the LoS path is given as follows:
H(0) = ((m_l + 1) A / (2π d²)) cosᵐˡ(θ) T_s(ψ) g(ψ) cos(ψ) for 0 ≤ ψ ≤ ψ_con, and 0 otherwise,
where A is the detection area of the PD. The distance, d, between the transmitter and receiver is the key factor in the allocation of signal power in NOMA-based communication systems.
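The Python sketch below evaluates the Lambertian LoS gain and received power described above. The geometry (a PD 2 m below an LED, 1 m off-axis), the PD area, the 60° semi-angle, and unity filter/concentrator gains are illustrative assumptions.

```python
import numpy as np

def lambertian_order(psi_half_deg):
    """Order of Lambertian emission from the LED semi-angle at half illuminance."""
    return -np.log(2) / np.log(np.cos(np.deg2rad(psi_half_deg)))

def los_channel_gain(d, theta, psi, A_pd=1e-4, psi_half_deg=60.0,
                     T_s=1.0, g=1.0, psi_con_deg=70.0):
    """DC gain of the LoS VLC path, matching the standard Lambertian model in
    the section above: H = (m_l+1)*A/(2*pi*d^2) * cos^m_l(theta) * T_s * g *
    cos(psi), and zero outside the PD field of view psi_con."""
    if np.rad2deg(psi) > psi_con_deg:
        return 0.0
    m_l = lambertian_order(psi_half_deg)
    return (m_l + 1) * A_pd / (2 * np.pi * d**2) * np.cos(theta)**m_l \
           * T_s * g * np.cos(psi)

# Received power P_r = P_t * H(0) for a PD 2 m below an LED, 1 m off-axis.
d = np.hypot(2.0, 1.0)
ang = np.arctan2(1.0, 2.0)            # irradiance angle = incidence angle here
print(1.0 * los_channel_gain(d, theta=ang, psi=ang))   # P_t = 1 W assumed
```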
The next section discusses the mathematical basis for NOMA and resource allocation in this method.
Simplified Gain Ratio Power Allocation (S-GRPA) for NOMA
Several agents' data are combined using power-domain NOMA, and the data are then separated using SIC. In contrast to multiple access methods such as OFDMA and CDMA with their high peak-to-average power ratio, NOMA is free of spectrum spreading and suffers little SNR-performance degradation, thanks to a low-complexity receiver design and moderate latency [42,43]. Several agents using the same resource simultaneously in NOMA boosts system throughput, but each agent has a distinct power factor. In this study, we employ symbol-level NOMA. Figure 3 illustrates the NOMA framework for three agents. Because of the relative difference in distance from the base station, the agent near the cell edge is allocated much more power than the agent at the center of the cell. The received signal can be expressed as in [42]: the term p₁x₁ + p₂x₂ + p₃x₃ represents the superposition of the data of each user at the base station, which is transmitted to each user via the communication channel. The superposed data experience different channel effects, H₁, H₂, H₃, as shown in Figure 3, and the terms y₁, y₂, and y₃ denote the data received at each agent after passing through the channel. At the receiver end, the information for agent 1 is separated and demodulated first. After agent 1's data have been retrieved, agent 2's data are recovered by SIC, which removes agent 1's interference. This method can be extended to a larger number of agents. The power factor, p, for each agent depends on its location, so information about the positions of agents in a multi-agent system helps assign appropriate power factors for the P-NOMA-based VLC system. The power distribution of the VLC system is shown in Figure 2, where the overall power distribution can be divided into zones, n = 1, 2, ..., N, with radii r_n; zone N has the lowest illumination intensity and zone 1 the highest. The n-th region is defined for r_n² = n r_e²/N, n = 1, 2, ..., N, where r_e is the maximum radius of the last zone [40]. The relationship between the k-th agent and the (k−1)-th agent in terms of power allocation is described in [40]. Here, the channel gain information, H, is obtained by the indoor positioning algorithm and stored in the look-up table.
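A minimal Python sketch of the three-agent superposition and SIC chain follows. The power factors, channel gains, BPSK symbols, and noise level are placeholders; the paper obtains the actual power factors from the S-GRPA zone rule and the look-up table rather than from fixed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed power factors (far agent gets the most power) and LoS channel gains.
p = np.array([0.76, 0.18, 0.06])          # agents 0 (far) .. 2 (near)
H = np.array([0.2, 0.5, 1.0])
x = rng.choice([-1.0, 1.0], size=(3, 8))  # BPSK symbols per agent (illustrative)

# Base station superposes the signals; each agent sees its own channel + noise.
s = (np.sqrt(p)[:, None] * x).sum(axis=0)
y = H[:, None] * s + 0.01 * rng.standard_normal((3, 8))

# SIC at agent 1: decode the strongest (far-agent) signal, subtract, then decode own.
x0_hat = np.sign(y[1])                           # far-agent symbols dominate
y1_clean = y[1] - H[1] * np.sqrt(p[0]) * x0_hat  # cancel far-agent interference
x1_hat = np.sign(y1_clean)
print((x1_hat == x[1]).mean())                   # fraction of own symbols recovered
```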
Random Forest Regression Algorithm

The idea of aggregating random decision trees was initially presented in [44][45][46], and it is at the heart of the RFR technique. The problem statement for the RFR algorithm can be formulated as follows:

Problem 1. Estimate the non-parametric regression function f(x) = E[y | x], where x ∈ R^p is the input vector, p is the dimension of the input vector, and y ∈ R is the output.

Using the training data set T_n = {(x_1, y_1), (x_2, y_2), · · · , (x_n, y_n)}, the RFR method constructs an estimate f_e(x) that approximates the actual regression function f(x), with the error criterion E[f_e(x) − f(x)]² → 0 as n → ∞. The random forest methodology employs N different regression trees. The input vector, x, is evaluated on each tree, and the value predicted by the ith tree is the average of the training outputs falling into the cell of that tree containing x [45]:

f_e(x, θ_i, T_n) = Σ_{j: x_j ∈ A_m(x, θ_i, T_n)} y_j / M_m(x, θ_i, T_n).

Here, θ_1, θ_2, · · · , θ_N are the independent random variables associated with each regression tree, T_n(θ_i) are the data points selected prior to the construction of the trees, A_m(x, θ_i, T_n) is the zone containing x, and M_m(x, θ_i, T_n) is the number of data points that fall into A_m(x, θ_i, T_n).

Support Vector Machine Algorithm

As with the RFR algorithm, the purpose of Support Vector Machine (SVM) regression is to map the input x to the desired output y. Rather than directly minimizing the discrepancy between the estimated and true regression functions, f_e and f, SVM fits a linear model y = w · x + b and penalizes predictions falling outside a tube of width ε around the targets. Let y be the true output; SVM makes its predictions inside the region y ± ε, so that the discrepancy between the actual output, y, and the predicted output, z, satisfies |z_i − y_i| < ε. Two slack variables are added as a penalty if the predicted output lies outside the region y ± ε: ζ⁺ indicates that the predicted output lies above y + ε, whereas ζ⁻ indicates that it lies below y − ε. The error function for the SVM regression algorithm is [47][48][49][50][51]:

minimize (1/2)‖w‖² + C Σ_{i=1}^{n} (ζ_i⁺ + ζ_i⁻)   (13)

subject to: y_i ≤ w · x_i + b + ε + ζ_i⁺, y_i ≥ w · x_i + b − ε − ζ_i⁻, and ζ_i⁺, ζ_i⁻ ≥ 0. C is the tunable variable that controls the penalty on the slack variables and ε. To solve (13), Lagrange multipliers α_i, α_i* ≥ 0 are introduced for the tube constraints [47,51], transforming (13) into a Lagrangian; differentiating the Lagrangian with respect to each primal variable and setting the derivatives to zero yields the dual form of the primal problem L_p [47,51]:

maximize − (1/2) Σ_{i,j} (α_i − α_i*)(α_j − α_j*)(x_i · x_j) + Σ_i y_i (α_i − α_i*) − ε Σ_i (α_i + α_i*)

subject to: Σ_i (α_i − α_i*) = 0 and 0 ≤ α_i, α_i* ≤ C. The predicted output can then be written as [51]:

z = Σ_i (α_i − α_i*)(x_i · x) + b.
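A minimal scikit-learn sketch of the two regressors follows. The synthetic features stand in for the RSS/AoA measurements of the 4 × 4 Tx-Rx configuration used later in this paper, and the SVR hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Synthetic stand-in for RSS/AoA features of a 4x4 Tx-Rx grid (2 x 16 = 32 features);
# real features would come from the VLC channel model.
X = rng.uniform(size=(2000, 32))
y = rng.uniform(low=0.0, high=6.3, size=2000)   # x-coordinate of the agent

X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

rfr = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_train, y_train)

# In practice one regressor is trained per Cartesian coordinate; only x is shown.
print("RFR mean abs error:", np.mean(np.abs(rfr.predict(X_test) - y_test)))
print("SVR mean abs error:", np.mean(np.abs(svr.predict(X_test) - y_test)))
```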
This concludes the regression algorithms used to obtain the location of an agent from the power received at the receiver. However, in real-world settings, the direct receiver might not receive an agent's signal because of obstacles, or the signal may be heavily disrupted by noise. Therefore, for localization in collaborative indoor environments, the distance geometry problem plays an important role, recovering the location of a particular agent from the mutual distances of all agents in the network. The next section discusses the distance geometry problem using the Euclidean Distance Matrix (EDM).

The Distance Geometry Problem

VLC systems in indoor environments with dynamic agents often face loss of signal through shadowing caused by obstacles. To assist localization in the event of signal loss in a multi-agent collaborative environment, the Distance Geometry Problem (DGP) is used in conjunction with the machine learning algorithms discussed in the previous subsections. The objective of the DGP is to find the locations of agents using their mutual distances. It is important to mention that, during collaborative tasks, agents communicate not only with the base station but also with each other, and this provides extra information about the environment that the agents transmit to the base station. The mutual distances of the agents are provided in the form of an EDM. The solution of the DGP for N agents in dimension d is a matrix S ∈ R^{d×N} = [s_1, s_2, . . . , s_N], where s_i are the coordinates of the ith agent. The mutual distances between agents are collected in D ∈ R^{N×N} = [d_ij], where d_ij is the distance from the jth agent to the ith agent. To generate an initial estimate of the matrix, Ŝ, multi-dimensional scaling is used.

The estimated point cloud, Ŝ, can be mapped to the actual matrix, S, by a rigid transformation in absolute coordinates. For this transformation, Procrustes Analysis is performed, which is based on a spectral (SVD) factorization. For this process, it is assumed that the locations of some agents, N_a < N, are known; these agents are referred to as anchors. To find the locations of the missing agents in the network, the DGP can be stated in its static form. The static DGP consists of three stages to obtain matrix S from matrix D. The first stage is to obtain the Grammian matrix, G ∈ R^{N×N} = S^T S, as it has a one-to-one relation with matrix D. The Grammian matrix G can be obtained from the following optimization problem [52,53]:

minimize over G ⪰ 0:  ‖ W ∘ (K(G) − D) ‖²_F,

where K(·) is the function that maps the Grammian matrix G to the distance matrix D, W is a binary mask matrix whose zero-valued entries indicate missing measurements of the received power, and ∘ is the Hadamard product. The next stage is to obtain matrix S from the Grammian matrix G. The estimate Ŝ can be obtained by the Singular Value Decomposition (SVD) method, given that the matrices G and S satisfy the relation G = S^T S. As mentioned earlier, the matrix Ŝ can be mapped to the actual matrix S by a rigid transformation using the information from the N_a anchors. Denote the columns of S corresponding to the anchors by S_a, and let Y_a denote the same columns of Ŝ; both S_a and Y_a are then translated so that they are centered at the origin. The transformation R is the orthogonal matrix solving the Procrustes problem for the centered anchor sets,

R = U V^T, where U Σ V^T is the SVD of (S_a − s_{a,c} 1^T)(Y_a − y_{a,c} 1^T)^T,

and the actual matrix S is recovered as

S = R (Ŝ − y_{a,c} 1^T) + s_{a,c} 1^T,

where s_{a,c} and y_{a,c} are the centroids of S_a and Y_a, respectively.
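A minimal sketch of the three-stage static DGP pipeline follows: classical MDS to recover a point cloud from the distance matrix, followed by Procrustes alignment to known anchors. It assumes a complete, noiseless distance matrix (binary mask W all ones) for simplicity; the agent positions are illustrative.

```python
import numpy as np

def classical_mds(D, d=2):
    """Recover a point cloud (up to rigid motion) from a distance matrix."""
    N = D.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N          # centering operator
    G = -0.5 * J @ (D**2) @ J                    # Gram matrix estimate
    vals, vecs = np.linalg.eigh(G)
    idx = np.argsort(vals)[::-1][:d]             # top-d eigenpairs
    return (vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))).T  # d x N

def procrustes_align(S_hat, anchors_idx, S_a):
    """Rigid transformation mapping the estimated cloud onto known anchor positions."""
    Y_a = S_hat[:, anchors_idx]
    y_c, s_c = Y_a.mean(axis=1, keepdims=True), S_a.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((S_a - s_c) @ (Y_a - y_c).T)
    R = U @ Vt                                   # orthogonal Procrustes solution
    return R @ (S_hat - y_c) + s_c

# Five agents on the floor plane; the first three act as anchors.
S = np.array([[0.5, 2.0, 5.0, 3.2, 1.1],
              [0.4, 1.8, 2.1, 0.9, 2.3]])
D = np.linalg.norm(S[:, :, None] - S[:, None, :], axis=0)  # mutual distances
S_est = procrustes_align(classical_mds(D), [0, 1, 2], S[:, :3])
print("max position error:", np.abs(S_est - S).max())
```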
This concludes the mathematical basis for the proposed framework for resource allocation using precise positioning of agents in a collaborative indoor environment with VLC systems. The next section discusses the performance of indoor positioning using the SVM and RFR algorithms and the bit-error-rate (BER) of the NOMA-based VLC system.

Results

In this research, a rectangular indoor environment with a width of 6.3 m, a depth of 2.5 m, and a height of 3 m is studied. Each Cartesian point corresponds to a zone represented by a square cell 0.3 m² in size. The detailed simulation parameters are shown in Table 1. The purpose of this exercise is to estimate the Cartesian location within the square cell region and use this information to calculate the channel gain for resource allocation in NOMA. The locations of the agents consist of 2D Cartesian coordinates; however, the environment is three-dimensional: the dimensions 6.3 × 2.5 × 2 m³ refer to length × width × height, with the base station mounted at a height of 2 m above the ground and the agents moving in the 2D plane of the floor. To obtain the locations of the agents, the SVM and RFR algorithms are trained using RSS and AoA features at the receiver end. Owing to the 4 × 4 Tx-Rx (LED arrays-PDs on agents) configuration for each agent, each feature has a dimension of R^{4×4}. The dataset is collected in MATLAB using the VLC channel. Each agent has a region of confidence of 0.25 m for its location. Figure 4 shows the estimated and actual Cartesian coordinates of the agents. In this simulation the agents are considered dynamic; therefore, several different locations are estimated to test the robustness of the algorithm. The red dots are the actual coordinates and the blue dots the estimated values. For RFR, 100 decision trees are used. The RFR shows an accuracy of 93.6% with an estimation error of 0.19 ± 0.22 m. Figure 5 shows the locations estimated by the SVM algorithm, for which a radial basis function kernel is used. The SVM shows an accuracy of 84% with an estimation error of 0.3 ± 0.2 m. According to the statistical analysis, the mean error of the SVM regression method is higher than that of the random forest regression technique. Therefore, RFR is used as the positioning algorithm in conjunction with S-GRPA for power allocation in the NOMA-based multi-agent VLC system.

Table 2 compares the proposed RFR and SVM algorithms for VLC-based indoor positioning with other state-of-the-art algorithms. In this table, Epsilon, Plugo, and Luxapose are indoor positioning algorithms using VLC technology, while WiDeep is a WiFi-based deep learning algorithm for indoor positioning. It can be seen that RFR with minimal features provides better accuracy than Epsilon, Plugo, WiDeep, and SVM. However, Luxapose remains more accurate than the other algorithms owing to more sophisticated hardware and a larger number of features. It can therefore be deduced that, with minimal features, RFR provides good accuracy and easy deployment.

Figure 6 shows a scenario of five agents in which the signals of two agents are not received at the base station due to obstacles/shadowing. In this scenario, the locations of the other three agents are predicted by the RFR algorithm and used as anchors for the EDM, as shown by the black data points. The red square boxes in Figure 6 show the positions of the two remaining agents estimated using the EDM. It can be seen that EDM with RFR predicted the locations precisely. Here, the predicted positions of the anchors (black boxes) are assumed to be accurate during the EDM process.

Figure 6. VLC indoor positioning using EDM with RFR when an agent's signal is not received at the base station due to an obstacle. Actual data points (blue), data points predicted using EDM (red), and data points predicted using RFR (black), called anchors.

By combining the RFR algorithm with S-GRPA for power factor allocation in the VLC system, Figure 7 shows the BER curves for five agents at different locations. The results cover a Signal-to-Noise Ratio (SNR) between 0 and 150. The blue curve shows agent-1 at a distance of 6 m, the farthest agent in the NOMA setup; the highest power factor is allocated to this agent. The orange curve shows the BER for agent-2 at a distance of 4.5 m in the indoor environment. The yellow curve shows the BER for agent-3 at a distance of 3.8 m. The purple and green curves show the BER curves for agent-4 and agent-5 at distances of 2 and 0.7 m, respectively. Agent-5 is allocated low power on account of its short distance. As expected, the BER for the high-power agent decays sharply and quickly, whereas the low-power agent shows slower decay and higher BER values. A higher BER indicates that the signal suffers more noise during propagation from the base station to the agent. At a given time instance, the power allocation is performed based on the location of the agent; the value of the power factor as a function of distance from the base station is stored beforehand in a look-up table using the simplified gain ratio power allocation (S-GRPA) method.
The power factor for each agent is allocated based on the location found by the RFR algorithm. It can be seen that S-GRPA with RFR is able to allocate an appropriate power factor to each agent based on its location in an indoor environment.

Conclusions

Indoor localization and task accomplishment depend on the communication link between the base station and mobile sensor networks, such as multi-agent systems for collaborative tasks involving ground mobile robots and drones. In this research work, a power allocation method using the precise location of the agent is proposed for power-domain non-orthogonal multiple access (P-NOMA). P-NOMA enables the base station to superimpose signals for different agents on the same time-frequency channel, and it requires channel information for better power allocation to each agent. The channel is strongly affected by the distance between the end agent and the base station. In this research work, a machine learning algorithm is proposed to find the location of the agent using Received Signal Strength (RSS) and Angle of Arrival (AoA). The location provided by the machine learning algorithm is used to determine the channel gain, and the appropriate power factor is allocated to each agent based on simplified gain ratio power allocation (S-GRPA). Simulations show that Random Forest Regression (RFR) obtained the location more accurately than the Support Vector Machine (SVM). In addition, the Euclidean Distance Matrix is used to find the location of an agent whose signal is not received, based on the mutual distances of the agents in the network. The complete framework is tested for five agents using the VLC channel. The S-GRPA with the RFR algorithm was able to assign appropriate power factors to each user based on its location. This work provides a basis for dynamic power allocation in multiuser VLC systems. Future work aims to test this method in real-world experiments.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy and ethical reasons.
Phase unwrapping with a rapid opensource minimum spanning tree algorithm (ROMEO)

Purpose: To develop a rapid and accurate MRI phase-unwrapping technique for challenging phase topographies encountered at high magnetic fields, around metal implants, or postoperative cavities, which is sufficiently fast to be applied to large-group studies including Quantitative Susceptibility Mapping and functional MRI (with phase-based distortion correction).

Methods: The proposed path-following phase-unwrapping algorithm, ROMEO, estimates the coherence of the signal both in space, using MRI magnitude and phase information, and over time, assuming approximately linear temporal phase evolution. This information is combined to form a quality map that guides the unwrapping along a 3D path through the object using a computationally efficient minimum spanning tree algorithm. ROMEO was tested against the two most commonly used exact phase-unwrapping methods, PRELUDE and BEST PATH, in simulated topographies and at several field strengths: in 3T and 7T in vivo human head images and 9.4T ex vivo rat head images.

Results: ROMEO was more reliable than PRELUDE and BEST PATH, yielding unwrapping results with excellent temporal stability for multi-echo or multi-time-point data. It does not require image masking and delivers results within seconds, even in large, highly wrapped multi-echo data sets (eg, 9 seconds for a 7T head data set with 31 echoes and a 208 × 208 × 96 matrix size).

Conclusion: Overall, ROMEO was both faster and more accurate than PRELUDE and BEST PATH, delivering exact results within seconds, which is well below typical image acquisition times, enabling potential on-console application.

INTRODUCTION

The complex signal in MRI can be divided into two constituents: magnitude (M) and phase (θ). The MRI phase is proportional to local deviations in the static magnetic field, ΔB0 (Hz), through the relation θ ≈ 2π · TE · ΔB0. Knowledge of ΔB0 can be used to correct image distortions, 1,2 visualize veins and microbleeds using Susceptibility Weighted Imaging (SWI), 3 assess iron-rich tissues or calcifications through Quantitative Susceptibility Mapping (QSM), 4 and to estimate blood flow 5 or temperature changes in tissue. 6

The measured phase, φ, is a projection of the true phase θ into the 2π range. This gives rise to abrupt changes (ie, wraps), which do not represent the spatial and temporal continuity of θ within the object and require unwrapping. A questionnaire completed by 46 participants at the Fifth International Workshop on MRI Phase Contrast and Quantitative Susceptibility Mapping in South Korea (September 2019) indicated that 84.8% of participants use Laplacian unwrapping 7 in their work, 32.6% use PRELUDE, 8 30.4% use BEST PATH, 9 and 2.2% use Graph-Cut 10 (unpublished results reported by Prof. Peter C.M. van Zijl of Johns Hopkins University). Laplacian unwrapping is the most robust method currently available, providing globally smooth phase results (ie, no abrupt jumps) within seconds, even for large data sets with low SNR, explaining its popularity in QSM. However, it does not yield exact results for θ, 11 which makes it unsuitable for applications such as distortion correction, flow, or temperature measurements (see Supporting Information Figure S1). Moreover, Laplacian unwrapping introduces large phase variations around regions with sharp phase changes, such as veins, which corrupt QSM results around these structures. 12-16
PRELUDE and BEST PATH are the methods of choice when exact phase results are desired. They assume that phase changes between voxels that exceed π are indicative of wraps. PRELUDE belongs to the class of region-growing spatial unwrapping approaches, which divide the volume into wrapless regions (ie, groups of contiguous voxels containing ranges of values that are less than π) and assess phase changes at the borders between them. The PRELUDE algorithm is relatively robust and considered to be the gold standard, 11 but it can take several hours or even days to unwrap large data sets with challenging phase topographies. A substantial reduction in computation time has been achieved using a recently developed method based on PRELUDE, called SEGUE, 17 by simultaneously unwrapping and merging multiple regions. However, SEGUE can still take more than 10 minutes to unwrap more challenging data sets (eg, 17:35 minutes ± 9:26 minutes using a 3.5-GHz processor for images acquired at 3T with matrix size = 220 × 220 × 240 and TE = 18.9 ms), making potential on-console implementation impractical.

Path-following approaches, such as BEST PATH, 9 usually provide solutions within seconds, even for highly wrapped images with large matrix sizes. 12 This can be particularly useful in large studies, including functional MRI, in which hundreds of 3D image volumes are often acquired per subject. Path-following algorithms compare the phase in adjacent voxels, beginning at one location and proceeding to neighboring voxels in an order dictated by the reliability of the information in the voxels and how well they are connected (ie, a quality map). BEST PATH is rapid but more prone to errors than PRELUDE, especially in regions where a corresponding magnitude image has low SNR. For a comprehensive comparison of phase-unwrapping algorithms, we refer the reader to Robinson et al 12 and Ghiglia and Pritt. 18

To overcome the shortcomings of the exact phase-unwrapping algorithms currently available, we propose a new path-following algorithm called ROMEO: Rapid Opensource Minimum spanning treE algOrithm. This algorithm (1) uses up to three measures of the quality of connections between voxels, or weights, calculated from phase and magnitude information to provide improved unwrapping paths compared with BEST PATH, (2) provides computationally efficient bookkeeping of quality values and respective voxel edges, and (3) offers single-step unwrapping of a fourth dimension (echo or time). We tested ROMEO's performance against PRELUDE and BEST PATH in simulated topographies, challenging human head images acquired at 3T and 7T, and rat head images at 9.4T. Source code in the Julia 19 programming language, compiled versions for Linux and Windows (easily executable using the command line or MATLAB [MathWorks, Natick, MA]) and the data sets used in this study are publicly available (see Data Availability Statement).

The ROMEO algorithm

It is important for the accuracy of a path-based phase-unwrapping method that the unwrapping proceeds along a path connecting reliable (albeit wrapped) voxels, as an error is likely to be introduced when an unreliable voxel (eg, a noise value) is encountered. In common with many phase-unwrapping methods, we draw on graph theory concepts to determine the optimum path. The edges connecting voxels are assigned weights, which indicate the reliability of the connection between them.
Each of the many possible networks connecting all of the voxels in the image constitutes a spanning tree, and each spanning tree is associated with a weight that is the sum of all the weights of the edges in the tree. The minimum spanning tree is the spanning tree with the smallest weight: essentially the path connecting all voxels which includes the least unreliable connections between them. The weights assigned to edges may consist of multiple contributions. The ROMEO algorithm uses up to three weights, which are multiplied together to yield a map of the "quality" of connection between neighboring voxels for each of the three principal directions (x, y, and z), here called a quality map. The unwrapping process is guided through the 3D phase data by this quality map, starting at the seed voxel: the voxel with the edge of highest quality value. For computational efficiency, the real-valued quality values are converted into integer cost values that are sorted into a bucket priority queue 20: a sequence of non-negative integers, each of which has a "priority" associated with it, namely the cost-value ranking. This effectively creates a minimum spanning tree of the cost values of the edges during the unwrapping process according to the Prim-Jarník algorithm. 21 The ROMEO algorithm was written in the open-source programming language Julia, 19 which has a syntax of similar simplicity to that of Python and a speed similar to the C-based languages.

The ROMEO weights are defined in the range [0; 1], with "good" weights (those indicating well-connected voxels) being close to 1, which allows easy combination of all or only some of the weights through multiplication. The three weights, calculated in each direction (x, y, and z), are defined as follows:

1. Spatial Phase Coherence weight, W(i,j,t),Spat: a function of the wrapped phase difference Ω{φi,t − φj,t}, where Ω is a wrapping operator, and φi,t and φj,t are the measured phases at two adjacent spatial locations, i and j, at the same time point t.

2. Temporal Phase Coherence weight, W(i,j,t),Temp: a function of how well the phase evolution between two time points matches the assumed linear model, where t = 2 and the phase values for the first and second echoes (TE1, TE2) are chosen as the default for the calculation. It is possible to change t if desired.

3. Magnitude Coherence weight, W(i,j,t),Mag: a function of the magnitudes Mi,t and Mj,t at two adjacent spatial locations i and j at time t.

Weight 2 is only used for multi-echo or multi-time-point data and can be omitted. Weight 3 is used if magnitude data are available. An example of the three weights for the x-direction (left-right) for a 7T 3D gradient-echo (GRE) data set is shown in Figure 1. The product of the weights for each direction yields a quality map for that direction. For computational efficiency and reduced memory usage, the real-valued quality map between 0 and 1 is transformed into integer cost values between 255 and 1, ie, cost = max(round(255 · (1 − quality)), 1). An integer cost value of 1 corresponds to the best connection and a cost value of 255 to the worst. The special case of a cost value of 0 denotes no connection between voxels (eg, the border of a mask); in this case, the corresponding phase values are not included in the priority queue, which effectively stops the unwrapping process in that direction. The range of values from 0 to 255 was chosen as it can be stored efficiently as an 8-bit unsigned integer (2^8 = 256) and represents the range of the original real-valued quality values with sufficient accuracy to avoid changing the unwrapped result.
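As an illustration, the following sketch computes an edge-quality map for the x-direction and converts it to integer costs. The functional forms of the three weights here are plausible assumptions consistent with the verbal definitions above, not the exact published ROMEO formulas.

```python
import numpy as np

def wrap(phase):
    """Wrapping operator Omega: map phase (differences) into (-pi, pi]."""
    return np.angle(np.exp(1j * phase))

def cost_map_x(phase_te1, phase_te2, mag, te1=5.0, te2=10.0):
    """Integer edge costs between x-neighbors from three assumed weight forms."""
    # Spatial phase coherence: small wrapped phase difference -> weight near 1.
    w_spat = 1.0 - np.abs(wrap(phase_te1[1:] - phase_te1[:-1])) / np.pi

    # Temporal phase coherence: how well linear scaling by TE2/TE1 predicts TE2.
    resid = wrap(phase_te2 - phase_te1 * (te2 / te1))
    w_temp = 1.0 - np.abs(wrap(resid[1:] - resid[:-1])) / np.pi

    # Magnitude coherence: similar magnitudes in both voxels -> weight near 1.
    m1, m2 = mag[1:], mag[:-1]
    w_mag = np.minimum(m1, m2) / np.maximum(np.maximum(m1, m2), 1e-12)

    quality = w_spat * w_temp * w_mag
    return np.maximum(np.round(255 * (1.0 - quality)), 1).astype(np.uint8)

rng = np.random.default_rng(0)
p1 = rng.uniform(-np.pi, np.pi, size=(8, 8, 8))
cost = cost_map_x(p1, wrap(2 * p1), rng.uniform(0.5, 1.0, size=(8, 8, 8)))
print(cost.shape)  # (7, 8, 8): one cost per x-edge
```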
Cost values derived from the quality map and the corresponding voxel edge locations along the three different axes are passed into the bucket priority queue. The priority queue initially contains the six cost values surrounding the seed voxel (in directions −x, x, −y, y, −z, z). The smallest value in the queue is identified together with the corresponding edge connecting the seed voxel (voxel 1) with its neighbor (voxel 2). If there is a phase jump > π between these voxels, 2πn is subtracted from the phase of voxel 2 according to

θ2,t = φ2,t − 2πn = φ2,t − 2π · round((φ2,t − θ1,t) / (2π)),

where φ2,t is the wrapped phase measured in voxel 2, θ1,t is the phase in voxel 1, and θ2,t is the unwrapped phase in voxel 2, all at a given time point t. Voxel 2 is subsequently marked as having been visited. New values are added to the queue, including the connections between voxel 2 and all of its neighbors not yet visited by the algorithm. When a new edge is drawn from the queue, a check is performed to see whether the voxels connected by the edge have both been visited: if they have, this edge is removed from the queue. The search for the minimal cost value and the unwrapping process are repeated iteratively until all voxels have been visited (Supporting Information Figure S2).

By default, ROMEO calculates weights only for a single 3D volume in multi-echo or multi-time-point data, a template phase volume, and uses the unwrapped result from this template, θi,t, to unwrap the neighboring volumes, θi,t±1, assuming an approximately linear phase evolution in time:

θi,t±1 = φi,t±1 − 2π · round((φi,t±1 − θi,t · TEt±1/TEt) / (2π)).   (4)

This accelerates the unwrapping process substantially by avoiding the recalculation of the weights for each single volume and improves the stability of the unwrapping results over the echoes or time points. By default, t = 2 is specified as the template phase volume, because t = 1 tends to be more affected by flow effects (in multi-echo GRE) or is acquired before the longitudinal magnetization reaches a steady state (in EPI time series). The template phase volume can be changed if necessary. In specific cases, when large motion occurs between the time points or the assumption of linear phase evolution is not fulfilled, individual phase unwrapping can be applied, with the calculation of weights and spatial unwrapping for each volume.
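The following is a minimal sketch of the path-following loop on a 1D chain of voxels, using Python's heapq in place of the bucket priority queue (which changes the complexity but not the behavior); the unwrap step is exactly the 2πn subtraction above.

```python
import heapq
import numpy as np

def unwrap_path(phase, cost):
    """Unwrap a 1D wrapped phase along a minimum-cost path.

    cost[i] is the integer cost of the edge between voxels i and i + 1;
    heapq stands in for ROMEO's bucket priority queue.
    """
    theta = phase.astype(float).copy()
    visited = np.zeros(len(phase), dtype=bool)
    seed = int(np.argmin(cost))              # start at the best-quality edge
    visited[seed] = True
    queue = [(cost[seed], seed, seed + 1)]
    if seed > 0:
        queue.append((cost[seed - 1], seed, seed - 1))
    heapq.heapify(queue)

    while queue:
        _, i, j = heapq.heappop(queue)       # cheapest edge: unwrap j from i
        if visited[j]:
            continue
        theta[j] -= 2 * np.pi * np.round((theta[j] - theta[i]) / (2 * np.pi))
        visited[j] = True
        for k in (j - 1, j + 1):             # enqueue edges to unvisited neighbors
            if 0 <= k < len(phase) and not visited[k]:
                heapq.heappush(queue, (cost[min(j, k)], j, k))
    return theta

true = np.cumsum(np.full(60, 0.5))           # smoothly increasing true phase
wrapped = np.angle(np.exp(1j * true))
cost = np.ones(59, dtype=int)
print(np.allclose(np.diff(unwrap_path(wrapped, cost)), 0.5))
```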
Data sets

To provide a ground-truth phase, θ, for a complicated pattern of wraps in φ, a complex topography was simulated as in Robinson et al. 22 Measured phase maps (with no ground-truth θ) were also examined: in vivo human head MRI acquisitions at 3T and 7T (Siemens MAGNETOM; Siemens Healthineers, Erlangen, Germany) and ex vivo rat head images acquired at 9.4T (Bruker BioSpec 94/20 USR; Bruker, Ettlingen, Germany), with the sequence parameters listed in Table 1. Human measurements were approved by the Ethics Committee of the Medical University of Vienna, and all participants provided written, informed consent. All human data sets were acquired from healthy volunteers except for five 7T EPI time series (57 volumes) from a previous study, 23 which were collected from 4 patients with brain tumors and 1 patient with a developmental venous anomaly.

Data analysis

All of the data were acquired with multichannel coil arrays. Separate channels were combined using ASPIRE 24 for multi-echo GRE data and using the coil combination described in Dymerska et al 25 for the single-echo EPI data. Combined phase images were unwrapped using compiled PRELUDE from the FSL toolbox (version 5.0.11; https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSL) written in C++, 8 BEST PATH 9 programmed in C, and ROMEO written in the Julia programming language. Their performance was compared with respect to unwrapping accuracy and computational speed. All of the calculations were performed on a PC with an Intel Xeon W-2125 4.0-GHz processor, 64 GB RAM, and an Ubuntu Linux 16.04 operating system.

FIGURE 1 Example of maps of the three ROMEO weights for the x-direction (left-right): magnitude and phase images at the first two TEs, Spatial Phase Coherence, Temporal Phase Coherence, and Magnitude Coherence weights. Multiplication of these weights defines the final quality map for the x-direction. Two axial slices are shown from data acquired at 7T with the 3D gradient-echo (GRE) sequence parameters found in Table 1. These maps were calculated for the first echo using the magnitude image acquired at TE1 and the phase images acquired at TE1 and TE2. Weights and quality values close to 1 correspond to good voxel connections; values close to 0 are weakly connected and are unwrapped last. Analogous weights are calculated for the y and z directions.

All ROMEO results were obtained using magnitude and phase images as well as template unwrapping, as described in section 2.1. Images were additionally masked to obtain PRELUDE results in a feasible time. Unmasked and masked measured data were analyzed for BEST PATH and ROMEO. Masking was performed with the FSL Brain Extraction Tool 26 for in vivo data other than the 3T EPI, for which the SPM Segment Toolbox (SPM12; https://www.fil.ion.ucl.ac.uk/spm/) was used, as the Brain Extraction Tool produced masks that did not match the image well. Ex vivo rat head images were masked using magnitude image thresholding.

For the simulated data, quantitative comparison was performed by calculating the percentage of unwrapped values that differed from the ground truth. Although for in vivo measurements there is inherently no ground truth available, a reliable estimate of the true phase, or temporal reference image, can be obtained for multi-echo data if the first TE is short and the TE difference between consecutive echoes is small, ensuring small and approximately linear phase evolution between the echoes. This was found to be the case for the multi-echo GRE data at 3T, 7T, and 9.4T. For the first TE, the temporal reference image was merged from the results of the three methods analyzed here: voxels were only included if their unwrapped phase values were the same for all three unwrapping methods. This meant that for the 3T, 7T, and 9.4T GRE data, respectively, 99%, 98%, and 89% of the voxels within the brain mask were included in the temporal reference image. The temporal reference for subsequent echoes was calculated by assuming approximately linear phase evolution over time using Equation 4. Phase-unwrapping errors were calculated as the difference between the unwrapped phase obtained using a given method at a given TE and the temporal reference at the same TE. Histograms showing the number of voxels with 2πn errors for all three methods analyzed were plotted (see Figure 7). The percentage of voxels within the mask with unwrapped phase values different from the temporal reference is listed in Table 2. For EPI at 3T and 7T, calculation of a temporal reference as described previously was not possible due to the inherently long minimum TEs of the EPI acquisitions.
These images were assessed visually using MRIcro and FSLeyes. For the single-echo 7T EPI time-series data, temporal mean and SD images were calculated throughout the brain mask to investigate regions where unwrapping errors differed at various time points (Figure 5). The SD of the estimated field map (ΔB0(TE) = θ(TE)/(2π · TE)) was calculated for the 9.4T GRE data set with 12 echoes (Figure 6).

Two slices with visible differences among PRELUDE, BEST PATH, and ROMEO are shown in Figure 3. Unwrapping errors occurred in all methods close to the sinuses (red arrows), where BEST PATH shows the largest errors. In both the GRE and EPI phase, an open-ended fringe line is clearly visible in the wrapped phase close to the left ear canal, where the magnitude signal approaches the noise level (blue arrows). In ROMEO, the extent of the unwrapping error in this region was limited to a few voxels. In PRELUDE and BEST PATH, the size of the affected region increased with TE (not shown). A residual wrap also occurred in the vein of Galen in the BEST PATH result (see Figure 3 GRE, yellow arrows). Unwrapping differences among the three methods were also observed in a small number of voxels in other vessels (see Figure 3 EPI, yellow arrows). The regions affected by residual wraps were smallest in ROMEO.

Examples of phase-unwrapping performance for PRELUDE, BEST PATH, and ROMEO at 7T are presented in Figure 4 for multi-echo GRE and in Figure 5 for a single-echo EPI time series. A central axial slice from the GRE data is shown at four selected TEs in Figure 4, starting with a relatively short TE (TE6 = 15 ms) and ending with a very long one (TE30 = 75 ms), where the signal in a large part of the image has decayed into noise. At TE6 = 15 ms, small differences were observed at the brain boundaries and in the sagittal sinus (red arrows) between the temporal reference and the PRELUDE or BEST PATH results. At TE15 = 30 ms, slightly larger regions with unwrapping errors were observed in the PRELUDE and BEST PATH results, such as close to the auditory canals (red arrows). At later echoes, such as TE24 = 60 ms and TE30 = 75 ms, large patches of tissue were affected by phase-unwrapping errors in both the PRELUDE and BEST PATH results, with larger error regions observed at longer TEs.

TABLE 1 Data sets used for the assessment of ROMEO unwrapping accuracy and computational speed.

No difference between the temporal reference and the ROMEO unwrapped phase is visible in this slice at TE6 and TE15, but differences in a few voxels are observed at TE24 and TE30 (blue arrows). Regions with a very low SNR, approaching the noise floor in the magnitude image, are noisy in both the temporal reference and the ROMEO unwrapped phase; otherwise, both show a coherent phase topography. Small differences between the two results are more apparent in the quantitative comparison of the methods in Figure 7 and Table 2.

PRELUDE and BEST PATH results for the 7T EPI time series (57 volumes) were affected by global 2πn phase jumps between consecutive time points, numbering 0, 19, 0, 14, and 32 jumps, respectively, for patients 1 to 5 for PRELUDE and 17, 10, 18, 0, and 26 for BEST PATH. No phase jumps between time points were present in the ROMEO results. Therefore, global phase jumps in the PRELUDE and BEST PATH results were removed before the calculation of the temporal mean and SD of the unwrapped phase, which are shown in Figure 5.
In patients 1, 2, and 3, extensive unwrapping errors occurred using PRELUDE and BEST PATH close to the sinuses, marked by the arrows. These errors changed in size at different time points, which contributed to the high values of the phase SD in these regions. The ROMEO unwrapping errors were substantially smaller and stable over time points, which is reflected in low SD values. In patients 4 and 5, the unwrapping errors occurred close to pathologies and were, again, less extensive for ROMEO than for PRELUDE or BEST PATH. The effect that these EPI unwrapping errors have on dynamic distortion correction is illustrated in Supporting Information Figure S3.

Figure 6 shows high-resolution images of an ex vivo rat brain, acquired at 9.4T using a multi-echo GRE sequence. The ROMEO algorithm gave the most accurate phase-unwrapping results, agreeing well with the temporal reference and with very good stability over the echoes, as highlighted by the ΔB0 SD. Regions in the PRELUDE and BEST PATH results marked by red arrows were affected by residual phase errors, which changed for different echoes (see the corresponding ΔB0 SD maps). Blue arrows point to the superior sagittal sinus, which had a phase offset with respect to the surrounding tissue in both the temporal reference and the ROMEO results; this offset was consistent for all echoes, as represented by the low ΔB0 SD values. The phase images unwrapped by PRELUDE and BEST PATH show similar offsets in the sagittal sinus, but only at some of the shorter TEs (five echoes in PRELUDE and seven in BEST PATH), which is reflected by high ΔB0 SD values in this large vein.

FIGURE 4 Unwrapping results for the 7T GRE data at four selected echoes (of 31). At shorter echoes (TE6 = 15 ms and TE15 = 30 ms), a few unwrapping differences between the PRELUDE or BEST PATH results and the temporal reference phase occur at the brain edges, sagittal sinus, or ear canals (red arrows). At longer echoes (TE24 = 60 ms and TE30 = 75 ms), large patches with unwrapping errors are visible in the PRELUDE and BEST PATH results. There is no difference between the temporal reference phase and the ROMEO result at TE6 and TE15, and only a few voxels (marked by blue arrows) differ at longer echoes.

Histograms showing the number of voxels with 2πn phase errors (n is an integer) in the unwrapped GRE results at 3T, 7T, and 9.4T, for the middle echo, the last echo, and over all echoes, are presented in Figure 7. The PRELUDE and BEST PATH results show similar error spectra, which increase in amplitude and become broader at longer TEs. The number of erroneous voxels in ROMEO is substantially smaller than for the other unwrapping methods. This result is also highlighted in Table 2, where the percentage of erroneous voxels within the mask is listed for all methods and three selected echoes. For all methods, the number of erroneous voxels increased with TE. For the ROMEO results, this number was below 1% at all field strengths and all echoes. The PRELUDE and BEST PATH errors affected over 20% of the voxels at the longest echoes at 7T and 9.4T. All of the wrapped phase images described were masked to obtain PRELUDE results in feasible times. For BEST PATH and ROMEO, no mask was required; therefore, unwrapping with no mask was also assessed. This yielded results identical, within the masks, to those of this analysis. There was, however, a difference in computation time between executions with and without a mask, as described subsequently.
Comparison of computational speed among PRELUDE, BEST PATH, and ROMEO

The computation times of ROMEO in comparison with PRELUDE and BEST PATH are summarized in Table 3. PRELUDE unwrapping took from several minutes (14 minutes 36 seconds for 3T GRE) to several hours (128 hours 8 minutes 59 seconds for 7T GRE), and the unwrapping process failed to finish within 38 days for the simulated data set. BEST PATH took less than a minute for all masked data, and ROMEO took at most 20 seconds. ROMEO was generally faster than BEST PATH, with the exception of the 3T data sets, for which BEST PATH was faster by, at most, 4 seconds (see Table 3 for more details). Using BEST PATH and ROMEO, data sets that were not masked took longer to unwrap than masked images. This difference was less prominent for ROMEO, with the same unwrapping times for masked and unmasked 7T EPI data sets (6 seconds). All three methods were memory-efficient, with maximum RAM use below 5 GB for data sets (magnitude and phase images) with sizes below 800 MB.

DISCUSSION

We have presented a new, rapid, and robust phase-unwrapping technique, ROMEO, which is more reliable and faster than the two exact phase-unwrapping algorithms most commonly used in MRI: PRELUDE and BEST PATH. Because the MRI signal is complex, magnitude information is available for every MRI scan, even if a study focuses exclusively on phase imaging. The ROMEO method includes information about the spatial coherence of the magnitude signal in the unwrapping operation. Combining this information with information on the phase's spatial and temporal coherence creates a refined quality map that guides the unwrapping process through 3D data, starting with the most reliable voxels. This improves the unwrapping accuracy over BEST PATH, which uses a quality map based on only a single weight calculated from the second difference of the phase between the six nearest neighbors and the 20 diagonal neighbors of a given voxel. Moreover, ROMEO uses "template-based" unwrapping with respect to a selected template volume when the data have a fourth dimension (eg, echoes or time points). This allows ROMEO to avoid introducing 2πn phase jumps between these echoes or time points and speeds up the unwrapping operation, as the weights and quality map are calculated only for one 3D template volume. Template unwrapping works accurately if the phase is proportional to TE (ie, linear for multi-echo data, constant for single-echo time-series data) and no residual phase offsets are present (ie, θ ≈ 0 at TE = 0). We offer ROMEO version 3.1 (see Data Availability Statement) with the possibility to remove residual phase offsets using the MCPC-3D-S method. 24 If the phase is nonlinear with time (eg, due to large motion), ROMEO offers the option of unwrapping each volume individually.

The phase-unwrapping problem exists because it is not possible to measure the ground-truth phase, which means that creating a reference image and performing a quantitative comparison of different phase-unwrapping algorithms in vivo is challenging. The temporal reference was calculated under the assumption that the phase evolves approximately linearly over time, which is, for the purpose of assessing wraps, a reasonable approximation of the ground truth. It is only possible to calculate such a temporal reference from multi-echo acquisitions with a short initial TE and small echo spacings. The first-TE phase must be either free of wraps or unwrapped without errors by all of the methods under evaluation.
In addition to the thorough qualitative comparison presented in Figures 2-6, we have also provided a quantitative analysis of the unwrapping errors for all three phase-unwrapping methods considered here (see Figure 7 and Table 2). We calculated, for each method, the percentage of voxels with values different from the ground truth in simulation or from the temporal reference in measured data. The ROMEO method uses template unwrapping, a type of temporal unwrapping, which yielded results that agreed well with the temporal reference.

The path-following methods, BEST PATH and ROMEO, were much faster than PRELUDE, a region-growing method. The improved speed of ROMEO with respect to BEST PATH arises from template unwrapping as well as from efficient handling of values in the queue of voxels to be considered. The BEST PATH method uses the Kruskal algorithm to calculate the minimum spanning tree, 27 using a heap as the priority queue, which has a runtime that depends on the number of voxel insertions into the queue, m, according to O(m log[m]). The ROMEO method uses the Prim-Jarník algorithm 21 with an integer representation in a bucket priority queue, the runtime of which scales with O(m). Some of the speed differences may also come from the fact that the two methods were implemented in different programming languages (BEST PATH in C, ROMEO in Julia). The ROMEO method requires initialization of the Julia runtime before unwrapping, which contributes to the fact that ROMEO unwrapping was never faster than 6 seconds for the data presented here.

FIGURE 5 Unwrapping results for a 7T EPI time series with 57 volumes acquired in 4 patients with brain tumors and 1 patient (patient 5) with a developmental venous anomaly. The temporal mean and SD of the unwrapped phase are shown for all patients. The SD maps highlight residual phase wraps, which change at different time points. Red arrows highlight the largest errors. The ROMEO method outperformed PRELUDE and BEST PATH, yielding both fewer residually wrapped voxels and less temporal variation in the unwrapped phase over time points.

ROMEO took only a few seconds, even for very challenging examples such as the 7T GRE images (9 seconds for masked images), whereas PRELUDE delivered results after about 128 hours and BEST PATH took 48 seconds. Although ROMEO was usually several seconds faster than BEST PATH (except in small 3T data sets), this is less relevant than the unwrapping accuracy. The ROMEO method showed fewer residual phase wraps than BEST PATH and PRELUDE in all of the analyzed cases. Additionally, ROMEO demonstrated superior phase-unwrapping stability over time points or echoes, as highlighted by the phase or field-map SD results. The PRELUDE and BEST PATH methods often showed a different distribution of residual wraps in problematic areas (eg, close to open-ended fringe lines) at different time points, rendering large areas of the phase images unusable and requiring post hoc global 2πn jump correction between adjacent volumes.

The unwrapping accuracy was independent of masking for BEST PATH and ROMEO, which highlights the redundancy of masking for these path-following methods and allows time to be saved that is normally spent on the often-fraught problem of mask generation. The PRELUDE algorithm only generated results when a mask was provided, but even then, calculation times were excessively long. As shown using simulated data, not masking PRELUDE inputs led to the algorithm failing to yield results even after many days.
The quality maps calculated in ROMEO can be combined and thresholded to generate an object mask. This could be useful in applications requiring a mask, such as QSM, particularly for inhomogeneous images and non-brain regions or phantoms, where commonly used methods such as the Brain Extraction Tool do not perform well. ROMEO is extremely flexible, as it has the option to output the individual weights and the quality map (both combined over x, y, and z) as well as a mask.

There is substantial interest in using EPI sequences for phase imaging and QSM. 28-32 Phase images from EPI acquisitions were generally more challenging to unwrap than those from GRE scans, because EPI has a lower SNR than GRE and is affected by other effects such as distortions in the phase-encoding direction or stronger eddy currents. Of the three methods tested, ROMEO proved to be the most accurate and robust unwrapping algorithm for single-echo EPI acquisitions.

Three weights were included in the ROMEO implementation discussed here. We offer the source code in the Julia programming language, which allows users to experiment with alternative weights for atypical MRI acquisitions or phase data sets acquired using other modalities such as optical or satellite radar interferometry. We expect ROMEO to find applications in MRI phase imaging and QSM, especially in challenging cases such as at high fields, at long TEs, in highly accelerated data sets with low SNR, and close to air spaces or implants. The speed of ROMEO makes it feasible to use spatial phase unwrapping in real time on the MRI reconstruction computer, which could benefit large studies with hundreds or thousands of phase volumes, including functional MRI studies in which phase information can be used to correct distortions 25,33 and provide quantitative information about changes in blood susceptibility in functional QSM. 30-32

FIGURE 7 Histograms of phase errors in the PRELUDE, BEST PATH, and ROMEO GRE results at all field strengths for the central and last TEs, and summed over all echoes (excluding TE1). Voxels with 2πn (where n is an integer) phase differences from the temporal reference phase were counted as erroneous (see section 2.3). The number of voxels is shown on a logarithmic scale.

SUPPORTING INFORMATION

Additional Supporting Information may be found online in the Supporting Information section.

FIGURE S1 A comparison of the path-based method (ROMEO [rapid opensource minimum spanning tree algorithm]) and Laplacian unwrapping. The path-based method restores the simulated ground-truth phase from the wrapped phase, yielding exactly the true phase value in every voxel. The Laplacian method removes wraps but introduces phase offsets and background phase variations (windowed differences under the histogram show background phase variation and edge effects).

FIGURE S2 Determination of the order in which voxels are unwrapped, illustrated for a 4 × 4 image. Unwrapping proceeds from the gray seed voxel in the order indicated by the blue number on each arrow, following the order of the quality values (in black) of the edges of the voxels that have already been unwrapped.

FIGURE S3 The effect of unwrapping errors on field maps and the distortion correction of EPI.
Errors in unwrapping EPI phase data yield erroneous field map values (at yellow arrows) and lead to corruption of the magnitude (red arrows).

TABLE S1 Percentage of erroneous voxels for the BEST PATH and ROMEO unwrapping results for the complex topography with no noise and 10% noise at three TEs (PRELUDE did not complete). Note: voxels with 2πn phase differences from the ground-truth phase (where n is an integer) were counted as erroneous.
An Overview of the Multi-Band and the Generalized BCS Equations-Based Approaches to Deal with Hetero-Structured Superconductors

We trace the conceptual basis of the Multi-Band Approach (MBA) and recall the reasons for its wide following for composite superconductors (SCs). Attention is then drawn to a feature that MBA ignores: the possibility that electrons in such an SC may also be bound via simultaneous exchanges of quanta with more than one ion-species, a lacuna which is addressed by the Generalized BCS Equations (GBCSEs). Based on several papers, we give a concise account of how this approach: 1) despite employing a single band, meets the criteria satisfied by MBA because a) GBCSEs are derived from a temperature-incorporated Bethe-Salpeter Equation the kernel of which is taken to be a "superpropagator" for a composite SC, each ion-species of which is distinguished by its own Debye temperature and interaction parameter, and b) the band overlapping the Fermi surface is allowed to be of variable width; the GBCSEs so obtained reduce to the usual equations for the Tc and Δ of an elemental SC in the limit superpropagator → 1-phonon propagator; 2) accommodates moving Cooper pairs and thereby extends the scope of the original BCS theory, which restricts the Hamiltonian at the outset to terms that correspond to pairs having zero centre-of-mass momentum; one can now derive an equation for the critical current density (j0) of a composite SC at T = 0 in terms of the Debye temperatures of its ions and their interaction parameters, parameters that also determine its Tc and Δs; 3) transforms the problem of optimizing the j0 of a composite SC, and hence its Tc, into a problem of chemical engineering; 4) provides a common canopy for most composite SCs, including those that are usually regarded as outside the purview of the BCS theory and have therefore been called "exceptional", e.g., the heavy-fermion SCs; 5) incorporates s±-wave superconductivity as an in-built feature and can therefore deal with the iron-based SCs; and 6) leads to presumably verifiable predictions.

Introduction

We trace in Section 2 the backdrop of the Multi-Band Approach (MBA) for hetero-structured, multi-gapped superconductors (SCs), based on numerous papers, for the gist of which [1] [2] [3] suffice. Gleaned from [1], the reasons for its wide adoption are then summarized. In Section 3, based on [4] [5] [6] [7] and [8], an account is given of the Generalized BCS Equations (GBCSEs)-based approach (CA henceforth, because it complements MBA), which has also been valuably employed to deal with such SCs. The last section is devoted to a discussion of the salient distinguishing features of the two approaches and to the conclusion.

MBA

At the root of MBA is the work of Suhl et al.
[2], who dealt with the superconductivity of transition elements, for which the occupation of the 4s orbitals begins prior to the complete filling of the 3d orbitals, leading to a division of the valence electrons between two bands. Pairing can therefore also be caused by cross-band scattering. Because the d-band has more vacant levels than the s-band, it makes a large contribution to the total density of states N(0). Two gaps and, in general, two Tcs arise in this approach because the BCS interaction parameter λ ≡ [N(0)V] is now determined not via a single interaction energy "V", but via a quadratic equation involving three such energies: Vs and Vd due to scattering in the two bands individually, and Vsd due to cross-band scattering. Since in this model the equation employed to determine Tc, for each value of λ, is the familiar BCS equation for elemental SCs derived in the one-band, weak-coupling (λ < 0.5) theory, it cannot per se explain the occurrence of high Tcs. For this reason, the multi-band concept is supplemented by the well-known Migdal-Eliashberg-McMillan approach [3], which allows λ to be greater than even unity because it is based on an integral equation the expansion parameter of which is not λ, but (me/M), where me is the mass of an electron and M that of an ion. MBA has evolved around these basic ideas because anisotropic SCs necessitate that [1]: 1) the BCS assumption of E_F ≫ kθ (E_F = Fermi energy, k = Boltzmann constant, θ = Debye temperature) be abandoned; 2) different locations in k-space be characterized by distinct pairing strengths and order parameters (i.e., gaps); and 3) the assumption that the Fermi surface is isotropic/spherical be dispensed with. Indeed, numerous SCs for which MBA has found a useful application have been listed in [1].

CA: Physical Basis [4]

A striking feature of all SCs that have Tcs greater than that of Nb (≈9 K) is that they are multi-component materials, which naturally suggests that Cooper pairs (CPs) in them may also be bound via simultaneous exchanges of phonons with more than one species of ions. It has been shown [4] that the BCS equation for the Tc of an elemental SC can also be obtained via a Bethe-Salpeter equation (BSE) with a kernel corresponding to the one-phonon exchange mechanism (1PEM) in the ladder approximation. The first diagram in this series has one rung, the second two rungs, and so on. If the number of rungs between any two space-time points in each of these diagrams is doubled, then we have the two-phonon exchange mechanism (2PEM) in operation. Similarly, depending on the composition of the SC, CPs may also be bound via a three-phonon exchange mechanism (3PEM). It hence follows that in a composite SC, CPs may exist with different values of the binding energy (2|W|). Since the inequalities |W1| < |W2| < |W3| among the binding energies in the 1-, 2-, and 3-phonon scenarios must hold, and since |W| = Δ [4], we are naturally led to an explanation of why multi-component SCs are characterized by multiple gaps.
GBCSEs Incorporating the Chemical Potential in the 2PEM Scenario

Employing a superpropagator, i.e., the two-phonon exchange kernel, in a BSE, the following E_F-incorporated equations, referred to here as (1) and (2), have been derived for |W20| (to be identified with Δ2 > Δ1) and Tc, where Δ1 and Δ2 are any two gap-values of an SC that may also be characterized by additional Δ-values [4] [5]. In these equations, the chemical potential μ is used interchangeably with E_F; θ1 and θ2 > θ1 are the Debye temperatures of the ion-species that cause pairing, and λ1 and λ2 are their interaction parameters; no distinction is made between the values of μ and the λs at T = 0 and T = Tc; and the operator Re ensures that the integrals yield real values even when μ < kθ2. Note that when λ2 = 0, λ1 = λ, θ1 = θ, |W20| = |W| and μ ≫ kθ, (2) becomes identical with the BCS equation for the Tc of an elemental SC, and (1) reduces to an equation for |W| that parallels the BCS equation for Δ.
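For orientation, the elemental-SC (1PEM, μ ≫ kθ) limit referred to above corresponds to the textbook weak-coupling BCS forms, sketched here in standard notation; these are not the E_F-incorporated equations of [4] [5] themselves:

```latex
% Weak-coupling BCS equations for an elemental SC (the 1PEM limit):
1 = \lambda \int_{0}^{k\theta} \frac{\tanh\!\left(\xi / 2 k T_c\right)}{\xi}\, d\xi
    \quad\Longrightarrow\quad k T_c \simeq 1.13\, k\theta\, e^{-1/\lambda},
\qquad
1 = \lambda \int_{0}^{k\theta} \frac{d\xi}{\sqrt{\xi^{2} + \Delta^{2}}}
    \quad\Longrightarrow\quad \Delta \simeq 2\, k\theta\, e^{-1/\lambda}.
```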
2) Ba0.6K0.4Fe2As2 (BaAs) [4] [5]: We obtain θ_Ba = 124.6 K, θ_Fe = 399.4 K and θ_As = 148.6 K from θ(BaAs) = 274 K. From among the multitude of empirical gap-values that characterize this SC, e.g., ≈ 0, 2.5, 3.3, 3.6, 4, 6, 7, 7.6, 8.5, 9, and 12 meV, we choose 6 and 12 meV as our starting point and take its T_c as 37 K; these are also the values commonly employed in MBA. We now assume that while the smaller gap and the T_c are due to the Ba and Fe ions, the larger gap pertains to the 3PEM scenario (involving also the As ions). This necessitates supplementing (1) and (2) by another equation, which follows from (1) by replacing |W_20| by |W_30| and adding to its LHS a term corresponding to the As ions (λ → λ_As, θ → θ_As); θ_2 = θ_Fe on the RHS remains unchanged because it is greater than both θ_1 (= θ_Ba) and θ_As. By solving three simultaneous equations, viz., (1), (2), and (3), as against the normal practice of appealing only to the T_c- and ∆-values of the SC, we are now led to a multitude of values for the set {μ, λ_Ba, λ_Fe, λ_As, y, j_0}. Appeal to j_0 = 2.5 × 10^7 A/cm^2 then fixes this set as {μ = 14.2 meV, λ_Ba = 0.1155, λ_Fe = 0.3838, λ_As = 0.2196, y = 3.433}. Besides, we are thus led to a quantitative explanation of several empirical features of the SC, such as: E_F/kT_c = 4.45; gap-values other than 6 and 12 meV, e.g., ≈ 0 and 9 meV; (T_c)_max (via 3PEM) as exceeding 50 K; and the "dome-like" structure of its T_c vs. a tuneable variable; and, as predictions, to values of s, n_s, v_0, and the coherence length ξ.

The following remarks are in order. 1) The equations employed above hold for arbitrary values of E_F; the ions responsible for pairing have been distinguished by distinct θ- and λ-values; and the valence band overlapping the undulating Fermi surface has been characterized by locally spherical values, reminiscent of the locally inertial frames employed in the general theory of relativity [9]. We recall that even though none of the elemental SCs has a perfectly spherical Fermi surface [10], such an assumption works for them, barring a few. 2) A salient feature of CA is that it invariably appeals to the ion-species that comprise an SC, whereas the number of bands invoked in MBA for the same SC differs from author to author [5]. Besides, by employing as input the values of any two gaps of an SC, CA goes on to shed light on several others, and puts its ∆s, T_c and j_0 under the same umbrella; these are features not shared by MBA. 3) While (3) identifies the parameters that can enhance the j_0, and hence the T_c [11], of an SC, their optimization in practice is not straightforward because, while y depends on E_F, so do its constituents m* and P_0. Besides, any attempt to increase the value of (γ/v_g), which is also implicitly a function of E_F, is bound to raise the problem of the stability of the SC. Hence, in the quest for tangible clues to raise the T_cs of SCs, we need a comprehensive catalogue that includes, besides their T_c- and ∆-values, the values of θ, j_0, m*, v_0, n_s, γ and v_g.
4) To conclude, with the s±-wave character as an intrinsic feature of it, we have shown that CA transforms the problem of raising T_c into one of chemical engineering, and that it is applicable to a wide variety of SCs, including the Fe-based SCs, without invoking a new state for them, as has been suggested via MBA [12]. Hence there is a need for its greater dissemination. Finally, both of these approaches (without excluding others) need to be followed up, since the cherished goal of room-temperature superconductivity may be reached by appealing to different sets of axioms, as in Euclidean geometry.
3,474.8
2018-05-09T00:00:00.000
[ "Physics" ]
Rational development of transformation in Clostridium thermocellum ATCC 27405 via complete methylome analysis and evasion of native restriction-modification systems

A major barrier to both metabolic engineering and fundamental biological studies is the lack of genetic tools in most microorganisms. One example is Clostridium thermocellum ATCC 27405T, where genetic tools are not available to help validate decades of hypotheses. A significant barrier to DNA transformation is restriction-modification systems, which defend against foreign DNA methylated differently than the host. To determine the active restriction-modification systems in this strain, we performed complete methylome analysis via single-molecule, real-time sequencing to detect 6-methyladenine and 4-methylcytosine, and the rarely used whole-genome bisulfite sequencing to detect 5-methylcytosine. Multiple active systems were identified, and the corresponding DNA methyltransferases were expressed from the Escherichia coli chromosome to mimic the C. thermocellum methylome. Plasmid methylation was experimentally validated, and methylated plasmid was successfully electroporated into C. thermocellum ATCC 27405. This combined approach enabled genetic modification of the C. thermocellum type strain and acts as a blueprint for transformation of other non-model microorganisms. Electronic supplementary material: The online version of this article (10.1007/s10295-019-02218-x) contains supplementary material, which is available to authorized users.

Introduction

Most microbial metabolic engineering for biotechnology research is performed in model organisms, because they are well studied and have a large toolbox to enable genetic modifications [7]. Non-model organisms, on the other hand, are an attractive alternative as potential industrial platforms for bioconversion, because they often possess complex phenotypes that are not currently feasible to engineer into model organisms, such as the ability to grow at extreme pH, extreme robustness/toxicity tolerance, or the ability to catabolize less common substrates such as syngas, methane, cellulose, or lignin [49,52]. However, work with non-model organisms is limited due to a lack of tools and knowledge of the organism. A major barrier to the development of genetic tools is the inability to efficiently transform DNA; to routinely allow for the use of non-model organisms, a systematic process for developing transformation is needed. One of the major barriers to successful transformation of bacteria is native DNA restriction-modification (RM) systems, which act as a bacterial immune system to cut DNA that is methylated differently than the host's [2]. RM systems are classified into four types. In Types I, II, and III, a restriction enzyme typically cuts DNA that is unmethylated at a specific recognition sequence, and a corresponding DNA methyltransferase adds a methyl group to the host's DNA to protect these same sequences from restriction enzyme activity [2]. Type I systems are comprised of three subunits: a DNA methyltransferase, a DNA recognition subunit, and a restriction enzyme. This type of system recognizes two motifs of 3-4 bases separated by any 5-8 bases [25], such as the EcoKI system that targets AACNNNNNNGTGC (N is any base), and motifs are methylated at the N-6 position of one adenine per DNA strand to form N6-methyladenine (m6A). Type II and III systems are typically comprised of a methyltransferase and a restriction enzyme subunit. Type II systems are widely studied and commonly used as tools in molecular biology.
Their recognition sequences are often palindromic, and they can methylate bases to form m6A, N4-methylcytosine (m4C), or 5-methylcytosine (m5C) [16,31]. Type III systems recognize non-palindromic motifs and typically methylate to form m6A [38]. The last group, Type IV, is comprised only of a restriction enzyme, which recognizes motifs that are methylated differently than in the host [19,38]. To successfully evade most native RM systems, DNA needs to be methylated in the same way as in the target organism, and many studies have shown that overcoming these systems is important for efficient DNA transformation [4,23,33,41,50]. To rationally evade RM systems, the targeted motifs first need to be identified. Microbes protect their chromosomal DNA from restriction via DNA methylation; therefore, methylome analysis can be used to identify these motifs. While not all DNA methyltransferases are associated with restriction enzymes [3], methylome analysis does reveal all potentially active Type I, II, and III RM systems present in an organism. A common approach to microbial methylome analysis uses single-molecule real-time (SMRT) sequencing on the PacBio platform [9,17,24,28], which identifies m6A and m4C motifs based on kinetic delays in nucleotide incorporation when a base is methylated [37]. This approach has been specifically used to help overcome restriction barriers to genetic transformation in a number of organisms [4,34,41]. SMRT sequencing can also identify m5C motifs, but the signal is weaker, so it requires very high coverage of the genome or modification of the methylated base via Tet1 oxidation [5], approaches that are not common practice. An alternate approach to detect m5C and m4C is MethylC-seq for whole-genome bisulfite sequencing (WGBS), in which cytosine (but not m5C) is deaminated to uracil, followed by polymerase chain reaction (PCR) to convert the resulting uracils to thymines. The resulting PCR-amplified DNA can then be sequenced, and any cytosines remaining in the sequence correspond to positions that were methylated. This approach has not been routinely used in bacteria, and we have identified only a few studies that utilized WGBS for bacterial methylome analysis for characterization of RM systems [13,44,51]. Clostridium thermocellum is an anaerobic, thermophilic bacterium that efficiently deconstructs lignocellulosic biomass via large cell-surface-associated enzyme complexes called cellulosomes. The resulting soluble sugars are fermented into products such as organic acids and ethanol [1]. There is particular interest in C. thermocellum due to its potential for biofuel production from lignocellulose via a process called consolidated bioprocessing (CBP), in which cellulolytic enzyme production and fermentation occur in a single reactor [26,27]. The cellulosome was first discovered in the C. thermocellum type strain, ATCC 27405 [15], and a substantial amount of work has been done on this strain to understand carbon metabolism and the genes related to cellulosome production [8,32,35,43,48]. All of these studies examined the wild-type strain using omics tools such as transcriptomics and proteomics, but to date, this strain has not been genetically modified. This is in stark contrast to C. thermocellum DSM 1313, where transformation and genetics are readily available [29] and extensive metabolic engineering has been achieved [18,30,40,45,46].
The lack of genetic tools in strain ATCC 27405 has hindered both fundamental studies and development of this strain for bioengineering. Thus far, one publication has demonstrated transformation of this strain using a custom-made electroporator [47], and additional attempts to transform C. thermocellum ATCC 27405 using commercially available equipment have been unsuccessful. One other publication demonstrated a low transformation rate of 2.5 ± 1.5 colonies per microgram of DNA with a single plasmid, using large cell and DNA volumes, which makes the process hard to utilize in the future [20]. One possible reason for the difference in transformation success between strains DSM 1313 and ATCC 27405 is differences in RM systems, which are understudied in C. thermocellum ATCC 27405. The New England Biolabs Restriction Enzyme Database (REBASE) [39] predicts the RM systems encoded in both strains of C. thermocellum: ATCC 27405 encodes eight restriction systems, while DSM 1313 encodes only five. Of the eight potential RM systems in ATCC 27405, one is a putative Type I system, genes Cthe_1144-1145, though it seems to be missing the restriction enzyme subunit. There are six putative Type II systems encoded: Cthe_1511-1513, Cthe_1638-1639, Cthe_1748-1749, Cthe_2470-2471, Cthe_2319-2320, and Cthe_1748-1749. Cthe_2740 and Cthe_1728-1729 do not have annotated associated restriction enzymes, while the other four do encode putative restriction enzymes. The genome also encodes one Type III RM system, Cthe_0518-0519, with Cthe_0518 being a predicted restriction enzyme. No apparent Type IV systems are encoded in the genome. Previously, one of these RM systems was shown to be active in cell extracts of strain ATCC 27405, and extracts exhibited endonuclease activity targeting the motif GATC, similar to the MboI restriction enzyme [14]. Here, we report the methylome of C. thermocellum ATCC 27405, identify and express the active methyltransferases, validate the expression for in vivo methylation, and successfully transform C. thermocellum ATCC 27405.

Strains, plasmids, and growth conditions

Escherichia coli Top10 Δdcm::frt was created by deleting dcm in Top10 (Invitrogen) with the lambda Red recombinase system as previously described [6]. E. coli strains were grown aerobically in LB at 37 °C, and chloramphenicol was added for plasmid selection at a final concentration of 15 µg/mL. C. thermocellum ATCC 27405 was grown in CTFUD medium [40], which is comprised of (per L) 3 g sodium citrate tribasic dihydrate, 1.3 g ammonium sulfate, 1.43 g potassium phosphate monobasic, 1.8 g potassium phosphate dibasic trihydrate, 0.5 g cysteine-HCl, 10.5 g MOPS sodium salt, 6 g glycerol-2-phosphate disodium, 5 g cellobiose, 4.5 g yeast extract, 0.13 g calcium chloride dihydrate, 2.6 g magnesium chloride hexahydrate, 0.0001 g ferrous sulfate heptahydrate, and 0.5 mL 0.2% (w/v) resazurin. The pH is adjusted to 7.0 after addition of MOPS with 45% (w/v) potassium hydroxide. C. thermocellum was grown at 50 or 55 °C, as indicated, in a Coy anaerobic chamber. Thiamphenicol was added to the medium when appropriate at a final concentration of 5 µg/mL. Two plasmids, pNJ020 and pAMG216, were used for transformation of C. thermocellum, each containing the pNW33N origin of replication for C. thermocellum and the cat gene for thiamphenicol selection. Plasmid pNJ020 contains the p15a origin for medium-copy-number replication in E. coli.
Plasmid pAMG216 is derived from pAMG205 with the yeast machinery deleted, and it has a high-copy-number pUC origin of replication for E. coli [11]. Methyltransferases were codon optimized for E. coli and synthesized with the T5Lac promoter by GenScript Biotech Corp (New Jersey, USA), amplified by PCR, and cloned into CRIM integration vectors [12] using Gibson assembly (New England Biolabs, NEB). The native C. thermocellum gene Cthe_0519 was cloned into pAH55 [12]. The bifunctional methyltransferase phi3TI, from Bacillus phage phi3T, was cloned into pAH144. Each plasmid was integrated, using the CRIM system [12], into the λ and HK022 phage attB sites of E. coli Top10 Δdcm::frt, resulting in strain AG2006 (genotype: Top10 Δdcm::frt λ::Cthe_0519 HK022::phi3TI). Complete, annotated plasmid sequences are available in Supplemental File 1.

SMRT sequencing methylome analysis

Genomic DNA from C. thermocellum ATCC 27405 was isolated using the Genomic Tip kit (Qiagen) according to the manufacturer's instructions and sent to Expression Analysis (Durham, NC, USA) for sequencing on a Pacific Biosciences (PacBio) instrument. Single-molecule real-time (SMRT) sequencing was performed using the PacBio RS technology with four SMRT cells. Methylated sequences were determined by Expression Analysis using the SMRT Analysis software [10].

Whole-genome bisulfite sequencing analysis

MethylC-seq libraries were prepared and Illumina sequencing was performed (Genomics & Bioinformatics Core, University of Georgia) using an Illumina NextSeq 500 instrument. For data processing, raw reads were trimmed for adapters and preprocessed to remove low-quality reads using cutadapt 1.9.dev1 [21]. Qualified reads were aligned to the C. thermocellum ATCC 27405 reference genome [42]. Fully unmethylated lambda DNA was used as a control to calculate the non-conversion rate of unmodified cytosines in the sodium bisulfite reaction. Only cytosine sites with a minimum coverage (set as 3) were selected for subsequent analysis. A binomial test coupled with Benjamini-Hochberg correction was used to determine the methylation status of each cytosine. The m5C motifs were identified as previously described in [51].

Determining functionality of the methyltransferases

Plasmids pNJ020 and pAMG216 were transformed into E. coli strains AG2006 and Top10 Δdcm::frt and grown in liquid culture with 0.1 mM IPTG to induce methyltransferase expression. Plasmid pNJ020 was isolated and digested in three separate reactions: with NlaIII; with TseI and HindIII; and with HaeIII and HindIII (New England Biolabs, Ipswich, MA, USA), according to the manufacturer's instructions. To determine the amount of DNA methylated, digested plasmid was separated via agarose gel electrophoresis and quantified using BioRad Gel Imager software.

Transformation of C. thermocellum ATCC 27405

Three 5 mL cultures were inoculated with C. thermocellum ATCC 27405 and grown overnight. Three 500 mL cultures were inoculated with 1% of the overnight cultures and grown at 55 °C. Cultures were harvested at an optical density (OD) of ~1.0, transferred to a 500 mL centrifuge bottle, placed on ice for 30 min, and then centrifuged at 4 °C at 5000×g for 15 min. The supernatant was decanted and the cells were washed with 250 mL cold electroporation buffer (10% glycerol, 250 mM sucrose), which was added without disrupting/resuspending the cell pellet. Cells were spun again, and the wash was repeated two more times.
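To make the WGBS logic just described concrete, the following is a minimal Python sketch, not the study's actual pipeline: the bisulfite conversion rule (unmethylated C reads as T after PCR; m5C survives as C) and the per-cytosine binomial call against the lambda-derived non-conversion rate, with Benjamini-Hochberg correction. All sequences, counts, and the non-conversion rate below are invented for illustration.

    from scipy.stats import binomtest
    from statsmodels.stats.multitest import multipletests

    def bisulfite_convert(seq, methylated):
        """Top-strand conversion: unmethylated C -> U -> T after PCR;
        5-methylcytosine (positions listed in `methylated`) is protected."""
        return "".join("T" if b == "C" and i not in methylated else b
                       for i, b in enumerate(seq))

    def call_methylation(sites, nonconversion, min_cov=3, alpha=0.05):
        """sites: (unconverted C count, coverage) per cytosine. A site is
        called methylated when its unconverted fraction significantly
        exceeds the non-conversion rate (binomial test, BH-corrected)."""
        kept = [(c, n) for c, n in sites if n >= min_cov]
        pvals = [binomtest(c, n, nonconversion, alternative="greater").pvalue
                 for c, n in kept]
        reject, qvals, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
        return list(zip(kept, qvals, reject))

    print(bisulfite_convert("GGCCATGCAGCT", methylated={3}))  # -> GGTCATGTAGTT
    for (c, n), q, meth in call_methylation([(9, 10), (1, 12), (5, 6)], 0.005):
        print(f"{c}/{n} unconverted: q = {q:.2e}, methylated = {meth}")

Remaining cytosines in the converted output mark methylated positions, which is the signal the motif-finding step then aggregates across the genome.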
After the last wash, the electroporation buffer was completely removed, and the cell pellets were resuspended in 200 µL electroporation buffer and transferred to a microcentrifuge tube. Using fresh electrocompetent cells from each batch, 20 µL of cells were transformed with 1 µg of DNA. Square-wave electroporation was performed in a 1 mm electroporation cuvette at 1200 V with a 1.5 ms pulse. Cells were resuspended in 1 mL of CTFUD liquid medium, mixed with molten CTFUD supplemented with 1.5% agar and thiamphenicol, allowed to solidify, and incubated at 50 °C for 3-5 days, when colonies were counted.

Complete methylome analysis of C. thermocellum ATCC 27405

To determine which of the eight RM systems predicted by REBASE are active, methylome analysis was performed. All methylated motifs in C. thermocellum ATCC 27405 were determined through two sequencing techniques: PacBio SMRT sequencing and MethylC-sequencing for WGBS. PacBio SMRT sequencing detected three m6A motifs, and WGBS detected two m5C motifs (Table 1). Based on the type of motif and homology to known enzymes, each motif was tentatively assigned to a DNA methyltransferase (Table 1). Cthe_2470 and Cthe_1511 are both annotated as Dam methyltransferases, and Cthe_1511 is encoded in a putative operon with the MboI-type restriction enzyme, suggesting that these enzymes target GATC. The CNCANNNNNNTTC motif is consistent with Type I RM system motifs (two 3-4-base specific sites separated by 5-8 Ns), and so this is presumably targeted by the only Type I enzyme encoded in the genome, Cthe_1144-1145. The non-palindromic sequence GTCAT is consistent with a Type III system, and so is likely targeted by the only encoded Type III enzyme, Cthe_0519, which is experimentally validated below. Cthe_2320 is annotated as a HaeIII-family restriction endonuclease, which targets GGCC, suggesting that Cthe_2321 methylates GGCC. The only remaining m5C methyltransferase encoded in the genome is Cthe_1749, suggesting that it targets the remaining m5C site, GCWGC. These results are consistent with the predictions from REBASE.

DNA methyltransferases were functionally expressed in E. coli

To methylate plasmid DNA in the same way as C. thermocellum ATCC 27405, we engineered E. coli to mimic the C. thermocellum ATCC 27405 methylome. One of the motifs, GATC, is natively methylated by the Dam methyltransferase in E. coli. Two additional methyltransferases were expressed from the E. coli chromosome. First, Cthe_0519 was integrated into the chromosome to target GTCAT. Next, a previously characterized bifunctional Bacillus phage methyltransferase, phi3TI [22], was added to the chromosome. The Phi3TI methyltransferase is known to methylate both GCWGC and GGCC, the same sites methylated in C. thermocellum by Cthe_1749 and Cthe_2321, so expressing this gene enabled methylation of both sites via the expression of a single gene. Cthe_0519 and phi3TI were added to the chromosome of an E. coli dam+ dcm− strain, so that Dam natively methylates GATC, a methylated motif observed in C. thermocellum, but Dcm does not methylate CCWGG, which was not seen in the C. thermocellum methylome analysis. The last methylated motif, CNCANNNNNNTTC, does not occur on pNJ020, and therefore, the corresponding methyltransferase was not expressed from the E. coli chromosome.

Table 1 note: Methylated bases are in bold; in cases where a T or G is bold, the methylation is on the A or C of the complementary strand, respectively.
m6A motifs were detected by PacBio SMRT sequencing, and m5C motifs were detected by WGBS. "% modified" is the percentage of these motifs in the genome that were detected as methylated; "# of motifs in the genome" is the number of times each motif appears in the genome. The "Predicted methyltransferase" is the most likely C. thermocellum gene responsible for each methylation (N = any base; W = A or T).

The in vivo activity of the Cthe_0519 and Phi3TI methyltransferases was determined by restriction enzyme digestion, to examine the extent of methylation by our heterologously expressed methyltransferases. Many commercially available restriction enzymes are blocked by overlapping DNA methylation. Therefore, a restriction enzyme was chosen for each motif that either partially overlaps the motif of interest or, when possible, targets the exact same motif. If the plasmid is properly methylated, then the enzyme will be blocked by the methylation and no cutting will occur. For instance, restriction enzyme NlaIII cuts unmethylated CATG, and a fraction of these NlaIII sites will overlap with GTCAT (when the sequence GTCATG occurs) and will not be cut (Fig. 1a). For pNJ020, methylation of GTCAT by Cthe_0519 blocks NlaIII cutting between a 393 bp and a 62 bp fragment, resulting in the generation of a 455 bp band instead (Fig. 1a). Complete blockage of this cut site, and therefore complete methylation, was observed for Cthe_0519 (Fig. 1b, Lanes 1 and 2), where complete disappearance of the 393 bp band and generation of the new 455 bp band are observed. For the GGCC motif, restriction enzyme HaeIII targets the same sequence, so we would expect complete blockage of restriction activity if methylation of this sequence occurs in E. coli. We linearized the plasmid with HindIII and digested with HaeIII (Fig. 1). Unmethylated DNA (Lane 3) was completely digested, while Phi3TI methylation mostly blocked HaeIII digestion (Lane 4), suggesting nearly complete methylation of GGCC by Phi3TI. For GCWGC methylation, enzyme TseI targets the same motif, so the plasmid was linearized with HindIII and digested with TseI. Unmethylated DNA (Lane 5) was completely digested, while Phi3TI methylation fully blocked TseI digestion (Lane 6), suggesting complete methylation of GCWGC by Phi3TI.

Methylated DNA allows transformation of C. thermocellum ATCC 27405

To determine whether targeted DNA methylation would allow transformation of C. thermocellum ATCC 27405, we tested transformation efficiency using pNJ020. This plasmid contains 8 GTCAT, 7 GCWGC, 5 GGCC, 5 GATC, and no CNCANNNNNNTTC sites. Plasmid DNA was methylated either by the control strain Top10 Δdcm, which only methylates GATC, or by the strain expressing Cthe_0519 and Phi3TI, which methylates GATC, GTCAT, GGCC (partially), and GCWGC, such that all putative restriction systems should be evaded. The control transformation, in which pNJ020 was methylated only by native E. coli Dam, yielded no colonies, while the fully methylated plasmid yielded an average of 80 colony-forming units (CFU)/µg of plasmid DNA. Transformation of plasmid pAMG216, which contains one Type I motif, was also tested, and an average of 40 CFU/µg was observed (Table 2).

Discussion

Development of genetic systems in non-model microbes is a grand challenge for the study of both fundamental and applied microbiology. Here, we show a rational, systematic process for developing transformation methods in C. thermocellum ATCC 27405 by overcoming the native RM systems.
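The overlap logic of Fig. 1a is straightforward to express in code. The sketch below, run on an invented sequence rather than the actual pNJ020 map, expands the IUPAC degenerate bases used in the motifs above (W = A or T, N = any base) and flags NlaIII sites (CATG) that fall inside a GTCATG context, where GTCAT methylation by Cthe_0519 would block cutting:

    import re

    # IUPAC degenerate-base codes used by the motifs discussed above
    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "W": "[AT]", "N": "[ACGT]"}

    def motif_positions(seq, motif):
        """Start positions of a (possibly degenerate) motif on this strand."""
        pattern = "".join(IUPAC[b] for b in motif)
        return [m.start() for m in re.finditer(f"(?={pattern})", seq)]

    def nlaiii_sites(seq):
        """NlaIII sites (CATG); a site is blocked when preceded by GT,
        i.e., when it sits inside the methylatable GTCATG context."""
        return [(p, seq[max(0, p - 2):p] == "GT")
                for p in motif_positions(seq, "CATG")]

    demo = "AAACATGAAAGTCATGAAAGCAGC"  # illustrative only, not pNJ020
    print("GCWGC sites:", motif_positions(demo, "GCWGC"))
    for pos, blocked in nlaiii_sites(demo):
        print(f"CATG at {pos}: {'blocked' if blocked else 'cut'}")

Run on a real plasmid sequence, the same scan would predict which digest fragments merge when methylation is complete, e.g., the 393 + 62 -> 455 bp band shift described above.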
This was done by first identifying the methylated motifs using high-throughput sequencing techniques, including the rarely used WGBS technique for identifying m5C motifs. Methylome analysis was followed by heterologous expression of targeted methyltransferases from the E. coli chromosome to mimic the C. thermocellum ATCC 27405 methylome. Plasmid DNA was passaged through this E. coli strain to methylate it, and the functionality of the methyltransferases was confirmed through restriction enzyme digestion of the plasmid DNA. Properly methylated plasmid DNA evaded the native RM systems and allowed transformation using commercially available equipment, which opens the door to metabolic engineering and the development of more advanced genetic tools. By enabling reliable transformation, the type strain of C. thermocellum can now be studied through genetic manipulation. The first step to understanding and overcoming the native RM systems in C. thermocellum ATCC 27405 and other organisms is identifying the methylated motifs and the cognate methyltransferases. PacBio SMRT sequencing is the most commonly utilized method for microbial methylome analysis and can accurately identify m6A and m4C motifs. SMRT sequencing revealed three methylated motifs, and the corresponding RM systems were identified using REBASE. While SMRT sequencing is also able to detect m5C motifs, it does not always identify these motifs [37]. For example, the two m5C motifs in C. thermocellum ATCC 27405 were not discovered in the SMRT methylome analysis. Therefore, WGBS is a vital step to reliably determine the full methylome of a bacterial strain. Using WGBS on an Illumina platform, we were readily able to detect m5C motifs with less sequencing coverage. Recently, Oxford Nanopore sequencing has also been shown to detect DNA methylation [36]. As technologies advance, the simplicity of methylome analysis will likely increase. Previous studies have expressed methyltransferases to methylate DNA prior to transformation, but the functionality of these enzymes, beyond examining the impact on transformation efficiency, is rarely tested. Here, we tested the functionality of the expressed methyltransferases through a restriction enzyme digestion assay in which restriction activity is blocked when the plasmid is methylated. While the Cthe_0519 methyltransferase fully methylated the plasmid DNA, the Phi3TI methyltransferase only partially methylated the DNA. By digesting plasmid DNA isolated from the E. coli methylation strain with an enzyme that overlaps the methylated motif of interest, functionality can be easily determined from the percentage of DNA cut/uncut. This approach also unambiguously confirmed that Cthe_0519 methylates GTCAT. An alternative approach to evading RM systems is to use plasmids that lack the targeted sequence. We used this approach for the putative Type I RM system (Cthe_1144-1145), where the motif was avoided by using plasmid DNA (pNJ020) that does not contain the motif CNCANNNNNNTTC. Thus, isolation of pNJ020 out of the E. coli methylation strain results in plasmid DNA that fully mimics the ATCC 27405 methylome. While this approach can be helpful, it is not always possible to avoid or remove the target motifs. For instance, when the motif is short, such as a four-base recognition sequence, it may not be feasible to remove them all. Some motifs may also happen to be in critical parts of the sequence, such as in the origin of replication or in a sequence being used for homologous recombination.
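A trivial sketch of that cut/uncut quantification (the densitometry values are invented for illustration, not measured data):

    def percent_methylated(uncut: float, cut: float) -> float:
        """Estimate the methylated fraction as the share of plasmid protected
        from digestion, from densitometry of the uncut vs. cut bands."""
        return 100.0 * uncut / (uncut + cut)

    print(f"{percent_methylated(uncut=880.0, cut=120.0):.0f}% methylated")  # 88%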
Therefore, expression of methyltransferases in E. coli will likely continue to be an attractive approach for developing transformation tools in new organisms. Interestingly, Cthe_1144-1145 seems not to be part of an active restriction system. It lacks a predicted restriction enzyme, and when transformation of a plasmid containing the corresponding Type I motif (pAMG216) was tested, there was not a substantial difference in transformation efficiency relative to pNJ020, suggesting that this system does, indeed, lack a restriction enzyme. The twofold difference in efficiency can likely be explained by the difference in plasmid size (pNJ020 is 3561 bp, while pAMG216 is 4700 bp): the smaller plasmid has a greater number of plasmid molecules per microgram of DNA and may enter the cell more easily due to its smaller physical size. We have demonstrated reproducible transformation of C. thermocellum strain ATCC 27405 and are now poised to improve these methods to increase frequency. Improved methylation by the Phi3TI methyltransferase will likely increase transformation efficiency. In addition, for reasons unknown, strain ATCC 27405 does not form a cell pellet during centrifugation as well as strain DSM 1313 does. Therefore, identifying growth conditions under which ATCC 27405 forms tight cell pellets would make competent cell preparation simpler and presumably increase competent cell concentration, potentially leading to increased efficiencies. However, all approaches for genetic manipulation in C. thermocellum to date have relied on the use of replicating plasmids, even for gene deletions [29], so the full suite of C. thermocellum genetic tools is now available in strain ATCC 27405, even at the current transformation efficiency. In conclusion, the combination of complete methylome analysis via both PacBio SMRT sequencing and WGBS and validated DNA methyltransferase expression allowed the rational development of transformation methods for C. thermocellum ATCC 27405. We anticipate that the approach for obtaining transformation demonstrated in this work may be applied to many other bacterial strains, especially new, non-model organisms.
5,600
2019-07-24T00:00:00.000
[ "Biology", "Engineering" ]
The Future of Flu: A Review of the Human Challenge Model and Systems Biology for Advancement of Influenza Vaccinology

Objectives: Novel approaches to advance the field of vaccinology must be investigated, and are of particular importance for influenza in order to produce a more effective vaccine. A systematic review of human challenge studies for influenza was performed, with the goal of assessing safety and ethics and determining how these studies have led to therapeutic and vaccine development. A systematic review of systems biology approaches for the study of influenza was also performed, with a focus on how this technology has been utilized for influenza vaccine development. Methods: The PubMed database was searched for influenza human challenge studies, and for systems biology studies that have addressed both influenza infection and the immunological effects of vaccination. Results: Influenza human challenge studies have led to important advancements in therapeutics and influenza immunization, and can be performed safely and ethically if certain criteria are met. Many studies have investigated the use of systems biology for evaluating the immune response to influenza vaccine, and several promising molecular signatures may help advance our understanding of pathogenesis and be used as targets for influenza interventions. Combining these methodologies has the potential to lead to significant advances in the field of influenza vaccinology and therapeutics. Conclusions: Human challenge studies and systems biology approaches are important tools that should be used in concert to advance our understanding of influenza infection and provide targets for novel therapeutics and immunizations.

INTRODUCTION

Although influenza virus was recognized as an important pathogen over a century ago, influenza continues to cause a significant burden of disease. In the United States alone, it is estimated that in the 2017-2018 season there were 959,000 hospitalizations related to influenza illness, and 79,400 deaths (CDC, 2018). Worldwide, the WHO estimates that annual influenza epidemics cause 3-5 million cases of severe disease, with 290,000-650,000 of these severe cases resulting in death (Influenza (Seasonal), 2018). Although annual influenza immunizations are recommended and antivirals are available, both have several limitations. The efficacy of the seasonal influenza vaccine is compromised by several factors: antigenic changes over time (requiring a strain-specific match each year), slow manufacturing processes, egg-adaptation changes in vaccine strains, short duration of protection, lack of cross-reactivity, and poor immunogenicity in certain populations (e.g., the elderly) (Goodwin et al., 2006; Soema et al., 2015; Raymond et al., 2016). Antiviral agents such as neuraminidase inhibitors are most effective if administered early in the disease course, and even then have only a modest impact upon the duration of clinical symptoms (McNicholl and McNicholl, 2001). Furthermore, data are inconclusive regarding the ability of neuraminidase inhibitors to reduce the risk of complications, such as hospitalization or progression to pneumonia (Doll et al., 2017). Novel platforms to understand influenza immunology are essential in order to address the burden of influenza disease and develop a more effective influenza vaccine that does not rely on annual updates. Combining an old modality (human challenge studies) with new technology (systems biology) has the potential to lead to exciting discoveries that can achieve this goal.
There are many reasons why human challenge studies are essential for scientific progress, especially in the influenza field. Human challenge models have several benefits over traditional approaches such as animal and epidemiological models. Although mice and ferrets have been used in influenza challenges, animal models do not translate directly to humans with regard to baseline immunity and subsequent immunological responses. Epidemiological or field studies have also been applied to study influenza vaccine efficacy, such as that of Petrie et al., who followed a cohort of over 1,000 individuals from 2014 to 2015 to assess vaccine effectiveness (Petrie et al., 2017). However, these studies require a large sample size, often require numerous sampling points and several years to acquire sufficient data, and are subject to many conflicting and confounding variables. In contrast, the human challenge model is efficient (relatively few participants are required to power a study), the immunological responses of humans can be studied directly, and the exact timing of infection is known, so that specific time points and measurements are precisely determined. Human challenge studies for influenza are particularly attractive given the current national emphasis on development of a universal influenza vaccine. As outlined by NIAID's strategic plan, a universal flu vaccine would be at least 75% effective, maintain protection for at least 1 year, protect against group I (e.g., H1, H5) and group II (e.g., H3, H7) influenza A virus strains, and be effective for all age groups (Erbelding et al., 2018). The strategic plan also outlined how a human challenge model could offer unique benefits to better understand the concept of imprinting, determine correlates of protection against influenza, and evaluate different universal influenza vaccine candidates. In this review, we will examine influenza human challenge studies that have been conducted and their safety, as well as review the ethical considerations in designing a challenge model. We will also review how the use of systems biology techniques in the context of human influenza challenge studies has great potential to advance our current understanding of the host response to acute influenza infection, and ultimately aid in the development of a universal influenza vaccine. The PubMed database was used to search for relevant clinical trials related to these subjects. We propose that successful integration of the right model (the human challenge study) with systems biology approaches will help us to better understand the immunological mechanisms of influenza infection and the effects of vaccination, which will ultimately aid in the development of an improved vaccine (and perhaps even a universal influenza vaccine) and novel influenza therapeutics.

History and Safety of Influenza Challenge Models

In medical research, there is a long and complex history of human challenge studies, in which healthy participants are intentionally infected in order to study the natural history of a disease or to test experimental therapeutic and preventative measures. Perhaps the most famous challenge study in infectious disease was Edward Jenner's use of cowpox in 1796. Although he was not the first to use intentional infection to protect against disease (historical records show that similar practices were likely occurring in Africa, India, and China long before the eighteenth century (Gross and Sepkowitz, 1998)), he was responsible for publishing and popularizing the idea.
Jenner inoculated the 8-year-old boy James Phipps with cowpox derived from a lesion on the hand of a dairymaid, Sarah Nelms (Riedel, 2005). This successful challenge led to the creation of the first vaccine, with widespread use in Europe by the year 1800, and eventual eradication of smallpox. The first well-described influenza challenge study was published by Smorodintseff et al. in 1937. The authors infected 72 volunteers via inhalation of a human influenza virus that had been maintained through passage in ferrets and mice (Smorodintseff et al., 1937). They discovered that only a small percentage (∼20%) developed disease, which was mild in intensity. The model was deemed safe and was utilized for several decades thereafter to understand immune responses to influenza and to test preventative and therapeutic measures. However, in the early 2000s, influenza human challenge studies came to a halt. This was a direct result of an adverse outcome associated with a human challenge study that was investigating the use of peramivir as a prophylactic agent (Ison et al., 2005). The subject was a 21-year-old, previously healthy individual with no prior cardiac history. Following mild influenza B infection and receipt of peramivir, he had asymptomatic ECG changes (described as new T-wave inversions in leads II, aVF, and V4-6) on day 4 of the study. A repeat ECG on day 15 had returned to baseline, and he had no cardiac symptoms or elevation of the cardiac enzyme CPK. He then traveled to Indonesia for 2 weeks, and became ill with an upper respiratory infection. When he returned to the US, echocardiography was performed 51 days after challenge, given his earlier ECG changes. He had a newly reduced ejection fraction with left ventricular enlargement, but remained asymptomatic. An extensive work-up was unrevealing for infectious etiologies. Over the next 5 months, repeat echocardiograms showed gradual improvement and return to a normal ejection fraction. Despite the subject's return to baseline health, and despite the lack of direct causality linking the myocarditis to the influenza challenge stock, no further influenza challenge studies were conducted in the US for nearly a decade. Internationally, as well, only a few challenge studies for influenza were conducted in this period. After the H1N1 pandemic, Memoli et al. at the NIH revisited the human challenge model with their validation of an A/H1N1 challenge strain (Memoli et al., 2015). This group challenged 46 healthy participants with a virus rescued using reverse genetics (A/CA/04/2009), with the goal of determining the dose needed to produce mild to moderate influenza infection in at least 60% of participants who had baseline HAI titers of ≤1:40. The experimental influenza virus was delivered intranasally, and the participants were kept in isolation for at least 9 days following the challenge. The participants' symptoms were monitored and their immune responses (serum cytokines, HAI and neuraminidase inhibition titers) documented at specific pre- and post-challenge time points. Discharge from isolation occurred only after the participant had two negative nasal washes on consecutive days. All of the participants demonstrated clinical symptoms of infection (as intended), and 70% of participants had both viral shedding and symptoms. At the dose of 10^7 TCID50, 85% of the participants who received that dose had a ≥4-fold rise in HAI titer by week 8. No significant adverse effects or complications of the influenza infection occurred.
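The ≥4-fold rise in HAI titer used here is a standard serologic endpoint. As a minimal illustration of how such an endpoint is tallied (the titers below are invented, not data from the Memoli study):

    def seroconversion_rate(pre, post, fold=4):
        """Fraction of participants with >= `fold` rise in HAI titer
        (titers given as reciprocal dilutions, e.g., 40 means 1:40)."""
        converted = sum(1 for a, b in zip(pre, post) if b / a >= fold)
        return converted / len(pre)

    pre = [10, 20, 40, 10, 20]     # baseline titers (illustrative)
    post = [80, 40, 320, 160, 40]  # week-8 titers (illustrative)
    print(f"{100 * seroconversion_rate(pre, post):.0f}% with >=4-fold rise")  # 60%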
Carrat et al. also performed a large and thorough review of 56 different influenza challenge studies, and confirmed that infection from a challenge stock induced only mild disease, with one third of participants having a fever and one fifth of participants developing lower respiratory symptoms (Carrat et al., 2008). This extensive review, in addition to further studies by Memoli's team (Memoli et al., 2016; Hunsberger and Memoli, 2017; Han et al., 2018), demonstrated that influenza challenges can be implemented safely and used in influenza research.

Ethical Considerations for Human Challenge Studies

Human challenge studies raise specific ethical concerns because they place research subjects at risk with no potential for direct benefit. The main considerations that must be weighed include the level of risk to the participants, the overall social value of the study, the absence of good alternative study designs, and informed consent. In human subject research, the risk-to-benefit ratio is usually weighed for the subject before it is weighed for the broader society (Pollard et al., 2012). For example, an individual with a rare and life-threatening disease may choose to participate in a trial that tests a novel therapy, because they personally may benefit from a cure for an otherwise untreatable condition. For the human challenge model, however, the participant gains nothing directly for their health, and the risk-to-benefit ratio leans toward the side of risk. Therefore, an acceptable challenge model must have risks that are reasonable to the participants, because the challenge model can be justified only by the benefits to society and not to the individuals. To mitigate risk to the participants, certain criteria should be evaluated. The study hypothesis must be answerable only by challenging human subjects; if the question and endpoint of the study could be determined by animal models or in vitro techniques, the human challenge model should not be used. Infected subjects should also have therapy available (in the case of influenza, antivirals such as oseltamivir or peramivir) in the event that they develop severe influenza infection during the experimental challenge. Participants should receive compensation for their time and effort; the more controversial topic, however, is how much participants should receive. Is there a certain amount that is too much, and thus coercive? And likewise, is there an amount that is too little given the risk inherent in the challenge study? Several authors, such as Miller and Grady, have offered a framework to evaluate the ethical considerations of an infection-inducing challenge study (Miller and Grady, 2001). To maintain ethical integrity, other specific considerations must be addressed in designing an influenza human challenge study, as outlined in Table 1. If these criteria and considerations are carried out in a thoughtful manner, influenza challenge studies can be implemented safely and ethically. A limitation of the human challenge model is the requirement for healthy volunteers with no significant comorbidities. As stated above, healthy volunteers must be recruited for the study to be ethically sound, in order to reduce the overall risk and complications to the individual.
However, this limits our understanding of influenza pathogenesis and vaccine performance in high-risk populations who have the greatest likelihood of severe influenza complications, such as the very young and the elderly, the immunocompromised, and those with comorbidities. Another inherent limitation of this model is the goal of producing only mild to moderate influenza disease, not severe disease. Again, the efficacy of new therapeutics would be most important in the population suffering severe disease, which cannot be tested in this model for ethical reasons.

Table 1. Considerations for designing an ethical influenza human challenge study*:
Strict inclusion and exclusion criteria to ensure healthy volunteers with minimal comorbidities.
Full review of the proposed study by a third-party ethics committee.
Selection of appropriate clinical and microbiological endpoints to minimize risk to participants.
Transparent informed consent and fair compensation for participants.
Facilities and trained staff that can ensure close and careful monitoring of infected participants.
Proof of decreased infectivity (e.g., undetectable virus by molecular testing) upon discharge to eliminate the possibility of transmission to the general public.
Adequate clinical follow-up and evaluation for adverse events or sequelae of influenza infection.
*Loosely adapted from Darton et al. (2015) and Miller and Grady (2001).

Advancing Influenza Therapeutics With the Influenza Human Challenge Model

The influenza challenge model has proven useful in advancing the development of several current antivirals to the market, while also being useful in terminating the clinical development of others. Two large randomized controlled trials were conducted in 1997 by Hayden et al., which investigated the use of oseltamivir, an oral neuraminidase inhibitor, for both prophylaxis and treatment (Hayden et al., 1999). Healthy participants (N = 117) were inoculated with influenza A/Texas/36/91 (H1N1) and given oseltamivir (at two different doses) or placebo. The results showed that prophylaxis and early treatment with oseltamivir significantly reduced symptoms and had antiviral effects. This trial is the basis for the standard of care practiced today: oseltamivir continues to be recommended for influenza prophylaxis and treatment by clinical treatment guidelines (Uyeki et al., 2019). Peramivir, another neuraminidase inhibitor, was also tested using a similar design. Four randomized, double-blind, placebo-controlled trials were conducted, with 288 healthy volunteers inoculated with either A/Texas/36/91 (H1N1) or B/Yamagata/16/88 (Barroso et al., 2005). At the time of the study, oseltamivir was the only oral neuraminidase inhibitor available. Therefore, the authors sought to determine the tolerability and antiviral efficacy of oral peramivir for treatment and prophylaxis of influenza A and B, using the experimental influenza virus. The results showed that an oral dose of 200 mg twice daily or 400 mg once a day was effective for influenza A, and that prophylaxis with peramivir did not significantly reduce viral shedding. The authors also found relatively low blood peramivir concentrations, and recommended pursuing further study with parenteral dosing. In both the oseltamivir and peramivir studies, the challenge model was essential for establishing the exact timing of infection in order to test the efficacy of the new agents and identify the appropriate dosing schedule. Testing of therapies that did not work was equally important in advancing the field.
For example, a topical interferon inducer was tested in subjects experimentally infected with an influenza A H3N2 strain (Douglas et al., 1975). The investigators found no difference in the frequency or severity of illness, or in quantitative viral shedding, between the placebo group and the group who received the interferon inducer. Another study investigated a new antiviral (agent ICI 130,685), similar to amantadine, for both prophylactic and therapeutic use (Al-Nakib et al., 1986). The authors found that the higher dosage of 200 mg/day of the new agent did have protective efficacy compared to placebo when used for prophylaxis; however, the number of side effects in this group was double that reported for the placebo group. In the therapeutic group, the new agent did show a reduced mean daily clinical score and decreased virus in the nasal washings compared to the placebo arm. Yet these reductions were not statistically significant for viral concentration until day 3, and for symptom score until day 4. Therefore, this product did not move forward toward licensure, since the risks (increased side effects) were greater than the benefits (only minor reductions in viral shedding and late symptom improvement).

Advancing Vaccine Development With the Influenza Challenge Model

Human challenge studies have also been critical for the development of influenza vaccines. The first influenza vaccines were developed and distributed in the US in 1938, and provided to soldiers in World War II (Hannoun, 2013). Thomas Francis, a leader in the field of influenza and the author of the "Doctrine of Original Antigenic Sin," published a small trial in 1940 in which he inoculated active influenza viral cultures into the nares of 11 human subjects (Francis, 1940). He reported that none of the subjects experienced significant signs or symptoms of infection, and one individual showed a rise in influenza antibodies. He proposed that this technique might have potential for vaccine development. Another study, in 1942-1943, challenged individuals who had received an allantoic fluid vaccine at least 4 months earlier with active influenza virus (Henle et al., 1943). Although it was a small study, the investigators were able to determine the efficacy of the vaccine. Of their controls, ten of twenty-eight individuals developed clinical influenza after inhalation of the isolated, active virus. In contrast, only one of forty-four individuals became ill with influenza after receiving the vaccination. Other early investigations had similar goals, and tested vaccine efficacy for both influenza A and B (Francis and Magill, 1937; Francis et al., 1945; Henle et al., 1946). In 1971, Couch et al. investigated the use of a recombinant influenza A vaccine (X-31 influenza A2, Hong Kong variant) in comparison to the standard vaccine (Couch et al., 1971). Two groups of healthy volunteers were given either the recombinant or the standard vaccine, and were then challenged a month later with the live virus strain (Hong Kong variant) used in the vaccine. The investigators measured neuraminidase-inhibiting antibody in the serum, neutralizing antibody in nasal secretions, influenza virus in the nasal secretions, and the degree of clinical illness. The results demonstrated that the two vaccines were equally effective. Treanor et al. later challenged healthy adults with wild-type influenza strains to compare the efficacy of a trivalent, live, cold-adapted influenza vaccine (CAIV-T) against the trivalent inactivated influenza vaccine (TIV).
After challenging the immunized individuals, they measured influenza illness (defined as respiratory symptoms with wild-type influenza virus isolated from the nasal passages, or a >4-fold increase in HAI antibody from baseline). The efficacy of the immunizations was calculated as 85% for CAIV-T and 71% for TIV, although the difference was not statistically significant.

Pathogenesis and Immunology of Influenza Infection

Several studies have used experimental influenza inoculation of humans to investigate the pathogenesis and immune mechanisms associated with infection. Many sought to better understand viral infectivity by route of transmission (e.g., aerosolization), or to determine whether influenza could be transmitted across different host species (Kasel et al., 1965). For example, one study sought to determine whether equine influenza virus could produce clinical symptoms when given to a human (Alford et al., 1966). Others investigated various human immune responses following experimental infection. The mechanism of systemic and local antibody responses to infection was examined in a study conducted by Murphy et al. Their volunteers were children, aged 1.5 to 4.5 years, who were inoculated intranasally with either A/Alaska/7/77 (H3N2) or A/Hong Kong/123/77 (H1N1) viruses (developed as candidate vaccine viruses) (Murphy et al., 1982). They found a correlation between the IgA HA antibodies recovered from the nasal passageways and the serum IgG response. Thus, they proposed that intranasal inoculation stimulates both systemic and local immune responses and could be used for immunization purposes. Brown et al. sought to further explore the distinction between serum and secretory IgA antibodies in response to infection. Challenging 13 human volunteers with intranasal A/Peking/2/79 (H3N2) wild-type virus, they measured serum and nasal IgA antibodies and their subclasses pre- and post-inoculation with the influenza virus (Brown et al., 1985). They found that IgA1 accounted for most of the increase in IgA anti-HA levels after infection, and determined that the serum IgA antibodies to HA originated from the mucosa. Other investigations examined cell-mediated immune responses, challenging healthy volunteers with different strains of influenza A and measuring serum antibody, viral shedding, and other peripheral blood parameters (such as white blood cell counts), as well as local and systemic cytokine responses. In 1977, Dolin et al. administered influenza A virus to 19 volunteers in order to assess cell-mediated immune responses up to 4 weeks after the challenge (Dolin et al., 1977). Eight of the nineteen volunteers had clinical symptoms and 4-fold increases in serum antibody, and lymphopenia was described in this cohort. The authors found that depression of lymphocyte counts and decreased lymphocyte functionality were present even at 4 weeks post-challenge. Another study, by Hayden et al., challenged volunteers with an H1N1 influenza strain and examined the relationship between clinical symptoms and cytokine responses (Hayden et al., 1998). They collected nasal lavage fluid, plasma, and serum from the participants and analyzed various cytokines (IL-1β, IL-2, IL-6, IL-8, IFN-α, TGF-β, and TNF-α) over time. The authors described an association between nasal fluid IFN-α, IL-6, and TNF-α levels and fever on day 2 post-challenge, and between IFN-α and IL-6 levels and lower respiratory symptoms on days 5 and 6.
They found that IL-6 levels were associated with total, systemic, and upper respiratory symptom scores on days 2 and 4. They concluded that IL-6 is likely the main factor involved in causing fever in influenza (not TNF-α, which is usually described as contributing to fever in other infections), based on the larger magnitude of the IL-6 response early in the course of infection, and that IFN-α is responsible for the early systemic and local symptoms experienced in influenza infection. The authors hoped that their description of the cytokine response to influenza could be utilized either (a) to develop therapeutic agents that would target specific cytokine responses, or (b) to use cytokine levels to more accurately measure the impact of an antiviral intervention. An interesting study by Gentile et al. utilized the influenza human challenge model to examine inflammatory responses in patients with allergic rhinitis, which they hypothesized would cause more severe disease (Gentile et al., 2001). They enrolled 27 participants with a history of allergic or non-allergic rhinitis, inoculated the participants with an influenza A H1N1 strain, and measured anti-IgE-induced leukocyte histamine release, plasma histamine levels, and serum IgG, IgA, IgM, and IgE. The results showed no significant enhancement of systemic immune or inflammatory responses after inoculation in the allergic rhinitis group. The authors speculated that perhaps no difference was observed because the experimental infection was so mild in nature.

Systems Biology and Influenza Immunization

The field of systems biology applied to vaccines uses mathematical modeling, network analysis, and other measurements to describe and predict the human immune response to vaccines. In addition, systems biology approaches have the potential to better explain host-virus interactions and to elucidate specific immunological pathways and mechanisms with regard to both natural influenza infection and influenza vaccination. Systems biology methodologies may be especially useful in the context of influenza human challenge studies. Nakaya et al. have done extensive work describing specific molecular signatures in individuals who have received seasonal influenza vaccines (Nakaya et al., 2011). They compared immune responses in individuals who had received either the trivalent inactivated vaccine (TIV) or the live attenuated influenza vaccine (LAIV). They identified molecular signatures that correlated with B cell responses at days 7 and 28 after immunization, and demonstrated how these could be used to predict an individual's later immune response to the vaccine. Several specific genes were identified that corresponded with HAI response, many of which were regulated by the transcription factor XBP-1. XBP-1 has been shown to be necessary for the differentiation of plasma cells and the unfolded protein response (Iwakoshi et al., 2003). Another important feature of an effective vaccine is its ability to produce and maintain a durable antibody response that provides long-lasting protection. In a separate study, Nakaya et al. investigated antibody responses after influenza vaccination at day 28 vs. day 180, with emphasis on identifying molecular pathways associated with either a persistent or a waning antibody response over time (Nakaya et al., 2015).
One potentially important pathway associated with a persistent antibody response involves P-selectin (SELP), which affects the mobility of leukocytes from the peripheral blood stream to the vaccine administration site. The authors were also able to identify transcriptional responses to vaccination, with particular interest in the microRNA expression of miR-424, a regulator of the interferon response that occurs post-vaccination. Another contribution was the finding of specific signatures associated with different age groups. The authors demonstrated that in the elderly population, modules associated with antiviral and IFN-related genes were impaired in the early innate response as compared to the younger population. In the elderly population, there was enhanced NK-cell-related expression and higher proportions of monocytes at baseline and post-vaccination. It is hypothesized that these persistent inflammatory responses seen in the elderly may actually have a negative effect by inhibiting appropriate vaccine immune responses, and may explain the underlying mechanism of immunosenescence.
Table 2. Systems biology studies of influenza immunization (study title and first author, year, key molecular correlate(s) identified, and contribution).
- Systems analysis of immunity to influenza vaccination across multiple years and in diverse populations reveals shared molecular signatures (Nakaya et al., 2015). Year: 2015. Key correlate(s): P-selectin (SELP), a pathway associated with persistent antibody response; miR-424, a regulator of the interferon response post-vaccination. Contribution: identified pathways involved in a more long-term, durable response to influenza immunization; identified interferon response and inflammatory markers that differ according to age; a persistent inflammatory state in the elderly may explain immunosenescence.
- Systems biology of immunity to MF59-adjuvanted vs. nonadjuvanted trivalent seasonal influenza vaccines in early childhood (Nakaya et al., 2016). Year: 2016. Key correlate(s): Module 75, "antiviral interferon signature"; Module 165, "enriched in activated dendritic cells"; overexpressed day 1 post-vaccination for the TIV group only, and more prominent in younger children. Contribution: identified modules that correlated with a strong HAI response post-vaccination with adjuvant; identified transcriptional patterns post-vaccination demonstrating that vaccines induced expression of interferon-related genes, which was also associated with antibody production.
- Systems biology of vaccination for seasonal influenza in humans (Nakaya et al., 2011). Year: 2011. Key correlate(s): XBP-1, corresponding with HAI response. Contribution: identified molecular signatures that correlated with B cell response to vaccination, and showed how these can be used to predict vaccine response.
- Global analyses of human immune variation reveal baseline predictors of post-vaccination responses (Tsang et al., 2014). Year: 2014. Key correlate(s): PBMC subpopulation frequencies (baseline). Contribution: described baseline characteristics that can be used to predict serologic response to influenza vaccines.
- Apoptosis and other immune biomarkers predict influenza vaccine responsiveness (Furman et al., 2013). Year: 2013. Key correlate(s): APO module, involved in apoptosis. Contribution: described a positive association between two gene modules involved with apoptosis and vaccine response to influenza.
- Early patterns of gene expression correlate with the humoral immune response to influenza vaccination in humans (Bucasas et al., 2011). Year: 2011. Key correlate(s): 494-gene signature (mediating the interferon response), positively correlated with antibody response to vaccination. Contribution: described a signature that corresponded to antibody response to the trivalent influenza vaccine.
Several other studies have examined patterns of gene expression in response to influenza vaccination. The results of key experiments are summarized in Table 2. Systems biology approaches have also been used to study human influenza infection, mostly in human cell culture, in order to better define pathogenesis, virulence factors, and immune responses. For example, transcriptomic responses of a human tissue culture cell line infected with various strains of influenza were examined by Josset et al. (2014). By analyzing transcriptomic responses to Influenza A Virus (IAV), the authors showed that the avian H7N9 strain had further adapted to the human host. They were also able to test various therapeutics in vitro, and demonstrated antiviral responses associated with gene expression alterations. Other authors have studied proteomics in macrophages and monocytes of the lung after IAV infection. These studies showed interferon and TNF-α expression in response to infection, as well as pathways leading to the secretion of specific proteins and antiviral cytokines (Lee et al., 2009; Lietzén et al., 2011; Cypryk et al., 2017). Ultimately, these findings suggest that macrophage responses could be used as a biomarker of the severity of influenza infection. Applying Systems Biology to the Human Influenza Challenge Model To continue to advance our understanding of influenza, the techniques of systems biology should be further applied to the influenza challenge model. Very few studies have been published that use both techniques in concert. An influenza human challenge study was conducted by Woods in collaboration with Retroscreen Virology (London, UK), with pre-vaccination and post-vaccination RNA extracted from volunteers to examine gene expression, with the purpose of exploring new diagnostic options (Woods et al., 2013). Twenty-four healthy volunteers were inoculated with A/Brisbane/59/2007 (H1N1), and 17 were inoculated with A/Wisconsin/67/2005 (H3N2). The peripheral blood transcriptome was then analyzed at several time points over the course of 7 days. The results showed that peripheral blood gene expression after infection had a distinct signature that was specific to either H1N1 or H3N2 infection. Importantly, these genomic signatures were able to identify infected individuals before they manifested symptoms or while they had only mild, non-specific symptoms. This early recognition could lead to earlier administration of antivirals and have a more significant impact on lessening the severity of influenza illness. Sobel Leonard et al. used the same 17 participants inoculated with H3N2 from Woods' trial to evaluate evolution of the influenza virus at the time of transmission (Sobel Leonard et al., 2016). Using deep sequencing techniques, the nasal washings of the inoculated participants were compared with the viral stock and the reference strain (A/Wisconsin/67/2005) used to create the stock virus. They found that during acute infection in the human host, influenza virus populations can undergo rapid viral evolution and change, shaped largely by a highly selective transmission bottleneck. Sobel Leonard et al. were able to further characterize these findings by describing viral evolution within the host and identifying genetic selection factors (Sobel Leonard et al., 2017).
A recent study by Jochems et al. investigated the role of influenza infection in leading to secondary bacterial pneumonia, using a "double" experimental human challenge model and systems biology approaches (Jochems et al., 2018). Subjects were first inoculated with human serotype 6B Streptococcus pneumoniae (Spn) to simulate carriage, an important prerequisite of clinical pneumococcal pneumonia. The authors then gave the participants live attenuated influenza virus (LAIV) to simulate influenza infection. Using systems biology approaches on nasal secretions, they were able to identify specific immune mediators that control Spn carriage, and to determine how influenza infection can affect these pathways. The results showed that LAIV increased the carriage density of Spn by impairing degranulation of nasal neutrophils and decreasing the recruitment of monocytes, two mechanisms essential for bacterial clearance. Although the study has limitations (only one pneumococcal serotype, 6B, was used, and the influenza challenge was with LAIV, not wild-type influenza), the findings are important and highlight how the innate immune system is involved in Spn carriage and clearance, and how pre-existing viral infections can negatively affect immune-mediated control of infection. This study also underscores the power of applying systems biology approaches to the human challenge model. The granular detail and elucidation of immune mechanisms could only be achieved by integrating these two techniques. CONCLUSIONS There is an urgent need to utilize novel platforms that can lead to the development of more effective vaccines and therapeutics for influenza, which continues to cause a significant burden of disease. Human challenge models have been used successfully for centuries. With advancing technologies and new methods to investigate the host-pathogen interaction, human challenge studies will be essential for progress, and they can be performed in a safe and ethical manner. Furthermore, systems biology (e.g., transcriptomics, metabolomics, proteomics, lipidomics, etc.) allows fundamental changes and patterns of the human immune system to be dissected. Harmonizing these two modalities is very promising; future studies should use systems biology in a human challenge model to address important gaps in our knowledge of influenza pathogenesis and to identify essential pathways involved in producing effective immune responses to vaccination. The ultimate goal would be to use these methods in concert to discover novel therapeutics, and potentially even to enable the development of a universal influenza vaccine. AUTHOR CONTRIBUTIONS AS and NR drafted the article. AS contributed to the collection and assembly of data, and the analysis and interpretation of the data. ND, AM, EA, and NR critically revised the article. ND contributed to the revision of the article and provided intellectual content for the ethics section. FUNDING This work was supported by the NIH NIAID (Vaccinology T32, Award No. T32AI074492).
7,754.6
2019-04-17T00:00:00.000
[ "Medicine", "Biology" ]
Numerical investigation of boiling process in micropores. A model for direct numerical simulation of the boiling process on porous surfaces is proposed and verified in the present paper. The VOF method, with the CSF method used to model surface tension, was applied to solve this problem. The model was verified on the problem of pool boiling on a flat plate, where boiling patterns were obtained and the heat transfer coefficient was compared with experiment and with an analytical solution. Images of vapour bubble formation and the dependence of the heat transfer coefficient on temperature difference were obtained for a two-dimensional problem with uniform packing at various wall temperatures. Introduction Air or water cooling has long been sufficient for the heat dissipation needs of many industries. Due to the increase in heat flux density, it became necessary to resort to various measures to extend the applicability of single-phase cooling systems, such as increasing the thermal conductivity of the coolant [1] and changing the geometry of cooling systems with their overall optimization [2]. Such improvements were primarily needed for power microelectronics, computer microelectronics, laser systems, and other industries. The optimization of two-phase cooling also allows downsizing of already deployed devices such as air conditioner condensers [3] or refrigeration devices [4], leading to a reduction in their material requirements and price. In the microelectronics industry, heat pipes and evaporation chambers are mainly used for heat dissipation. These devices use a porous medium to transfer heat from the hot region to the cold region; it acts as a wick transporting working fluid from the condenser to the evaporator by capillary action. The thermal resistance of the porous medium is an important obstacle to heat transfer into the evaporator section [5]. Evaporation and condensation at the wick-vapour interface are the main phase transition mechanisms in a heat pipe. At a certain heat flux, a bubbling regime of liquid boiling forms in the wick, which results in a significant decrease in thermal resistance [6,7]. Previous studies have shown that coating surfaces with porous media increases the heat transfer coefficient in the bubble boiling mode on a flat plate. However, the porous coating increases the thermal resistance between the heated surface and the vapour. Therefore, the thermal resistance of the wick-substrate system should be reduced to improve wick performance. Heat transfer can also be significantly enhanced by the following: an increase in the number of vaporization centres due to the porous surface, and an increase in the heat transfer rate of the phase transition due to an increase in the number and rate of bubbles detaching from the surface. To better explain the mechanisms involved, the growth of vapour bubbles in a porous structure needs to be better understood. In several studies, for example [5], experimental investigations of boiling on porous structures have been carried out.
The backfill method and its porosity (anisotropic or isotropic) are also important. Thus, paper [8] demonstrated that anisotropic backfill with porosity in the range of 50-71% showed the best heat dissipation results. This is because vapour bubbles leave the vapour formation zone faster than at lower porosity. Isotropic backfill results in lateral growth of vapour bubbles, which subsequently form a continuous vapour region, negatively affecting heat transfer. A filling variant was also considered in which individual particles formed a structure similar to a wick. In this formulation of the problem, the lateral interval between individual particles increased, and the vertical interval decreased. It was shown that using particles of 250 µm in diameter improved heat transfer compared to particles with a diameter of 150 µm. Further increasing the diameter, however, did not improve the results. Variation of particle diameter for separately formed wicks was also considered; wicks assembled from particles with a diameter of 50 µm had the best ability to conduct heat. Paper [8] also studied the optimization of the wick height of porous media. The calculations showed the best results for a height-to-width ratio of 4-5. Many factors influence the behaviour of vapour bubbles: drag force, surface tension force, bubble size at detachment from the surface, pore size, porosity coefficient, and many others. All these parameters must be considered when modelling boiling on a porous surface. To solve this problem, the VOF method with the CSF scheme for surface tension modelling was applied. The method has also been adapted for use at the microscale. Mathematical model To date, the most widespread methods are those implementing the idea of continuous markers. Due to their efficiency and ease of implementation, among algorithms of continuous volume markers the Volume of Fluid (VOF) method has gained the most popularity, having proven itself well for calculating free surface flows. The idea of this method is that liquid and gas are considered as a single two-component medium, and the spatial distribution of phases within the computational domain is determined using a special marker function F(x,y,z,t). The volume fraction of the liquid phase in a computational cell is taken as F(x,y,z,t) = 0 if the cell is empty, F(x,y,z,t) = 1 if the cell is completely filled with liquid, and 0 < F(x,y,z,t) < 1 if the phase boundary passes through the cell. Since the free surface moves with the liquid, the movement of the free boundary in space is tracked by solving the transport equation for the volume fraction of the liquid phase in the cell:

$$\frac{\partial F}{\partial t} + (\mathbf{v} \cdot \nabla) F = 0,$$

where v is the velocity vector of the two-phase medium, found from the solution of a hydrodynamic equation system consisting of the mass conservation (continuity) equation

$$\nabla \cdot \mathbf{v} = 0$$

and the motion equations

$$\frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot (\rho \mathbf{v} \mathbf{v}) = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho \mathbf{g} + \mathbf{F}_s,$$

where τ is the tensor of viscous stresses, F_s is the volume force vector, p is the static pressure, and ρ is the density of the two-phase medium. The components of the viscous stress tensor τ_ij are

$$\tau_{ij} = \mu \left( \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i} \right),$$

where µ is the dynamic viscosity coefficient of the two-phase medium. The density and the molecular viscosity of the considered two-component medium are determined through the volume fraction of the liquid phase in the cell using the mixing rule:

$$\rho = F \rho_l + (1 - F)\rho_v, \qquad \mu = F \mu_l + (1 - F)\mu_v,$$

where ρ_l, μ_l are the density and the viscosity of the liquid, defined below, and ρ_v, μ_v are the density and the viscosity of the vapour.
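To make the marker-function bookkeeping concrete, the following is a minimal Python sketch (not from the paper; the 1D first-order upwind discretization and the property values are illustrative assumptions, since production VOF codes use geometric interface reconstruction such as the Geo-Reconstruct scheme mentioned below):

```python
import numpy as np

# Minimal 1D sketch of VOF bookkeeping: advect the liquid volume
# fraction F with a first-order upwind scheme and form mixture
# properties by the mixing rule. Illustrative only; real VOF codes
# use geometric interface reconstruction to avoid interface smearing.

rho_l, rho_v = 958.4, 0.598      # water/steam densities near 373 K, kg/m^3
mu_l, mu_v = 2.82e-4, 1.23e-5    # dynamic viscosities, Pa*s

def mixture_properties(F):
    """Mixing rule: property = F*liquid + (1 - F)*vapour."""
    rho = F * rho_l + (1.0 - F) * rho_v
    mu = F * mu_l + (1.0 - F) * mu_v
    return rho, mu

def advect_F(F, u, dx, dt):
    """One upwind step of dF/dt + u dF/dx = 0 (u > 0 assumed)."""
    Fn = F.copy()
    Fn[1:] -= u * dt / dx * (F[1:] - F[:-1])
    return np.clip(Fn, 0.0, 1.0)  # volume fraction stays in [0, 1]

# Example: a liquid slug (F = 1) carried to the right at 0.1 m/s
F = np.zeros(200)
F[40:80] = 1.0
dx, u = 1e-6, 0.1                # 1-micron cells, as in the 2D setup
dt = 0.2 * dx / u                # Courant number 0.2, as in the paper
for _ in range(100):
    F = advect_F(F, u, dx, dt)
rho, mu = mixture_properties(F)
```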
The obtained values of density ρ and viscosity μ are used in the motion equations and determine the physical properties of the two-phase medium. Special attention is paid to surface tension when considering liquid flows with a phase boundary; one of the advantages of the VOF method is that it allows relatively easy modelling of the effect of surface tension forces. The CSF (Continuum Surface Force) algorithm, which introduces an additional volume force F_s into the motion equations, was used to model surface tension in this problem. The magnitude of this force is determined from the relation

$$\mathbf{F}_s = \sigma k \nabla F,$$

where σ is the coefficient of surface tension and k is the curvature of the free surface, determined as the divergence of the normal vector:

$$k = -\nabla \cdot \hat{\mathbf{n}}.$$

The normal to the free surface is calculated as the gradient of the volume fraction of the liquid phase in the cell:

$$\mathbf{n} = \nabla F.$$

The direction of the normal vector at a solid wall is determined by the contact angle θ:

$$\hat{\mathbf{n}} = \hat{\mathbf{n}}_w \cos\theta + \hat{\boldsymbol{\tau}}_w \sin\theta,$$

where n_w and τ_w are the vectors normal and tangential to the wall. The energy equation is given by

$$\frac{\partial (\rho h)}{\partial t} + \nabla \cdot (\rho \mathbf{v} h) = \nabla \cdot (k_{eff} \nabla T) + S_h,$$

where the enthalpy h is defined as the mass-averaged enthalpy of the phases:

$$h = \frac{F \rho_l h_l + (1 - F)\rho_v h_v}{F \rho_l + (1 - F)\rho_v}.$$

The effective thermal conductivity k_eff is common to both phases:

$$k_{eff} = F k_l + (1 - F) k_v.$$

The term S_h in this equation represents the volumetric heat source due to phase change at the interface:

$$S_h = \left(\dot{m}_{vl} - \dot{m}_{lv}\right) h_{lv},$$

where ṁ_lv is the mass transfer rate from liquid to vapour, ṁ_vl is the mass transfer rate from vapour to liquid, and h_lv is the latent heat of vaporization from liquid to vapour. The mass transfer rate is implemented using the Lee model:

$$\dot{m}_{lv} = \gamma F \rho_l \frac{T - T_{sat}}{T_{sat}} \quad (T > T_{sat}), \qquad \dot{m}_{vl} = \gamma (1 - F) \rho_v \frac{T_{sat} - T}{T_{sat}} \quad (T < T_{sat}),$$

where the constant γ has dimension 1/s and can be estimated from the Hertz-Knudsen relation as

$$\gamma = \frac{6}{d_b}\,\beta\,\sqrt{\frac{M}{2\pi R T_{sat}}}\;\frac{L\,\rho_v}{\rho_l - \rho_v},$$

where β is the accommodation coefficient, d_b is the bubble diameter, M is the molar mass, R is the universal gas constant, T_sat is the saturation temperature, and L is the latent heat. After analyzing the works [9], [10], and [11], the constant γ in this work was taken equal to 100 for both evaporation and condensation. Model Verification As a test, a numerical study of pool boiling on a flat surface with dimensions of 26.5×26.5 mm² was conducted. A constant heat flux density of 25 kW/m², 50 kW/m², 75 kW/m², or 100 kW/m² was set on this surface. The lateral walls were adiabatic, and the upper wall was an outlet. As a result of the calculation, values of the heat transfer coefficient at the given heat flux densities were obtained and compared with the results of experiment [4] and a theoretical dependence. For bubble boiling of a stationary liquid in a large volume, the heat transfer coefficient can be calculated using a standard correlation in which ν is the kinematic viscosity coefficient, C_p is the heat capacity, r is the heat of vaporization, k is the thermal conductivity, σ is the surface tension of the liquid at the saturation temperature T_s, and ρ′, ρ″ are the densities of liquid and vapour at T_s, with T_s the saturation temperature in Kelvin. The comparison of the obtained results is presented in Figure 1, which shows the dependence of the heat transfer coefficient during boiling of pure water on the heat flux density at the heater. As can be seen from this figure, the results obtained using the considered methodology are quite consistent with the calculation formula and experimental data [12].
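The Lee phase-change closure described above lends itself to a compact sketch. The following Python fragment is a hedged illustration, not the paper's implementation; the property values are textbook figures for water at atmospheric pressure, while γ = 100 1/s and T_sat = 373 K follow the paper:

```python
import numpy as np

# Hedged sketch of the Lee phase-change model described above.
# gamma (1/s) is the relaxation constant; the paper takes gamma = 100
# for both evaporation and condensation. Field shapes are illustrative.

GAMMA = 100.0                  # 1/s, as chosen in the paper
T_SAT = 373.0                  # K, saturation temperature used in the paper
RHO_L, RHO_V = 958.4, 0.598    # kg/m^3, water/steam near saturation
H_LV = 2.26e6                  # J/kg, latent heat of vaporization

def lee_mass_transfer(F, T):
    """Volumetric mass transfer rates (kg/m^3/s) per the Lee model."""
    superheat = T > T_SAT
    m_lv = np.where(superheat,
                    GAMMA * F * RHO_L * (T - T_SAT) / T_SAT, 0.0)
    m_vl = np.where(~superheat,
                    GAMMA * (1.0 - F) * RHO_V * (T_SAT - T) / T_SAT, 0.0)
    return m_lv, m_vl

def energy_source(m_lv, m_vl):
    """Volumetric heat source S_h (W/m^3): evaporation consumes heat."""
    return (m_vl - m_lv) * H_LV

# Example: a superheated liquid cell (F = 1, T = 378 K) evaporates,
# producing a negative S_h (heat absorbed locally by the phase change).
m_lv, m_vl = lee_mass_transfer(np.array([1.0]), np.array([378.0]))
S_h = energy_source(m_lv, m_vl)
```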
Moreover, this methodology allows the nucleation and evolution of vapour bubbles to be tracked dynamically. Isosurfaces at a vapour concentration of 0.5 at different times are presented in Figure 2, which visualizes the interface between liquid and vapour. As can be seen from Figure 2, the approach used allows modelling of both the boiling process and the dynamics of vapour bubble development, as well as obtaining integral and instantaneous values of various quantities and characteristics of the boiling process. Further calculations were carried out for the problem of boiling on the surface of a capillary-porous coating, which was modelled as a packing of spheres of specified sizes in a two-dimensional setting. 2D setting of problem For the two-dimensional setting, a geometry was constructed with sphere diameters of 150 microns and a distance of 10 microns between them, forming a uniform packing, as shown in Figure 3. The total size of the computational domain is 0.61×1 mm². Periodic boundary conditions were set on the lateral walls, a free outlet was set on the upper boundary, fixed temperature and no-slip conditions were set on the spheres, and adiabatic and no-slip conditions were set on the lower wall. The gravity force is directed from top to bottom. The contact angle was chosen to be 160°. The second-order QUICK scheme was used to approximate the convective terms of the hydrodynamics equations. An implicit first-order scheme was used to approximate the unsteady terms of the hydrodynamics equations. The second-order upwind scheme was used to approximate the energy equation. The VOF equations were solved using the Geo-Reconstruct scheme. Diffusion fluxes and source terms were approximated using a second-order scheme. The coupling between the velocity and pressure fields was implemented using the PISO procedure. Liquid and vapour were considered incompressible fluids, and the thermophysical properties were held constant (at a temperature of 373 K). An unstructured mesh with a step of 1 micron and a total of 386,000 cells was constructed. At the initial time, the domain was filled with water at a temperature of 373 K. To correctly resolve the water-vapour interface and ensure numerical stability, the Courant number was taken to be 0.2. Figures 4 and 5 show the process of water boiling in the sphere packing. At a wall temperature of 378 K (Figure 4), small bubbles appear only on the surface of the packing after a time of 2·10⁻⁴ s, but at 383 K (Figure 5), vapour bubbles form throughout the entire packing and have a larger size. At a calculation time of 2·10⁻³ s, the vapour almost fills the entire volume of the packing for all wall temperatures. Figure 6 also shows the dependence of the heat transfer coefficient on the temperature difference.
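As a rough check of the time-step restriction implied by the stated Courant number and mesh size, consider the short calculation below. The characteristic interface velocity is an assumed illustrative value, not a number from the paper:

```python
# Rough check of the explicit time-step restriction implied by the
# stated Courant number of 0.2 on a 1-micron mesh. The characteristic
# velocity is an assumed placeholder, not taken from the paper.
courant = 0.2
dx = 1e-6          # m, mesh step
u_char = 0.5       # m/s, assumed characteristic interface velocity
dt = courant * dx / u_char
print(f"dt <= {dt:.1e} s")
# ~4e-7 s; reaching the reported simulated times of 2e-4 to 2e-3 s
# would then take on the order of hundreds to thousands of steps.
```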
The model was verified on the task of pool boiling on a flat plate, where the results obtained using the considered methodology are quite consistent with the calculation formula and experimental data. The methodology also allows the nucleation and evolution of vapour bubbles to be tracked dynamically. Images of vapour bubble formation were obtained for a two-dimensional problem with uniform packing at various wall temperatures. At a wall temperature of 378 K, small bubbles form only on the surface of the packing at the initial time, while at a wall temperature of 383 K, vapour bubbles form throughout the packing and have a larger size. When thermal balance is reached, almost the entire internal volume of the packing is occupied by vapour. The study was supported by a grant from the Russian Science Foundation No. 22-49-08018. Fig. 1. Dependence of the heat transfer coefficient at boiling on the heat flux density at the heater.
3,150
2023-01-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Anti-inflammatory properties of some novel thiazolo[4,5-b]pyridin-2-ones Synthesis of novel N3- and C5-substituted thiazolo[4,5-b]pyridin-2-ones was carried out on the basis of [3+3]-cyclocondensation, acylation and alkylation reactions. The structures of the obtained compounds were confirmed by 1H NMR spectroscopy and elemental analysis. The anti-inflammatory action of the novel thiazolo[4,5-b]pyridin-2-one derivatives was evaluated in vivo using the carrageenan-induced rat paw edema method. When compared with Ibuprofen, some of our compounds were found to be more potent. Introduction Inflammation is a major pathogenetic component of many diseases of different etiology and one of the most important problems of general pathology and the clinic. This reaction of the body to damage is involved in the formation of many diseases. The problem of the pharmacological regulation of inflammation is relevant to modern medicine (Brenner and Krakauer 2003). There is a considerable amount of medication used to treat inflammation. Non-steroidal anti-inflammatory drugs, which combine a whole range of properties, displaying anti-inflammatory, analgesic and antipyretic activity, are in particular demand (Bacchi et al. 2012). However, they all have ulcerogenic properties to varying degrees (Al-Shidhani et al. 2015). In order to overcome these restrictions, the search for new effective and safe anti-inflammatory drugs is continuing worldwide. The development of the chemistry of heterocyclic compounds is largely driven by the practical direction of research. It is sufficient to note that among the most well-known and widely used drugs, more than 60% are heterocyclic compounds (Taylor et al. 2016), so work in this direction is developing rapidly and remains relevant. In particular, there is growing interest in nitrogen-containing fused heterocyclic systems, as many of them exhibit broad-spectrum biological activity (Smirnova et al. 2006; Chaban et al. 2017a, 2018a). Pyridine derivatives make up a large part of the modern drug arsenal. Of the 1.5 thousand most commonly used drugs, more than 10% are compounds containing the pyridine ring (Ali Altaf et al. 2015). Equally interesting are 4-thiazolidones (Abhinit et al. 2009; Lozynska et al. 2015; Chhabria et al. 2016; Tymoshuk et al. 2019). Thiazolidone derivatives annulated with the pyridine cycle, in particular thiazolopyridines, are of particular interest to researchers because these compounds exhibit different types of biological activity. Among them, substances have been identified with antioxidant (Chaban et al. 2019a; Klenina et al. 2013, 2017), fungicidal (Marzoog and Al-Thebeiti 2000), anti-inflammatory (Chaban et al. 2016, 2017b, 2018b), anti-mitotic (Victor et al. 2017), tuberculostatic (Chaban et al. 2014), herbicidal (Hegde and Mahoney 1993) and antitumor (Chaban et al. 2012a) activities, as well as agonists of H3-histamine receptors (Walczyński et al. 2005), antagonists of metabotropic glutamate receptor 5 (mGluR5) (Lin et al. 2009), and substances with high inhibitory activity against epidermal growth factor receptors (Komoriya et al. 2006) and several other enzymes (Singh et al. 1995). Given the above, the synthesis of new thiazolopyridines, as well as pharmacological screening of the anti-inflammatory activity of the newly synthesized compounds, is an interesting and relevant direction. Materials and methods All chemicals were of analytical grade and commercially available.
All reagents and solvents were used without further purification and drying. All melting points were determined in an open capillary and are uncorrected. 1H NMR spectra were recorded on a Varian Mercury 400 (400 MHz for 1H) instrument with TMS or the deuterated solvent as an internal reference. Mass spectra were run using an Agilent 1100 series LC/MSD (Agilent Technologies Inc.) with an API-ES/APCI ionization mode. Satisfactory elemental analyses were obtained for the new compounds (C ± 0.17, H ± 0.21, N ± 0.19). Ibuprofen was purchased from a medical store. Chemistry General procedure for the synthesis of 3-aryl-5-hydroxy-7-methyl-3H-thiazolo[4,5-b]pyridin-2-ones (1-8). Metallic sodium (109 mmol) was dissolved in anhydrous methanol (150 ml); to the resulting solution were added the corresponding 3-aryl-4-iminothiazolidin-2-one (50 mmol) and ethyl acetoacetate (8.5 ml) at 20 °C. The mixture is left for 5 days with stirring on a magnetic stirrer. It is then acidified with acetic acid to pH ~5 and diluted five-fold with water; the precipitate is filtered off, washed with water and dried. Recrystallized from acetic acid. The obtained substances are white, gray or yellowish crystalline powders, readily soluble in DMF, DMSO and alkali solutions, poorly soluble in benzene, toluene and alcohols, and insoluble in other organic solvents and water. 5-Hydroxy-7-methyl-3-phenyl-3H-thiazolo[4,5-b]pyridin-2-one (1). Calcd: H 3.10, N 9.57. Found: C 52.98, H 3.14. General procedure for the acylation of 5-hydroxy-7-methyl-3-phenyl-3H-thiazolo[4,5-b]pyridin-2-one with aliphatic acid chlorides to form compounds 9-11. In a flat-bottom flask, dissolve 10 mmol of compound 1 in 10 ml of anhydrous dioxane. To the resulting solution add a solution of 10 mmol of the corresponding aliphatic acid chloride and 10 mmol of triethylamine in 10 ml of dioxane. Maintain for 10 minutes in a drying oven at a temperature of 100 °C, then pour into water. After recrystallization from acetic acid, the white or yellowish powders are soluble on heating in ethanol, DMF and acetic acid. Anti-inflammatory activity The experiment was performed on nonlinear white rats of both sexes weighing 180-200 g. Rats were kept in the animal house under standard conditions of light and temperature on the general diet prior to the experiment. Paw swelling was caused by aseptic injection of 0.1 ml of a 2% solution of carrageenan under the aponeurosis of the sole of the hind limb of the rat. Animals were intraperitoneally injected with the test substance at a dose of 50 mg/kg 0.5 h prior to administration of the carrageenan solution. The presence of an inflammatory reaction was determined from the change in limb volume, measured by the oncometric method at the beginning of the experiment and 4 hours after the introduction of the phlogogenic agent. For comparison, the anti-inflammatory effect of a known anti-inflammatory drug, ibuprofen, in medium therapeutic doses was studied under similar conditions. The Ethics Committee of the Danylo Halytsky National Medical University, established by the Ministry of Health of Ukraine, approved the experimental protocol. The suppression of the inflammatory reaction was expressed as a percentage reduction in the volume of the paw, calculated by the following equation:

$$\text{Inhibition}\,(\%) = \frac{V_{control} - V}{V_{control}} \times 100,$$

where V_control is the increase in paw volume in control group animals and V is the increase in paw volume in animals injected with the test substances.
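For illustration, the inhibition calculation and the Student's t-test comparison described above can be reproduced in a few lines of Python. The paw-volume numbers below are invented placeholders, not study data:

```python
import numpy as np
from scipy import stats

def inhibition_percent(v_control, v_treated):
    """Inhibition (%) = (Vcontrol - V) / Vcontrol * 100, using the mean
    increases in paw volume for the control and treated groups."""
    return (np.mean(v_control) - np.mean(v_treated)) / np.mean(v_control) * 100

# Placeholder paw-volume increases (ml), not the study's data
control = np.array([0.52, 0.48, 0.55, 0.50, 0.49])
treated = np.array([0.31, 0.28, 0.35, 0.30, 0.33])

print(f"inhibition = {inhibition_percent(control, treated):.1f}%")

# Significance vs. control via Student's t-test, p < 0.05 criterion
t, p = stats.ttest_ind(control, treated)
print(f"t = {t:.2f}, p = {p:.4f}")
```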
To expand the combinatorial library of thiazolo[4,5-b]pyridines, the transformation of 3-phenyl-5-hydroxy-7-methyl-3H-thiazolo[4,5-b]pyridin-2-one (1) at position C5 was performed. The synthetic potential of the hydroxy group of compound 1 is demonstrated by its interaction with a series of carboxylic acid chlorides in an acylation reaction. It was established that the optimal conditions for obtaining the corresponding acylated derivatives of 3-phenyl-5-hydroxy-7-methyl-3H-thiazolo[4,5-b]pyridin-2-one (9-16) are reaction in dioxane for the interaction with aliphatic acid chlorides, and in pyridine for the interaction with aromatic acid chlorides (Figure 2). Compound 10 may be considered a key intermediate for obtaining 5-hetarylsulfanyl-acetic acid 7-methyl-2-oxo-3-phenyl-2,3-dihydro-thiazolo[4,5-b]pyridin-5-yl esters by treatment with the appropriate thiols. Refluxing the reaction mixture for 60 min in 96% ethanol was the optimal condition for the formation of compounds 17-20, which proceeded in good yields (Figure 3). Thus the series of thiazolo[4,5-b]pyridin-2-one acetamides has been diversified by alkylation reactions, applying compound 10 as an alkylating agent in reactions with heteryl-moiety thiols; this can be considered an effective and general route to the preparation of a wide range of acetamides. The structures of the compounds obtained and the interpretation of the chemical studies were confirmed by elemental analysis and 1H NMR spectroscopy. All of these new compounds gave spectroscopic data in accordance with the proposed structures. Anti-inflammatory activity in vivo evaluation Exudative inflammation is considered a classic example of acute inflammation. The effect of the synthesized substances on the course of the exudative phase of inflammation was studied using the carrageenan model of inflammatory edema of the paws of white rats (Pillai et al. 2004). In vivo studies of the novel thiazolo[4,5-b]pyridin-2-one derivatives for anti-inflammatory activity were carried out employing the carrageenan-induced rat paw edema method. Carrageenan-induced paw edema is the most common animal model of acute inflammation. Marked paw edema was caused in rats by sub-plantar injection of 0.1 ml of 2% carrageenan. The investigated compounds were dissolved in DMSO and injected intraperitoneally at 50 mg/kg body weight 0.5 h prior to carrageenan injection. The NSAID Ibuprofen in its effective therapeutic dose was tested simultaneously as an activity reference. Anti-inflammatory activity was estimated by measuring the paw edema volume 4 h after the carrageenan injection. Results of paw edema reduction were expressed as the average ± standard deviation and compared statistically with the control group using Student's t-test. A level of p < 0.05 was adopted as the test of significance (Table 1). According to the pharmacological screening of the synthesized products for anti-inflammatory activity, in a significant number of cases the anti-inflammatory effect is equivalent to that of the reference drug Ibuprofen. For some compounds, the anti-inflammatory effect was less than that of the reference drug; the inhibition of the inflammatory response for these is in the range of 20.2-35.4%. However, the activity of some substances exceeds that of Ibuprofen, which gives reason to consider this condensed system a promising molecular framework for the design of potential anti-inflammatory agents.
The results of the pharmacological screening, together with analysis of the structure of the molecules and the nature of the substituents in different positions of the thiazolopyridine cycle, allow us to identify a number of "structure - anti-inflammatory action" relationships among the derivatives of thiazolo[4,5-b]pyridin-2-one. The results obtained demonstrate that the anti-inflammatory effect of the synthesized compounds is probably due to the contribution of the 5-hydroxy-7-methyl-3-phenyl-3H-thiazolo[4,5-b]pyridine nucleus and a number of structural fragments that are pharmacophores for this class of heterocycles and this type of pharmacological activity. Conclusions As a result of [3+3]-cyclocondensation, acylation and alkylation reactions, the synthesis of novel thiazolo[4,5-b]pyridin-2-ones has been carried out. For the synthesized compounds, in vivo screening of anti-inflammatory activity was conducted; the results indicate that the test substances approach or exceed the reference drug Ibuprofen in activity. Thus the core thiazolo[4,5-b]pyridine heterocyclic system may be regarded as a promising scaffold for the development of effective anti-inflammatory drug candidates.
2,314.2
2020-08-28T00:00:00.000
[ "Chemistry", "Medicine" ]
Exosomes neutralize synaptic-plasticity-disrupting activity of Aβ assemblies in vivo Background Exosomes, small extracellular vesicles of endosomal origin, have been suggested to be involved in both the metabolism and aggregation of Alzheimer's disease (AD)-associated amyloid β-protein (Aβ). Despite their ubiquitous presence and their inclusion of components which can potentially interact with Aβ, the role of exosomes in regulating synaptic dysfunction induced by Aβ has not been explored. Results We here provide in vivo evidence that exosomes derived from N2a cells or human cerebrospinal fluid can abrogate the synaptic-plasticity-disrupting activity of both synthetic and AD brain-derived Aβ. Mechanistically, this effect involves sequestration of synaptotoxic Aβ assemblies by exosomal surface proteins such as PrPC rather than Aβ proteolysis. Conclusions These data suggest that exosomes can counteract the inhibitory action of Aβ, which contributes to a perpetual capability for synaptic plasticity. Background Alzheimer's disease (AD) is characterized by progressive cognitive decline [1,2]. Accumulating evidence has attributed this deficit in the cognitive capacity of patients, and the potentially responsible failure in neural circuits, to an increased amount of amyloid β-protein (Aβ), particularly soluble Aβ oligomers rather than fibrils [3]. To examine the mechanisms that underlie the synaptic dysfunction caused by Aβ oligomers, several laboratories have utilized a cellular correlate of learning and memory, long-term potentiation (LTP), and have studied the effectiveness of different forms of soluble Aβ preparations including Aβ-derived diffusible ligands (ADDLs) and AD brain-derived Aβ [4-7]. As Aβ oligomers appear to execute their deleterious activities (i.e., LTP impairment) by binding to their putative receptors such as the p75 neurotrophin receptor, the insulin receptor, and cellular prion protein (PrPC) [4,7-9], Aβ assemblies or their receptors have been targeted to develop effective therapeutic strategies [6,10,11]. Despite enormous efforts, however, the molecular identity and importance of intrinsic extracellular factors for regulating the activities of Aβ oligomers are still poorly understood. In this study, we focused on one class of extracellular vesicles, exosomes, as a potential regulator of Aβ and its effects on synaptic plasticity in vivo. Exosomes are small (30-100 nm diameter) membranous vesicles that are secreted naturally into the extracellular space upon fusion of multivesicular bodies with the plasma membrane [12]. Although exosomes have been proposed to exert multiple physiological roles [13] and are also known to contain machinery to synthesize, degrade and induce aggregation of Aβ [14-16], whether these factors in exosomes increase or decrease the deleterious actions of Aβ is a matter of debate [15-18]. Direct assessment of the effect of exosomes on the activity of synaptotoxic Aβ has been impeded by the difficulty in controlling their levels in vivo. Here, we have manipulated the concentration of exosomes in the brain by infusing exosomes intracerebroventricularly (i.c.v.) and then examined their effect on Aβ-mediated impairment of synaptic plasticity. We find that exosomes neutralize the synaptic-plasticity-disrupting activities of Aβ in vivo, and also show that these effects are primarily the result of the sequestration of Aβ oligomers via exosomal surface proteins such as PrPC.
The potential relevance of our findings to AD is underscored by our observation that exosomes from human cerebrospinal fluid (CSF) prevent the impairment of LTP that is mediated by Aβ derived from AD brain extracts. The apparent size discrepancy for the Aβ species present in our ADDL preparation is likely to result from technical limits of the methods used. Specifically, since SDS-PAGE is highly denaturing, it might not be suitable for determining the native sizes of Aβ assemblies, but it can be used to distinguish SDS-stable forms from labile Aβ species. While AFM can be used to detect oligomeric forms of Aβ, certain assemblies would not adhere to mica well enough and, as a result, were not detected. Nonetheless, our characterization of ADDLs revealed the presence of a heterogeneous mixture of different Aβ species, some of which were at least partially stable in SDS and which existed as small globular structures of 3-6 nm [5,19]. To prepare exosome fractions, we excluded plasma membrane-derived fragments and other non-exosomal vesicles through optimized procedures [20]. In contrast to vesicles originating from the Golgi body, which float at 1.05 to 1.12 g/ml, and endoplasmic reticulum-derived vesicles at 1.18 to 1.25 g/ml, exosomes are the only vesicles measuring 30-100 nm with a gradient density ranging from 1.13 to 1.19 g/ml (Figure 1E) [12,20]. Exosomes are further defined by their expression of marker proteins such as Flotillin-1, Alix or PrPC, which are highly enriched in the exosomal fractions (Figure 1E), and by their ultrastructure and size (Figure 1F, G) [12]. Altogether, we verified that the vesicles we isolated were bona fide exosomes. Figure 1 (E) Exosomes isolated from the conditioned medium of N2a cells had a density of 1.13 g/ml to 1.19 g/ml, and contained the exosomal marker proteins Alix, Flotillin-1 and PrPC; multiple (non-, mono- or di-) glycosylated PrPC proteins were detected between 20 and 35 kDa on SDS-PAGE. (F) By EM, exosomes appeared as closed vesicles of 30-120 nm in diameter (scale bar: 100 nm), (G) a size range that agreed with that measured by DLS. In agreement with prior reports [5], high-frequency stimulation (HFS) failed to trigger robust LTP in anesthetized rats that had received i.c.v. injection of ADDLs (PBS + ADDL, 105 ± 6%, n = 4 vs. PBS + PBS, 166 ± 10%, n = 4 at 3 h post-HFS, P < 0.001, one-way ANOVA with post hoc Tukey; Figure 2A). Somewhat unexpectedly, prior infusion of 4 μg exosomes markedly attenuated the synaptic-plasticity-disrupting action of ADDLs. Indeed, despite the administration of ADDLs, HFS now induced robust LTP that was comparable to control levels and remained stable for more than 3 h (Exo + ADDL, 152 ± 6%, n = 5, P < 0.01 vs. PBS + ADDL; P > 0.4 vs. PBS + PBS, one-way ANOVA with post hoc Tukey; Figure 2A). Of note, the effect of exosomes against ADDL-induced LTP inhibition was largely dependent upon the amount of exosomes, producing a significant effect when 4 μg or more was infused (Figure 2B). Unless otherwise specified, therefore, we used 4 μg exosomes in the subsequent studies. Under these conditions, neither exosomes nor ADDLs significantly affected baseline synaptic transmission (Figure 2C). Exosomes might exert this protective effect by enhancing LTP per se, and/or by functionally counteracting the plasticity-disrupting effect of ADDLs. When we examined the ability of exosomes to convert decremental LTP into stable LTP or to boost control LTP, however, we did not detect any significant difference for weak HFS-induced decremental LTP (PBS, 106 ± 7%, n = 5 vs.
Exo, 117 ± 6%, n = 4, P > 0.3, unpaired t-test; Figure 2D) or standard HFS-induced LTP (PBS, 172 ± 13%, n = 5 vs. Exo, 175 ± 8%, n = 4, P > 0.8, unpaired t-test; Figure 2D). Thus, direct facilitatory effects on the magnitude of LTP are unlikely to account for the capability of exosomes to rapidly abrogate the inhibitory effects of ADDLs. ADDLs are sequestered on the surface of exosomes To address possible mechanisms underlying the protective action of exosomes against ADDL-induced LTP inhibition, we first examined whether exosomes degrade Aβ, which could abrogate the plasticity-disrupting effect. When we incubated ADDLs with exosomes in the same ratio used for the LTP experiments, this resulted in a loss of the Aβ species that migrated at ~4 kDa (monomer) on SDS-PAGE (32 ± 13%, P < 0.01, n = 5, Mann-Whitney U test; Figure 3A). Unlike Aβ monomer, Aβ oligomers were largely unaffected by the incubation with exosomes (~12 kDa Aβ, 96 ± 10%, P > 0.5; ~16 kDa Aβ, 97 ± 5%, P > 0.05, n = 5, Mann-Whitney U test; Figure 3A), indicating that exosomes did not effectively degrade Aβ oligomers, at least over the time course of our experiments. Although the reason for the loss of Aβ monomer is unclear, it could result from the degradation of authentic Aβ monomer by exosomal proteases such as insulin-degrading enzyme (IDE) [15,21]. However, since monomeric Aβ does not inhibit LTP [22] and IDE is not believed to degrade plasticity-disrupting Aβ oligomers [23], such degradation would not be expected to contribute to the rescue of the ADDL-mediated block of LTP. On the other hand, exosomes might decrease the free Aβ oligomers available by shifting free Aβ to exosome-bound Aβ. We examined this possibility by incubating ADDLs with exosomes and then physically separating (by centrifugation) exosomes from unbound Aβ. A major proportion of Aβ oligomers co-migrated with the exosomes, which were readily pelleted by ultracentrifugation, whereas only a small fraction of free ADDLs remained in the supernatant fraction (mean % of pelleted Aβ relative to total Aβ: PBS + ADDL, 7 ± 2% vs. Exo + ADDL, 82 ± 7%, P < 0.05, n = 3, Mann-Whitney U test; Figure 3B). However, this could potentially have resulted from Aβ assemblies that were simply pelleted into the exosome-containing fraction after being aggregated by exosomes [16], rather than being directly bound to exosomes. Therefore, we corroborated the direct binding of Aβ assemblies to exosomes by directly pulling down the exosome-bound Aβ after their in vitro incubation (Additional file 1: Figure S1), which argues against this possibility. To elucidate the possible fate of Aβ oligomers following binding onto exosomes, we developed a partial trypsinization protocol to degrade only proteins on the outside of exosomes (see Methods for detailed information) and applied this method after the incubation of exosomes and ADDLs. If ADDLs were internalized into exosomes, the resultant ADDLs residing in the lumen of exosomes should be resistant to trypsin, which would likely leave more ADDLs remaining after the trypsin treatment. Inconsistent with this notion, the remaining amount of ADDLs did not differ in the absence and presence of exosomes (% of remaining Aβ after trypsinization: PBS + ADDL, 12 ± 4% vs. Exo + ADDL, 17 ± 1%; P = 0.51, n = 3, Mann-Whitney U test; Figure 3C). Therefore, over the time frame we examined, a major proportion of ADDLs remains on the surface of exosomes even after binding, rather than being internalized into exosomes.
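The group comparisons reported throughout these experiments (one-way ANOVA with post hoc Tukey for LTP magnitudes, Mann-Whitney U tests for the binding assays) can be reproduced with standard tools. Below is a minimal Python sketch; the numbers are invented placeholders shaped like "% of baseline fEPSP at 3 h post-HFS", not the study's raw data:

```python
import numpy as np
from scipy.stats import f_oneway, mannwhitneyu
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder LTP magnitudes (% of baseline), one list per group
ltp = {
    "PBS+PBS":  [160.0, 171.0, 158.0, 175.0],
    "PBS+ADDL": [ 99.0, 108.0, 112.0, 101.0],
    "Exo+ADDL": [149.0, 155.0, 146.0, 158.0, 152.0],
}

# One-way ANOVA across the three groups
F, p = f_oneway(*ltp.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4g}")

# Post hoc Tukey HSD for pairwise comparisons
values = np.concatenate(list(ltp.values()))
labels = np.concatenate([[k] * len(v) for k, v in ltp.items()])
print(pairwise_tukeyhsd(values, labels))

# Nonparametric two-group comparison, as used for the binding assays
u, p_u = mannwhitneyu([7, 5, 9], [82, 75, 89])
print(f"Mann-Whitney U: p = {p_u:.3f}")
```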
Collectively, it is reasonable to speculate that the protective effect of exosomes against ADDL-induced LTP impairment arises from the sequestration and immobilization of Aβ oligomers at the surface of exosomes. Exosomal surface proteins including PrPC are required for the protective role of exosomes against Aβ To further investigate the direct interaction of ADDLs with exosomes, we used trypsin under mild conditions (see Methods for details) to assess whether exosomal surface proteins were required for the ADDL-neutralizing activity. The partial trypsinization efficiently removed the exosomal surface proteins while leaving the luminal protein intact (Figure 4A), and without affecting the integrity of the exosomes (Figure 4B, C). Importantly, the trypsinized exosomes were no longer capable of rescuing the ADDL-mediated block of LTP (T- Exo + ADDL, 161 ± 9%, n = 5 vs. T+ Exo + ADDL, 107 ± 6%, n = 4, P < 0.01, one-way ANOVA with post hoc Tukey; Figure 4D), and did not alter either LTP per se (T+ Exo + PBS, 170 ± 7%, n = 4, P > 0.9 compared to PBS + PBS, one-way ANOVA with post hoc Tukey; Figure 4D) or baseline synaptic transmission (Figure 4E). In agreement with the LTP results, trypsinized exosomes bound only a smaller fraction of Aβ oligomers compared to non-trypsinized exosomes (T- Exo + ADDL, 67 ± 3% vs. T+ Exo + ADDL, 24 ± 2%, P < 0.01, n = 5, Mann-Whitney U test; Figure 4F). These data indicate that surface proteins of exosomes are required for the sequestration of synaptotoxic Aβ assemblies, which is consistent with prior reports that binding of Aβ oligomers to neuronal membranes is mediated by trypsin-sensitive molecules [5]. Aβ oligomers bind specifically and with high affinity to PrPC, a cell membrane-bound glycoprotein that is expressed abundantly in the central nervous system [6,7,24]; PrPC is also known to be expressed at high levels on exosomes [25,26]. Thus, we sought to examine whether exosomal PrPC contributes to the sequestration of ADDLs by exosomes. To this end, we prepared exosomes from either Prnp+/+ (wild-type, PrPC WT) or Prnp-/- (PrPC knockout, PrPC KO) hippocampal cell lines [27] (Figure 5A-C). Because we injected exogenous PrPC WT or KO exosomes i.c.v. into wild-type rats throughout the study, acute infusion of these exosomes into the brain changed only the PrPC of the extracellular exosomes, not the neuronal PrPC level of the subject animals. Intriguingly, the effectiveness of PrPC KO exosomes in preventing ADDL-induced LTP inhibition was significantly reduced compared with that of WT exosomes (WT Exo + ADDL, 159 ± 5%, n = 6, P < 0.001 vs. PBS + ADDL, 101 ± 5%, n = 6; P < 0.05 vs. KO Exo + ADDL, 129 ± 3%, n = 5, one-way ANOVA with post hoc Tukey; Figure 5D, E), and the binding of Aβ oligomers to PrPC KO exosomes was also significantly decreased compared to WT exosomes (WT Exo + ADDL, 70 ± 2% vs. KO Exo + ADDL, 45 ± 5%; P < 0.01, n = 5, Mann-Whitney U test; Figure 5F). The finding that knockout of exosomal PrPC reduced ADDL binding and the exosome-induced rescue of LTP to a similar extent suggests that both effects are mediated, at least in part, through PrPC. Both N2a cell- and human CSF-derived exosomes prevent AD brain-derived Aβ from affecting LTP Because it remains unknown whether Aβ assemblies formed in vitro accurately represent the Aβ species found in human brain, we investigated whether exosomes could prevent the disruptive activity of the most disease-relevant form of Aβ, Aβ extracted from AD brain.
Aqueous extracts of AD brain contained Aβ species which migrated on SDS-PAGE as monomers and dimers (Figure 6A) and potently inhibited LTP (Figure 6B, E and F). Consistent with our previous reports [11], this inhibition of LTP was attributable to Aβ and not to any other component of the AD extract, since specific removal of Aβ reversed this effect whereas mock immunodepletion did not (PBS + AD-Aβ+, 110 ± 9%, n = 5 vs. PBS + AD-Aβ-, 176 ± 7%, n = 5, P < 0.001, one-way ANOVA with post hoc Tukey; Figure 6B, F). Importantly, the i.c.v. infusion of N2a cell-derived exosomes fully abrogated the inhibitory effect of Aβ-containing AD brain extracts (Exo + AD-Aβ+, 175 ± 9%, n = 7, P < 0.001 vs. PBS + AD-Aβ+, one-way ANOVA with post hoc Tukey; Figure 6B, C and F), but did not affect LTP induced in the presence of Aβ-immunodepleted AD brain extracts (Exo + AD-Aβ-, 179 ± 10%, n = 5, P > 0.05 vs. PBS + AD-Aβ-, one-way ANOVA with post hoc Tukey; Figure 6B, C and F). Next, we tested whether human brain-derived exosomes could also neutralize plasticity-disrupting forms of Aβ. To do this, we isolated exosomes from the CSF of healthy volunteers (Figure 6D). Figure 6 Both N2a cell- and human CSF-derived exosomes prevent AD brain-derived Aβ from inhibiting in vivo LTP. (A) Immunoprecipitation/Western blot analysis of TBS extracts of AD brain. Compared to synthetic Aβ1-42 loaded as a control, AD-Aβ+ contained 2.2 ng/ml of Aβ monomer and 0.76 ng/ml of Aβ dimer, whereas Aβ was completely removed from AD-Aβ-. AD stands for the starting AD brain extract, TBS for buffer vehicle, and N.S. denotes non-specific bands, presumably arising from the reaction of the IP antibody with the antibodies used for Western blotting. (B) Infusion of AD-Aβ+ disrupted LTP in vivo, but AD-Aβ- did not. Animals were pre-injected with PBS (5 μl, asterisk) followed by a second injection (hash) of either AD-Aβ- or AD-Aβ+ containing ~18 pg Aβ. An arrow indicates HFS application and insets show representative traces at the color-matched time points. Calibration: 1.5 mV and 10 ms. (C) Cell-derived exosomes prevented AD-Aβ+ from disrupting LTP. In animals that received N2a cell-derived exosomes (4 μg in 5 μl, asterisk), AD-Aβ+ (6 μl, hash) no longer inhibited LTP, similar to animals injected with AD-Aβ- (6 μl, hash). Arrow, insets and calibration as in B. (D) The exosomes prepared from human CSF were detected in fractions with a buoyant density around 1.149 g/ml and contained Flotillin-1 and PrPC. (E) Infusion of human CSF-derived exosomes (huExo) abrogated the disruption of LTP by AD-Aβ+. Administration (asterisks, total 10 μl i.c.v., spread over two infusions) of samples of AD-Aβ+ extracts after 30 min pre-incubation with CSF exosomes (total 1 μg) no longer inhibited LTP. Arrow, insets and calibration as in B. (F) A summary histogram of the data in B, C and E with statistical comparisons. Error bars, ± SEM. Statistical significance was expressed as **, P < 0.01; ***, P < 0.001 compared to the respective control. Due to the limited amount of CSF exosomes available, we modified our experimental paradigm to pre-incubating CSF exosomes with AD brain extracts and then injecting the mixture before HFS, as previously used [10]. When CSF exosomes (1 μg) were pre-incubated with AD brain extracts, normal LTP was induced, whereas injection of the same Aβ-containing AD brain extracts without exosomes consistently inhibited LTP (huExo + AD-Aβ+, 163 ± 14%, n = 5, P > 0.9 vs. PBS, 167 ± 7%, n = 4; P < 0.01 vs.
PBS + AD-Aβ+, 112 ± 5%, n = 5, one-way ANOVA with post hoc Tukey; Figure 6E, F). These results demonstrate that CSF-derived exosomes can also protect LTP against the plasticity-disrupting activity of AD brain-derived Aβ. Discussion Although the cellular functions of exosomes in the nervous system are not completely understood, previous studies have provided evidence that exosomes can participate in the paracrine delivery of biologically active and infectious materials such as Aβ, PrPC and α-synuclein [28-31]. It has also been suggested that lipid components or IDE on the surface of exosomes are involved in the regulation of Aβ by promoting fibrillization or proteolysis [15,16,32,33]. Although those reports suggested a potential interaction between Aβ and exosomes, the physiological role of exosomes remains largely unknown, particularly for Aβ-induced synaptotoxicity [15,16,18]. This likely stems from the fact that controlled manipulation of the levels of exosomes in the brain is very difficult, and thus direct assessment of the putative roles that exosomes exert has been hampered. Using an infusion paradigm, we discovered that the addition of exogenous exosomes to the brain can abrogate the synaptic-plasticity-disrupting activities of Aβ, most likely through direct sequestration of Aβ oligomers. Whereas it is generally postulated that synaptic failure in AD is caused by soluble Aβ assemblies, the molecular mechanisms whereby Aβ assemblies are formed and maintained in AD pathogenesis remain unclear [4]. Although we could not fully identify the molecular identity of the synaptotoxic Aβ assemblies, we confirmed that our Aβ preparations, from synthetic Aβ1-42 peptide or AD brain extraction, can effectively inhibit synaptic plasticity in vivo [5,6,11,34], which validated the efficacy of the experimental conditions used. Furthermore, the multiple characterization assays that we used revealed the presence of a heterogeneous mixture of different-sized Aβ species and also a relatively pure exosomal preparation, consistent with previous reports [5,12,19,20]. In addition to the protective effect of exosomes against the synaptotoxic activity of ADDLs, we found that exosomes were able to ameliorate the plasticity-disrupting activity of the most pathophysiologically relevant form of Aβ, Aβ extracted from AD brain. Interestingly, we had to inject at least 45 ng of ADDLs but only 18 pg of Aβ from AD brain to produce potent inhibition of LTP. We and others have previously reported that the dose of synthetic Aβ required to disrupt the memory of learned behavior or to impair LTP is usually several orders of magnitude higher than that of naturally produced Aβ from AD brain or APP-expressing cultured cell lines [11,34,35]. The different potency of the two Aβ preparations used in the present experiments likely reflects the fact that, although they contained similar concentrations of synaptic-plasticity-disrupting Aβ, additional assemblies, presumed to be relatively inactive, are present at higher concentrations in the ADDL preparation than in AD brain extracts. The protective effect of exosomes against the synaptic-plasticity-disrupting activity of Aβ raises the question of the underlying mechanisms. As Aβ itself is a sticky protein and exosomes contain a variety of proteins and lipid components, there are several possibilities, including non-specific proteolysis or sequestration, such that a number of proteins, lipids, and membranous vesicles could affect Aβ-mediated LTP inhibition.
Through biochemical experiments and in vivo electrophysiology, however, we demonstrate that proteolysis of Aβ is unlikely to account for the protective effect; rather, exosomes sequester Aβ oligomers in a manner analogous to the binding of neutralizing antibodies [10,36]. Still, we cannot directly interpret the effect of the exosome-induced decrease in Aβ monomer in vivo, since the effect of monomeric Aβ on synaptic plasticity has been examined only in hippocampal slices, and the different protocols used to induce LTP could also confound Aβ-induced synaptic alterations [22,37]. In brain slices and primary cultured neurons, Aβ monomer showed protective effects on LTP and neuronal survival [38,39]. Therefore, the possible outcomes of a chronically decreased Aβ monomer level should be studied further. Moreover, the sequestration of Aβ depends upon surface proteins of exosomes such as PrPC, which supports the specific involvement of exosomal surface proteins in capturing or immobilizing soluble Aβ oligomers at exosomes. In particular, the ineffectiveness of trypsinized exosomes in neutralizing the synaptotoxicity of Aβ argues against the possibility that exosomal lipids underlie the exosomes' protective effect. Because we examined only the effect of exosomes in this study, the effect on Aβ-mediated synaptotoxicity of other small membranous vesicles, derived from the plasma membrane or other intracellular origins, should be verified further. The role of PrPC as a putative receptor for Aβ oligomers, and its involvement in Aβ-mediated impairment of LTP, has been intensely debated [7,40-42], but there is no controversy regarding the ability of PrPC to bind Aβ. Multiple independent studies concluded that PrPC binds Aβ oligomers specifically and with high affinity [6,7,40,43-45]. In this study, we found that exosomes from PrPC-deficient cells are significantly less able to protect against Aβ than PrPC-containing exosomes. This PrPC-dependent effect of exosomes is likely due to the high-affinity binding of Aβ to PrPC. Therefore, exosomes derived from cell lines expressing PrPC mutated at the binding site for Aβ (residues 95-105 of PrPC) could be used to further verify the function of PrPC in detail [7,45]. In this study, we could use only exosomes from immortalized cell lines of WT or PrPC-depleted neurons, owing to the difficulty of collecting large amounts of exosomes from primary cultured neurons or from the CSF of genetically modified mice. Notably, the fact that ablation of PrPC did not completely obviate the protective effects of exosomes suggests that exosomal proteins other than PrPC might also contribute to the sequestration of toxic Aβ oligomers, consistent with the previous observation that Aβ binding to PrPC-deficient neurons was only partially reduced [7]. To elucidate the full repertoire of candidate exosomal proteins involved in the interaction with Aβ, and to further understand their molecular mechanisms, further screening and functional studies will be necessary. It might be informative to examine whether exosomes in culture medium play a protective role against the toxic effect of Aβ on primary cultured neurons. However, the Aβ-induced deficit in synaptic plasticity normally occurs well before the manifest loss of neurons in AD models [2,4,46]. To establish whether the effects of Aβ and exosomes on synaptic plasticity are reflected at the level of cognition, behavioral tests determining their effect on cognitive function will be required.
In this study, we provide evidence for the neutralizing action of exosomes against Aβ-induced LTP impairment using both N2a cell-derived exosomes and human CSF-derived exosomes. These observations raise an important question: do endogenous exosomes normally prevent Aβ-mediated impairment of synaptic plasticity? Demonstrating effects of endogenous exosomes on AD pathogenesis or on Aβ-induced alterations of synaptic plasticity is very challenging, however, because it is difficult to modify the nature and quantity of exosomes in the brain without side effects. For example, if we activated the recycling of endosomes to increase the release of exosomes, the manipulation might also affect the production and release of Aβ [47]. The amount of exosomes obtainable from human CSF or from brain interstitial fluid has been measured in only a few studies, owing to its scarcity [48]. Although we too were unable to quantify exosome content systematically because of the limited supply of fresh CSF, we did obtain approximately 8 μg of exosomes from 10 ml of human CSF after purification steps, including density gradient fractionation, that normally involve considerable loss of exosomes (up to 60% of the starting amount; see Tauro et al. [49]). Accordingly, we estimated that 1 ml of human CSF contains ~2 μg of endogenous exosomes, consistent with a previous study [48]. Therefore, we surmise that our i.c.v. infusion of 4 μg of exosomes would yield roughly four times the concentration of endogenous exosomes present in rat CSF, assuming that rat CSF (~500 μl total volume; see Lai et al. [50]) has an exosome content similar to that of human CSF (this estimate is sketched at the end of this section). Importantly, 1 μg of CSF-derived exosomes exhibited a significant protective effect when co-injected with Aβ-containing AD brain extracts (Figure 6). Taken together, we speculate that exosomes may protect synaptic plasticity against amyloidogenic insults in situ, particularly over an extended time window. Considering studies indicating that exosome release increases 2.5- to 4-fold under hypoxic conditions in vitro [15,51], it is plausible that this process also occurs under pathological conditions. Eventually, exosome-bound Aβ might be taken up by microglia for degradation under normal conditions [16], or might seed plaque formation under pathological conditions [52]. The efficiency of this process may be critical in determining the onset and progression of AD, given the causal contribution of synaptic failure to the disease and to cognitive decline [2]. Since both Aβ and exosomes are released in the brain in an activity-dependent manner [53,54], the dynamic change of exosome concentration in the brain, especially in AD patients, is a subject that we feel should be explored further.
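As a quick check of the dosing logic above, the fold-excess over endogenous exosomes can be computed directly. This is a minimal sketch of the back-of-the-envelope estimate described in the text; the figures (2 μg/ml endogenous exosomes, 4 μg injected, ~500 μl rat CSF) are taken from the passage above, and the simplifying assumption that the injected dose distributes through the whole CSF volume is ours.

```python
# Back-of-the-envelope estimate of injected vs. endogenous exosome levels,
# using the figures quoted in the text (assumptions, not new measurements).

endogenous_conc_ug_per_ml = 2.0   # ~2 ug exosomes per ml human CSF (assumed similar in rat)
rat_csf_volume_ml = 0.5           # ~500 ul total rat CSF (Lai et al.)
injected_dose_ug = 4.0            # i.c.v. infusion used in this study

endogenous_total_ug = endogenous_conc_ug_per_ml * rat_csf_volume_ml  # ~1 ug
fold_over_endogenous = injected_dose_ug / endogenous_total_ug        # ~4x

print(f"Endogenous exosomes in rat CSF: ~{endogenous_total_ug:.1f} ug")
print(f"Injected dose corresponds to ~{fold_over_endogenous:.0f}x endogenous levels")
```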
Conclusions

Collectively, exosomes are able to sequester synaptotoxic Aβ oligomers via surface proteins such as PrPC and thereby rescue LTP from Aβ-mediated impairment in vivo. Importantly, our findings based on exosomes isolated from human CSF and on Aβ from AD brain strongly indicate that the pathophysiologically relevant forms of Aβ in the brain can be sequestered by exosomes. Although we were unable to quantitatively measure the change in total exosome concentration in the brain after exogenous application, we were at least able to use human CSF samples to provide a reasonable and predictive window on exosome levels in the brain, and thus to assess the utility of this measure as a biomarker for AD. Similarly, once the levels of endogenous exosomes can be manipulated more precisely, we will be in a better position to ascertain their pathophysiological contribution to AD, and perhaps to supply exogenous exosomes or artificially engineered lipid vesicles for therapeutic benefit.

Animals

Male Wistar rats (250-350 g) were used for in vivo recording experiments. They were housed under a 12-hour light/dark cycle and given ad libitum access to food and water. The rats were anesthetized with urethane (ethyl carbamate, 1.5 g/kg, i.p.). Body temperature was maintained at 37.4-38°C for the duration of the experiments. All procedures for animal experiments were approved by the ethical review committee of Trinity College Dublin and the Department of Health and Children, Ireland, and by POSTECH (Pohang University of Science & Technology), Korea, and were performed in accordance with the relevant guidelines.

ADDL preparation

Aβ1-42 (American Peptide) was dissolved in 1,1,1,3,3,3-hexafluoro-2-propanol (Sigma) to a concentration of 1 mM. The solution was allowed to evaporate for 2 h and then dried in a SpeedVac. The resulting peptide film was stored at -20°C or immediately resuspended in dimethyl sulfoxide (Sigma) to produce a 1 mM solution. This solution was sonicated for 10 min in a sonic bath, then diluted to 100 μM in phenol red-free Ham's F12 medium (Life Technologies) and incubated for 12 h at 4°C. The resulting solution was then spun at 100,000 g for 1 h and either used immediately or stored at -80°C for up to 2 weeks. Monomeric Aβ1-42 was prepared by dissolving the peptide film to 100 μM in 10 mM NaOH solution (pH 11).

Isolation of exosomes

N2a cells were grown in exosome-depleted medium comprising 44.5% DMEM and 44.5% Opti-MEM with 10% FBS and 1% penicillin/streptomycin in a humidified 5% CO2/95% O2 incubator at 37°C. PrPC WT or KO cells (HW8-1 and Hpl3-4, respectively), established from primary cultured hippocampal neurons of Prnp+/+ and Prnp-/- mice [27], were grown in exosome-depleted medium composed of 89% DMEM, 10% FBS and 1% penicillin/streptomycin. Exosomes were prepared as previously described [20] with minor modifications. In brief, exosome-enriched medium was fractionated by centrifugation (200,000 g × 2 h) on a 5-30% OptiPrep gradient (Axis-Shield) in a SW-41 rotor (Beckman Coulter). 1 ml from each fraction was collected, diluted 1:10 with pre-cooled phosphate-buffered saline (PBS), and pelleted by centrifugation for 1 h at 100,000 g. A portion of the resultant pellets was boiled in 2× sample buffer and used for Western blotting. Fractions enriched in exosomes were used for subsequent studies. The amount of exosomes used was expressed in terms of total protein, determined using the Pierce BCA assay kit (Thermo Scientific). All procedures for the collection and use of human CSF were approved by the Mater Misericordiae University Hospital Research Ethics Committee, Ireland. CSF was obtained from a 61-year-old female donor and a 71-year-old male donor, both of whom were healthy and cognitively normal. A total of 10 ml of CSF was taken by lumbar puncture from the L3/L4 interspace and kept on ice. CSF was used to isolate exosomes within 2 h of collection, using the procedure described above.

Atomic force microscopy (AFM)

10 μl of 10 μM ADDLs in PBS was incubated on freshly cleaved mica for 1 min. The mica was washed twice with 100 μl of deionized water and dried under a gentle stream of N2 gas.
Tapping-mode AFM imaging was performed in air using a Multimode/Nanoscope IIIa (Digital Instruments) equipped with a J-scanner. Images were taken with a TESP cantilever (Veeco) at a sample rate of 0.85 Hz. Section analysis (Nanoscope V) was employed to measure the z-height of distinct globules (>50), and the z-height was used as a representative value for the size of Aβ oligomers [19].

Electron microscopy (EM)

5 μl drops of exosomes (50 μg/ml) were loaded onto carbon-coated 200 μm copper grids and incubated for 1 min. The samples were then stained with 2% uranyl acetate for 2 min; excess solution was carefully removed, and the grid was left to air-dry. Images were captured using an electron microscope (JEOL) operated at 100 kV.

Dynamic light scattering (DLS) spectroscopy

The sizes of exosomes (10 μg in 100 μl) or ADDLs (10 μM in 100 μl) were measured by DLS performed with a Zetasizer Nano series instrument (Malvern Nano-Zetasizer). Mean particle sizes were obtained from more than 3 independent preparations.

In vivo electrophysiology and i.c.v. infusion

Electrodes were made and implanted in anaesthetized animals as described previously [6]. Briefly, twisted-wire bipolar electrodes were constructed from Teflon-coated tungsten wires (62.5 μm inner core diameter, 75 μm external diameter, A-M Systems). Field excitatory postsynaptic potentials (fEPSPs) were recorded from the stratum radiatum of the CA1 area of the right dorsal hippocampus in response to stimulation of the ipsilateral Schaffer collateral-commissural pathway. Electrode implantation sites were identified using stereotaxic coordinates relative to bregma, with the recording site located 3.4 mm posterior to bregma and 2.5 mm right of midline, and the stimulating electrode located 4.2 mm posterior to bregma and 3.8 mm right of midline. The optimal depth of the electrodes was determined using electrophysiological criteria and verified post-mortem. Test fEPSPs were evoked at a frequency of 0.033 Hz at stimulation intensities adjusted to elicit fEPSP amplitudes of 40-50% of maximum. The high-frequency stimulation (HFS) protocol for inducing LTP consisted of 10 bursts of 20 stimuli with an inter-stimulus interval of 5 ms (200 Hz) and an inter-burst interval of 2 s. The intensity was increased so as to give 75% of the maximum fEPSP amplitude during the HFS. The weak HFS consisted of 10 bursts of 10 stimuli with an inter-stimulus interval of 10 ms (100 Hz) and an inter-burst interval of 2 s. The initial slopes of fEPSPs were measured, and the average of ten sweeps was plotted. Unless otherwise specified, fEPSP slopes (% baseline) indicate the mean slopes between 170-180 min after HFS in each condition. To infuse samples, a stainless-steel guide cannula (22 gauge, 0.7 mm outer diameter, 13 mm length) was implanted above the right lateral ventricle (1 mm lateral to the midline and 4 mm below the surface of the dura) just prior to electrode implantation. The placement of the cannula was verified post-mortem by i.c.v. infusion of Indian Blue ink dye.

Binding assays between ADDLs and exosomes

ADDLs were centrifuged for 1 h at 100,000 g prior to incubation with exosomes. The supernatant contained more than 95% of the starting peptide. 1 μg of this ADDL supernatant was then added to identical volumes of purified trypsinized or mock-trypsinized exosomes (160 μg) and incubated at 37°C for 30 min in 10 ml PBS. Thereafter, exosomes were separated from unbound Aβ by centrifugation for 1 h at 100,000 g.
The exosome pellet was dissolved in 2× sample buffer, and 25% of the mixture was used for Western blotting of exosome-bound Aβ. 25% of the supernatant resulting from the 100,000 g centrifugation was collected and used for immunoprecipitation with 6E10 followed by Western blotting of exosome-unbound Aβ. The mean percentage of ~12 and ~16 kDa Aβ bound to exosomes (P) relative to total ~12 and ~16 kDa Aβ (P + S) is presented in the bar graphs.

Limited trypsinization of exosomal surface proteins

Exosomes (0.5 mg/ml) were incubated with trypsin (1 mg/ml, Sigma) for 30 min at 37°C, and the reaction was stopped by addition of the serine protease inhibitor Pefabloc SC™ (4 mg/ml, Sigma). After this treatment, exosomes were re-isolated by density gradient centrifugation (as described under exosome isolation). The effect of trypsin on surface and luminal proteins was verified by probing trypsinized and mock-trypsinized exosomes with antibodies against PrPC and CD81 (exosomal surface proteins) or Alix (a luminal protein).

Immunoprecipitation of exosomes

Exosomes were incubated with anti-Flotillin-1 antibody (8 μl) and pre-washed Protein A/G agarose beads (Calbiochem) at 4°C for 6 h. The resulting precipitates were washed with PBS, and 25% of each sample was used for Western blotting.

AD brain extracts

Human tissue was obtained and used in accordance with local IRB guidelines. A sample of temporal cortex from a 92-year-old woman with a history of dementia and confirmed AD pathology was used to prepare water-soluble extracts, and the extracts were examined for the presence of Aβ as described previously [6]. Briefly, a ~2 g cube of frozen tissue was thawed on ice; gray matter was isolated, chopped into small pieces with a razor blade, and then homogenized in 5 volumes of ice-cold 20 mM Tris-HCl, pH 7.4, containing 150 mM NaCl (TBS), with 25 strokes of a Dounce homogenizer (Fisher). The water-soluble fraction was separated from the insoluble fraction by centrifugation at 91,000 g and 4°C in a TLA 55 rotor (Beckman Coulter) for 78 min, and the supernatant was used for subsequent studies. To eliminate low-molecular-weight bioactive molecules and drugs, the supernatant was exchanged into sterile 50 mM ammonium acetate, pH 8.5, using a 5 ml Hi-Trap desalting column (GE Healthcare Bio-Sciences). Thereafter the extracts were divided into two parts: one aliquot was immunodepleted of Aβ (AD-Aβ-) by 3 rounds of 12 h incubation with the anti-Aβ antibody AW7 [55] and protein A at 4°C. The second portion was treated identically, but pre-immune serum was used instead of AW7 anti-Aβ antiserum, producing a "mock"-immunodepleted sample (AD-Aβ+). The amount and form of Aβ were analyzed in duplicate 0.3 ml samples by immunoprecipitation with AW7 at a dilution of 1:80 and by Western blotting using a combination of the C-terminal monoclonal antibodies 2G3 and 21F12 (each at a concentration of 1 μg/ml). Detection was achieved using a fluorochrome-coupled anti-mouse secondary antibody.
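As a supplement to the in vivo electrophysiology methods above, the sketch below generates pulse timestamps for the strong HFS protocol (10 bursts of 20 stimuli at 200 Hz) and the weak HFS (10 bursts of 10 stimuli at 100 Hz). This is an illustrative reconstruction from the parameters quoted in the Methods, not the authors' acquisition code; in particular, the 2 s inter-burst interval is interpreted here as the burst-onset-to-onset period.

```python
def hfs_pulse_times(n_bursts: int, pulses_per_burst: int,
                    inter_stim_s: float, inter_burst_s: float) -> list[float]:
    """Return pulse onset times (s) for a burst-patterned HFS protocol.

    inter_burst_s is taken as the burst-onset-to-onset period (an assumption).
    """
    times = []
    for burst in range(n_bursts):
        burst_start = burst * inter_burst_s
        for pulse in range(pulses_per_burst):
            times.append(burst_start + pulse * inter_stim_s)
    return times

# Strong HFS: 10 bursts x 20 stimuli, 5 ms inter-stimulus interval (200 Hz), 2 s between bursts.
strong = hfs_pulse_times(10, 20, 0.005, 2.0)
# Weak HFS: 10 bursts x 10 stimuli, 10 ms inter-stimulus interval (100 Hz), 2 s between bursts.
weak = hfs_pulse_times(10, 10, 0.010, 2.0)

print(len(strong), "pulses; last pulse at", strong[-1], "s")  # 200 pulses; 18.095 s
```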
KRASG12C mutant lung adenocarcinoma: unique biology, novel therapies and new challenges

KRAS-mutant lung cancer is the most prevalent molecular subclass of lung adenocarcinoma (LUAD). It is a heterogeneous group in which the mutation type affects not only the function of the oncogene but also the biological behavior of the cancer. Furthermore, KRAS mutation affects radiation sensitivity and also confers resistance to bevacizumab and bisphosphonates. A major advance was the development of allele-specific irreversible inhibitors of the smoking-associated G12C mutant KRAS (sotorasib and adagrasib). Based on trial data, both sotorasib and adagrasib obtained conditional FDA approval for the treatment of previously treated advanced LUAD. As with other targeted therapies, clinical administration of the KRASG12C inhibitors sotorasib and adagrasib resulted in acquired resistance due to various genetic changes, not only in KRAS but in other oncogenes as well. Recent clinical studies aim to increase the efficacy of G12C inhibitors through novel combination strategies.

Introduction

The most frequent histological type of lung cancer is adenocarcinoma (LUAD), comprising half of all cases and the vast majority of non-small cell lung cancers (NSCLC). The molecular classification of the adenocarcinoma subgroup is well established: among non-Asian patients, the most frequent genetic alteration is KRAS mutation (~1/3 of cases) followed by EGFR (5%-15%), whereas in Asian patients EGFR mutation is the most frequent, followed by KRAS mutation [1,2]. Other relatively frequent mutations affect BRAF and MET, followed in incidence by the so-called translocation cancers involving ALK/ROS1 and, less frequently, RET or NTRK. At similar incidence levels, MET and HER2 amplifications also occur in this histological type [3]. It is of note that HRR mutations are also relatively frequent, though less appreciated [4] (Figure 1). In the past decade, targeted therapy changed the treatment of lung adenocarcinoma, leaving KRAS-mutant lung cancer in an orphan status; this has changed significantly in recent years [5].

Molecular epidemiology of KRASG12C mutant lung cancer

KRAS-mutant lung cancer has three variants: type 1 is characterized by mucinous histology with TTF1 expression; type 2 is characterized by high TMB and PDL1 expression; and the type 3 group carries KEAP1 mutation [6]. Other studies performed subclassification based on gene expression signatures and defined p16-mutant, p53-mutant and STK11-mutant forms, all having different expression profiles [7].

KRAS mutation in lung cancer has three predominant forms: the most frequent is G12C (~40%), followed by G12D and G12V (~20% each) [1,2,8]. It is widely accepted that KRAS mutation in lung cancer is smoking-associated, but this is proven only for G12C, while G12D and G12V are associated with chromosomal instability and/or mismatch repair deficiency [9]. There is a clear association between smoking and the allelic variants of mutant KRAS: among recent smokers by far the most frequent is the G12C mutation, while among non-smokers G12V is predominant (Figure 2). The presence of the G12C mutation among non-smokers (~10%) indicates the effect of passive smoking [10].
Various KRAS mutants differ in their biochemical and signaling functions: in the G12C mutant, the mitogenic RAS-RAF-MEK pathway is the most active, while in others AKT signaling seems to be equally active, most probably due to changes in the RAF affinity of the protein (Table 1, ref [11]). Furthermore, individual mutants show differential alterations in GTPase activity and in sensitivity toward GAP proteins, and the GDP/GTP exchange potential also appears to differ among variants. There are other data supporting different lung carcinogenesis mechanisms behind the mutant KRAS variants: G12C mutation is associated with EGFR4 mutation, G12D-mutant tumors tend to carry PDGFRA mutation, and G12V-mutant tumors tend to carry PTEN mutation [1,7]. Allelic imbalance of the KRAS gene may also affect its function. In KRAS-mutant lung adenocarcinoma, heterozygous loss of the wild-type allele is very frequent (~75%), leaving the mutant allele as the only functioning KRAS (a kind of homozygosity), whereas copy gains of the mutant allele are much less frequent [12]. Other analyses defined the oncogenic driver roles of the various KRAS mutant forms and found that G12C is a true major driver oncogene in lung cancer, unlike G12D/V, which are only "mini-drivers" cooperating with other mutant oncogenes [13].

Biology and therapeutic sensitivity of KRASG12C mutant lung cancer

Analysis of a large KRAS-mutant LUAD database indicated that this type of lung cancer has an increased propensity to metastasize to the lung but a decreased propensity to metastasize to the liver and to invade the pleural surface [14]. Furthermore, it was shown that in the case of bone metastases, KRAS-mutant status is an independent negative prognostic factor [14]. As far as chemotherapeutic sensitivity is concerned, tumors containing most of the KRAS-mutant variants are equally sensitive to platinum-based therapies, except the G12V mutant, which seems to be more sensitive to this chemotherapy than the others [10]. (Figure 2: Connection between smoking history and KRAS mutant types in lung adenocarcinoma [10].) Another retrospective analysis tested the efficacy of bevacizumab in combination with chemotherapy and demonstrated that it is more efficient in KRAS wild-type tumors, which was attributable to the resistance of the G12D mutant form [15].

Analysis of the treatment outcomes of bone-metastatic lung carcinoma patients indicated that KRAS-mutant tumors seem to be resistant to radiation therapy and to bisphosphonates [16], as predicted by preclinical models [17]. A recent analysis of G12D-mutant lung cancers demonstrated that the density of CD8+ T cells, the TMB and the tumor cell expression level of PDL1 are lower compared with other KRAS mutants, including G12C [18]. More importantly, the efficacy of immune checkpoint inhibitors turned out to be poorer in G12D-mutant lung cancers.
Novel drugs to target mutant KRAS

The race for the G12C mutant KRAS inhibitor

Although mutant KRAS was long considered undruggable, the development of mutant KRAS inhibitors ultimately succeeded [19], beginning with the KRASG12C inhibitors. The challenge here was that, in contrast to the various oncogenic tyrosine kinases, where increased kinase activity is the target, in the case of a GTPase the lost function is the alteration, so a direct enzyme inhibitor is not an option. On the other hand, since wild-type KRAS is a critical signaling component of most normal cells, the inhibitor must be highly selective for the mutant isoform. As a result, a new class of inhibitors was designed: the allele-specific (i.e., mutation-specific) irreversible inhibitors. The idea is that, since KRAS is active in the GTP-bound state, the novel drugs accumulate it in the off state, which is GDP-bound KRAS (Figure 3).

The first-in-class KRASG12C inhibitor was published in 2013 [20], and a drug was approved for lung cancer in 2021 [20], a very rapid development process. The race was won by Amgen with a novel drug that is not only allele-specific (G12C) but also binds a novel pocket (c95-99) critical for GTP binding (AMG-510, sotorasib; ref [21]). Preclinical data indicated that this novel inhibitor not only blocks mitogenic signaling (RAS-RAF-MEK) but is also synergistic with platinum-based chemotherapy, with MEK inhibitors, and with immune checkpoint inhibitors [21]. Second in the race was Mirati, with a chemically distinct but functionally similar compound, MRTX849/adagrasib, which is characterized by very good pharmacological properties and very good penetration of the blood-brain barrier, forecasting its use against brain metastases [22]. It is of note that the circulating half-life of AMG-510 is 5 h, compared with adagrasib's 23 h. Meanwhile, several other G12C inhibitors have been developed [23]; some have even reached clinical testing, but only GDC-6036 has exhibited early clinical efficacy [24].

Other mutant KRAS inhibitors

Development in this field continued with the G12D inhibitors; G12D is far more frequent in other cancers but much less so in lung adenocarcinoma. Unfortunately, irreversible G12D inhibitors are nonexistent, but a G12D-selective inhibitor was developed: MRTX1133, which locks the KRAS protein in the GTP-bound state and is in clinical development right now [25]. Furthermore, there are other novel inhibitors such as KRAS12D1-3 and RAS(ON)G12D [26].

Pan-RAS inhibitors

Another direction is the development of so-called pan-RAS inhibitors. BI-2852 induces homodimers of KRAS and turned out to be a KRASG12D-selective inhibitor [27]. A true pan-KRAS inhibitor that has even reached successful clinical testing is RMC-6236, a powerful RAS(ON) inhibitor which showed activity against G12V and other rare mutant forms [28].

Indirect RAS inhibitors

One of the main GTP-exchange proteins of RAS is SOS1, and it serves as a drug development target: several new molecules exist, and some of them have entered the clinic [29]. It will be interesting to see their side effect profile, since these inhibitors are equally effective against all RAS isoforms and all variants, wild type or mutant. RAS proteins are phosphorylated at Y32 of exon 2 by SRC, with the SHP2 phosphatase acting at this site. There are several SHP2 blockers in development, and some of them have entered the clinical phase [30].
Novel treatment options for KRASG12C mutant lung adenocarcinoma

In the past nearly 20 years, treatments targeting EGFR and ALK have become part of everyday patient care; at the same time, targeted therapy against the driver mutation present in the largest proportion of patients, the KRAS mutation, has only become a realistic possibility in recent years. We currently have the most experience with two KRAS inhibitors, sotorasib and adagrasib. The phase II trial for sotorasib was CodeBreaK100, while that for adagrasib was the Krystal-1 clinical trial [31,32].

The CodeBreaK100 trial investigated the activity of once-daily oral sotorasib 960 mg in patients with KRASG12C mutation-positive advanced NSCLC previously treated with platinum-based chemotherapy [31]. The primary endpoint was objective response (complete or partial) based on independent central review. Key secondary endpoints included duration of response, disease control (complete response, partial response, or stable disease), progression-free survival, overall survival, and patient safety. The predictive value of several biomarkers was also analyzed. Among the 126 enrolled patients, the majority (81.0%) had previously received both platinum-based chemotherapy and PD-1 or PD-L1 inhibitors. According to the central review, 124 patients had measurable disease at baseline and could be evaluated for therapeutic response. An objective response was observed in 46 patients [37.1%; 95% confidence interval (CI), 28.6-46.2], including 4 (3.2%) complete responses and 42 (33.9%) partial responses. The median duration of response was 11.1 months (95% CI, 6.9-not evaluable). Disease control occurred in 100 patients (80.6%; 95% CI, 72.6-87.2). Median progression-free survival was 6.8 months (95% CI, 5.1-8.2), and median overall survival was 12.5 months (95% CI, 10.0-not evaluable). Treatment-related adverse events occurred in 88 of 126 patients (69.8%), including grade 3 events in 25 patients (19.8%) and a grade 4 event in 1 patient (0.8%). Therapeutic responses were also analyzed in subgroups defined by PD-L1 expression, tumor mutational burden (TMB), and concurrent STK11, KEAP1, or TP53 mutations. Based on all of this, in this phase II study, sotorasib therapy showed clinical benefit in patients with previously treated KRASG12C-mutated NSCLC without new safety signals [31].
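As an aside on how the response-rate intervals quoted above arise: the 37.1% ORR corresponds to 46 responders among 124 evaluable patients, and the quoted 95% CI (28.6-46.2) is consistent with an exact (Clopper-Pearson) binomial interval. The sketch below reproduces it approximately; this is our illustration of the standard method, not the trial's actual statistical code.

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact (Clopper-Pearson) two-sided binomial confidence interval."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# CodeBreaK100: 46 objective responses among 124 evaluable patients.
lo, hi = clopper_pearson(46, 124)
print(f"ORR = {46/124:.1%}, 95% CI = ({lo:.1%}, {hi:.1%})")  # ~ (28.6%, 46.2%)
```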
The Krystal-1 study evaluated adagrasib (600 mg orally twice daily) in patients with KRASG12C-mutated NSCLC who had received prior platinum-based chemotherapy and anti-PD-1 or anti-PD-L1 immunotherapy [32]. The primary endpoint was the objective response rate (ORR), assessed by independent central review. Secondary endpoints included duration of response, progression-free survival, overall survival, and patient safety. A total of 116 patients with KRASG12C mutation-positive NSCLC had been treated as of October 15, 2021 (mean follow-up: 12.9 months); 98.3% had previously received both chemotherapy and immunotherapy. Of the 112 patients with measurable disease at baseline, 48 (42.9%) had a confirmed objective response, with a median duration of 8.5 months [95% confidence interval (CI), 6.2-13.8], and median progression-free survival was 6.5 months (95% CI, 4.7-8.4). As of January 15, 2022 (median follow-up, 15.6 months), median overall survival was 12.6 months (95% CI, 9.2-19.2). In 33 patients with previously treated, stable CNS metastases, the intracranial objective response rate was 33.3% (95% CI, 18.0-51.8). Treatment-related adverse events occurred in 97.4% of patients: grade 1 or 2 in 52.6% and grade 3 or higher in 44.8% (including two grade 5 events); medication had to be suspended in 6.9% of patients. Overall, in previously treated patients with KRASG12C-mutated NSCLC, adagrasib demonstrated clinical efficacy with no new safety alerts [32].

Below we review the differences between the two agents that can be discerned from the results of these trials (Table 2). In the phase II trials, the ORR was higher with adagrasib (43%) than with sotorasib (37%), and the rate of progressive disease (PD) was lower with adagrasib (16% for sotorasib vs. 5% for adagrasib), as shown in Table 2. However, in the absence of a head-to-head comparison, such cross-trial comparisons should be evaluated with caution [33]. Median PFS was similar between the two drugs (sotorasib, 6.6 months; adagrasib, 6.5 months). Drug-related adverse events were more common with adagrasib than with sotorasib, and, as a result, treatment interruption or dose reduction was more common with adagrasib (sotorasib, 22%; adagrasib, 52%). The confirmatory phase III trial for sotorasib was CodeBreaK200 [34], while for adagrasib it was the Krystal-12 study.

In the Krystal-12 trial, docetaxel was also the comparator agent and the inclusion criteria were the same as in the CodeBreaK200 study; however, the randomization ratio was 2:1 in favor of adagrasib. Patients received 600 mg of adagrasib twice daily or 75 mg/m2 of body surface area of docetaxel every 3 weeks. Adagrasib produced an ORR of 42.9% and a PFS of 6.5 months. Both drugs showed their already known side-effect profiles; the most common toxicities were diarrhea, musculoskeletal pain, fatigue and hepatotoxicity [35].

In NSCLC, approximately 30%-40% of patients develop brain metastases during the course of the disease. In 2022, the brain-metastasis-specific activity of adagrasib was reported by Sabari et al. [36].
In a retrospective analysis, 374 NSCLC patients with KRAS mutations (149 with G12C and 225 with non-G12C mutations) were analyzed for brain metastases. Overall, 40% of patients with KRASG12C or non-G12C mutations developed brain metastases during the follow-up period, and 77% of patients had synchronous brain metastases detected within 3 months of the initial diagnosis. Brain metastasis occurred less frequently in NSCLC patients with KRAS mutations than in NSCLC patients with other oncogenic driver mutations [30]. In a retrospective review of 579 patients with metastatic NSCLC, the incidence of brain metastasis was highest in NSCLC patients with ROS1 (36%) and ALK (34%) mutations/fusions, followed by EGFR (28%) and KRAS (28%). In NSCLC without a driver oncogene, brain metastasis occurred in only 21% of patients [37]. The response of brain metastases to radiation therapy may vary depending on the driver oncogene. In an analysis by Arrieta et al., the response rate to radiotherapy was higher in NSCLC patients with EGFR (64.5%) or ALK (54.5%) alterations than in those without driver mutations (35%). However, in NSCLC patients with KRAS mutations this rate is only 20%, which further emphasizes the need for effective treatments in this group [38].

Only limited data are available on the CNS activity of sotorasib in metastatic NSCLC. Although patients with active, untreated brain metastases were excluded from the CodeBreaK100 study, 2 of 16 patients with stable brain metastases had a complete response to therapy and 12 achieved stable disease with sotorasib, corresponding to intracranial disease control in 88% of these patients [39]. In addition, several case studies have been published of patients with brain metastases in whom radiological regression was confirmed and symptoms resolved with sotorasib treatment [40,41]. Yeh et al. reported a patient with NSCLC harboring a KRASG12C mutation, with symptomatic leptomeningeal involvement and multiple brain metastases, treated with sotorasib monotherapy [41]. The patient showed clinical improvement 2 weeks after the start of sotorasib treatment, and brain MRI showed clear radiological improvement in several metastatic foci and in the meningeal involvement. In this case, sotorasib was effective against untreated, symptomatic metastases. However, severe hepatotoxicity necessitated discontinuation of sotorasib, leading to disease progression. Therefore, although sotorasib is also effective against metastases affecting the central nervous system, further prospective studies are needed.

Negrao et al. studied the intracranial efficacy of adagrasib in KRASG12C-mutated NSCLC patients with untreated CNS metastases enrolled in the KRYSTAL-1 study [42]. 25 patients were enrolled and evaluated (mean follow-up, 13.7 months), and 19 patients had radiologically evaluable intracranial disease. Safety was consistent with previous reports for adagrasib: treatment-related grade 3 adverse events occurred in 10 patients (40%), a grade 4 event in 1 patient (4%), and there were no grade 5 adverse events. The most common CNS-specific adverse reactions were dysgeusia (24%) and dizziness (20%). Adagrasib showed an intracranial ORR of 42% and a DCR of 90%, as well as a PFS of 5.4 months and an OS of 11.4 months, which is promising for the treatment of patients with untreated CNS metastases.
The clinical trial results of the KRAS inhibitors sotorasib and adagrasib are promising; however, they are currently inferior to EGFR inhibitors and ALK inhibitors in terms of both therapeutic duration (PFS, OS) and side-effect profile. Further extensive studies, mainly targeting predictive markers and resistance mechanisms, are necessary in order to treat this large group of patients durably and effectively while preserving a good quality of life.

Primary and acquired resistance mechanisms

Primary resistance

There are characteristic co-occurring mutations in KRAS-mutant lung cancer, such as STK11 and KEAP1. STK11 mutation was shown to be associated with resistance to immunotherapy [43]. In the CodeBreaK100 study, the association of STK11 and KEAP1 mutations with the efficacy of sotorasib was evaluated; the lowest response rate was found in tumors with a KEAP1-mutant/STK11-wild-type genotype, while the highest was seen in tumors with an STK11-mutant/KEAP1-wild-type genotype [44]. A recent genomic analysis of a large G12C-mutant lung cancer cohort treated with G12C inhibitors revealed that co-occurring mutations of KEAP1, SMARCA4 and CDKN2A were independent negative predictive factors of inhibitor efficacy, while mutations in the DDR genes were positive predictive factors [45].

Acquired resistance

Acquired resistance to sotorasib treatment in lung cancer patients has various pathomechanisms. In the first place, the disappearance of the G12C mutation from cancer cells or amplification of the wild-type KRAS gene was found. Other KRAS-related genetic alterations were acquired novel mutation types (G13V, G12D, G12V, V8L, V141I) or novel mutations affecting NRAS. Furthermore, mutations of EGFR signaling pathway members such as EGFR or BRAF also occurred [46]. Although not at high frequency, amplifications of MET or HER2 have also been reported [47,48].

Upon adagrasib resistance, histological transformation from adenocarcinoma to squamous carcinoma was described [49], somewhat similar to what was seen with EGFR inhibitor resistance. This can most probably occur in cases where the original tumor is a combined adenosquamous variant, since KRAS mutation is an adenocarcinoma-specific genetic alteration. In acquired resistance to adagrasib, novel KRAS mutations were likewise identified in the first place (G12D/R/W, G13D, Q61H, R68S, H95D/Q/R, Y96C). The resistance mechanism does not involve EGFR signaling but rather RET signaling, with mutations affecting RET, BRAF and MAP2K1. Furthermore, gene amplification here also involved MET, but interestingly there were several gene fusions in the resistant tumors, involving ALK, RET, FGFR3 and BRAF [49].

The resistance mutations of KRAS can be classified into three main categories. Mutations in codon 12 or codon 61 decrease the potential of the KRAS protein to hydrolyze GTP. Mutations at codon 13 increase GDP-GTP exchange, while mutations at R68, H95, Y96 and Q99 decrease the affinity of the inhibitors.
It is interesting that the various mutational profiles of KRAS-mutant lung cancers affect the development of resistance to sotorasib and adagrasib differently [49]. The H95 mutations may confer resistance to adagrasib but do not affect the activity of sotorasib. On the other hand, G13D, R68M and A59S/T mutations confer sotorasib resistance but retain adagrasib sensitivity [48]. Finally, M72 or Q99 mutations cause adagrasib resistance but do not affect sotorasib sensitivity [50]. Based on these data, it can be hypothesized that acquired resistance could be addressed by sequential use of the other G12C inhibitor.

Developing combinational approaches

The observed clinical efficacy and the developing resistance have both stimulated novel clinical approaches to improve the efficacy of the G12C inhibitors sotorasib and adagrasib (Table 3) [51]. Since G12C-mutant lung cancer is an immunologically hot tumor, it was an obvious step to begin combinations with PD1/PDL1 inhibitors: for sotorasib the combination partner is AKG404 (a PD1 inhibitor), and for adagrasib the partner is pembrolizumab (also a PD1 inhibitor). Since one of the resistance mechanisms of G12C inhibitors involves reactivation of the EGFR signaling pathway, sotorasib is now being tested clinically in combination with afatinib (an EGFR tyrosine kinase inhibitor). For both G12C inhibitors, limited efficacy against colorectal cancer is a significant problem; therefore, combination trials using anti-EGFR antibodies have been initiated. Another interesting novel combination involves bevacizumab (anti-VEGF), since this therapy was shown to be inactive in KRAS-mutant lung cancer [15]. Furthermore, combination trials of G12C inhibitors with traditional chemotherapies such as carboplatin/pemetrexed have already been initiated. Since acquired resistance to G12C inhibitors may involve reactivation of alternative signaling pathways such as PIK3CA (sotorasib), combination with an mTOR inhibitor seems to be a rational approach. A completely different approach is to increase the KRAS-inhibitory efficacy of G12C inhibitors with either SOS1 inhibitors (to block GEF-mediated activation) or SHP2 inhibitors (to block reactivation mechanisms) [51]. Since these approaches are pan-RAS targeted, it will be interesting to see what price must be paid in side effects for the increased G12C inhibition.
Conclusion

KRAS-mutant lung adenocarcinoma is the most frequent molecular subtype of lung cancer, but it is still a heterogeneous entity, since the individual allelic variants are biologically distinct. The most frequent allelic variant of KRAS-mutant lung cancer is the smoking-related G12C, which became the focus of the development of mutant-specific irreversible KRAS inhibitors. More importantly, two of the G12C inhibitors, sotorasib and adagrasib, were clinically effective in patients with advanced G12C-mutant lung adenocarcinoma, resulting in conditional approval (linked to annual reporting of the expected clinical efficacy). Meanwhile, similar to other targeted therapies, clinical resistance develops upon administration of G12C inhibitors, driven by various biological processes and predominantly by secondary mutations of the KRAS gene. Since the clinical efficacy of G12C inhibitors is not overwhelming, there is room for improvement, which is the basis for the development of various combination approaches of G12C inhibitors with immunotherapeutic agents, EGFR inhibitors or RAS signaling modulators. Since mutant KRAS was long considered undruggable, the development and clinical success of G12C inhibitors pave the way for the development of non-G12C mutant KRAS inhibitors, opening the door to a new era of targeted therapies aimed at the most frequently mutated human oncogene in various cancers, including lung adenocarcinoma.

Figure 1. Molecular classification of lung adenocarcinoma.
Table 2. Summary of the clinical efficacies of sotorasib and adagrasib.
Interval-valued intuitionistic fuzzy graphs

The definitions of eight different types of interval-valued intuitionistic fuzzy graphs are introduced and their representations by index matrices are discussed. Examples of operations over these graphs are given.

Introduction

The concept of the Intuitionistic Fuzzy Graph (IFG) was introduced by Anthony Shannon and the author in 1994 in [5]. In recent years, the concept has been essentially extended and has found different applications. Initially, following [2], we introduce for the first time definitions of Cartesian products over Interval-Valued Intuitionistic Fuzzy Sets (IVIFSs); in [2], similar definitions are given only for the Intuitionistic Fuzzy Set (IFS) case. Using these definitions, four definitions of Interval-Valued Intuitionistic Fuzzy Graphs (IVIFGs) will be given. After this, four more general definitions of IVIFGs will be introduced, and some conditions for the correctness of these graphs will be discussed. All definitions for IVIFSs are given in [2].

Cartesian products over intuitionistic fuzzy sets and interval-valued intuitionistic fuzzy sets

Let E_1 and E_2 be two universes and let A and B be two IFSs, A over E_1 and B over E_2. Following [2], we define five Cartesian products ×_1, ×_2, ..., ×_5 over them. For the same two universes, let A and B now be two IVIFSs, A over E_1 and B over E_2, with each degree given by an interval contained in [0, 1]. Following the definition of an IVIFS, we suppose that for each x ∈ E_1, y ∈ E_2 the corresponding membership and non-membership intervals satisfy the IVIFS conditions. Now, following [2], we define the analogous five Cartesian products over two IVIFSs and a new (sixth) one.

Theorem. Each of these Cartesian products of two IVIFSs is an IVIFS.

Proof. We will prove that for every four real numbers a, b, c, d, inequality (1) holds. By a direct check we obtain that (1) is true for a = c, b = d, and for a = 0 and b = 0; in all other cases we have equality. From the validity of (1) the assertion follows. On the other hand, for every four real numbers a, b, c, d ∈ [0, 1] such that a + b ≤ 1 and c + d ≤ 1, the resulting degrees again sum to at most 1, and hence A ×_2 B is an IVIFS. For the four other products the checks are analogous. In [2], this theorem had not been formulated and proved.

Let us have a (fixed) set of vertices V. In the first type of graph, the functions μ_V, ν_V : V → [0, 1] define the degree of membership and the degree of non-membership, respectively, of the element v ∈ V to the set V; the functions μ_A, ν_A : E_1 × E_2 → [0, 1] define the degree of membership and the degree of non-membership, respectively, of the element ⟨x, y⟩ ∈ E_1 × E_2 to the set A ⊆ E_1 × E_2; these functions have the forms of the corresponding components of the •-Cartesian product over IFSs, where • ∈ {×_1, ×_2, ..., ×_5} is an operation over IFSs, and the corresponding consistency condition holds for all ⟨x, y⟩ ∈ E_1 × E_2. This definition is old (see, e.g., [2]), while the following three types of IVIFGs are introduced for the first time. Let us call graphs of this first kind (•)-(IFS, IFS)-IFGs.

Now, we introduce the following three new definitions. In the (•)-(IFS, IVIFS)-IFG, the functions μ_V, ν_V : V → [0, 1] again define the degrees of membership and non-membership of v ∈ V to V, while the functions M_A, N_A : E_1 × E_2 → P([0, 1]) define the (interval-valued) degree of membership and the degree of non-membership, respectively, of the element ⟨x, y⟩ ∈ E_1 × E_2 to the set A ⊆ E_1 × E_2, where for each set Z, P(Z) is the set of the subsets of Z; these functions have the forms of the corresponding components of the •-Cartesian product over IVIFSs, where • ∈ {×_1, ×_2, ..., ×_5} is an operation over IVIFSs, and the corresponding consistency condition holds for all ⟨x, y⟩ ∈ E_1 × E_2. Let us have a (fixed) set of vertices V.
An (•)-(IVIFS, IFS)-IFG G (over V) will be the ordered pair in which the functions M_V : V → P([0, 1]) and N_V : V → P([0, 1]) define the (interval-valued) degree of membership and the degree of non-membership, respectively, of the element v ∈ V to the set V; the functions μ_A, ν_A : E_1 × E_2 → [0, 1] define the degree of membership and the degree of non-membership, respectively, of the element ⟨x, y⟩ ∈ E_1 × E_2 to the set A ⊆ E_1 × E_2; these functions have the forms of the corresponding components of the •-Cartesian product over IFSs, where • ∈ {×_1, ×_2, ..., ×_5} is an operation over IFSs, and the corresponding consistency condition holds for all ⟨x, y⟩ ∈ E_1 × E_2.

As in [4], and by analogy with [2], we illustrate the last of the above definitions by an example of a Berge graph (see Fig. 1; the labels of the vertices and arcs show the corresponding degrees). Let the following Index Matrix (IM; see, e.g., [3]), giving the M- and N-values, be defined for its A-values (for example, the data can be obtained as a result of some observations). Having in mind that each real number r can be represented as the interval [r, r], we see that the first three types of graphs are partial cases of the fourth type.

Second four types of interval-valued intuitionistic fuzzy graphs

The new graphs have a form similar to the above, but without the condition on the forms of their elements μ_G and ν_G, or M_G and N_G. So, their definitions are the following. Let us have a (fixed) set of vertices V. An (IFS, IFS)-IFG G (over V) will be the corresponding ordered pair. Let us have a (fixed) set of vertices V. An (IFS, IVIFS)-IFG G (over V) will be the corresponding ordered pair. Let us have a (fixed) set of vertices V. An (IVIFS, IFS)-IFG G (over V) will be the corresponding ordered pair. Let us have a (fixed) set of vertices V. An (IVIFS, IVIFS)-IFG G (over V) will be the ordered pair in which the functions M_V : V → P([0, 1]) and N_V : V → P([0, 1]) define the degree of membership and the degree of non-membership, respectively, of the element v ∈ V to the set V, and the functions M_A : E_1 × E_2 → P([0, 1]) and N_A : E_1 × E_2 → P([0, 1]) define the degree of membership and the degree of non-membership, respectively, of the element ⟨x, y⟩ ∈ E_1 × E_2 to the set A ⊆ E_1 × E_2, for all ⟨x, y⟩ ∈ E_1 × E_2. Obviously, the first four definitions are partial cases of the new four definitions, respectively.

Remarks on the eight types of interval-valued intuitionistic fuzzy graphs

From [3], it is clear that in the general case, if V = {v_1, v_2, ..., v_n}, then the index matrix (IM) of the first, second, fifth and sixth graphs can have the form

        v_1      v_2      ...  v_n
  v_1   a_{1,1}  a_{1,2}  ...  a_{1,n}
  v_2   a_{2,1}  a_{2,2}  ...  a_{2,n}
  ...   ...      ...      ...  ...
  v_n   a_{n,1}  a_{n,2}  ...  a_{n,n}

where n is the cardinality of the set V and a_{i,j} = ⟨μ_{i,j}, ν_{i,j}⟩, while the third, fourth, seventh and eighth graphs can have the form of the same IM, but now with a_{i,j} = ⟨M_{i,j}, N_{i,j}⟩. Now, we can represent each of these types of graphs in IM-form. Following the ideas from [3], it can easily be seen that the above IM can be modified to a form indexed by V*_I, V*_O and V*, which are, respectively, the sets of the input, output and internal vertices of the graph. At least one arc leaves every vertex of the first type, but none enters it; at least one arc enters each vertex of the second type, but none leaves it; every vertex of the third type has at least one arc ending in it and at least one arc starting from it. Obviously, the graph matrix (in the sense of an IM) will now be of smaller dimension than the ordinary graph matrix. Moreover, it can be non-square, unlike ordinary graph matrices.
As in the ordinary case, the vertex v_p ∈ V has a loop if and only if a_{p,p} = ⟨μ_{p,p}, ν_{p,p}⟩ with μ_{p,p} > 0 (hence ν_{p,p} < 1). Let us write below, for brevity, G instead of G* and V instead of V*. Let the graphs G_1 and G_2 be given, where for s = 1, 2, V_s' and V_s'' are the sets of the graph vertices (input and internal, and output and internal, respectively). Then, using the apparatus of the IMs, we construct the graph G_1 ∪ G_2, which is the union of the graphs G_1 and G_2; in its description, a_{i,j} is determined by the respective IM-formulas for the operation ∪ from [3]. Analogously, we can construct a graph which is the intersection G_1 ∩ G_2 of the two given graphs; in it, a_{i,j} is determined by the respective IM-formulas for the operation ∩ from [3].

Level operators over interval-valued intuitionistic fuzzy sets and interval-valued intuitionistic fuzzy graphs

These operators were introduced over IFSs in the first paper in this area, [1], and a part of them over IVIFSs in [2]. First, we give the original definitions. Let α, β ∈ [0, 1] be fixed numbers for which α + β ≤ 1. Now, we introduce four new operators, and we will call the resulting sets sets of (α, β)-level generated by A. Each of these operators can be applied over each of the eight types of interval-valued intuitionistic fuzzy graphs, but for this aim we must modify them, because on the one hand they must be applied over the V-elements, and on the other hand over the A-elements.

Conclusion

The introduced types of graphs will be the object of further research. In the near future, the author plans to study the possibility of applying different interval-valued intuitionistic fuzzy operators over these graphs. Also, different other types of graphs will be discussed (multigraphs, trees and others), and their applications in different areas will be sought, especially in artificial intelligence, data mining and big data.
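To make the index-matrix union described above concrete, here is a minimal sketch for two (IVIFS, IVIFS)-type graphs over a common vertex set. It assumes the standard Atanassov-style convention for the union of interval-valued intuitionistic fuzzy degrees (component-wise max on membership intervals, min on non-membership intervals); the actual IM-formulas of [3] also handle differing vertex sets and non-square matrices, which this sketch omits.

```python
# Minimal sketch: union of two interval-valued intuitionistic fuzzy graphs
# given as index matrices over the SAME vertex list. Each entry a[i][j] is
# ((M_lo, M_hi), (N_lo, N_hi)): a membership and a non-membership interval.
# Assumed convention (Atanassov-style union): max on membership, min on
# non-membership, applied component-wise to the interval endpoints.

Interval = tuple[float, float]
Entry = tuple[Interval, Interval]

def union_entry(a: Entry, b: Entry) -> Entry:
    (am, an), (bm, bn) = a, b
    m = (max(am[0], bm[0]), max(am[1], bm[1]))  # membership interval
    n = (min(an[0], bn[0]), min(an[1], bn[1]))  # non-membership interval
    return (m, n)

def im_union(A: list[list[Entry]], B: list[list[Entry]]) -> list[list[Entry]]:
    """Element-wise union of two n x n index matrices."""
    n = len(A)
    return [[union_entry(A[i][j], B[i][j]) for j in range(n)] for i in range(n)]

# Two toy 2x2 index matrices over vertices {v1, v2}.
A = [[((0.2, 0.3), (0.5, 0.6)), ((0.1, 0.2), (0.6, 0.7))],
     [((0.0, 0.0), (1.0, 1.0)), ((0.4, 0.5), (0.3, 0.4))]]
B = [[((0.3, 0.4), (0.4, 0.5)), ((0.0, 0.1), (0.8, 0.9))],
     [((0.2, 0.3), (0.5, 0.6)), ((0.1, 0.2), (0.7, 0.8))]]

print(im_union(A, B)[0][0])  # ((0.3, 0.4), (0.4, 0.5))
```

Note that this convention preserves correctness of the entries: if both inputs satisfy M_hi + N_hi <= 1, so does the union, since the entry achieving the max membership is paired with a non-membership at least as large as the min.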
Changes in IL-16 Expression in the Ovary during Aging and Its Potential Consequences to Ovarian Pathology

Aging in females is not only associated with changes in hormonal status but is also responsible for dysregulation of immune functions in various organs, including the ovaries. The goals of this study were to determine whether the expression of interleukin 16 (IL-16), a proinflammatory and chemoattractant cytokine, changes during ovarian aging; to determine the factors involved in such changes in IL-16 expression; and to examine whether changes in IL-16 expression during aging predispose the ovary to pathologies. Ovarian tissues from premenopausal women (30-50 years old), women at early menopause (55-59 years old), and women at late menopause (60-85 years old) were used. In addition, tumor tissues from patients with early-stage ovarian high-grade serous carcinoma (n = 5) were used as reference tissue for comparing the expression of several selected markers in aging ovaries. The expression of IL-16, the frequency of macrophages (a source of IL-16), and the expression of microRNA (miR) 125a-5p (a regulator of the IL-16 gene) were examined by immunohistochemistry, immunoblotting, and gene expression assays. In addition, we examined changes in nuclear IL-16 expression in response to follicle-stimulating hormone (FSH) in in vitro cell culture assays with human ovarian cancer cells. The frequencies of IL-16-expressing cells were significantly higher in the ovarian stroma of women at early and late menopause than in premenopausal women (P < 0.0001). Similar patterns were also observed for macrophages. Expression of miR-125a-5p decreased significantly (P < 0.001) with the increase in IL-16 expression during aging. Furthermore, expression of nuclear IL-16 increased remarkably upon exposure to FSH. Consequently, ovarian aging is associated with increased expression of IL-16, including its nuclear fraction. Therefore, persistently high levels of FSH in postmenopausal women may be a factor in the enhanced expression of IL-16. The effects of an increased nuclear fraction of IL-16 need to be examined.

Introduction

Aging in females is associated with a decrease in ovarian function, including folliculogenesis and gonadal steroid production [1]. As the ovary ages, depletion of follicular recruitment and growth leads to the gradual withdrawal of ovarian steroid-induced negative feedback on the pituitary, resulting in sustained high levels of circulating gonadotrophins, including follicle-stimulating hormone (FSH) [2,3]. As estrogen is involved in various physiological processes, the decrease in its level during aging affects the growth and maintenance of its target organs [4-6]. On the other hand, sustained high levels of circulating FSH during aging may perturb ovarian homeostatic balance, facilitating the development of abnormal conditions including chronic inflammation [7], and may be a risk factor associated with ovarian cancer (OVCA). Moreover, estrogen is known to be associated with the enhancement and/or maintenance of immunity [8-11]. Thus, ovarian aging not only affects reproductive functions but may also increase the susceptibility of ovarian tissues to chronic conditions. However, the effects of aging and its mechanism(s) in the ovary, including chronic conditions, are not well understood. This information is critically important for the prevention of various chronic abnormalities in the ovary, especially in menopausal women.
Chronic inflammation and oxidative stress, as they occur in the ovary, have been proposed as hallmarks of various pathological conditions, including malignancy [12-14]. Ovarian tissues are exposed to various inflammatory factors as part of physiological processes such as ovulation or infection by pathogens. Ovulation has been suggested to be an inflammatory event [15]. With the release of the egg during ovulation, the ovarian surface epithelial cells at the site of ovulatory rupture and the fimbrial surface of the fallopian tube at the site of receiving the ovulated egg are exposed to various toxic metabolites produced by the egg's metabolic processes [16]. Moreover, ovulatory injury leads to an influx of immune cells at the site of rupture on the ovarian surface and at the site of egg reception in the fimbria. This results in localized inflammation and an increased demand for oxygen by the accumulated immune cells, facilitating the development of hypoxia and oxidative stress [17]. Thus, inflammation and oxidative stress are prevalent in the ovary and the fimbria of the fallopian tube. Furthermore, the oviduct is open to both the external and internal environments, through the vagina and the fimbria, respectively, predisposing the reproductive tissues to external pathogens and internal toxic byproducts of various physiological processes. The close proximity of ovarian tissues to the fimbria increases the chance of pathogens gaining entry to the ovaries. In addition, food-borne pathogens from a perforated gastrointestinal tract may gain entry to the ovary via the systemic circulation. Thus, the ovarian and fimbrial tissues are exposed to various inflammatory conditions due to frequent ovulation, external pathogens, and internal toxic byproducts, as well as persistently high levels of circulating FSH. Therefore, information on the effects of exposure to these agents will help prevent ovarian abnormalities during aging.

Ovarian tissues express many cytokines [18,19]. Cytokines are proteins secreted by many cell types, including immune cells, epithelial cells, fibroblasts, and stromal cells. They are involved in the regulation of cellular growth and differentiation, homeostasis, and immune functions in normal tissues as well as in pathological conditions, including tumors [18]. In normal ovaries, interleukin (IL)-6 and TGF-β have been suggested to be involved in follicular development by preventing follicular atresia [20,21]. IL-1 and TNF-α have been shown to be associated with inhibition of progesterone secretion and regression of the corpus luteum [22,23]. Ovarian follicles have been reported to produce IL-8, while IL-11 was found in the follicular fluid [24,25]. Unfortunately, most studies of cytokines in ovarian function are limited to premenopausal ovaries, and studies of ovarian cytokines during aging, including ovaries at the late menopausal stage, are very scanty. Recent studies have shown increased expression of IL-16 in ovarian tumors [26,27]. However, no information is available on whether persistently high levels of IL-16 expression are a risk factor for developing OVCA. IL-16 is a proinflammatory cytokine and a chemotactic factor that attracts other immune cells to the site of inflammation [28]. Frequent exposure of ovarian and fimbrial tissues to ovulatory insults, external and internal agents (pathogenic/metabolic), and increased levels of FSH may lead to chronic inflammation in these tissues and may induce increased expression of IL-16.
Chronic inflammation is a hallmark of cancer development. However, it is unknown whether the expression of IL-16 increases during aging in the ovaries and fimbriae, and whether persistently high levels of IL-16 are associated with OVCA development. The goal of this study was to examine whether IL-16 expression increases in ovarian and fimbrial tissues during aging and whether such an increase in IL-16 expression is associated with an increased risk of OVCA in postmenopausal women.

Clinical Specimens. Archived premenopausal and postmenopausal ovarian tissues from healthy/normal subjects, together with their blood samples, were collected from the Department of Pathology, Rush University Medical Center, Chicago, IL. All specimens were collected under a protocol approved by the Institutional Review Board (IRB) of Rush University Medical Center. These subjects underwent surgery for non-ovarian causes. Ovarian tumor tissues (n = 3, ovarian high-grade serous carcinoma (HGSC), used as positive controls) were collected from patients who underwent surgery following the diagnosis of a suspected ovarian mass.

2.2. Processing of Tissue Specimens. Ovarian tissues were processed for paraffin and/or frozen embedding, protein extraction, and gene expression studies. Briefly, upon receipt, tissues were washed with phosphate-buffered solution (PBS) and divided into four pieces: for paraffin sections, frozen sections, protein extraction, and total RNA extraction. For paraffin embedding, tissue specimens were treated with neutral buffered formalin for 72 hours, washed with running water overnight, cut into blocks of the desired sizes, dehydrated with an ascending series of ethanol and xylene, and embedded in paraffin. The portion of fresh tissue for RNA extraction was treated with RNAlater™ Stabilization Solution (Thermo Fisher, Waltham, MA) and stored at -80°C for later use. For frozen sections, portions of fresh tissue were embedded in OCT compound (Miles Inc., Elkhart, IN), snap-frozen in a mixture of methanol and solid carbon dioxide, and stored at -80°C for later use. The portion of fresh tissue for protein extraction was stored at -80°C for later use. Serum samples were separated from blood and stored at -80°C for further use.

Immunohistochemistry (IHC). IHC was carried out as reported earlier [26] using primary antibodies and other reagents according to the manufacturers' instructions. Briefly, sections were deparaffinized with xylene and a graded series of alcohols, followed by brief washing in deionized (DI) water. Antigens on the sections were unmasked by heating the sections in sodium citrate-containing antigen retrieval solution. Sections were then cooled to room temperature in phosphate-buffered solution (PBS), followed by blocking of endogenous peroxidase using 0.03% H2O2-containing methanol under ice-cold conditions. Sections were then rinsed in PBS and treated with normal horse serum for 15 min to block endogenous nonspecific binding, using the VECTASTAIN ABC Kit (Vector Laboratories, Inc., Burlingame, CA). Sections were then incubated overnight at 4°C with the primary antibodies mentioned above. Following three 5-min washes with PBS, sections were incubated at room temperature with secondary antibodies and with the avidin/biotinylated enzyme complex (VECTASTAIN ABC Kit, Vector Laboratories, Inc., Burlingame, CA) for one hour each, with three 5-min PBS washes in between.
After washing with PBS, immunoreactions on sections were visualized by incubating them with a DAB chromogen solution (3,3′-diaminobenzidine; DAB Peroxidase Substrate Kit, Vector Laboratories, Inc., Burlingame, CA) while examining under a light microscope. Sections were then washed in running water, counterstained with hematoxylin, rinsed in water, dehydrated with a graded series of alcohol and xylene, mounted with mounting media, coverslipped, and oven dried. Negative control staining was carried out by omitting primary antibodies, and no staining was observed in these sections (Supplementary Figure 1). Counting of Immunopositive Cells or Intensity of Immunostaining. The frequency of immunopositive cells was counted, or the intensities of immunostaining were determined, using a light microscope attached to computer-assisted digital imaging software (MicroSuite, version 5; Olympus Corporation, Tokyo, Japan). Counting was performed by two individuals blinded to the age and pathology of the subjects/patients. For IL-16 or macrophages, immunopositive cells in the 3-5 areas of a section with the highest population were counted at 40X magnification, and their average was expressed as the frequency of IL-16-expressing cells or macrophages per 20,000 μm² of tissue, as reported earlier [26]. Similarly, the intensities of FSHR expression were expressed per 20,000 μm² as arbitrary values, as reported earlier with slight modification [29]. 2.4.5. Western Blotting. Proteins were extracted from tissue samples as reported earlier [26]. Briefly, tissue specimens were homogenized using a Polytron homogenizer (Brinkman Instruments, Westbury, NY). Homogenized samples were then centrifuged, supernatants were collected, and protein concentrations of the supernatants were measured using the Bradford Bio-Rad Protein Assay (Bio-Rad, Hercules, CA) method, as reported earlier [26]. Protease inhibitor was added, and samples were stored at -80°C for later use. Three samples from each age group (comprising 30-50-, 55-59-, 60-69-, and 70-85-year-old subjects) as well as OVCA patients were selected randomly for immunoblotting, and each sample was examined twice (2X). Briefly, a panel of 4 samples (tissue extracts), one from each age group, and a positive control (indicated below) was examined in each immunoblot. In gel electrophoresis, the same amount of protein (50 μg) was loaded for each sample. Proteins were separated by one-dimensional electrophoresis as reported earlier [26], and the separated proteins were transferred to a membrane. Immunoblotting of membranes was performed using the anti-IL-16 antibody mentioned above as the primary antibody (at 1 : 1000 dilution) and anti-rabbit HRP as the secondary antibody. Immunoreactions on the membrane were visualized as chemiluminescence products (Super Dura West substrate; Pierce/Thermo Fisher, Rockford, IL). Images were captured with Quantity One software using the ChemiDoc XRS system (Bio-Rad, Hercules, CA) according to the manufacturer's recommendation, as reported previously [26]. Images of 3 immunoblots were selected randomly for analysis. The intensity of IL-16 protein signals in immunoblotting was determined from the images using the analySIS getIT! software (Olympus Soft Imaging Solutions Corporation, Lakewood, CO). Signal intensities were quantified as arbitrary values and reported as mean ± SEM per 20,000 μm² area.
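As a rough illustration of the quantification described above, the sketch below converts per-field counts into a density per 20,000 μm² and reports the mean ± SEM. The field counts and the field area are hypothetical values chosen for illustration, not data from the study.

```python
import statistics

# Hypothetical counts of IL-16-positive cells in 3-5 fields at 40X magnification.
# FIELD_AREA_UM2 is an assumed field area, not a value given in the paper.
counts = [6, 8, 7, 9]          # cells per field (illustrative)
FIELD_AREA_UM2 = 40_000.0      # assumed area of one 40X field, in um^2
REF_AREA_UM2 = 20_000.0        # reporting area used in the paper

# Convert each field count to cells per 20,000 um^2, then average.
densities = [c * REF_AREA_UM2 / FIELD_AREA_UM2 for c in counts]
mean = statistics.mean(densities)
sem = statistics.stdev(densities) / len(densities) ** 0.5

print(f"IL-16+ cells per 20,000 um^2: {mean:.1f} +/- {sem:.1f} (mean +/- SEM)")
```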
An ovarian HGSC specimen was used as a positive control for IL-16 protein expression, while β-actin was used as the housekeeping protein. Gene Expression Studies (Quantitative Real-Time Polymerase Chain Reaction). Expression of the IL-16 gene and its regulatory microRNA miR-125a-5p was examined in representative specimens (8 premenopausal ovaries, 6 early menopausal ovaries, and 4 late menopausal ovaries) by quantitative real-time polymerase chain reaction (qRT-PCR). Total RNA was extracted from all specimens using TRIzol reagent (Invitrogen, Carlsbad, CA) according to the manufacturer's recommendation. RNA concentration was measured at an optical density (OD) of 260 nm, and purity was assessed from the OD 260/280 ratio, as previously reported [30]. The expression of IL-16 messenger RNA (mRNA) and miR-125a-5p in normal ovaries and fimbriae was measured by qRT-PCR. A human-specific IL-16 primer (QT00075138, QuantiTect design) and a miR-125a-5p assay designed by Applied Biosystems (Foster City, CA) were used for the qRT-PCR analyses. β-Actin was used as the housekeeping gene in qRT-PCR experiments. Gene expression was quantified using the cycle threshold difference (ΔCt) method for IL-16 mRNA and miR-125a-5p according to the manufacturer's recommendation. The ΔΔCt was determined by subtracting the ΔCt of each group from the average ΔCt, and 2^(−ΔΔCt) was used to calculate the fold change in IL-16 mRNA and miR-125a-5p expression levels. Treatment of Cells with FSH and Cell Fractionation. Cells were treated with or without (control) FSH for 24 hours. After incubation, media was collected and saved. Cell fractionation was performed using the Cell Fractionation Kit (Abcam, Cambridge, UK) according to the manufacturer's recommendation. Cells were rinsed with PBS, trypsinized, and harvested (pellet). Pellets containing FSH-treated or untreated (control) cells were then resuspended in 1X Buffer A at 6.6 × 10^6 cells/mL and diluted 1,000-fold using Buffer B. Cells were incubated at room temperature for 7 minutes with constant mixing, followed by centrifugation at 5,000 × g for 1 minute at 4°C. The pellet was removed and saved, while the supernatant was removed and centrifuged at 10,000 × g for 1 minute at 4°C. The final supernatant contains fraction C (cytosol). The saved pellet was then resuspended in Buffer A, diluted in Buffer C, and incubated at room temperature for 10 minutes with constant mixing. The suspension was centrifuged at 5,000 × g for 1 minute at 4°C. The pellet was removed and saved, while the supernatant was removed and centrifuged at 10,000 × g for 1 minute at 4°C. The final supernatant contains fraction M (mitochondrial). The saved pellet was resuspended in Buffer A, constituting fraction N (nuclear). 2.5. Statistical Analysis. Differences in the frequency of IL-16-expressing cells or macrophages, or in the intensities of FSHR expression, during aging were assessed by ANOVA and unpaired t-tests. Differences in the signal intensity of IL-16 protein expression in immunoblotting among different age groups were also determined by ANOVA and unpaired t-tests. All reported P values were two-sided, with P < 0.05 considered significant. Statistical analysis was performed using GraphPad Prism software. Results. Ovarian H&E sections from premenopausal subjects showed preantral follicles embedded in the stroma, while no follicles were observed in sections from early and late menopausal women (Figure 1).
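As an aside, the 2^(−ΔΔCt) fold-change computation described in the gene-expression section above can be sketched as follows. The Ct values are hypothetical, and the sketch uses the common reference-group (Livak) variant of the method, which differs slightly from the average-ΔCt subtraction described above.

```python
# Minimal 2^(-ddCt) fold-change sketch with hypothetical Ct values.
# dCt = Ct(target) - Ct(housekeeping); ddCt = dCt(group) - dCt(reference group).
ct = {
    "premenopausal":   {"IL16": 28.0, "ACTB": 18.0},   # reference group
    "late_menopausal": {"IL16": 26.5, "ACTB": 18.1},
}

def delta_ct(group: str) -> float:
    # Normalize the target gene to the beta-actin housekeeping gene.
    return ct[group]["IL16"] - ct[group]["ACTB"]

ddct = delta_ct("late_menopausal") - delta_ct("premenopausal")
fold_change = 2 ** (-ddct)
print(f"ddCt = {ddct:.2f}, fold change = {fold_change:.2f}")
# A fold change > 1 would indicate higher IL-16 expression in the late
# menopausal group relative to the premenopausal reference group.
```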
Cortical inclusion cysts (CICs) (Figure 2) and ovarian surface invaginations (INVs) (Figure 3) were observed in ovarian sections from pre- and postmenopausal women. Differences in the histomorphology of these features between premenopausal women and postmenopausal women at early and late stages were not observed (Figures 2 and 3). However, compared with premenopausal ovaries, CICs and INVs were more frequent in late menopausal ovaries (Figures 2 and 3). The frequency of IL-16-expressing cells in the ovarian stroma of premenopausal subjects was 4.0 ± 0.2 cells per 20,000 μm² of tissue. However, it increased significantly (P < 0.0001) to 5.9 ± 0.2 cells per 20,000 μm² in women at an early menopausal stage (Figure 5(a)) and increased even further (7.0 ± 0.3 per 20,000 μm²) with aging in subjects at a late menopausal stage (P < 0.0001) (Figure 5(a)). IL-16 Expression Observed by Immunoblotting. Immunoblotting studies showed a band of approximately 60 kDa for IL-16, with differing intensities, in all specimens examined (Figure 6(a)). A weak or faint band for IL-16 protein was detected in specimens from premenopausal subjects (Figure 6(a)). In contrast, subjects in the early menopausal group showed strong expression of IL-16 in immunoblotting, which was stronger still in subjects at the late menopausal stage (Figure 6(a)). Compared to untreated counterparts, OSE cells treated with FSH for 24 hours showed stronger expression of IL-16 (Figure 6(b)). Similar patterns of expression were also observed in ovarian HGSC cells (OVCAR3). Expression of the IL-16 Gene and Its Regulatory MicroRNA. Expression of the IL-16 gene was detected by qRT-PCR in all specimens examined (Figure 6(d)). Compared to subjects at the premenopausal stage, expression of the IL-16 gene increased significantly in subjects at the early menopausal stage (P < 0.001) and even further in subjects at the late menopausal stage (Figure 6(d)). Gene expression studies showed that the increase in IL-16 gene expression during aging was inversely associated with expression of its regulatory microRNA, miR-125a-5p (Figure 6(e)). Compared with premenopausal subjects, expression of miR-125a-5p was significantly lower in women at an early menopausal stage and decreased further in subjects at the late menopausal stage (Figure 6(e)). Overall, gene expression studies supported an inverse relation between the expression of the IL-16 gene and its regulator miR-125a-5p during ovarian aging. Discussion. This is the first study reporting an increase in the frequency of IL-16-expressing cells in ovaries during aging in women at the postmenopausal stage. This study also showed a significant increase in the frequency of IL-16-expressing cells in ovarian stromal invaginations (INVs) but not in cortical inclusion cysts (CICs), structures formed following ovulation in the ovaries. Furthermore, this study showed that the increase in IL-16 gene expression was associated with a decrease in its regulatory microRNA miR-125a-5p during aging. The increase in the frequency of IL-16-expressing cells during aging was associated with an increase in the frequency of macrophages and persistently high levels of FSH in postmenopausal women. In addition, FSH treatment of normal ovarian cells resulted in increased expression of nuclear IL-16.
Overall, the results of this study suggest that ovarian aging is associated with the prevalence of chronic stress and inflammation, two risk factors reported to be associated with ovarian pathologies including malignancy. This study showed that expression of IL-16, including the frequency of IL-16-expressing cells in the ovary, increased significantly during aging, suggesting the prevalence of chronic inflammation in the ovaries of late menopausal stage subjects. Classical inflammation requires coordination among different cell types and their secretions that mediate responses against deleterious stimuli [31]. However, inflammation in ovarian tissues during aging does not present the features of classical inflammation, as it is not associated with infection, widespread tissue injury, or autoimmune conditions. In contrast, age-associated inflammation in ovaries is local and may be due to metabolic imbalances [13,32] caused by agents including hormones during aging. Aging in females is associated with the decrease in ovarian functions and cessation of synthesis of ovarian steroids including estrogen [3]. It is possible that the decrease in estrogen may be involved in the development of chronic inflammation in menopausal ovaries during aging. Estrogen has been implicated as an anti-inflammatory agent, as it has been shown to suppress the secretion of inflammatory cytokines including IL-6 and TNF-α by macrophages and dendritic cells [33]. Furthermore, circulatory levels of TNF-α, IL-1, and IL-6 have been reported to be increased in women at the late menopausal stage, and their levels decreased significantly in response to hormone replacement therapy (HRT) [34]. Thus, it is possible that the lack of estrogen in the ovaries of women at the late menopausal stage may be a reason for the increased levels of IL-16 expression in aging ovaries. Alternatively, it is possible that persistently high levels of FSH in late menopausal stage women may be a factor in the high levels of IL-16 expression. Absence of negative feedback (due to the lack of estrogen) leads to persistently high levels of FSH in women at the late menopausal stage. Macrophages have been suggested to be a source of IL-16 synthesis [35]. Macrophage stimulating factor (MCSF) is an important cytokine involved in the regulation of proliferation [36], differentiation, and migration of tissue macrophages, and it is also important for the maintenance of ovarian function [37]. Increasing concentrations of FSH have been shown to stimulate the expression of MCSF receptor mRNA, suggesting enhancement of MCSFR expression by FSH [38]. This action of FSH has been reported to be inhibited by estrogen treatment. Thus, it is possible that persistently high levels of FSH in postmenopausal women might be a reason for increased IL-16 production by macrophages in aging ovaries. This assumption is further supported by the finding of this study that the frequencies of macrophages were higher in women at the late menopausal stage than in premenopausal women. (Figure caption: compared with premenopausal subjects, the intensities of FSHR expression were significantly higher (P < 0.04) in early menopausal women and increased further in late-stage menopausal women (P < 0.0001); compared with CICs, the intensity of FSHR expression was significantly higher in INVs (P < 0.004); y-axis shows mean ± SEM (n = 5 per group) per 20,000 μm² of tissue; bars with different letters are significantly different; details of the statistical analysis are given in the Materials and Methods section.)
However, the specific targets of FSH in aging ovaries in the context of IL-16 secretion are not fully understood. This study showed that, in addition to OSE cells, epithelial cells in CICs and INVs were positive for FSHR expression, suggesting that INVs and CICs are also targets of FSH. This study further showed that, compared with OSE and CICs, INVs showed stronger expression of FSHR. Thus, INVs might be a predilection site for chronic inflammation due to persistent exposure to high FSH. INVs and CICs are features formed by the ovarian surface epithelial layer following ovulation. This study also showed that, compared to OSE cells, the frequency of IL-16-expressing cells was higher in CICs and highest in INVs. Thus, it is possible that CICs and INVs might be invaded by immune cells including macrophages (in response to ovulatory insults), which may be a source of the increased expression of IL-16 in these tissues. Alternatively, chronic inflammation may be prevalent in CICs and INVs due to their persistent exposure to FSH, as indicated by the increased expression of FSHR in these tissues. This assumption is supported by one of the observations of this study, that treatment of normal OSE cells with human recombinant FSH for 24 hours resulted in a remarkable increase in nuclear expression of IL-16, with patterns of expression similar to those of the OVCAR3 cancer cell line. However, how the expression of IL-16 increases at the molecular level is not known. MicroRNAs are endogenously synthesized short noncoding RNA molecules [39] which bind to the 3′-untranslated region (UTR) of target genes and play important roles in gene regulation at the posttranscriptional level, thereby inhibiting or reducing the translation of their respective target genes [40]. In this study, the levels of IL-16 expression increased during aging, while the expression of its regulatory miR-125a-5p decreased significantly in women at the late menopausal stage. Although the specific reason(s) for the decrease in miR-125a-5p leading to the increase in IL-16 gene expression are not known, it is possible that changes in the hormonal milieu during aging in postmenopausal women may be a factor. Because menopause is associated with the cessation of estrogen production by the ovary [41] and persistently high levels of circulating FSH, it is possible that either the lack of estrogen or the high levels of FSH might have a role in the suppression of the IL-16-regulating microRNA miR-125a-5p. Estradiol treatment has been reported to enhance expression of miR-125b [42]. Thus, it is possible that the increased expression of the IL-16 gene in aging ovaries might be due to the decrease in expression of its regulatory microRNA miR-125a-5p following the cessation of estrogen synthesis in postmenopausal women. Overall, IL-16, a proinflammatory and chemotactic cytokine, is produced by a variety of cells including immune cells and epithelial cells of different organs [43][44][45]. IL-16 has been implicated in several cancers including OVCA [26,46]. OVCA is in most cases a malignancy of postmenopausal women, and the median age of OVCA incidence is 63 years. Longstanding unresolved oxidative stress and chronic inflammation have been suggested as predisposing factors for malignancy including OVCA [13,14].
Stromal INVs and CICs are structures formed by the ovarian surface layer following ovulation and have been shown to be predilection sites for malignant transformation [47]. The increased expression of IL-16 in these structures, as observed in this study, suggests the prevalence of chronic inflammation within them. Moreover, deletions in chromosome 19, with approximately 60% loss of heterozygosity, have been reported to be associated with OVCA [48,49], and interestingly, miR-125a-5p is localized at the 19q13.41 locus of this chromosome. The consequence of the increase in nuclear expression of IL-16 due to FSH exposure is not known. It is possible that, following enhancement of its expression as a result of chronic oxidative stress due to persistent exposure to FSH, IL-16 may translocate to the nucleus and lead to the formation of mutagenic DNA adducts. Previous reports suggest the formation of mutagenic DNA adducts due to oxidation of DNA bases, which may lead to malignancy [50]. Thus, dysregulation of miR-125a-5p during aging may be a reason for the increased expression of IL-16 in postmenopausal women and may also increase the risk of developing OVCA, as it is a disease of postmenopausal women. In conclusion, the results of this study suggest that expression of IL-16, a proinflammatory and chemotactic cytokine, increases during aging in the ovaries of postmenopausal women. This increase in IL-16 expression was localized in CICs and INVs, two structures in the ovary that form following ovulation and are sites at risk of malignant transformation. Moreover, the increase in IL-16 expression was associated with decreased expression of its regulator microRNA miR-125a-5p, which is also a tumor suppressor microRNA. Thus, chronic inflammation in the ovaries of postmenopausal women may predispose them to ovarian pathology, including malignant transformation. Data Availability. All data supporting the conclusions of this article are included in the article. Conflicts of Interest. The authors have no conflicts to declare.
6,211.8
2022-04-26T00:00:00.000
[ "Medicine", "Biology" ]
Tuning Electronic Properties of the SiC-GeC Bilayer by External Electric Field: A First-Principles Study. First-principles calculations were used to investigate the electronic properties of the SiC/GeC nanosheet (thickness about 8 Å). With no electric field (E-field), the SiC/GeC nanosheet was shown to have a direct bandgap of 1.90 eV. In the band structure, the valence band of the SiC/GeC nanosheet was mainly made up of C-p states, while the conduction band was mainly made up of C-p, Si-p, and Ge-p states. Application of an E-field to the SiC/GeC nanosheet was found to modulate the bandgap, monotonically reducing it to zero depending on the direction and strength of the E-field. The bandgap modulation was attributed mainly to the migration of C-p, Si-p, and Ge-p orbitals around the Fermi level. Our conclusions might give some theoretical guidance for the development and application of the SiC/GeC nanosheet. Introduction. Research on two-dimensional (2D) materials such as graphene has attracted considerable attention [1][2][3] and has greatly influenced next-generation electronic and photonic applications, due to their rich physical properties and outstanding electronic properties [4,5]. However, graphene is a gapless semiconductor, which causes problems for applications in graphene-based electronic devices. Therefore, many studies have focused on searching for other 2D materials [6][7][8][9]. In recent years, beyond graphene, research interest has extended to other similar materials, and a large number of new 2D materials have been found, such as single-layer MoS2 [10,11] and h-BN [12], which possess wide bandgaps and have attracted much attention. Considering that controllable bandgap engineering of semiconductors is an essential part of nanoelectronics and optoelectronics, a comprehensive investigation of how to modulate the electronic properties of 2D materials is of great interest and is critical to widening the range of their applications. New 2D heterostructures combining similar atomic structures, such as G/BN [13,14], G/MoS2 [15], and G/SiC [16], have attracted intensive study. If we could control the bandgap of 2D semiconductor materials effectively, new electronic and optical properties would emerge, and the application of these 2D materials in nanoelectronic devices could be realized. Very recently, modulation of the bandgap with the help of geometrical strain or an external electric field has made 2D monolayer sheets particularly interesting materials for device applications at the nanoscale [17][18][19]. Silicon carbide (SiC) and germanium carbide (GeC) are promising two-dimensional materials whose nanostructures have attracted a great deal of attention due to their large bandgaps of 2.6 and 2.1 eV, respectively. This has been verified by many theoretical studies based on DFT calculations [20][21][22][23]. With the development of semiconductor processing, large SiC nanocomposites have been obtained. The energy gap of SiC can vary within wide spectral ranges. Kityk et al. studied the band structures of large SiC nanocomposites both experimentally and theoretically [24,25]; the calculated data agreed with the experimental results. Recently, many studies have been conducted to optimize the electrical properties of such materials. Shi et al. studied the electronic properties of the GeC/WS2 heterostructure under an electric field (E-field) and modulated its bandgap [26]. Rao et al.
studied SiC(GeC)/MoS2 heterostructures and found enhanced optical absorption [27]; their results are promising for applications in field-effect transistors. To the best of our knowledge, studies on the tunable electronic properties of heterostructures containing SiC and GeC are still lacking. In this paper, we investigate the electronic properties of a SiC/GeC bilayer by using first-principles calculations with van der Waals (vdW) correction. We found that the SiC/GeC bilayer exhibits a direct bandgap in the equilibrium state. Application of an external electric field was found to modulate the bandgap of the heterogeneous bilayer. Under the impact of an E-field, the bandgap, changing from 1.90 to 0 eV, showed a tunable tendency related to the direction and strength of the E-field. Our results may prove useful for applications in vdW-based field-effect transistors. Materials and Methods. Electronic structure calculations were performed using the plane-wave-based pseudopotential approach in the framework of density functional theory as implemented in the Vienna Ab initio Simulation Package (VASP) [28]. The electron-ion interaction was described by the projector augmented wave (PAW) method [29], and the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation [30] was used. The van der Waals corrections (DFT-D2) within the PBE functional proposed by Grimme were also employed [31]. The cut-off energy for the plane-wave basis set was set at 450 eV. The Monkhorst-Pack scheme was used to sample the Brillouin zone with a (5 × 5 × 1) k-mesh. The optimized lattice parameters of the SiC and GeC monolayers are 3.09 and 3.23 Å, and the bond lengths of Si-C and Ge-C are 1.79 and 1.86 Å, respectively, which are consistent with other theoretical and experimental results (3.09 and 3.23 Å) [32,33]. The lattice mismatch between SiC and GeC is about 4.3%, which has little effect on the electronic properties of the SiC/GeC heterostructure [34,35]. Thus, we considered a coperiodic lattice consisting of a (4 × 4 × 1) GeC monolayer (16 Ge atoms and 16 C atoms) with (4 × 4 × 1) SiC (16 Si atoms and 16 C atoms), as shown in Figure 1. A vacuum distance of 20 Å was used to reduce the interactions between periodic images in the supercell model. The atomic positions and the supercell size were fully relaxed; the energy convergence criterion was 10^−6 eV/atom, and residual forces on all relaxed atoms were below 0.01 eV/Å. All the structures were relaxed using the PBE functional. When calculating the binding energy, density of states, and bandgaps, we used the DFT-D2 method to describe the van der Waals interaction. Results. Firstly, we calculated the bandgaps of the pristine SiC and GeC monolayers. Figure 2a,b shows that the energy gaps of SiC and GeC were 2.49 and 2.08 eV, respectively, which agrees with previous studies [23,36]. As the current bilayer is composed of two different monolayers, the stability of the bilayer is a crucial question. To address it, we calculated the binding energy of the SiC/GeC bilayer, defined as E_b = E_T − E_SiC − E_GeC, where E_T is the total energy of the bilayer, and E_SiC and E_GeC are the total energies of the SiC and GeC monolayers. From Figure 3 it can be seen that the calculated binding energies changed with the interlayer distance (d). According to the calculated binding energies, the SiC/GeC bilayer had its lowest value of −0.048 eV at d = 5.10 Å, indicating that it reached its equilibrium state there. To further explore the effects of the interlayer distance on the SiC/GeC bilayer, the plane-averaged charge densities and electrostatic potentials were calculated. From Figure 4a, the potential of the SiC monolayer was much deeper than that of GeC, which would induce a charge shift from the GeC to the SiC monolayer. From Figure 4b, it is clear that electrons indeed moved from the GeC to the SiC monolayer. These results are in agreement with each other.
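As an illustration of the binding-energy definition above, the sketch below scans a set of interlayer distances for the most stable configuration. The monolayer and bilayer total energies are hypothetical placeholders; only the quoted minimum of −0.048 eV at d = 5.10 Å is taken from the text.

```python
# Sketch: locate the equilibrium interlayer distance from binding energies,
# E_b(d) = E_T(d) - E_SiC - E_GeC. All energies (eV) are hypothetical, except
# that the entry at d = 5.10 A is chosen to reproduce the quoted -0.048 eV.
e_sic, e_gec = -147.20, -139.60          # assumed monolayer total energies
e_total = {                               # assumed bilayer totals E_T(d)
    4.50: -286.810, 4.80: -286.830,
    5.10: -286.848,                       # gives E_b = -0.048 eV
    5.40: -286.840, 5.70: -286.830,
}

binding = {d: et - e_sic - e_gec for d, et in e_total.items()}
d_eq = min(binding, key=binding.get)      # distance with the lowest E_b
print(f"equilibrium d = {d_eq:.2f} A, E_b = {binding[d_eq]:+.3f} eV")
```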
We now investigate the electronic properties of the bilayer under an E-field. In previous studies, Ni et al. studied the influence of an electric field (from +1.0 to −1.0 V/Å) on the structure of silicene and germanene [37]. Drummond et al. also studied the influence of an electric field (ranging from 0.0 to +0.5 V/Å) on the structure of silicene [38]. According to their results, a vertical electric field (within limits) applied to a system does not significantly influence its structure, but merely causes a redistribution of charges. In this paper, we follow their approach and discuss bandgap engineering of the SiC/GeC heterostructure via an external electric field ranging from +0.5 to −0.5 V/Å. As shown in Figure 5, two directions were explored. The negative direction of the E-field (−ε) points from the GeC to the SiC monolayer; the opposite direction represents the positive E-field (+ε). Variations of the bandgap with the electric field are presented in Figure 5. For the SiC/GeC bilayer, the bandgap decreased as the strength of the E-field increased in either direction, and an approximately symmetric dependence of the bandgap on ε appeared in our calculations. Under −ε, the bandgap reduced monotonically from 1.90 to 0 eV and vanished at −0.40 V/Å. When +ε was applied, similar behavior was observed: the bandgap reduced sharply to zero as +ε increased from 0 to +0.40 V/Å. Our results reveal that the E-field can regulate the bandgap of the system effectively. Notably, both directions of the E-field had a similar impact on the bandgap of the system, a result slightly different from our previous studies [39]. Figure 6 shows the band structures near the Fermi energy (E_F) of the SiC/GeC bilayer under different E-fields. Under the E-field, the behaviors of the conduction band (CB) and the valence band (VB) were slightly different. From Figure 6a-d, applying a negative E-field (−ε) resulted in the whole CB moving closer to E_F. Interestingly, no change was found at the top of the VB. As the strength of the E-field increased, the whole VB ultimately crossed E_F, which reduced the bandgap until it disappeared. By contrast, under a positive E-field (+ε) very significant differences were seen. As can be seen in Figure 6e-i, the whole VB showed almost no change as the E-field was enhanced. However, the situation was completely reversed for the CB: part of the CB moved closer to E_F, which led to a decrease of the bandgap. To understand what happens in the band structure, we plotted the partial densities of states (PDOSs), as shown in Figure 7. From Figure 7, we can see that the modulations of the bandgap are due to the different atomic orbitals of C, Si, and Ge. From Figure 7a,b, under the negative E-field (−ε), the states at the bottom of the CB were mainly dominated by C p-orbitals, Ge p-orbitals, and, partly, Ge d-orbitals. States at the top of the VB were mainly dominated by C p-orbitals.
Under the positive E-field (+ε), as shown in Figure 7d,e, on the one hand the p-orbitals of Si and C contributed to the modulations at the bottom of the CB, driving the variations of the bandgap; on the other hand, the states at the top of the VB were dominated by C p-orbitals. Finally, we would like to point out that there are several possible stacking orders of SiC/GeC. In order to gain a comprehensive result that approaches the real situation, all the major stacking geometries should be considered to discover the most stable matching model. In this paper, we considered only one of these, and more work needs to be done in the future. Conclusions. In summary, electronic structure calculations were performed to investigate the stability and electronic properties of the SiC/GeC bilayer. We investigated the possible modulation of the bandgap of the SiC/GeC bilayer under the application of an external electric field. The calculation results showed that the SiC/GeC bilayer, in its natural state, had a direct bandgap of about 1.90 eV, and electrons at the SiC-GeC interface preferred to shift from GeC to SiC. Most importantly, the electronic structure could be effectively modulated by the E-field. It was found that the bandgap of the SiC/GeC bilayer became tunable, switching from 1.90 eV to 0 eV as the E-field changed. On the basis of the PDOS, these alterations of the bandgap were due to the migration of the different atomic orbitals of C, Si, and Ge. Our results indicate that the SiC/GeC bilayer might be a promising candidate for future spintronic device applications.
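As a crude quantitative summary of the field dependence reported above, the bandgap can be sketched by interpolating between the two quoted endpoints (1.90 eV at zero field, closing at ±0.40 V/Å). The linear form of the interpolation is an assumption made purely for illustration; the computed curve in Figure 5 need not be linear.

```python
def bandgap_ev(field_v_per_angstrom: float) -> float:
    """Crude linear model of the SiC/GeC bilayer gap versus vertical E-field.

    The endpoints (1.90 eV at zero field, 0 eV at +/-0.40 V/A) are taken from
    the text; the linear interpolation between them is an assumption.
    """
    gap0, e_close = 1.90, 0.40
    return max(0.0, gap0 * (1.0 - abs(field_v_per_angstrom) / e_close))

if __name__ == "__main__":
    for f in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
        print(f"E = {f:+.2f} V/A  ->  gap ~ {bandgap_ev(f):.2f} eV")
```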
3,612
2019-05-01T00:00:00.000
[ "Materials Science" ]
Performance Improvement of Working Electrode Using Grafted Polymer Modified with SiO2 Nanoparticles. A new modified glassy carbon electrode (GCE) with grafted polymer (GP)/SiO2 nanoparticles (SiO2 NPs) was prepared using the mechanical attachment method to produce a new sensor for cyclic voltammetry. The new working electrode, GP/SiO2 NPs/GCE, was characterized with a standard solution of 1 mM K4[Fe(CN)6] with 1 M K2HPO4 as the electrolyte, to study the redox current peaks of FeII/FeIII ions under different conditions (concentration, scan rate, pH), together with determination of the diffusion coefficient (Df) and of the reliability and stability of the modified GCE. It was found that the new modified electrode enhanced the redox current peaks of FeII/FeIII from 12 μA to 20 μA (oxidation) and from -5 μA to -15 μA (reduction) relative to the GCE. The current ratio (Ipa/Ipc) for the new modified electrode was 1, and the potential peak separation (ΔEpa-c) was 100 mV, indicating good electrochemical behavior of the electrode reaction. Good reliability and stability of the modified GCE were observed, with a low detection limit. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) analysis of the nano-deposit was also carried out. Introduction. The study of nanoparticles with different types of polymers is an important subject in the field of electrochemistry, especially for the conversion of insulators into conductive or semiconductive materials [1][2][3][4][5]. Cyclic voltammetry, chronoamperometry, electrochemical impedance spectroscopy, and differential pulse voltammetry have been used to identify the electrochemical behavior of mitoxantrone at sulfonic acid-functionalized SiO2 nanoparticles (SiO2 NPs) on a glassy carbon electrode (GCE). The determination of mitoxantrone was optimized via the oxidation current peak, which was proportional to the mitoxantrone concentration in the range of 0.5-173 μM, while the detection limit was 36.8 μM (S/N = 3) [6]. A modification of grafted polymer (GP) with carbon nanotubes has been fabricated as a new working electrode; that electrode was characterized with K3[Fe(CN)6] solution in KCl as supporting electrolyte at different concentrations, scan rates, and temperatures using cyclic voltammetry [7]. Electrochemical sensors have been used for the detection of heavy metals such as lead, cadmium, mercury, and arsenic. Stripping voltammetry techniques have been applied to electrodes (mercury, bismuth) or to electrodes modified at their surface with nanoparticles or nanostructures, carbon nanotubes (CNTs), or graphene; special attention has been paid to strategies using biomolecules (DNA, peptides, or proteins), enzymes, or whole cells [8]. A new grafting technique for functionalizing silica particles with anionically synthesized polymers has been reported. First, the silica nanoparticles were modified with multifunctional chlorosilanes, whereby the original Si−OH surface groups were replaced by Si−Cl groups. Then, the anionically synthesized polymers were linked to the Si−Cl functionalized nanoparticle surface. The polymer linking event was accompanied by termination reactions, most likely due to residual Si−OH groups [9]. Cyclic voltammetry, differential pulse voltammetry, and linear sweep voltammetry have been used to evaluate some electrochemical aspects of nanohybrid materials of poly-proline-amino functionalized magnetic mesoporous silica-beta cyclodextrin on GCE [10].
A new nanocomposite based on O-aminophenol (OAP) was prepared by the electropolymerization of OAP at the surface of a GCE in the presence of SiO2 nanoparticles. Cyclic voltammetry and electrochemical impedance spectroscopy (EIS) studies confirmed that the poly-O-aminophenol (POAP) nanocomposite films had a higher capacitance than pure POAP films; the presence of SiO2 led to an obvious improvement in the overall electrochemical performance of the GCE surface covered by POAP films [11]. CNTs have excellent properties, such as small size with large surface area, high electrical and thermal conductivity, high chemical stability, high mechanical strength, and high specific surface area. They are now used in the fabrication of nanostructured electrochemical sensors, immunosensors, and DNA biosensors [12]. In this work, GP was modified with SiO2 NPs, and the GCE was doped with this polymer for study by cyclic voltammetry. Experimental. Equipment and electroanalytical methods. The potentiostat used in this work was an EZstat series instrument (Potentiostat/Galvanostat, NuVant Systems Inc., USA). The cyclic voltammetry experiments were performed by connecting the potentiostat to the electrochemical cell and to a computer with dedicated software. The reference electrode was silver/silver chloride (Ag/AgCl in 3 M NaCl), with a 1 mm diameter platinum wire as the counter electrode. Two modified GCEs were prepared and used as working electrodes. Before using any solution in the cyclic voltammetric cell, the electrolyte solution was purged with nitrogen gas for 10-15 min to remove oxygen. The cyclic voltammetry cell experimental setup is shown in Fig. 1. In this work, the surface morphologies and the diameter of the sample nanoparticles were investigated by scanning electron microscopy (SEM; JEOL, operated at 20-30 kV) and atomic force microscopy (AFM), respectively. Procedure. The cyclic voltammetric cell was set up by adding 10 mL of electrolyte to the quartz cell and immersing three electrodes in the electrolyte medium: GP/SiO2 NPs/GCE as the working electrode, Ag/AgCl as the reference electrode, and a platinum wire as the counter electrode. These three electrodes were then connected to the potentiostat to record cyclic voltammograms using a personal computer. Reagents. Silicon oxide nanoparticles (20-30 nm) were purchased from Hongwu International Group Ltd, China. Potassium ferrocyanide, dipotassium phosphate, and potassium chloride were from Sinopharm Chemical Reagent Co., Ltd. (SCRC), China; potassium perchlorate and potassium nitrate were from British Drug Houses (BDH), England. Deionized water was used to dilute all solutions. All materials had purities of 98-99.9% and were thus used without any further purification. Synthesis of grafted polymer (GP). Using a gamma-irradiation technique, polystyrene was grafted with acrylonitrile, with chloroform as the solvent and ferrous ammonium sulphate as the catalyst. Different percentages of the GP were collected and studied [13]. Preparation of GP/SiO2 NPs. The GP was first dissolved in chloroform and mixed with nano-SiO2 powder (20-30 nm) at a weight ratio of 1000:1 (GP : nano-SiO2) for 72 h with constant stirring at 50 °C. After evaporation of the chloroform, the precipitate was ground with a mortar and pestle into fine particles to yield GP/nanosilica.
Preparation of modified GCE (GP/SiO2 NPs/GCE). The GCE was polished with alumina slurry (0.5 μm) and then ultrasonically cleaned for 10 min, followed by rinsing with distilled water and drying at room temperature (air blower). Modification of the cleaned GCE with silica nanoparticles was done by the mechanical attachment method [14]. The GCE surface was tapped (doped) about thirty times onto GP/SiO2 NPs powder placed on a filter paper, as shown in Fig. 2. In this work, the GCE modified with silica nanoparticles was used as the working electrode, termed GP/SiO2 NPs/GCE. Characterization of different modified electrodes. Fig. 3 illustrates the cyclic voltammograms of the different working electrodes (GCE and GP/SiO2 NPs/GCE), recorded to characterize the potential window of each electrode in 1 M K2HPO4 solution as supporting electrolyte. A wide potential window of the new modified electrode (GP/SiO2 NPs/GCE), from -2 to +2 V without any current peaks, was found, while the GCE had a potential range of -1.5 to +1.8 V with a current peak at 1-1.8 V. Thus, the modified electrode showed good electrochemical properties for voltammetric analysis [15]. As is usual in voltammetric analysis, the K4[Fe(CN)6] compound was chosen for standardization and calibration [16] of the new modified electrode GP/SiO2 NPs/GCE and for comparing it with SiO2 NPs/GCE, as shown in Fig. 4. It was found from the oxidation-reduction current peaks of FeII/FeIII that the GCE modified with SiO2 NPs alone gave currents of 12 µA and -5 µA, respectively, while the currents of the GCE modified with GP/SiO2 NPs were enhanced to 20 µA and -15 µA, respectively. This means that the GP polymer with nanoparticles acted as an electro-catalyst and increased the conductivity of the modified working GCE. Also, the ratio of the oxidation-reduction current peaks of FeII/FeIII was Ipa/Ipc ≈ 1. This demonstrates that the new modified electrode GP/SiO2 NPs/GCE functioned as a reversible electrode [17], and the potential peak separation of ΔEpa-c ≈ 100 mV suggests that the reaction at the modified electrode was a homogeneous process [18]. Effect of different electrolytes. The effect of different supporting electrolytes on the oxidation-reduction current peaks of FeII/FeIII ions was studied to determine the current enhancement at the modified electrode. It was found that the enhancement factors of the oxidation and reduction currents in K2HPO4 electrolyte were 1.313 and 1.22, respectively, as shown in Table 1. The modified working electrode (GP/SiO2 NPs/GCE) was most sensitive in the K2HPO4 electrolyte in electro-analysis by cyclic voltammetry. In general, the degrees of oxidation and reduction current enhancement varied with the electrolyte (Table 1); since K2HPO4 produced the highest current output, it was used in the following studies. Effect of different concentrations. The new modified electrode (GP/SiO2 NPs/GCE) can be used to detect low concentrations of ions in aqueous solutions; this electrode is a good sensor [8][9][10][11][12]. Thus, the modified working electrode acted as an electro-catalyst in acidic medium; moreover, the modified electrode could be used in both acidic and basic media [19]. Reliability and stability of the electrode. The stability of the modified materials on the GCE was studied over ten repeated measurements of the oxidation-reduction current peaks of K4[Fe(CN)6] in K2HPO4 solution, as shown in Fig.
9. The relative standard deviations (RSD) of the anodic and cathodic peaks were ± 0.4349 and ± 0.1836, respectively, indicating good stability (Table 2). Effect of different scan rates. Different scan rates, from 0.01 to 0.1 V/sec, were studied for the oxidation-reduction current peaks of FeII/FeIII in 1 M K2HPO4 solution as electrolyte on the modified electrode (GP/SiO2 NPs/GCE), as shown in Fig. 10. It was found that the redox peaks of FeII/FeIII were enhanced with increasing scan rate. Hence, the new modified electrode acted as an electro-catalyst through the presence of silica nanoparticles in the structure of the GP. The diffusion coefficient was determined from the Randles-Sevcik equation, which describes reversible redox couple peaks [20,21]: Ip = 2.69 × 10^5 n^(3/2) A C Df^(1/2) V^(1/2), where Ip is the current peak (μA), n is the number of moles of electrons transferred in the reaction, A is the area of the electrode (cm²), C is the bulk concentration of the electroactive species (mol/cm³), Df is the diffusion coefficient (cm²/sec), and V is the scan rate of the applied potential (V/sec). The anodic and cathodic diffusion coefficients of the FeII/FeIII couple in K4[Fe(CN)6] solution on GP/SiO2 NPs/GCE were determined as Dfa = 6.03 × 10^−6 cm²/sec and Dfc = 3.122 × 10^−6 cm²/sec, respectively. Atomic force microscopy (AFM) study. The AFM image of the GP material modified with silica nanoparticles (GP/SiO2 NPs) is shown in Fig. 12. The average SiO2 NP diameter was 50 nm, as shown in Fig. 13. Conclusions. In this study, high sensitivity with a low detection limit for concentrations in aqueous solution, good reliability and stability of the electrode, and tolerance of different pH values were obtained using the new working electrode GP/SiO2 NPs/GCE. The diffusion coefficients of the FeII/FeIII redox current peaks at different scan rates were determined using the Randles-Sevcik equation. SEM and AFM images confirmed the morphology and size of the nano-deposit. (Table 2: RSD of the stability of the modified working electrode (GP/SiO2 NPs/GCE) for the oxidation-reduction current peaks of K4[Fe(CN)6] in 1 M K2HPO4.)
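As a supplementary sketch of how Df can be extracted from the Randles-Sevcik relation used above, the snippet below fits peak current against the square root of scan rate and solves the slope for Df. Only the form of the equation is taken from the text; the electrode area, concentration, and peak-current values are illustrative assumptions.

```python
import numpy as np

# Randles-Sevcik for a reversible couple:
#   Ip = 2.69e5 * n**1.5 * A * C * sqrt(Df) * sqrt(v)
# Fit Ip (in A) versus sqrt(v) and solve the slope for Df.
n = 1                      # electrons transferred (Fe(II)/Fe(III))
A = 0.071                  # electrode area in cm^2 (assumed 3 mm dia GCE)
C = 1.0e-6                 # bulk concentration in mol/cm^3 (1 mM)

v = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10])        # scan rates, V/s
ip = np.array([4.1, 5.9, 8.3, 10.1, 11.7, 13.1]) * 1e-6   # peak currents, A (illustrative)

slope = np.polyfit(np.sqrt(v), ip, 1)[0]                   # A per (V/s)^0.5
df = (slope / (2.69e5 * n**1.5 * A * C)) ** 2              # cm^2/s
print(f"Df ~ {df:.2e} cm^2/s")   # same order of magnitude as the reported values
```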
2,655.8
2018-05-28T00:00:00.000
[ "Materials Science" ]
Preparing local strain patterns in graphene by atomic force microscope based indentation. Patterning graphene into various mesoscopic devices such as nanoribbons, quantum dots, etc., by lithographic techniques has enabled the guiding and manipulation of graphene's Dirac-type charge carriers. Graphene with well-defined strain patterns holds promise of similarly rich physics while avoiding the problems created by the hard-to-control edge configuration of lithographically prepared devices. To engineer the properties of graphene via mechanical deformation, versatile new techniques are needed to pattern strain profiles in a controlled manner. Here we present a process by which strain can be created in substrate-supported graphene layers. Our atomic force microscope-based technique opens up new possibilities in tailoring the properties of graphene using mechanical strain. Graphene does not get damaged in the initial phase of the indentation, while the SiO2 substrate can undergo plastic deformation. Stopping the indentation before rupture of the graphene occurs can leave the graphene membrane pinned to the deformed substrate. During indentation, the tip is lowered towards the sample surface until a pre-set cantilever deflection is reached (see Methods). In the next step, the tip is either retracted or moved along a line on the sample surface (Fig. 1a). The procedure can be repeated with changing tip locations, resulting in an indentation dot or line pattern (Fig. 2a,b). No significant damage to the graphene has been observed, either by AFM or by Raman measurements, up to a final indentation depth of 1.5 nm. With deeper indentation, rupture of the graphene layer becomes increasingly likely (see Supplementary Figure S1). Imaging of the resulting patterns is done using the same tip in tapping mode, unless otherwise noted. Importantly, the crystallographic directions of the graphene can be revealed before patterning by imaging the surface using a softer cantilever (typically 0.1 N/m force constant) in contact mode; in this case the frictional forces experienced by the tip are modulated by the atomic lattice (see inset in Fig. 1b). To determine the magnitude of the strain, we have measured Raman spectroscopy maps of the indentation patterns with the help of a confocal Raman microscope, using a 532 nm or 633 nm excitation laser. If graphene is subjected to tensile strain, both the G and 2D peak positions shift down in wave number, by a factor determined by the respective Grüneisen parameter 33,34. These parameters set the rate of peak shift per unit of uniaxially applied strain, ∂ω/∂ε, as measured by Mohiuddin et al. 34. The G peak has two Grüneisen parameters, because if the strain has a uniaxial character the peak splits into two subpeaks, called G+ and G−. From these parameters it is clear that the 2D peak shifts much more as a function of strain than the G peak, making possible the detection of strains in the range of 0.01% 34. Because of this property we chose to plot the 2D peak wave number in our Raman maps, to make the strain variations induced by the AFM tip clearer. In Fig. 1c a plot of the 2D peak position can be seen across a sample area containing 2 μm × 2.5 μm arrays of line patterns similar to the one in Fig. 1b, each array being composed of 50 indent lines of 2 μm length (AFM images of the patterns: Supplementary Figure S4). As the indentation depth is varied from array to array, from 0.15 nm to 0.5 nm, the downshift in G and 2D peak position becomes stronger, meaning increased strain (see Fig.
1f,g). It has to be noted, of course, that the strain distribution within the indentation lines will be far from constant 15, and Raman spectroscopy only probes the average of the strain in the graphene inside the laser spot of roughly 500 nm diameter. Within these limitations we quantify the average strain in these structures. In Fig. 1d we show a correlation plot of the G and 2D peak positions, measured with 532 nm excitation. (Figure 1 caption fragments: colors of the data points correspond to the colors in (c); the blue slope corresponds to the ratio of the Grüneisen parameters for the 2D and G peaks 33, while the red slope is the shift due to p doping; the maximum average strain relative to the pristine graphene is 0.1%. (e) Raw Raman spectra at a single point measured on the various line patterns, offset in intensity for clarity. (f,g) Plots of the G and 2D peaks. Data in this figure were measured using 532 nm excitation.) If the G and 2D peak shifts are due to strain effects, their shift is determined only by their respective Grüneisen parameters 34. Thus, the measurement points in the correlation plot will lie along a line, the slope of which is determined by the ratio Δω2D/ΔωG (blue line). This ratio lies within a range of 2.2 to 2.8, depending on the anisotropy of the strain distribution and on the crystallographic direction of the strain in the pure uniaxial case 33. In addition to strain, a change in the graphene chemical potential can also shift the peak positions. If doping effects are significant, the data points will deviate from the blue slope. If doping alone is the source of the peak shifts, the G peak is more strongly affected than the 2D peak, and the slope of the corresponding line is 0.75, as shown by the red line in Fig. 1d. Following the evolution of the Raman peak positions with increasing indentation depth, the data points move along the blue line. The largest 2D peak shift of 8 cm−1, with respect to the unperturbed graphene, is observed for the 0.5 nm deep indentation marks, corresponding to an average strain of 0.1%, using the Grüneisen parameter mentioned above. Although splitting of the G peak could be expected, we do not observe it, due to the small overall Raman shift. Examples of the raw Raman spectra used to create the map and correlation plot in Fig. 1c,d can be seen in Fig. 1e-g. Notice the absence of any disorder-induced peak around 1350 cm−1, indicating that the number of lattice defects introduced during indentation is negligible. (Figure 2 caption fragment: (e,f) 2D peak shifts of the strain pattern (red) and unperturbed graphene (blue); gray lines and dots at the center of the plot show the orientation of the patterns with respect to θ; spectra were measured using 633 nm, linearly polarized excitation, with the polarization angle rotated in 12° increments.) Raman spectroscopy also gives us the means to demonstrate that we can not only tune the magnitude of the strain in the patterns, but also influence the direction and symmetry of the strain. In graphene, the crystal momentum of the scattered electron is selected by the polarization of the excitation laser 35. This means that by changing the polarization of the laser we can probe the strain in the graphene in various directions.
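To connect the numbers above, the following sketch estimates the average uniaxial strain from a measured 2D peak downshift via the Grüneisen relation. The Grüneisen parameter, Poisson ratio, and unstrained peak position below are typical literature values used here as assumptions, not values given in the text.

```python
# Estimate uniaxial strain from a 2D-peak downshift using the Gruneisen relation
#   d(omega) / omega0 = -gamma_2D * (1 - nu) * strain      (uniaxial case)
# The parameter values are typical literature numbers (assumptions).
GAMMA_2D = 3.55      # Gruneisen parameter of the 2D mode (assumed)
NU = 0.16            # effective Poisson ratio (assumed)
OMEGA0_2D = 2680.0   # unstrained 2D position in 1/cm (assumed, 532 nm excitation)

def strain_from_shift(delta_omega_cm1: float) -> float:
    """Return average uniaxial strain (as a fraction) for a downshift in 1/cm."""
    return delta_omega_cm1 / (GAMMA_2D * (1.0 - NU) * OMEGA0_2D)

# The 8 1/cm downshift reported for the deepest line pattern:
print(f"strain ~ {100 * strain_from_shift(8.0):.2f} %")   # ~0.1 %, as in the text
```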
Keeping the laser spot at the same position on a line pattern (40 nm line spacing) and on a hexagonal dot pattern (40 nm nearest-neighbor dot distance), we have measured the dependence of the 2D peak shift on rotating the polarizer of the incident laser beam. In Fig. 2e,f we show a polar plot of the resulting peak shift (red dots), compared with the same measurement performed on the unperturbed graphene next to the patterns (blue dots). The data points on the unperturbed graphene form a circle, meaning the average strain distribution within the laser spot is isotropic, as would be expected for graphene on SiO2. On the other hand, the measurement on the line pattern shows a 2D peak position that is up to 1 cm−1 smaller if the polarization vector of the laser is perpendicular to the indent lines (at θ = 15°). Thus, the strain has a uniaxial character, being larger in the direction perpendicular to the indent lines 36. In the case of the dot patterns, the 2D peak shift has a slight hexagonal character, which is aligned with the dot pattern (inset in Fig. 2d). In this case the peak is shifted to higher values by up to 2 cm−1 if the polarization is perpendicular to the close-packed direction of the indentation dots. Therefore, by selecting the crystallographic orientation of the pattern, the direction of the strain with respect to the graphene lattice can be set. The remarkable observation that graphene stays in the strained configuration after the AFM tip is retracted leads us to explore the energetics of adhesion. The pinning of graphene onto a corrugated substrate can be achieved if the adhesion energy due to van der Waals forces (E_vdW) is larger than the elastic energy (E_el) induced in the graphene. To be able to compare the two quantities in the present experiment, it is necessary to know the exact geometry of the graphene in the pinned configuration. AFM probes with a nominal tip radius of curvature of 2 nm have been used to image the indentation patterns (see Fig. 3a). Gaussians of the form h(r) = −h0 exp(−r²/2σ²), with a variance σ in the 7 nm range and depths (h0) from 0.7 to 1 nm, fit the AFM height data very well (Fig. 3c). In estimating E_el for the present graphene geometry, the bending energy can be safely disregarded, so that the elastic energy is assumed to be dominated by the in-plane stretching of the graphene membrane. In this regime we can apply the calculations of Kusminskiy et al. 37 for graphene adhered to a Gaussian depression, where the ratio of the Gaussian depth to the variance determines the onset of depinning from the substrate. For a conservative assumption of graphene-SiO2 adhesion energy 38,39 of 2 meV/Å², a ratio h0/σ < 0.28 is needed for stable pinning of the graphene to the substrate. In the case of the dot patterns prepared here, this ratio is at most 0.14. From a mechanical stability point of view, this means that the graphene in the dot patterns is well within the pinned configuration. Estimating the strain from the geometry, one obtains 0.15% for this dot pattern. As the strain in the deformation also scales with h0/σ, an increase in the achievable strain by a factor of 2 could be reached if AFM tips with smaller tip radius are used for patterning. The above calculation assumes that the graphene is adhered by van der Waals forces to the whole surface of the Gaussian-shaped hole 37. This is a reasonable assumption, since the graphene is pushed into close contact with the support during indentation.
Therefore, it is expected that the adhesion is improved with respect to exfoliated graphene on SiO2, where the graphene layer is partially suspended 40,41. The effect of strain on the orbital motion of electrons in graphene can be described using a vector potential, corresponding to a time-reversal-symmetric pseudo-magnetic field 2. This vector potential is of the form A = (ħβ/2ae)(uxx − uyy, −2uxy), where β ≈ 2, a is the lattice constant, e is the elementary charge and uij is the strain tensor 2,27. The resulting pseudo-magnetic field is given by Bps = (∇ × A)z; its effect on graphene's electronic states has been measured previously by scanning tunneling microscopy 22,41,42. In order to calculate the pseudo-magnetic field induced by the indentation, we need to quantify uij. Since displacements in the z direction (perpendicular to the graphene plane) are much larger than in-plane displacements, we can safely neglect the in-plane component 41,43, resulting in a strain tensor uij = (1/2)(∂h/∂xi)(∂h/∂xj), where h is the out-of-plane displacement of the graphene layer. We can measure h by AFM topography maps, as long as the AFM tips used for imaging the indentation patterns are much sharper than the ones used to prepare the patterns (see Methods). As an example, the AFM topography of an indentation hole pattern (see Fig. 3a) has been used to calculate the strain tensor and the resulting pseudo-magnetic field (Fig. 3b) by numerical differentiation of h. The resulting Bps is largest around the indentation marks (see dashed circle in Fig. 3b) and forms a petal-like structure with alternating positive and negative values of Bps ≈ 4 T. This flower-like Bps pattern is characteristic of circularly symmetric deformations 15,41,44,45, and we can compare it to the Bps of an ideal Gaussian, because the indentation dots are well fitted by Gaussians (Fig. 3c). Figure 3d shows the calculated Bps pattern of a Gaussian having a depth of 1 nm and a variance of 7 nm. The maximum pseudo-magnetic field in this pattern is 5 T, in good agreement with the Bps map calculated from the AFM topography data. To put the ~4 T pseudo-magnetic field induced by the indentation into perspective, it is instructive to compare it to the pseudo-magnetic field fluctuations resulting from the substrate-induced rippling of graphene on SiO2. From magneto-transport measurements, such fluctuations were estimated to be in the 1 T range 46. Therefore, AFM indentation can be used to perturb the electronic properties of graphene significantly and in a tunable fashion. In summary, scanning probe based techniques have demonstrated remarkable versatility in lithographically cutting nanostructures into graphene 4,47. Here we have shown that, in an analogous fashion, strain can be induced in graphene.
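Since Bps follows from the height map alone, the calculation behind Fig. 3d is straightforward to reproduce numerically. A minimal sketch for the ideal Gaussian case (depth 1 nm, σ = 7 nm), assuming β = 2 and taking a as the carbon-carbon distance of 0.142 nm; prefactor conventions vary in the literature, so the printed value should be read as a consistency check on the few-tesla scale quoted above, not as a precise prediction.

```python
import numpy as np

HBAR, E = 1.055e-34, 1.602e-19
BETA, A_CC = 2.0, 1.42e-10       # assumed: beta ~ 2, a taken as the C-C distance
H0, SIGMA = 1.0e-9, 7.0e-9       # Gaussian depth and width from the AFM fits

# Height map of the Gaussian depression on a 60 x 60 nm grid.
x = np.linspace(-30e-9, 30e-9, 401)
X, Y = np.meshgrid(x, x, indexing="ij")
h = -H0 * np.exp(-(X**2 + Y**2) / (2 * SIGMA**2))

# Strain tensor with in-plane displacements neglected: u_ij = (dh/di)(dh/dj)/2.
hx = np.gradient(h, x, axis=0)
hy = np.gradient(h, x, axis=1)
uxx, uyy, uxy = 0.5 * hx**2, 0.5 * hy**2, 0.5 * hx * hy

# Pseudo-vector potential A = (hbar*beta / 2ae) * (uxx - uyy, -2*uxy).
pref = HBAR * BETA / (2 * A_CC * E)
Ax = pref * (uxx - uyy)
Ay = pref * (-2 * uxy)

# Pseudo-magnetic field B_ps = dAy/dx - dAx/dy (z component of curl A).
Bps = np.gradient(Ay, x, axis=0) - np.gradient(Ax, x, axis=1)
print(f"max |B_ps| ~ {np.abs(Bps).max():.1f} T")  # few-tesla scale, cf. Fig. 3d
```

The field map produced this way has the sin(3θ) "flower" symmetry described in the text, with alternating positive and negative lobes around the indentation mark, and the maximum comes out at roughly 5 T for these parameters.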
Methods
AFM indentation. For the indentation experiments the NanoMan lithography software of Bruker has been used. Between indentation steps the tip was moved in tapping mode. At the beginning of indentation the tip was lowered towards the sample surface with a z velocity of 400 nm/s, until deflection of the cantilever took place. In the case of the dot patterns the tip was retracted with the same z velocity and moved to a new position for the next indentation step. In the case of the line patterns, after moving towards the sample, the tip was dragged across the surface in contact mode, without feedback, at a velocity of 200 nm/s. Finally the tip was retracted and moved in tapping mode to the new line location. The z displacement of the tip was controlled either by setting a cantilever deflection threshold or by moving the tip towards the sample by 40-100 nm. The final indent depth was used as the control parameter during indentation experiments because of the variability in cantilever spring constant and tip sharpness. The typical cantilever spring constant was 40 N/m, with a tip radius of ~15 nm (Tap300DLC, Budget Sensors). However, due to the large variability in these parameters, the z movement was adjusted incrementally. An indentation experiment was always followed by imaging the patterned location to check for the onset of plastic deformation of the SiO2.
Raman measurements. Raman measurements were carried out using a Witec 300rsa+ confocal Raman spectrometer with a 532 nm or 633 nm excitation laser.
Autism and Moral Responsibility: Executive Function, Reasons Responsiveness, and Reasons Blockage
As a neurodevelopmental condition that affects cognitive functioning, autism has been used as a test case for theories of moral responsibility. Most of the relevant literature focuses on autism's impact on theory of mind and empathy. Here I examine aspects of autism related to executive function. I apply an account of how we might fail to be reasons responsive to argue that autism can increase the frequency of excuses for transgressive behavior, but will rarely make anyone completely exempt from moral responsibility in general. On this account, although excuses may apply more often to autists than to others, the excuses that apply to autists are just the same excuses that can apply to anyone.
… concern … [because] their ends may be less about doing the right thing or taking others' interests as reason-giving and more about 'their need to abide by whatever rules they have been taught…' [5] Shoemaker ties this in with the claim that holding someone accountable involves demanding 'acknowledgment and a certain emotional experience and transformation' that, on his description, is not possible for autists. [5] The result, on Shoemaker's view, is that autists should not be held morally responsible for their actions. Shoemaker's argument indicates a global exemption from responsibility, not just mitigated responsibility or excuses from responsibility for particular actions. 2 Shoemaker's account seems to depend on a specific and limited account of autism as ToM impairment. Here, I look at autism understood as an executive function disorder. To apply a reasons responsiveness theory of responsibility to the case of autism understood as an executive function disorder, I describe several ways in which someone could fail to be reasons responsive. I argue that autism can increase the frequency of excuses for transgressive behavior, but will rarely make anyone completely exempt from moral responsibility in general. Furthermore, although excuses may apply more often to autists than to others, the excuses that apply to autists are just the same excuses that can apply to anyone. In developing this position, I attempt to consider autism apart from other features or conditions that often accompany autism. We know that some autists are unable to engage socially or use language. General intellectual disability (low IQ) [9] and specific cognitive challenges such as working memory [10] can both contribute to this. When people are profoundly affected by these challenges, responsibility is not in question. There will, of course, be liminal cases, but the discussion that follows will apply to those who are able to engage socially, who appear generally to be among those it seems natural to blame or praise. These include, for example, the many people who come to organizations such as the National Autistic Society [11] in the UK and the Asperger/Autism Network (AANE) [12] in New England for social opportunities, support, and political action.
Autism and Executive Function
Autism involves several areas of neuropsychological function, and researchers have variously emphasized one or another of these as the key driver of the differences seen in autists: theory of mind [13], central coherence [14], executive function [15], empathy disorder [16,17], the extreme male brain [18], and other factors [19,20]. Here we will focus on executive function. 3 Executive function (EF) processes are 'top-down' control mechanisms.
They are '…mental processes needed when you have to concentrate and pay attention, when going on automatic or relying on instinct or intuition would be ill-advised, insufficient, or impossible…' [22] The core domains of EF are understood to be inhibitory control, working memory, and cognitive flexibility. [22] Inhibitory control involves behavioral inhibition, selective attention, and cognitive inhibition. Working memory includes 'keeping active incoming information for further processing… the number of elements that can be held online simultaneously… [and] updating the content of working memory…' [23] Cognitive flexibility involves the ability to shift attention, and to adapt to changing environments or rules. [22] Higher order EFs such as 'reasoning, problem solving, and planning' are 'built' from the core domains. [22] According to Robinson, et al., the theory that autism is, at its core, an EF disorder can account for the following typically atypical autistic tendencies: 'a need for sameness, a strong liking for repetitive behaviours, lack of impulse control, difficulty initiating new non-routine actions and difficulty switching between tasks.' [15] Robinson, et al. claim that these challenges 'are not successfully accounted for by the theory of mind deficit hypothesis … weak central coherence accounts … or the extreme male brain theory.' [15] Other groups have also put EF at the center of their explanations of challenges faced by autists. Ibrahim, et al. write that 'Cognitive flexibility, planning, organization, and set shifting may represent vital processes that are necessary in expanding the repertoire of restricted, repetitive behaviours and meaningful social interactions.' [24] Similarly, from Leung, et al.: 'Deficits in inhibition, information recall, flexibility, and the ability to monitor, update, and select socially appropriate responses - all aspects of executive functioning - may contribute to the social impairments that characterize autism…' [3] Thus the executive function account is a reasonable way to understand autism. Furthermore, the fact that a number of researchers accept and employ this account shows the importance of exploring the implications for moral responsibility. Neuropsychologists have a variety of instruments for assessing distinct and overlapping domains of executive function. [25] One instrument, the Wisconsin Card Sorting Test (WCST), asks subjects to sort cards into categories. The subject is expected to infer the active sorting rule (sort by shape type, shape color, or number of shapes) based on indication from the tester of whether the last sorting action was correct. The tester changes the rule at several points, requiring a new inference and a shift from following the old rule to following the new one. [26] The WCST engages a variety of EFs: '…success on the WCST requires that an individual be able to stop a current behavior, remember and keep active the rules and objectives of the task, and change strategies in order to sort by new, incompatible rules.' [23] In the Stroop Test, subjects are shown color words printed in an incongruous color. For instance, the word 'green' might appear in red ink. Subjects are asked to name the color of the ink. This is understood to test the subject's ability to inhibit the responses triggered by the meaning of the word in the prompt. [27] Isolating which domains are affected by autism is tricky. [23] This becomes apparent in the literature on moral judgement.
Data show that, compared to controls, autists tend to take less account of an agent's intentions when assigning blame to actions that cause harm. [28] Buon, et al. recognize two explanations for this. [28] One explanation cites ToM difficulties that impair autists' ability to take into account what the agent is thinking and the interpersonal norms being violated by the agent. The other explanation is that autists are less likely to inhibit the automatic, emotion-based response to the unintended harmful consequences of an action in favor of a judgement that reflects consideration of the agent's neutral intentions. This inhibition is expected under the 'dual process model' of moral judgement. [29] Isolating the involvement of various EFs is further complicated by data suggesting that inhibition ability and ToM skills are actually correlated. [30] Russo, et al., however, argue that 'Among persons with autism, inhibition abilities appear to be intact…' [23] They draw this conclusion in part based on a comparison between performance on the WCST, which shows higher errors in autistic subjects compared to appropriately matched controls, and performance on the Stroop Test which, though testing primarily inhibition, shows no difference attributable to autism for those above age six. Russo, et al. conclude that autists' lower performance on the WCST is due to lower function in updating short-term memory and in 'set shifting/cognitive flexibility,' and is not attributable to lower inhibitory control. [23] Data show subtle differences attributed to autism in the EF called verbal fluency (generativity), which falls in the category of cognitive flexibility. Verbal fluency requires subjects to generate words in a certain category (beginning with a certain letter, or naming members of a class such as animals). Contra Russo, et al., autists' comparatively high repetition of previously given answers seems to point toward inhibition difficulties. [15] In a study by Carmo, et al., [31] autists produced fewer words at the start of verbal fluency tasks compared to neurotypical controls, with production converging as the tests progressed. Carmo, et al. found that this difference was not present when an 'initial word cue' was given, supporting their 'impairment of initiation hypothesis' [31] about autism. What can we make of this complicated scene? The familiar features of autism, such as focus on narrow interests, challenges with social interactions, insistence on sameness, and difficulty initiating new activities, can be explained in terms of a few of the EFs described above. These are: shifting/cognitive flexibility, initiation, generativity and working memory. These EFs are important for considering counterfactual situations, identifying alternative paths, and adjusting to new information about people and circumstances. In the next section, I describe the reasons responsiveness approach to moral responsibility. I will show that multiple aspects of reasons responsiveness are relevant to assessing the impact of executive function differences on moral responsibility in autists.
Responsibility and Reasons Responsiveness
I am interested here in moral responsibility as accountability. Theories of accountability tell us when it is appropriate to hold a person to account - to blame someone, e.g. - for an action.
Although some philosophers are skeptical about attributing moral responsibility at all, [32,33] I take it as given that typical adults do things for which they are morally accountable, and also do things for which they are not morally accountable. I might deserve blame for carelessly causing a forest fire; I might be blameless for stepping on an endangered snail that I did not see on the sidewalk. As with autism, moral responsibility has been subject to multiple theoretical accounts. [34] Here I will focus on a family of theories that dominates current philosophical treatments of the topic: reasons responsiveness theories. Roughly, reasons responsiveness accounts tell us that an agent is responsible for an action if she has the capacity to alter her behavior in the face of relevant reasons to do so. As with the executive function theory of autism, the reasons responsiveness approach to responsibility is widely discussed and defended [5,21,32,35], seems reasonable, and helps to explain the relevant phenomena. These are reasons to take seriously the implications of the reasons responsiveness approach for autism. Fischer and Ravizza give a canonical reasons responsiveness account. According to Fischer and Ravizza, moral responsibility requires reasons receptivity and reasons reactivity. These are explained in terms of 'mechanisms' in an agent: In the case of receptivity to reasons, the agent (holding fixed the relevant mechanism) must exhibit an understandable pattern of reasons-recognition, in order to render it plausible that his mechanism has the 'cognitive power' to recognize the actual incentive to do otherwise. In the case of reactivity to reasons, the agent (when acting from the relevant mechanism) must simply display some reactivity, in order to render it plausible that his mechanism has the 'executive power' to react to the actual incentive to do otherwise. [35] Being a responsible agent, on this theory, requires that one have access to the reasons that are relevant to choices about how to act, and the ability to adjust action in response to those reasons. We cannot set as the standard for being a responsible agent that a person always responds to relevant reasons, as that would mean that anyone who acts wrongly in any instance would fail to be a responsible agent, eliminating the possibility of culpable wrongdoing. This is why Brink and Nelkin note that '…responsibility must be predicated on the possession, rather than the use, of such capacities. We do excuse for lack of competence. We do not excuse for failures to exercise these capacities properly.' [36] For Fischer and Ravizza, moral responsibility is inseparable from experience of appropriate moral sentiments or, in the case of a detached consideration, the recognition that such sentiments would be appropriate in the circumstance. [35] When we feel moral sentiments, we are feeling the moral character of reasons for acting (or at least for making moral judgments). The moral sentiments include reactive emotions that give us a sense of the moral significance of a fact. Importantly, this encompasses what Strawson and followers call the reactive attitudes. [2,37] The reactive attitudes (resentment or gratitude, e.g.) are, on many views, integral to or even constitutive of holding someone responsible. [2] We could have a reasons responsiveness theory that did not involve moral feelings at all. For some, however, such a theory would not be a theory of moral responsibility.
[2,5] On these theories, responses to reasons are moral (rather than merely prudential) only if they involve moral feelings. Fischer and Ravizza are not, of course, the only philosophers to have developed a reasons responsiveness account that presents feelings as central to the character of moral engagement. Shoemaker leans heavily (as he puts it) on anger and gratitude as the 'fitting' sentiments for holding a person accountable. [5] Wallace describes responsibility in terms of 'a distinctive kind of normative competence: the general capacity to grasp moral reasons, and to guide one's conduct by the light of such moral understanding' [38], where this moral understanding is 'distinctively connected to the reactive emotions.' [38] McKenna admits that reactive emotions 'might well only be contingently related to our moral responsibility practices,' yet treats them as 'bedrock ingredients in an accurate and informative characterization of what moral responsibility is.' [39] The role of moral feelings in reasons responsiveness accounts is, as already suggested, to mark the moral valence of an action. The idea captured in these accounts is that a person who does not generate reactive emotions when contemplating relevant reasons does not have access (is not receptive) to moral facts, and therefore cannot be held morally responsible. These accounts cannot allow that just any response will capture the moral scene accurately. Reactive emotions must be: 'fitting' [40]; justified and 'aimed at a sensible target' [41]; natural and universal [42]; 'natural or reasonable or appropriate...' [2] Fittingness is how these theories distinguish between correct and incorrect judgments. Atypical responses will not count as fitting. Stout's suggested revision of Fischer and Ravizza's reasons responsiveness requirement calls for systems that are 'functionally typical' [21], which indicates the connection between these ideas and concepts of health. 4 Brink and Nelkin's account of 'the architecture of responsibility' describes reasons responsiveness in terms of what they call 'normative competence.' Normative competence '…requires responsible agents to be able to recognize and respond to reasons for action.' [36] 'Recognize' refers to what Fischer and Ravizza call receptivity to reasons, and is the basis of 'the cognitive dimension of normative competence.' [36] 'Respond' refers to what Fischer and Ravizza call reasons reactivity, and is the basis of 'the volitional dimension of normative competence.' [36] Someone without normative competence is not morally responsible for her actions. But how might we fail to have normative competence? On the cognitive side, Brink and Nelkin start with the ability to 'recognize wrongdoing.' [36] On Shoemaker's theory of accountability, the key is the capacity to have regard for others, where my regard for you involves considering your projects to be meaningful and valuable because you find them so. [5] However, failure of normative competence in its cognitive component (failure to recognize wrongdoing, failure of regard, failure of reasons receptivity) seems to be possible in multiple ways. So, too, does failure of normative competence in its volitional component (failure of reasons reactivity). What we know about autism can help tease these out in a way that may, perhaps, be useful for understanding both reasons responsiveness and the ways in which autism can impact normative competence.
In the next section, I describe several ways in which a person could fail to be reasons responsive. I understand these to be compatible with a wide variety of reasons responsiveness theories.
Reasons Responsiveness in Detail
Agent A could fail to respond to reasons in a particular case in several ways. It is possible to fail in some of these ways and be blameworthy for this failure. For instance, A could fail to perceive a relevant, available fact F 5 because she was not paying attention when she should have been. 6 Alternatively, A could have been unable to receive or attend to F due to a non-culpable failure of her reasons responsiveness mechanisms. This failure could be global and permanent. Alternatively, it could be limited in time and/or scope. Non-culpable failure to be reasons responsive that is merely situational or partial would be an excuse for relevant actions or omissions. Non-culpable failure to be reasons responsive that is pervasive and regular would make for global exemption from responsibility. Consider the following five types of failure:
1. A does not perceive relevant, available fact F.
2. A perceives F, but does not perceive F as a moral reason.
3. A perceives F as a moral reason but does not find this motivating (can't be bothered).
4. A is motivated in a general sense but fails to conform her will to respond.
5. A can conform her will generally, but is unable to select an appropriate response.
My primary concern here is to consider when A would not be accountable. Therefore, in what follows I will focus on non-culpable failures to be reasons responsive. Failures of types 1, 2, and 5 are failures of cognitive competence. 7 Leaving aside cases where a reason is not in any sense available to agents (for instance, because the only evidence is behind a locked door), type 1 failures might include not noticing that a friend is sad or not recognizing a serious threat to a child's safety. Where a typical observer or participant in the scene would have picked up on these facts, someone else might in a sense be blocked from them. Here A would fail to be reasons receptive because she experiences a sort of reasons blockage. But perceiving a fact is not the same as recognizing it as a moral reason to act. In one kind of type 2 failure, A might perceive F but not as a reason to act at all. In a second case, A might perceive F as a reason to act, but not as a moral reason for action. For instance, A might perceive a safety hazard, know that it would be inconvenient if her son were injured, and act for that reason rather than on the moral ground that she has an obligation to protect her child. These are the failures of reasons responsiveness attributed to the hypothetical psychopath. 8 The psychopath may be clever and observant. She might recognize the relevant facts and know how to manipulate the world, [5] but she lacks a moral sense that reacts to some facts as moral reasons. [5,44,46] This kind of type 2 failure is lack of reasons receptivity. The psychopath is reasons blocked because of failure to apprehend the moral relevance of F. In a third sub-type of failure 2, A might perceive F as morally relevant, but fail to perceive F as a moral reason for action due to failure to recognize the availability of possible responses. The fact that someone is suffering is not a reason to act if I am actually powerless to do anything about it.
If I mistakenly believe I am powerless to address suffering because of an inability to identify responses that are in fact available, that would also be a type 2 case of failing to be reasons receptive. In this type of failure, A perceives F, and perceives the moral salience of F, but this does not trigger action because A is blocked to possible responses. This is different from the psychopath's type 2 failure in that (assuming a non-psychopath A) A would have perceived F as a moral reason for acting had she been able to identify the alternative actions that were available. 9 It may also be possible to recognize and react to relevant reasons, but choose poorly because of prioritizing wrongly among relevant reasons. This would be a fourth sub-type of type 2 failures. For instance, A might have time to run quickly into a burning house just once before it becomes too unsafe to do so. Suppose A uses this opportunity to save her comic book collection (the subject of her special, intense interest) rather than to check that all of her family members are safely out of the house. Suppose also that A was aware that checking for other people was a possible and morally relevant alternative action, that prioritizing the comic book collection over checking for family members was the wrong choice, and that this is not an instance of akrasia. On a reasons responsiveness account, assessment of culpability for this action depends on whether A's reasons receptiveness mechanisms were working properly. If so, A could be blameworthy for her choice. On the other hand, the intense, 'restricted and repetitive interests and activities' [1] associated with autism, and the associated challenges with inhibition, may interfere with A's ability to 'exhibit an understandable pattern of reasons-recognition' [35], so that she would not be blameworthy. In this case, A would be blocked from seeing the primacy of the morally right choice. Type 5 failures are different from type 2 in that, in type 5, A has recognized F as a morally relevant reason to act and A's motivation is aligned, but action is not forthcoming due to difficulty choosing an appropriate response. This would be a limitation in A's practical reason, her phronesis. She has a goal (to act in response to a moral reason) and the will to pursue that goal. She is able to marshal the volitional forces, but has, as Grisso and Appelbaum put it, 'difficulty in the processing of information to make a decision.' [48] This would not be reasons blockage; otherwise A would not have perceived F as a moral reason for action. However, challenged cognitive flexibility might interfere with A's ability to compare the consequences of competing potential actions. Initiation (getting going, as described by Carmo, et al. [31]) could also be a factor. Failures 3 and 4 would be failures of volitional competence. It is not clear, however, whether failure 3 is possible. On one reading, recognizing F as a moral reason involves feeling a moral reactive emotion, and moral reactive emotions are inherently motivating. On this view, it is not possible for A to perceive F as a moral reason and fail to be motivated by this to act in some way, because being motivated just is what it means to perceive a fact as a moral reason. I do not think a lot rides on whether 3 is possible. 4 is a more familiar sort of failure, involving an experience of being motivated to act but doing something else instead, something that is not endorsed by a second-order desire. That would be akrasia.
Footnotes:
7. (Type) 5 could be understood to overlap with volitional competence under Brink and Nelkin's description. [36]
8. I will use the term 'psychopath' here to refer to the character referred to in this way in the philosophical literature. [7,44-46] I will not be concerned with whether this represents any clinically accurate account of any actual person.
9. Kalis and Meynen identify three stages of decision-making: option generation, option selection, and action initiation. They argue that '…assessing option-generation dysfunctions is highly relevant for judgments of moral and criminal responsibility.' [47] 'Various mental disorders decrease a patient's capability to see those options for action that most people can easily see, or they lead one to see options for action that most people would not see as options.' [47] They use psychosis, schizophrenia, depression, compulsive disorders such as pyromania, and ADHD as examples. Kalis and Meynen argue that option generation is most relevant because we have the least control over it, and because option generation frames the possibilities for option selection and action initiation. As a result, they claim, those whose option generation is impaired should not be held responsible for poor option selection or action initiation.
Autism, Executive Function, and Failure of Reasons Responsiveness
We can see how EF challenges in the area of generativity could contribute to non-culpable failures of type 2. Someone might fail to be reasons responsive because she is blocked from the fact that she has options available for responding to a relevant fact. This could cause her not to perceive the fact as a moral reason to act (a type 2 failure). Cognitive flexibility challenges could lead to non-culpable failures of type 5. Type 1 failures could possibly connect to EF in cases where A fails to notice F due to slowness in updating working memory. For instance, A might not adjust her behavior to a new facial expression indicating that her friend has become sad or frightened. However, we know that autism is also characterized by a general tendency to miss social cues that trigger reactive emotions in neurotypicals. These might better be explained by ToM challenges, and by relatively low orientation to social circumstances. Anecdotal evidence suggests that the type 2 failures expected of the psychopath are rare for autists. An autist who perceives relevant facts (her friend is crying) may have less access to the emotion her friend is feeling, may feel that emotion through emotional contagion without realizing that it originates with the friend [49], or may not identify the possible responses that are available, but this is not the same as the psychopath's indifference to the friend's sadness. 10 So it looks like the executive function challenges characteristic of autism can affect a person's reasons responsive mechanisms. Reasons receptivity/cognitive competence and reasons reactivity/volitional competence were not specific enough categories to allow us to assess the impact of autism-related EFs on reasons responsiveness. Breaking down reasons responsiveness failures into types 1-5, with further distinctions within type 2, gives us more information about reasons responsiveness, and provides a framework for improved understanding of autism. Autism can cause reduced reasons responsiveness, typically involving some form of reasons blockage. Reasons blockage affects reasons receptivity (cognitive competence).
Although there will be some autists whose impairments are profound, it is likely that few autists who are otherwise able to engage socially will be globally exempt from moral responsibility due to lack of reasons responsiveness. Reduced reasons responsiveness is more likely to ground excuses in particular situations. Again, reduced reasons responsiveness should not be confused with culpable failure to engage one's reasons responsive capacities. The particular excuses identified here as caused by autism will be the same types of excuse that sometimes apply to neurotypicals. Neurotypicals can simply not register facts or fail to perceive options even when they are paying appropriate attention. These phenomena can sometimes be excuses even for those who are quite reliably reasons responsive. Indeed, we know that executive functions are impacted in any person who is tired, stressed, or even lonely. [22] This means that autists are not different in kind from neurotypical people, just subject to more of the common sorts of moral frailties. Autists may also be less able to avoid these frailties through ordinary means (such as getting sufficient sleep). Knowing that a person is autistic would be a reason to look for particular types of excuses, and to accept these excuses more often than we would for those who are not autistic. Diversity among the neurodiverse makes it especially challenging to identify whether and how often a particular individual should be excused. Variation in degree of difference on various measures, the particulars of actual circumstances, and co-existing conditions such as alexithymia and intellectual disability are all relevant factors. Some adults on the autism spectrum have suggested approaching this on the model used in education. 11 All students can benefit from individualized instruction, but state-funded schools are expected to provide individualized educational plans (IEPs) to students who have atypical neuropsychology profiles. IEPs are generated on the basis of extensive assessment, and in consultation with teachers, parents, clinicians, and, where appropriate, the student. It might also make sense to construct an 'individualized responsibility plan' (IRP). Because of the resources needed, an IRP would most likely be written in response to a specific incident or legal charge. However, it would be generalizable, and could be used as a guide to help an autistic adult avoid potentially problematic situations in the future, such as might arise in school or the workplace. It is also worth considering that differences characteristic of autism might enhance the ability to be responsible in some circumstances. This could be parallel to how monotropism (narrow focus of interest [50]) is understood as an advantage in certain careers, such as computer programming or database management. [51]
Respect and the Responsibility to Engage in the Moral Community
Participating in the moral community is not just another activity. I might be poor at playing darts or folding origami, and may be frustrated by this. However, these activities are not good in themselves. I can find other meaningful activities without missing out on anything profound. However, there is a duty of moral engagement, to live life as a moral agent when possible. Failing to step up in this way, or otherwise opting out of moral engagement, means failing to treat people (perhaps including oneself) as ends in themselves.
Fulfilling the duty to realize one's potential for moral engagement will be harder for autists given the challenges identified above. Although this is a reason to excuse autists for falling short, there still appears to be a duty to try, even though trying will generally take more effort for autists than for neurotypicals. This may seem troubling. Human diversity may contribute to different levels of success on a variety of measures, but autism should not make it harder for someone to be a good person. Three responses to this worry are available. We might see this as a reason to re-visit the concept of moral responsibility and look for an account of responsibility under which autists can be fully responsible without needing to work too much harder than others. Kennett takes an approach like this. [7] We might bite the bullet, though it may seem unjust, and say that autists are obligated to work harder than neurotypicals at moral engagement because it is a duty to do so. Alternatively, we might hold that autism simply reduces the obligation for moral engagement in some way. The first response is outside the scope of this paper, which is intended to draw out the implications of combining reasons responsiveness as an approach to moral responsibility with an understanding of autism as driven primarily by EF challenges. The second and third responses capture the horns of a significant dilemma, or at least two opposing forces that need to be balanced. Although excusing someone for a transgression can be a kind, empathetic response that recognizes the individual's vulnerability, it can also have a flavor of infantilizing parentalism. 12 Holding someone accountable is part of treating that person as an adult, as a full member of the moral community. There is every reason to believe that autistic adults want to be held accountable, and also want to be excused when autism has made them reasons blocked. 13 This suggests, quite reasonably, that the default approach should be to treat adult autists as accountable, with adjustments for particular circumstances. Of course, this will not apply to those with profound impairments that keep them from engaging in social interactions generally. Nelkin has argued that 'difficulty is a factor in determining degrees of blameworthiness and praiseworthiness.' [53] If we accept this, we can say that autism can be a factor that reduces the scope and degree of moral responsibility in some individuals. When autism causes non-culpable failure of reasons responsiveness as outlined above, this will put an action outside the scope of moral responsibility for the affected person. Where autism does not cause failure of reasons responsiveness, but does make it atypically difficult to employ an individual's reasons responsive mechanisms, this can reduce the degree of blameworthiness should this person fail to respond to reasons in the particular circumstance. When we are responsible for an action, we have an obligation to take responsibility for it. On a practical level, there might also be situations in which it makes sense for someone to take responsibility even when she was not, strictly speaking, responsible. (Cf. Enoch [54]) This could be important to help an offended party make sense of the circumstance. It might be an opportunity to practice behaviors and reinforce habits of attention to promote an agent's ability to be responsible when similar circumstances arise in the future.
(Cf. Pickard [52]) We might also want to take responsibility in order to maintain trust (within ourselves or others) in our agency.
Conclusion
Determinations of whether and how autism might be a reason to excuse or even exempt someone from being morally accountable for behavior appear to hinge on how we choose among models of autism and among approaches to responsibility. The current paper looked at this question using an executive function model of autism and a reasons responsiveness approach to moral responsibility. We saw that executive function challenges may cause partial reasons blockage. Reasons blockage is not a failure of reactive attitudes or moral sensibilities. Like visual or hearing impairments, reasons blockage can keep some relevant facts from being available to some agent A. Reacting to those facts cannot therefore be 'properly morally demanded of' A. [55] Unlike Shoemaker, who argues in Responsibility from the Margins that autists are exempt from accountability [5], I concluded that autism will in most cases just increase the frequency of the types of excuses that sometimes apply to neurotypicals. This is because autism challenges executive function, but does not eliminate executive function. This understanding of how autism impacts moral responsibility provides a context for addressing the challenge of balancing respectful blame and compassionate exculpation.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Characterization of micro pore optics for full-field X-ray fluorescence imaging
Elemental mapping images can be achieved through step scanning imaging using pinhole optics or micro pore optics (MPO), or alternatively by full-field X-ray fluorescence imaging (FF-XRF). X-ray optics for FF-XRF can be manufactured with different micro-channel geometries such as square, hexagonal or circular channels. Each optic geometry creates different imaging artefacts. Square-channel MPOs generate a high intensity central spot due to two reflections via orthogonal channel walls inside a single channel, which is the desirable part for image formation, and two perpendicular lines forming a cross due to reflections in one plane only. Thus, we have studied the performance of a square-channel MPO in an FF-XRF imaging system. The setup consists of a commercially available MPO provided by Photonis and a Timepix3 readout chip with a silicon detector. Imaging of fluorescence from small metal particles has been used to obtain the point spread function (PSF) characteristics. The transmission through MPO channels and variation of the critical reflection angle are characterized by measurements of fluorescence from copper and titanium metal fragments. Since the critical angle of reflection is energy dependent, the cross-arm artefacts will affect the resolution differently for different fluorescence energies. It is possible to identify metal fragments due to the form of the PSF. The PSF can be further characterized using a Fourier transform to suppress diffuse background signals in the image.
KEYWORDS: X-ray fluorescence imaging; Micro pore optics (MPO); Point spread function (PSF); Hybrid pixel detector
Introduction
Micro X-ray fluorescence spectrometry (XRF) is a widely used non-destructive technique for performing elemental analysis down to the micrometre length scale. Micro-XRF is capable of providing elemental distribution information in many fields, such as materials science, cultural heritage [1], [2] and planetary surface analysis [3], [4]. XRF imaging is typically performed using scanning or projection methods [5]. In the scanning method, XRF images are obtained by conducting a two-dimensional scan of a beam over the sample and collecting the fluorescence X-rays at every point of the map [6], [7], [8]. The scanning approach provides high chemical sensitivity and high spatial resolution with the help of a polycapillary focusing lens or a synchrotron [9]. However, the spatial resolution of the obtained XRF image is limited by the spot size of the X-ray probe and the scanning step size. It also takes a significant amount of time to obtain elemental images of large areas measured with high spatial resolution.
Full-field X-ray fluorescence imaging (FF-XRF) is a projection method that allows for statically resolved X-ray spectroscopy [10], [11]. It can be used to map full sample areas with fair position and energy resolution. Primary X-rays irradiate a large area of the sample. The XRF emitted from the sample is collimated by X-ray optics (or a pinhole collimator [12], [13]) and then guided to the pixel detector. Several XRF imaging systems have been developed using X-ray optics. X-ray optics can be manufactured with different micro-channel geometries, such as square [14], hexagonal [15] or circular channels [16]. Although X-ray focusing optics have been significantly refined, they usually suffer from relatively low collection efficiency. In our previous research, a comparison study between a circular-channel polycapillary array and a pinhole collimator was conducted. No significant transmission intensity gain was observed when using a lead glass polycapillary array [17]. Moreover, circular-channel optics are not considered true focusing devices, due to the fact that the circular channels are mirrors with very short focal lengths and, hence, rays reflecting from a single channel diverge. Square-channel micro pore optics (MPOs), sometimes referred to as 'square multi-channel plate optics', 'multi-pore optics' or 'lobster-eye optics', are an attractive option because of their efficiency, which results from the corner cube effect, when compared to other possible channel cross sections [14]. For square-channel MPOs, X-rays that reflect off the square pore sides form a central focus (odd number of reflections in one plane) or a line focus (even number of reflections in one plane), giving a cross-arm point spread function (PSF). Ideally, one reflection each in two orthogonal planes will contribute to a focused central spot. The understanding of the square MPO properties relies on the modelling of ideal structures, as shown by one study carried out already in 1991 [14]. XRF imaging using a perfect planar square-channel MPO has been further investigated through modelling of the PSF, whereby Monte Carlo simulations were developed to evaluate the effects of instrument geometry and MPO characteristics, including various types of defects [18]. However, real MPOs have imperfections that might add complexity to applications and create difficulties for understanding the performance of MPOs in practice.
In a collaborative study, we investigated the usage of a square-channel MPO in an FF-XRF imaging measurement system. The imaging system consists of a commercially available square-pore MPO and a Timepix3 readout chip with a silicon detector. Using this setup, the influence of geometrical parameters and X-ray energy on the PSF profile is shown. Transmission through MPO channels and variation of the critical reflection angle are characterized by the measurement of metal fragments with different fluorescence energies. The hypothesis is that the energy sensitivity of the MPO can provide useful energy information directly from artefacts in the image caused by the PSF profile. An alternative method for indirect energy measurement is to vary the angle between two MPOs mounted in series [19]. MPOs act as a filter for X-ray energy, since the angle of total reflection for X-rays is energy dependent. Our purpose is to identify elemental imaging applications and spatial resolution limitations using MPOs. The experimental setup of the FF-XRF imaging system using a flat square-channel MPO is shown in Figure 1. An X-ray tube with a silver target was used. The MPO was mounted at the midpoint between the sample and the pixel detector. Spectrally resolved images of the fluorescence signal passing through the MPO are retrieved using a Timepix3 system. X-rays that enter the MPO at angles to a channel axis smaller than the critical angle will either pass through the channel or be subjected to one or a series of reflections with the channel walls. X-ray photons undergoing an odd number of reflections in one plane will be focused on the image plane, while direct transmission or double reflection will deteriorate the spatial resolution. With equal distances Ls (between sample and MPO) and Li (between MPO and detector), the conditions for an image plane with true focusing without magnification are fulfilled.
Timepix3 pixel detector
The Timepix3 readout chip was developed by the Medipix3 group at CERN. This ASIC is equipped with eight data channels that are data driven and zero suppressed, making it suitable for particle tracking and spectral imaging [20]. The chip is designed in 130 nm CMOS and contains a 256 × 256 pixel matrix (55 × 55 μm² pixels). The Timepix3 can record time of arrival (ToA) and time over threshold (ToT) simultaneously for each pixel. As channel responses can never be identical, it is necessary to perform an energy calibration for each pixel. The calibration procedure is performed by taking measurements of X-ray fluorescence from plates of five known elements: titanium, iron, copper, zirconium and silver. Moreover, a Timepix3 readout chip can be combined with different sensor materials, such as Si, GaAs or CdTe. A Timepix3 device with a 300 μm thick p-on-n Si detector was used in this study. After calibration, the energy resolution at 8.04 keV was 1.12 keV [17].
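The per-pixel calibration described above can be sketched as a least-squares fit of the measured ToT peak positions against the known Kα energies of the five reference elements. A minimal sketch assuming a purely linear ToT response per pixel; real Timepix3 calibrations typically use a more complex surrogate function near threshold, and the function name and numbers below are illustrative, not taken from the paper.

```python
import numpy as np

# K-alpha energies (keV) of the five calibration targets: Ti, Fe, Cu, Zr, Ag.
E_LINES = np.array([4.51, 6.40, 8.05, 15.78, 22.16])

def calibrate_pixel(tot_peaks):
    """Least-squares linear fit ToT = a*E + b for one pixel.

    tot_peaks: measured ToT peak positions for the five fluorescence
    lines, in the same order as E_LINES. Returns (a, b) so that
    E = (ToT - b) / a for this pixel.
    """
    a, b = np.polyfit(E_LINES, tot_peaks, deg=1)
    return a, b

# Toy example: a pixel whose gain is 12 ToT counts/keV with offset 30.
tot = 12.0 * E_LINES + 30.0 + np.random.normal(0, 1.0, E_LINES.size)
a, b = calibrate_pixel(tot)
energy = (127.0 - b) / a   # convert a measured ToT of 127 back to keV
print(f"gain = {a:.2f} ToT/keV, offset = {b:.1f}, 127 ToT -> {energy:.2f} keV")
```

In practice this fit would be repeated independently for each of the 256 × 256 pixels, since, as noted above, the channel responses are never identical.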
Micro pore optics plate
The MPO is a multichannel reflective X-ray lens of leaded glass, composed of an array of microscopic square-section channels whose walls act as imaging mirrors, as seen in Figure 1 (a). X-rays that enter the optic at an angle to a channel axis smaller than the critical angle will either pass through the channel or undergo one or a series of reflections [1], [4], [19]. The MPO is shaped as a square with 20 mm sides and 1.2 mm thickness. The channels are 20 µm square with a 25 µm channel pitch, giving an open area ratio of 60%. The channel walls are coated with a 25 nm layer of iridium, a heavy reflecting material, to achieve a flat surface and hence improve the optical properties [21]. The approximate critical angle θc for total external reflection is given by the atomic number Z, atomic mass A and density ρ of the surface material and the energy E of the X-rays, using the following equation [22]:

θc ≈ (1.65/E) √(ρZ/A)   (1)

with E in keV, ρ in g/cm³ and θc in degrees.
Point spread function
In a realistic imaging system, each point of a measured image is blurred by the effects of neighbouring points, compared to an ideal image with signal in only one single pixel. That blurring is described by the point spread function (PSF). A PSF is widely used to characterize the spatial resolution of imaging systems. The main features of a square-channel MPO PSF are a central spot (which is desirable for image formation), two perpendicular lines forming a cross, and weaker intensity in the quadrants delimited by the cross. The central spot intensity, or focusing reflection in Figure 1, is the result of an odd number of reflections in one plane [18]. A second effect is the diffuse signal from direct transmission. Moreover, an even number of reflections in one plane will contribute to the de-focusing reflection [23] and hence to the outer wings of the cross-arms. A flat-field image of a Cu plate was obtained and used to cancel the effects of image artefacts caused by variations in the pixel sensitivity of the detector. Measured PSFs after flat-field correction for different distances are shown in Figure 2. At a long working distance, the cross size is increased due to projection magnification (see Figure 2 (a)-(c)). However, when the MPO is placed non-symmetrically in the setup, the result is a blurred image, as shown in Figure 2 (d). This can be explained by the magnification effect due to non-true focusing. To achieve a better spatial resolution, the Ls and Li distances should be as small as possible and the MPO should be located at the midpoint between the sample and the pixel detector. However, a few centimetres of space need to remain for the X-ray tube tip to irradiate a large area of the sample. An alternative method to reduce the working distance is to tilt the sample table at 45° to the axis, as described in [24]. When the sample stage is tilted, the XRF image is distorted on the image plane. Software can correct those distortions by warping the image with a reverse distortion.
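The critical angles used in the next section follow directly from Equation (1) evaluated for the iridium channel coating. A short sketch; the material constants for iridium are standard values supplied by us, and the constant 1.65 assumes E in keV, ρ in g/cm³ and θc in degrees.

```python
import math

def critical_angle_deg(energy_kev, Z, A, rho):
    """Approximate total-external-reflection critical angle, Equation (1):
    theta_c ~ (1.65 / E[keV]) * sqrt(rho * Z / A) degrees, rho in g/cm^3."""
    return (1.65 / energy_kev) * math.sqrt(rho * Z / A)

# Iridium coating: Z = 77, A = 192.2, rho = 22.56 g/cm^3 (standard values).
for element, e_kalpha in [("Ti", 4.51), ("Cu", 8.05)]:
    theta = critical_angle_deg(e_kalpha, Z=77, A=192.2, rho=22.56)
    print(f"{element} K-alpha ({e_kalpha} keV): theta_c ~ {theta:.2f} deg")
# Reproduces the ~1.10 deg (Ti) and ~0.62 deg (Cu) values quoted below.
```

The roughly inverse scaling with energy is what makes the MPO act as an energy filter: lower-energy fluorescence is accepted and reflected by a larger fraction of the channels.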
Energy dependence
Measurements with a point source are useful for showing the modulation of the PSF with the MPO position, but it is also worth knowing the PSF response at different energies with respect to fluorescence imaging. A sample with two point sources (a mixture of Ti and Cu particles) has been analysed. This sample was exposed at 13 kVp with 500 μA emission current for two hours. By plotting the image for the energy ranges 2.5 keV to 5.5 keV and 6 keV to 9 keV, an elemental map of the distribution of Ti and Cu is obtained, as shown in Figure 3 (b) and (c). These images reproduce the 'true' sample shown in the photo in Figure 3 (a). An important feature of these PSFs is the presence of a cross centred on the main spot. These crosses have a detrimental effect on the image resolution. Thus, the intensity and reach of the cross-arms are worth characterizing. Both the object material and the system parameters influence these PSFs. The intensity of the cross-arms in the region around the PSF central spot depends on geometric factors and on the reflectivity, which changes with the material and the X-ray energy [18]. For a fixed geometry, the material and the X-ray energy will mainly affect the length of the cross-arms. To exemplify how the PSF profiles look, we extracted the full 2.5 keV to 9.0 keV spectrum intensity of the horizontal arm and the vertical arm from one Ti particle and one Cu particle, and plotted the central spot profiles in Figure 3 (d) and (e). Each profile is achieved by averaging three nearby row or column profiles. The PSF centre peak is due to focusing by an odd number of reflections in one plane in the MPO walls. At low energy, the critical angle will be larger. Hence, a higher PSF centre peak is expected for Ti, since the signal can be reflected by more MPO channels. The Kα emission line is 4.5 keV for Ti and 8.0 keV for Cu. The calculated critical angle is ~1.10° for Ti and ~0.62° for Cu fluorescence, using the density of the iridium coating in Equation 1. By the same argument, the Cu PSF centre peak intensity is expected to be lower than for Ti. As expected, the signal-to-noise ratio is worse for Cu. When it comes to arm length, wider spreading is expected for photons from Ti particles that experience double reflections in one plane in the channels, as illustrated in Figure 1, since the critical angle is greater for lower energy. In Figure 3 (d) and (e), the expected FWHM of photons undergoing de-focusing reflection is marked. The detector is also exposed to directly transmitted beams with no interaction with the MPO walls. This direct transmission through the optic is energy independent, since it is limited only by the channel geometry. The expected FWHM for direct transmission is also marked to allow it to be compared with the measurement. In summary, the measurements shown in Figure 3 indicate that the increased arm length of Ti compared to Cu could possibly be used to distinguish these two elements in case a non-energy-resolving detector system were to be used. However, the identification is complicated by the fact that lower energy does not only increase the arm length but also increases the PSF centre intensity. Hence, the actual PSF profiles for different elements might be difficult to distinguish. Another problem is that the possibility to accurately measure the arm lengths will vary if the background level increases or decreases.
Fast Fourier transform
To suppress the diffuse background signal in the measured images, a fast Fourier transform (FFT) and its inverse were applied to the measurements. Previous works have applied a deconvolution algorithm [25] to a square-channel MPO image to reduce the impact of a non-ideal PSF and thus enhance the resolution [1]. Another motivation for further processing the measurements is the expectation of finding a pattern hidden in the diffuse background, along the arms of the PSF. A Gaussian-shaped window function was applied to individual PSFs of Cu from Figure 2 and of Ti from Figure 3, followed by an FFT for each PSF. It should be noted that the phase information from the FFT output is neglected, such that the spatial position of the impulse response is suppressed. Hence, the amplitude transfer functions obtained from known metal fragments can be averaged. Figure 4 shows FFT results for Cu and Ti, averaged from three PSFs. Figure 4 (a) shows that along the cross-arm there is a difference in shape, indicating that Ti has higher resolution. Along the diagonal between the cross-arms there is no significant difference between the two materials, as seen in Figure 4 (b). After applying an inverse Fourier transform to the obtained amplitude transfer functions, idealised PSFs with suppressed background are achieved. The achieved resolution is slightly below 1 lp/mm. The relatively high background level seen in Figure 3 can be reduced to close to zero in Figure 4 (c). In this study, it could not be verified whether an oscillation pattern occurred along the cross-arms, although some vague indication of a repetitive pattern is seen in the amplitude transfer function for Cu.
Discussion
Although the features of the PSF have a negative impact on the XRF image when using a square-channel MPO, we have demonstrated that the cross-arm length of the PSF is energy dependent, using fluorescence from Ti and Cu. This feature is similar to a wavelength dispersive XRF spectrometer. The different energies of the characteristic radiation emitted from the sample are reflected in different directions and to different locations by the MPO walls. By measuring the arm length, we might be able to calculate the photon energy even when using non-energy-resolving detectors. One problem is that direct transmission and de-focusing reflection are merged with the peak signal due to focusing reflection. The direct transmission is determined by energy-independent MPO parameters, such as thickness, pore size and channel pitch; this will create a diffuse background patch and affect the determination of the de-focusing reflection arm length.
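The processing chain described above (Gaussian window, FFT amplitude with phase discarded, averaging over fragments, inverse FFT) can be sketched compactly. A minimal sketch assuming the PSFs are supplied as equally sized 2-D arrays centred on their main spot; the window width, array size and synthetic test data are illustrative choices, not values from the paper.

```python
import numpy as np

def amplitude_transfer(psf, win_sigma=0.25):
    """Gaussian-window a centred PSF and return its FFT amplitude.
    Discarding the phase suppresses the spatial position of the impulse
    response, so amplitudes from different fragments can be averaged."""
    n = psf.shape[0]
    g = np.exp(-0.5 * ((np.arange(n) - n / 2) / (win_sigma * n)) ** 2)
    windowed = psf * np.outer(g, g)
    return np.abs(np.fft.fftshift(np.fft.fft2(windowed)))

def idealised_psf(psfs):
    """Average the amplitude transfer functions of several PSFs of the
    same element and invert, giving a PSF with suppressed background."""
    mean_atf = np.mean([amplitude_transfer(p) for p in psfs], axis=0)
    out = np.abs(np.fft.ifft2(np.fft.ifftshift(mean_atf)))
    return np.fft.fftshift(out)  # put the impulse back at the array centre

# Toy usage: three noisy synthetic PSFs with a central spot plus cross-arms.
rng = np.random.default_rng(0)
n = 128
psfs = []
for _ in range(3):
    p = rng.normal(0.0, 0.05, (n, n))   # diffuse background
    p[n // 2, :] += 0.3                  # horizontal cross-arm
    p[:, n // 2] += 0.3                  # vertical cross-arm
    p[n // 2, n // 2] += 5.0             # central focusing spot
    psfs.append(p)
print(idealised_psf(psfs).max())
```

Averaging in the amplitude domain is what drives the background toward zero: the uncorrelated diffuse signal averages out, while the cross-arm structure, which is common to all fragments of the same element, survives.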
To improve the imaging performance of an FF-XRF imaging system, the MPO parameters can be optimized during manufacturing for a dedicated application. Lens thickness, channel size, and internal coating are the main parameters that can be adjusted in the MPO design. In previous ray-tracing studies, two different methods have been proposed to reduce the impact of the MPO PSF [18]. The first approach involves an array of square pores with a random orientation of the square cross section. This has been simulated for large distances (10 cm) and short distances (0.5 cm). At short distance, the image obtained using the randomly oriented squares is not very different from the image obtained using a regular MPO [18]. This can be explained by the limited number of channels involved for each point of the extended source, which is also the origin of the periodicity of the PSF discussed above in the case of a regular MPO. However, this approach presents practical difficulties when manufacturing the MPO. The second approach involves the precise rotation of a regular MPO to negate the effect of the PSF, which provides good results at a short distance. However, distances down to 0.5 cm are a challenge for the mounting setup: our setup needs at least ~3 cm of space for the X-ray tube tip between the sample and the collimator, in order to irradiate a large area of the sample.

To further characterize the channel transmission and the energy dependence of total reflection in the square MPO, post-processing using a Fourier transform has been applied. The relatively high diffuse background signal in the original measurements can be reduced to close to zero using this methodology. The background reduction is made possible by averaging several PSFs. Different PSFs can vary, however, with fragment size and orientation, so we might not measure the true PSF for all fragments. This can explain why, in Figure 4 (c), the difference between the Ti and Cu PSFs is suppressed after averaging.

Conclusion

The purpose of the current study was to identify capabilities and limitations of using MPOs. A full-field XRF camera setup with a square-channel MPO and Timepix3 readout chip was characterized in this study. The instrument was validated against theory using X-ray fluorescence imaging experiments with varying parameters for X-ray fluorescence energy, distance and readout energy discrimination. We have investigated the influence of these parameters on the intensity and spatial resolution of an X-ray fluorescence imaging experiment. One interesting feature of the single square-channel MPO setup is the ability to obtain spatially resolved images in which the arm length of the PSF indicates energy even if the imaging sensor does not resolve energy. Post-processing using a Fourier transform is a promising method for background suppression if the PSFs are generated from small enough fragments.

Figure 1. Setup: (a) SEM image of the square-channel MPO; (b) sketch of the XRF imaging setup with a square-channel MPO, showing the different reflection components and their influence on the resolution.

Figure 3. Example of PSFs from two materials resolved in one exposure image: (a) sample of metal particles; (b) PSF of Ti particles at 4.5 keV; (c) PSF of Cu particles at 8.0 keV; (d) central spot profiles of the MPO with a Ti particle; (e) central spot profiles of the MPO with a Cu particle. The expected FWHM due to components from two reflections and zero reflections is indicated in (d) and (e).
Figure 4. Amplitude transfer functions obtained from a fast Fourier transform of PSFs, along (a) and diagonal to (b) the cross-arms. (c) PSFs obtained from the inverse Fourier transform of the averaged amplitude transfer functions (average of three PSFs), with the background strongly suppressed.
ON SOME GEOMETRY OF PROPAGATION IN DIFFRACTIVE TIME SCALES

In this article, we develop a nonlinear geometric optics with the following two main features. It is valid in diffractive times, and it extends the classical approaches [7, 17, 18, 24] to the case of fast variable coefficients. In this context, we can show that the energy is transported along the rays associated with some non-standard long-time Hamiltonian. Our analysis needs structural assumptions and suitably polarized initial data to be implemented. All the required conditions are met for a current model [2, 3, 8, 9, 10, 11, 19, 21] arising in fluid mechanics, which was the original motivation of our work. As a by-product, we obtain results complementary to the literature concerning the propagation of Rossby waves, which play a part in the description of large oceanic currents, like the Gulf Stream or the Kuroshio.

Detailed introduction

This article is divided into two parts. The first (Section 2) is devoted to the WKB analysis associated with a class of quasilinear systems. The second (Section 3) explains how this approach can be implemented to solve concrete questions coming from quasi-geostrophic models.

1.1. Presentation of the framework. In this paragraph, we introduce the equations, the difficulties, and some significant results.

1.1.2. About geometrical optics. The geometrical optics approximation [15] simplifies the description of the evolution of a wave by considering that propagation takes place along rays. The simplest model corresponds to the situation where the high frequency oscillation is along one phase only:

(1.5)  $u^\varepsilon(t,x) = U^\varepsilon\big(\varepsilon t,\, x,\, \varphi(\varepsilon t,x)/\varepsilon\big)$, with profile $U^\varepsilon(\tau,x,\theta)$, $\theta \in \mathbb{T} := \mathbb{R}/\mathbb{Z}$.

The amplitude $U^\varepsilon(\tau,x,\theta)$ is assumed to be in $C^\infty(\mathbb{R}^2_+ \times \mathbb{R}^d \times \mathbb{T};\,\mathbb{C}^n)$, while the phase $\varphi(\tau,x)$ is in $C^\infty(\mathbb{R}_+ \times \mathbb{R}^d;\,\mathbb{R})$. Hence, the wave $u^\varepsilon$ is roughly constant on the level surfaces of $\varphi$ (called wave fronts) and has strong variations (at a speed of order $\varepsilon^{-1}$) in the directions of $\nabla\varphi$. The monophase WKB constructions are devised to prove the existence of solutions to (1.1) having the form (1.5). This goes back to the works of Choquet-Bruhat [5] as well as those of Hunter, Majda and Rosales [14]. A rigorous justification of such developments was performed by Guès [12, 13]. Classical results are only valid on a finite time interval, of the form $t \in [0,T]$ with $T \in \mathbb{R}^*_+$. After such a time, two phenomena may prevent going further in time: the creation of shocks and the appearance of caustics.

On shocks. Discontinuities of order zero in solutions of (1.1) may be caused by the nonlinearity of the coefficients of the matrices $S_j$. This difficulty can be managed [1] in one space dimension ($d = 1$) but seems out of reach in higher space dimensions. Alternatively, it may be avoided through some linear degeneracy assumption on the coefficients. In this article, we ensure that no shocks appear in times $t \lesssim \varepsilon^{-1}$. For this, we require that the hermitian matrices $S_j$ depend little on the state variable. More precisely, we impose that for all $j \in \{0,\cdots,d\}$,

(H1)  $S_j(\varepsilon,\tau,x,u) = S^0_j(\tau,x) + \varepsilon\, S^1_j(\tau,x) + \varepsilon^2\, S^2_j(\varepsilon,\tau,x,u)$,

with $S^0_j$ and $S^1_j$ in $C^\infty(\mathbb{R}_+ \times \mathbb{R}^d;\,\mathcal{S}_n)$, and $S^2_j$ smooth ($\mathcal{S}_n$-valued) in all its arguments.

On caustics. The eikonal equation (1.6) is solved along characteristic curves, the rays. Those curves may focus, or even cross [15, 16]. This mechanism prevents solving (1.6) in the class of $C^1$ functions.
It does not occur when:

i) the matrices $S^0_j$ and $\Lambda^0$ do not depend on the space variable $x$;
ii) the initial data $\varphi_0$ is linear in $x$, meaning that there exists a direction $\eta \in \mathbb{R}^d$ such that $\varphi_0(x) = \eta\cdot x$.

Condition i) yields parallel rays. It implies that $\lambda(0,x;\xi) \equiv \mu(\xi)$, so that $\Xi(t,x;\xi) = \xi$ and $X(t,x;\xi) = x + \nabla_\xi\mu(\xi)\,t$. From ii) one gets $\nabla\varphi_0 \equiv \eta$, and one recovers plane phases $\varphi(t,x) = -\mu(\eta)\,t + \eta\cdot x$. The two restrictions i) and ii) appeared in the pioneering work of Donnat, Joly, Métivier and Rauch [6], where they were used to propagate the WKB analysis all the way to times $t \sim \varepsilon^{-1}$, or $\tau \sim 1$. They have since been considered as prerequisites in contributions dealing with diffractive nonlinear geometrical optics [17, 18, 24]. Let us also mention [20], where the long-time semiclassical evolution involving non-classical phenomena is studied for linear quantum dynamics.

1.1.5. The analysis in diffractive times. In this article, we will consider diffractive times $\tau \sim 1$ without assuming conditions i) and ii). We will allow variable coefficients $S^0_j$ and $\Lambda^0$, along with nonlinear phases. Some attempts in this direction have been performed in [7, 14], but (after rescaling) it was in the context of almost planar phases, meaning in particular that $\varphi$ is of the form $\varphi(t,\varepsilon x)$ instead of $\varphi(t,x)$. In order to reach times $\tau \sim 1$, one still needs a degeneracy assumption on the curvature of the characteristic variety. The strongest version of that property consists in requiring (after an adequate change of variables in $\varepsilon$, $t$, $x$ and $u$) the existence of a spectral value such that

(H2)  $\lambda(0,x;\xi) = 0$ for all $(x,\xi) \in \mathbb{R}^d \times \mathbb{R}^d$.

At first sight, condition (H2) seems to be of no interest. Indeed, as long as $t \lesssim 1$, nothing happens: the phase and the principal profile $U^0$ both remain unchanged, $\varphi(t,\cdot) = \varphi_0(\cdot)$ and $U^0(t,\cdot) = U^0(0,\cdot)$. On the other hand, for $t \sim \varepsilon^{-1}$, or $\tau \sim 1$, one expects that the dispersive effects (and the production of Schrödinger equations) which motivate the articles [6, 17, 18, 24] do not appear. Indeed, hypothesis (H2) implies that the Hessian matrix $D^2_\xi\lambda(0,x,\cdot)$ is zero. However, precisely because we do not assume i) and ii), other phenomena can occur. Without i) and ii), the discussion concerning oscillating solutions of (1.1), like (1.5), is in fact rather complex as soon as diffractive times $\tau \sim 1$ are reached. The corresponding study is new. It is motivated both by mathematical and physical issues.

In the linear case ($S_j$ independent of $u$), we assume that conditions (H1) to (H8) hold. In the nonlinear (more general) case, we have to complete these prerequisites with conditions (HN1) to (HN5). These are structural assumptions on the expressions $S_j$, $\Lambda$ and $F$ appearing in the system (1.1), which are made precise further in the text.

Theorem 1. Consider a phase $\varphi_0 \in C^\infty(\mathbb{R}^d;\mathbb{R})$ which is non-stationary, in the sense that $\nabla\varphi_0(x) \neq 0$ for all $x$ (condition (1.8)). Look at an oscillatory initial datum of the form (1.9). Then, for all $N \in \mathbb{N}$, there is a family $\{u^a_\varepsilon\}_{\varepsilon\in\,]0,1]}$, involving monophase oscillations of the form (1.10), which is an approximate solution to the system (1.1) in diffractive times. More precisely, the functions $u^a_\varepsilon(t,x)$ are defined on a time interval $[0,T/\varepsilon]$ for all $\varepsilon \in\,]0,1]$, with $T \in \mathbb{R}^*_+$. They satisfy (1.9) together with the approximation estimate (1.11). In addition, the presence of variable coefficients can induce a modification of $\varphi_0$ in times $t \sim 1/\varepsilon$, or $\tau \sim 1$, via some non-trivial eikonal equation (1.12).

To our knowledge, Theorem 1 cannot be derived, after a change of scalings (in $\varepsilon$, $t$ and $x$) or a change of variables (in $u$), from well-established results.
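To see concretely what i) and ii) buy, here is the standard computation (a sketch; it assumes the classical eikonal equation (1.6) takes the Hamilton–Jacobi form below, which is consistent with the surrounding discussion but not quoted verbatim from the text):

$$\partial_t\varphi + \lambda(t,x;\nabla_x\varphi) = 0, \qquad \varphi(0,\cdot) = \varphi_0.$$

Under i), $\lambda(0,x;\xi) = \mu(\xi)$; under ii), $\varphi_0(x) = \eta\cdot x$. Then

$$\varphi(t,x) = \eta\cdot x - \mu(\eta)\,t \quad\text{solves the problem, since}\quad \partial_t\varphi = -\mu(\eta), \;\; \nabla_x\varphi \equiv \eta,$$

and the rays $X(t,x;\xi) = x + \nabla_\xi\mu(\xi)\,t$ are straight parallel lines, so no caustic can ever form. Dropping either i) or ii) destroys this explicit structure, which is exactly the difficulty addressed in this article.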
Section 2.1 introduces our notations and our strategy. The hierarchy of equations is presented in Section 2.2 and initiated in Section 2.3. As already explained, the main effect (for small times $t \lesssim 1$) of the penalization term is to polarize $\partial_\theta U^0$ in the kernel of $P^0$ (which has dimension $p \in \mathbb{N}^*$). Then comes the question of the propagation in diffractive times. Because the coefficients $S^0_j(\tau,x)$ and $\Lambda^0(\tau,x)$ may depend on the variable $x$, the discussion must be organized differently from what is usually done. In fact, it needs a refinement of the analysis inside the kernel of $P^0(\tau,x,\xi)$. A crucial step (Lemma 2.3) is to obtain the Hamiltonian $h(\tau,x;\xi)$ of (1.12). It corresponds to an eigenvalue of some hermitian matrix $H(\tau,x,\xi)$ exhibited in Section 2.4, see (2.31). One notices that there can be (in diffractive times) as many as $p$ different geometries (or $p$ different types of rays). The transport equation on $U^0$ is solved in Section 2.5. The induction giving access to the other profiles $U^k$ with $k \geq 1$ is presented in Section 2.6. This concludes the formal WKB analysis.

1.2.2. Stability issues. Due to (H3) and (H5), the system (1.1) is compatible with energy estimates in the space $L^2$. Therefore, in the linear case, we can infer from the preceding construction the existence of exact solutions $u_\varepsilon$ close (in the sense of $L^2$) to the approximate solutions $u^a_\varepsilon$. This is the content of our next theorem, proven in Section 2.7.

Theorem 2 (the linear case). Assume conditions (H1)–(H8) and suppose that $QS$ is independent of $u$. Consider a family $\{u^a_\varepsilon\}_{\varepsilon\in\,]0,1]}$ of approximate solutions of order $N$ to (1.1), given by Theorem 1. Then, for all $\varepsilon \in\,]0,1]$, the exact solution $u_\varepsilon$ of the Cauchy problem (1.13) remains close to $u^a_\varepsilon$, in the sense of the estimate (1.14).

The nonlinear situation is more delicate to deal with. It requires uniform estimates in $L^\infty$ on the family $\{u_\varepsilon\}_\varepsilon$. Such information can be obtained only through Sobolev estimates.

Theorem 3. Select any approximate solution $u^a_\varepsilon$ given by Theorem 1, with $2 + d < N$. Suppose that the structural condition (1.15) below holds. Then, the exact solution $u_\varepsilon$ of the Cauchy problem (1.13) is defined on the domain $[0,T/\varepsilon] \times \mathbb{R}^d$. It remains close to the approximate solution $u^a_\varepsilon$, in the sense that for all $s$ and $N$ with $1 + d/2 < s < N/2$, the estimate (1.16) holds.

The condition (1.15) is rather restrictive, but it seems necessary. Still, it allows variable coefficients at the level of the matrix $\Lambda^0(\tau,x)$. The proof of Theorem 3 relies on energy estimates performed in the weighted spaces $H^s_{\varepsilon^\iota}$, $\iota \in \mathbb{N}$, introduced in (1.17). The control (1.16) hides a loss of $\varepsilon^{-2}$ per spatial derivative $\partial_j$. As we will see in Section 1.4, this $H^s_{\varepsilon^2}$ information is not necessarily optimal. Nevertheless, it is sufficient to justify our WKB analysis.

Although all hypotheses (H1)–(H8) and (HN1)–(HN5) will be introduced in context in the following sections, for the comfort of the reader, we state them below.

1.3. Statement of the hypotheses. Note that our statements and the following assumptions could be localized in a conic region (in $\xi$) containing the set $\{(x,\nabla\varphi_0(x))\,;\,x \in \mathbb{R}^d\}$. For simplicity of exposition, we will not take such a refinement into account. Let us first state the assumptions necessary for the linear results.

(H4): There is a positive integer $p \in \mathbb{N}^*$ such that $\dim\ker P^0(\tau,x;\xi) = p$ for all relevant $(\tau,x,\xi)$ with $\xi \neq 0$.

(H5): There is a compact set $K \subset \mathbb{R}^d$ such that the fields $S_j$, $\Lambda$ and $F$, evaluated at $(\varepsilon,\tau,x,u)$, are constant when $\tau \in [0,T]$ and $x \notin K$.
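For the reader's convenience, here is the natural guess for the weighted norms meant in (1.17) (a hedged reconstruction: the exact normalization is not reproduced in the text above):

$$\|u\|_{H^s_{\varepsilon^\iota}} \;:=\; \Big(\sum_{|\alpha|\le s} \big\|(\varepsilon^\iota\,\partial_x)^\alpha u\big\|_{L^2(\mathbb{R}^d)}^2\Big)^{1/2}, \qquad \iota \in \mathbb{N},$$

so that each spatial derivative costs a factor $\varepsilon^{-\iota}$, which is consistent with the loss of $\varepsilon^{-2}$ per derivative mentioned after Theorem 3 (for $\iota = 2$).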
Given $m \in \mathbb{Z}$ and some application $f(\tau,x;\xi)$ defined on $[0,T] \times \mathbb{R}^d \times \mathbb{R}^d$, we use the shorthand notation $|f_m(\tau,x) := f(\tau,x;\,m\nabla\varphi(\tau,x))$, which simply amounts to replacing everywhere the vector $\xi$ by $m\nabla\varphi$. For instance, $|f_0(\tau,x) = f(\tau,x;0)$. Let us denote by $\Pi^h_0(\tau,x)$ the unitary projector onto the kernel of $H(\tau,x;0)$. Introduce the projector $Q := I - \Pi$ and the matrix $G^0 := |(QP^0Q)^{-1}_0\,F^0$.

(H8): The source term $F^1$ must be well prepared, in a sense made precise further in the text.

We can now state the assumptions necessary for the nonlinear results.

(HN1): There is an integer $p_0 \in \mathbb{N}$ satisfying $p_0 \geq p$ and $\dim\ker P^0(\tau,x;0) = p_0$ for all $(\tau,x)$.

Consider a solution $\varphi$ of the eikonal equation (1.6). Associated with $\varphi$, define the set $\mathcal{HA}$ of resonant harmonics, see (2.34)–(2.35).

(HN3): There is a constant $c \in \mathbb{R}^*_+$ such that, for every $m \in \mathcal{HA}$, a uniform gap of size at least $c$ separates $h$ from the other eigenvalues of $H$ evaluated at $\xi = m\nabla\varphi$ (the precise inequality is stated in context).

(HN4): There is a constant $c \in \mathbb{R}^*_+$ such that, for every $m \notin \mathcal{HA}$, $c\,|m| \leq |m\,\partial_\tau\varphi(\tau,x) - h(\tau,x;\,m\nabla\varphi(\tau,x))|$.

These assumptions (H1)–(H8) and (HN1)–(HN5) are not so restrictive. They are verified in various contexts, including the propagation of Rossby waves (Section 3) and of electromagnetic waves. They do not imply i) and ii). The point i) can be lifted, since the matrices $S^0_j(\tau,x)$ or $\Lambda^0(\tau,x)$ may well depend on $x$, while the function $\lambda(0,x;\xi)$, in view of (H2), does not. The restriction ii) can also be lifted, as one can start with an arbitrary phase $\varphi_0$ and no caustics will appear (at least as long as $\varepsilon t$, or $\tau$, remains small enough).

1.4. A model arising in fluid mechanics. The present work is motivated by physical considerations. Indeed, our WKB construction allows one to account for some wave-like features of oceanic circulation, called Rossby waves, which are produced by the variations of the Coriolis force with latitude. The link between this problem and our discussion is presented in detail in [2, 3, 8, 9, 10]. Basically, we have to deal with a two-dimensional system of compressible Euler type. The space variable is $x = (x_1,x_2) \in \mathbb{R}^2$. The state variable is $u = {}^t(p,v_1,v_2) \in \mathbb{R}^3$, and it must satisfy the system (1.19) below. The stationary profile involved represents a state of rest; it is a smooth function of $(\varepsilon,x) \in \mathbb{R}_+ \times \mathbb{R}^2$. The introduction of $b(\varepsilon,x)$ is due to the Coriolis force. In basic models, we have to deal with the choice $b(\varepsilon,x) = \sin x_2$. The source term $F^r := {}^t(F^r_0,F^r_1,F^r_2)$ allows one to take into account other influences (like wind, …). It is a smooth application which can depend on the variables $(\varepsilon,\tau,x,u) \in \mathbb{R}_+ \times \mathbb{R}_+ \times \mathbb{R}^2 \times \mathbb{R}^3$. The notation $d_\tau$ stands for the particle (convective) derivative. In contrast to (1.1), the system (1.19) involves the diffractive time variable $\tau$ directly, which explains the singular power $\varepsilon^{-2}$ in front of $b$. The modeling work leading to (1.19) is done in Paragraph 3.1. In the context of (1.19), a version of Theorem 1 is the following:

Theorem 4. Consider a family of oscillatory initial data, as in (1.9). Suppose that, for all $\theta \in \mathbb{T}$, the profile $U_\varepsilon(\cdot,\theta)$ is supported in a domain $D \subset \mathbb{R}^2$ adjusted so that (1.20) holds. Assume moreover that the phase $\varphi_0$ is non-stationary in the sense of (1.8) and that it satisfies assumption (Hi) or (Hii) given below. Then, there is a family $\{u^a_\varepsilon(\tau,x)\}_{\varepsilon\in\,]0,1]}$, as in (1.10), made of approximate solutions to the oscillatory Cauchy problem (1.9)–(1.19). For all $\varepsilon \in\,]0,\varepsilon_0]$ with $\varepsilon_0 \in \mathbb{R}^*_+$, it is defined in diffractive times $\tau \in [0,T]$ with $T \in \mathbb{R}^*_+$.
More precisely, those approximate solutions oscillate with a phase solving the eikonal equation (1.12), where the function $h$ must be replaced by the Rossby Hamiltonian $h^r$ given in (1.21). The energy of Rossby waves is transported along the rays associated with $h^r$, up to some explicit damping and source terms. Moreover, there are exact solutions $\{u_\varepsilon(\tau,x)\}_{\varepsilon\in\,]0,1]}$ of (1.19) corresponding to the approximate solutions $u^a_\varepsilon$, in the sense of (1.16).

This statement should be compared with the results announced in [3] and proved in [2]. In those papers, the discussion is essentially based on the study of a linearization of the system (1.19) around a particular stationary solution, and the methods come from semiclassical analysis and dynamical systems. They consist in diagonalizing the linearized system using $\varepsilon$-pseudodifferential symmetrizers and in obtaining dynamical information in terms of the wave front set of the initial data. Theorem 4 concerns more restrictive initial data, but the preparation of the data (the polarization on Rossby waves) has the advantage of giving rise to a more precise description. It allows one to extract quantitative information and to point out nonlinear mechanisms influencing the propagation.

In Paragraph 3.2, we check that the structure of (1.19) is compatible with Assumptions (H1), …, (H7) and (H8) required by Theorem 1. Paragraph 3.3 is devoted to the description of the rays transporting Rossby waves. The Hamiltonian $h^r$ exhibited in (1.21) is a generalization of what is produced in [3]. Moreover, its domain of validity is proved to be the whole cotangent space $T^*(\mathbb{R}^2)\setminus\{0\}$. On the other hand, the detailed analysis of the corresponding bicharacteristics, and in particular the reasons why it is possible to find trapped trajectories, is performed in [2]. Part 3.4 aims to make sure that the requirements (HN1), …, (HN4) and (HN5) are satisfied. In the context of (1.19), this means a precise study of the harmonics. In comparison with what is obtained in [2], the present analysis allows one to catch more nonlinearity: the size of the oscillating parts can be larger by a factor $\varepsilon^{-1-\eta}$, with $\eta > 0$. Another specificity of the current text is that it includes a discussion about quasilinear transparencies. In Paragraph 3.5, we show (see Lemma 3.1) that the obstructions to taking arbitrarily large times $T$ in Theorem 4 come not from the nonlinearities, but only from the restriction (1.20) or from the possible formation of caustics when solving the eikonal equation (1.12).

1.5. About the propagation of electromagnetic waves. Our approach can bring useful information in other physical contexts. For instance, it can be used to explore questions linked with light propagation in inhomogeneous media and with lasers in a plasma [17, 24, 23]. In Section 4, as an illustration, we explain the case of ferromagnetism.

2. The WKB analysis. This Section 2 is devoted to the proof of Theorems 1 and 3. Parts 2.1 to 2.6 explain the construction of the approximate solutions $u^a_\varepsilon$. Part 2.7 deals with nonlinear stability issues.

2.1. Assumptions and notations. We are interested in the system (1.1) for $t \lesssim \varepsilon^{-1}$. For the sake of simplicity, we will manipulate matrices $S_j$ and $\Lambda$ which do not depend on the variables $t$ and $\varepsilon x$. The influence of $t$ and $\varepsilon x$ could be incorporated in the analysis, but it would induce technicalities which are not central in what follows. For this reason, we deal only with $x \in \mathbb{R}^d$ and only with the slow variable $\tau \in \mathbb{R}_+$.
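For orientation, and without claiming this is the formula (1.21) itself (which is more general, allowing a variable coefficient $b$ and a non-trivial stationary flow), recall the classical beta-plane Rossby dispersion relation from the oceanographic literature:

$$\omega(\xi) \;=\; -\,\frac{\beta\,\xi_1}{\xi_1^2 + \xi_2^2 + F}\,,$$

where $\beta$ is the latitudinal gradient of the Coriolis parameter and $F \geq 0$ a stratification constant. Rossby waves thus have a westward zonal phase speed and a group velocity that decays for large $|\xi|$, coherent with the very long time scales (months to years) mentioned above; the Hamiltonian $h^r(\tau,x;\xi)$ should be thought of as a variable-coefficient generalization of such a symbol.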
Reasoning directly with $\tau \in \mathbb{R}_+$, we are faced with a system (2.1) containing singularities both in $\varepsilon^{-1}$ and in $\varepsilon^{-2}$. We denote by $T^*$ the cotangent space in $x$, and its elements by $(x,\xi)$. The null section of $T^*$ is denoted by $T^*_0$, and its complement is $T^*_{\setminus 0}$, so that $T^* = T^*_0 \sqcup T^*_{\setminus 0}$. To shorten the notation, we define, for $k \in \{0,1\}$ and using the notation introduced in (H1), the differential operators $S^k(\tau,x;\partial_x) := \sum_{j=1}^d S^k_j(\tau,x)\,\partial_j$. With that notation, expanding (2.1) in powers of $\varepsilon$ yields the hierarchy studied below. The three main constraints to keep in mind are the following.

- The matrix field $S^0$ is divergence free, in the sense that $\sum_{j=1}^d \partial_j S^0_j(\tau,x) \equiv 0$. This assumption ensures that the differential operator $S^0(\tau,x;\partial_x)$ is anti-selfadjoint, which guarantees conservation laws.

- There is a positive integer $p \in \mathbb{N}^*$ such that $\dim\ker P^0(\tau,x;\xi) = p$ on $[0,T]\times T^*_{\setminus 0}$; this is (H4). We denote by $\Pi(\tau,x;\xi)$ the unitary projector onto the kernel of $P^0(\tau,x;\xi)$. We also denote by $|\Pi_0$ the operator $|\Pi_0 := \Pi(\tau,x;0)$, see (1.18) for the notation. Then $\Pi\circ\Pi \equiv \Pi$. One notices also that Assumption (H4) implies that $\Pi$ is smooth on $T^*_{\setminus 0}$, which will be useful in the discussion when it comes to differentiating in $x$ and $\xi$. In the special case $p = 1$, assumption (H4) may be stated equivalently in the following way: there is a nowhere-vanishing $C^\infty$ vector field $X(\tau,x;\xi)$ spanning $\ker P^0(\tau,x;\xi)$. One then deduces an explicit formula for the projector $\Pi$ (see the sketch after this paragraph). Assumption (H4) refines (H2), except at points $(\tau,x,\xi)$ with $\xi = 0$. Since the map $(\tau,x,\xi) \mapsto \dim\ker P^0(\tau,x;\xi)$ is upper semi-continuous, it is natural to supplement (H4) with:

- There is an integer $p_0 \in \mathbb{N}$ satisfying $p_0 \geq p$ and $\dim\ker P^0(\tau,x;\xi) = p_0$ on $[0,T]\times T^*_0$; this is (HN1). The reason why the assumptions are different for points of $T^*_{\setminus 0}$ and of $T^*_0$ lies in the physical applications, in which it often happens that $p_0 > p$. As usual, constant multiplicity is required above. That is a serious constraint, but it is inevitable if one seeks WKB expansions at any order in $\varepsilon$. If $p = p_0$, Assumption (HN1) is nothing but the continuation of (H4) to points $(\tau,x,\xi)$ of $[0,T]\times T^*_0$. This allows one to extend the regularity of the projector $\Pi(\tau,x;\xi)$ to the whole cotangent space $[0,T]\times T^*$. For $\xi = 0$, one has $P^0(\tau,x;0) = \Lambda^0(\tau,x)$, and the constraint (2.2) becomes $\Lambda^0(\tau,x)\,|\Pi_0(\tau,x) \equiv 0$. For monophase, linear geometrical optics, the phase $\varphi$ is non-stationary in $x$ and the oscillations are pure (carried by harmonics of the type $m\varphi$ with $m \neq 0$ fixed), so one does not need to consider the set $T^*_0$; hence (HN1) is not relevant there. However, in a nonlinear situation, that is no longer the case. This accounts for the denomination (HN1) for that assumption.

- Looking at the eigenvalue $\lambda(\tau,x,\xi) \equiv 0$, we can observe that the solutions of (1.7) are simply $X(t,x,\xi) = x$ for all $t \in \mathbb{R}_+$. Thus, the constraint (H4) can be viewed as the strong form of a capture condition on the rays. The speed of propagation associated with (2.1) is of order $O(\varepsilon^{-1})$. But waves which are approximately polarized in the kernel of $P^0$ (like those on which we will focus) remain located at a bounded distance from their initial position as long as $\tau \lesssim 1$, and this uniformly with respect to the parameter $\varepsilon \in\,]0,1]$. In this article, we focus on such waves. Thus, we will be able to localize the discussion on a set $\{(\tau,x)\,;\,|x| + c\,\tau \leq C\}$ for constants $c$ and $C$ independent of the parameter $\varepsilon \in \mathbb{R}_+$. The following assumption is therefore natural and does not reduce generality: there is a compact set $K \subset \mathbb{R}^d$ such that the fields $S_j$, $\Lambda$ and $F$ are constant outside $K$ (this is (H5)). In the following, we shall sometimes denote $\partial_0 := \partial_\tau$.
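A natural reconstruction of the elided projector formula for $p = 1$ (a sketch: this is the standard rank-one expression, assuming only that $X$ is nonvanishing; it is not quoted from the text):

$$\Pi(\tau,x;\xi) \;=\; \frac{X(\tau,x;\xi)\;X(\tau,x;\xi)^*}{|X(\tau,x;\xi)|^2}\,,$$

which indeed satisfies $\Pi^2 = \Pi = \Pi^*$ and $\operatorname{ran}\Pi = \mathbb{C}\,X = \ker P^0$; the smoothness of $\Pi$ on $T^*_{\setminus 0}$ is then inherited from that of $X$.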
Profiles $U \in L^2(\mathbb{T})$ are decomposed into Fourier series, $U(\tau,x,\theta) = \sum_{m\in\mathbb{Z}} U_m(\tau,x)\,e^{2i\pi m\theta}$. Given $m \in \mathbb{Z}$ and some application $f(\tau,x;\xi)$, we use again the shorthand notation $|f_m(\tau,x) := f(\tau,x;\,m\nabla\varphi(\tau,x))$, which simply amounts to replacing everywhere the vector $\xi$ by $m\nabla\varphi$. For example, $|\Pi_0(\tau,x)$ is the unitary projector onto the kernel of $\Lambda^0(\tau,x)$, and replacing $\xi$ by $m\nabla\varphi$ in condition (2.2) yields $|P^0_m\,|\Pi_m \equiv 0$. The symbol $|\cdot$ (without the index $m$) designates the action on $L^2(\mathbb{T})$ associated with the Fourier multipliers $|\cdot_m$ with $m \in \mathbb{Z}$; for instance, $|\Pi\,U = \sum_m |\Pi_m\,U_m\,e^{2i\pi m\theta}$. Note that the norm of $|\Pi_m$ is bounded by $1$ for all $m$, so the operator $|\Pi$ is defined and continuous on $H^s(\mathbb{T};\mathbb{C}^n)$ for all $s \in \mathbb{R}$. We also define $Q := I - \Pi$; the operator $QP^0Q$ is bijective on the range of $Q$, and therefore has a partial inverse (right and left), denoted by $(QP^0Q)^{-1}$, characterized by the relations $(QP^0Q)^{-1}(QP^0Q) = (QP^0Q)(QP^0Q)^{-1} = Q$.

2.2. The hierarchy of equations. To simplify, we will work in the whole space $\mathbb{R}^d$ and postpone the discussion about the localization of the solutions. We look for approximate solutions $u^a_\varepsilon(\tau,x)$ to (2.1) as monophase oscillations, as in (1.10), where the phase $\varphi(\tau,x)$ is smooth with bounded derivatives. More precisely, we impose the condition (2.6), and we are interested in non-stationary phases in the sense that $\nabla\varphi(\tau,x) \neq 0$ (compare with (1.8)). We will denote by $\mathbb{T}$ the torus $\mathbb{R}/\mathbb{Z}$, with elements denoted $\theta$. The solutions $u_\varepsilon$ of (2.1) are therefore sought in this oscillatory form; in particular, the profile may be Taylor-expanded in $\varepsilon$ near $\varepsilon = 0$, in the form (2.7). Plugging the expansion (2.7) into (2.1) and re-ordering in terms of powers of $\varepsilon$ yields a hierarchy of equations which starts at order $\varepsilon^{-2}$: the coefficients $\Gamma^j$ of $\varepsilon^j$ must vanish. Easy computations give $\Gamma^0$ explicitly; for $k \geq 1$, one obtains $\Gamma^k$ by linearizing the nonlinear terms in $\Gamma^0$. The equations are therefore of the same type as in the case of $\Gamma^0$, up to a source term, denoted $B^k$, which depends only on the profiles $U^j$ for $j$ going from $0$ to $k-1$. Let us now briefly describe the strategy. To obtain an approximate solution, it is enough to solve the system $\Gamma^j \equiv 0$ for $j$ going from $-2$ to $N-1$, and we shall deal with these constraints one after the other. The cases $j = -2$ to $j = 0$ are dealt with in detail in Parts 2.3 to 2.5 respectively. This gives an algorithm providing successively pieces of the $U^k$, as presented in the induction property $(P_k)$ in Part 2.6. In the end, one recovers all the $U^k$ for $k \leq N+1$.

2.3. The preliminary polarization condition. The equation $\Gamma^{-2} \equiv 0$ is a polarization condition on $U^0$. At this stage, the part $|Q\,U^0 \equiv G^0$ is entirely determined, while $|\Pi\,U^0$ may yet be chosen arbitrarily. For $m \in \mathbb{Z}^*$, assumption (H4) and (2.6) leave exactly $p$ degrees of freedom on $U^0_m$. In particular, one can demand that the oscillation $u_\varepsilon$ be non-trivial, by choosing a coefficient $U^0_1$ in such a way that (2.13) holds. In the case $p = 1$, using (2.4), one finds an explicit expression of $U^0_m$ for $m \in \mathbb{Z}^*$. Up to shrinking $T \in \mathbb{R}^*_+$, we can obtain (2.6). Finally, additional information on $U^0$ and $U^1$ is also deduced. The statement requires some additional notation, so we postpone it to Paragraph 2.4.7.

2.4.2. A preliminary algebraic computation. The following lemma, which is classical in this context, will be very useful in what follows.

Lemma 2.1. For all $(\tau,x,\xi) \in [0,T]\times T^*_{\setminus 0}$ and all $j \in \{1,\cdots,d\}$, one has $\Pi\,S^0_j\,\Pi \equiv 0$; in particular, $|\Pi_m\,S^0_j\,|\Pi_m \equiv 0$ for all $m \in \mathbb{Z}^*$, which is (2.16).

Proof. Differentiating the relation (2.2) with respect to the direction $\xi_j$ gives, for any $(\tau,x,\xi)$ in $[0,T]\times T^*_{\setminus 0}$, an identity relating $S^0_j\,\Pi$ and $P^0\,\partial_{\xi_j}\Pi$. One then applies the operator $\Pi$ to that equation and uses (2.2) again. Finally, noticing that $m\,\nabla\varphi(\tau,x)$ is nonzero, due to (1.8) and to the fact that $m \neq 0$, we get directly (2.16); a displayed version of this computation is sketched below. □
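A sketch of the computation behind Lemma 2.1, assuming that the symbol has the standard form $P^0(\tau,x;\xi) = \Lambda^0(\tau,x) + i\sum_j \xi_j\,S^0_j(\tau,x)$ — an assumption consistent with $P^0(\tau,x;0) = \Lambda^0(\tau,x)$, though not quoted verbatim — and that (2.2) reads $P^0\,\Pi \equiv 0$:

$$0 = \partial_{\xi_j}\big(P^0\,\Pi\big) = i\,S^0_j\,\Pi + P^0\,\partial_{\xi_j}\Pi, \qquad\text{hence}\qquad 0 = i\,\Pi\,S^0_j\,\Pi + \big(\Pi\,P^0\big)\,\partial_{\xi_j}\Pi = i\,\Pi\,S^0_j\,\Pi,$$

since $\Pi\,P^0 = -(P^0\,\Pi)^* = 0$ when $P^0$ is skew-adjoint ($\Lambda^0$ anti-hermitian, $S^0_j$ hermitian). Thus $\Pi\,S^0_j\,\Pi \equiv 0$: the group velocity attached to the eigenvalue $\lambda \equiv 0$ vanishes, as stated in the remark that follows.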
Lemma 2.1 reflects the fact that the group velocity $(\nabla_\xi\lambda)(\tau,x;\xi)$ associated with the trivial eigenvalue $\lambda \equiv 0$ is simply zero. Note that $|\Pi_0$ is defined using only $\Lambda^0$. Since the matrices $S^0_j$ and $\Lambda^0$ have been chosen independently, there is no reason for (2.16) to be satisfied when $m = 0$. This leads to the following supplementary assumption:

(HN2): $|\Pi_0\,S^0_j\,|\Pi_0 \equiv 0$ for all $j \in \{1,\cdots,d\}$.

The mode $m = 0$ is not solicited in a linear situation. Thus, condition (HN2) is useful only in a nonlinear framework.

Remark 2.1. The condition $p = p_0$ clearly relates the behaviours of $P^0$ on $T^*_0$ and $T^*_{\setminus 0}$. In particular, the map $\Pi : T^* \longrightarrow \mathcal{S}_n$ becomes continuous.

2.4.3. Some polarization constraints. Since the constraint $\Gamma^{-1} \equiv 0$ is linear, it may be decomposed into conditions on the Fourier coefficients $\Gamma^{-1}_m$. For every $m \in \mathbb{Z}^*$, one gets the relations (2.19) and (2.20). Let us introduce, for each point $(\tau,x,\xi) \in [0,T]\times T^*$, the matrix $G(\tau,x;\xi)$ of (2.21). The application $G(\tau,x;\xi)$ inherits properties which are stated below and proved in the Appendix 5, Paragraph 5.2.

Lemma 2.2. For all $(\tau,x,\xi) \in [0,T]\times T^*$, the operator $G(\tau,x;\xi)$ can be identified with the action of some hermitian matrix, which is of size $p_0\times p_0$ when $\xi \in T^*_0$ and of size $p\times p$ when $\xi \in T^*_{\setminus 0}$. The application $G$ is of class $C^\infty$ on $[0,T]\times T^*_{\setminus 0}$.

Later on, we shall also use the condition (2.22) that the fields of matrices $S_j$ and $\Lambda$ are real-valued. Let us also define the matrices $G_m$, see (2.24). Now, we can come back to the study of (2.19) and (2.20).

• The case $m \neq 0$. We recall (2.12), which allows us to apply (right and left) the operator $|\Pi_m$. Using Lemma 2.1, we find the polarization constraint (2.26). One sees, in the formula (2.24) defining $G_m$, where $\partial_j|\Pi_m$ is replaced as indicated in (2.27), the quantities $|\partial_j\Pi_m$ for $j \in \{1,\cdots,d\}$, which would not appear if the matrices $\Lambda^0$ and $S^0_j$ were constant. One also notices in (2.27) the presence of second-order derivatives of $\varphi$, which would not appear if the phase $\varphi$ were linear. Under conditions i) and ii) of the Introduction, those contributions would therefore disappear, and we would simply have to deal with $G_m \equiv 0$ for all $m \in \mathbb{Z}$. Due to (2.27), one has $G_0 \equiv -i\,|G_0$ with $G$ as in (2.21). When defining $G_0$, the contributions $\partial^2_{jk}\varphi(\tau,x)$, which are multiplied by the factor $m = 0$, play no role. Although that is not the case at first sight for $G_m$ when $m \in \mathbb{Z}^*$, it turns out that they also vanish. This fact is pointed out in the next statement, where it appears that $G_m$ depends only on $\nabla\varphi(\tau,x)$ and can easily be deduced from $G$. As the proof of that statement relies on rather technical algebraic computations, we postpone it to the Appendix 5, Paragraph 5.3.

Lemma 2.3. For all $m \in \mathbb{Z}$, one has $G_m \equiv -i\,|G_m$, where $G$ is defined in (2.21).

2.4.4. Long-time Hamiltonians, and the eikonal equation. In this section we concentrate on the equation (2.26) in the case $m = 1$. This allows us to deduce an equation on the phase $\varphi$. Lemma 2.3 identifies the equation under consideration. In view of (1.2), the matrix $\Pi\,S^0_0\,\Pi$ is positive definite on $\Pi(\mathbb{C}^n)$. Therefore, the associated map $M$ (built from $\Pi\,S^0_0\,\Pi$) is invertible as an operator from $\Pi(\mathbb{C}^n)$ to itself. For $(\tau,x,\xi) \in T^*$, we can then introduce the matrix $H(\tau,x;\xi)$ of (2.31), constructed from $G$ and $M$. In view of the definition of $H$, and due to Lemma 2.2, the matrix $H$ is hermitian. Hence, it is diagonalizable with real eigenvalues. Let us denote by $\operatorname{spec} H \subset \mathbb{R}$ the set of its eigenvalues. We assume that:

(H7): There is an eigenvalue $h$ of $H$ whose multiplicity is constant (equal to some $\mu \in \mathbb{N}^*$) on $[0,T]\times T^*_{\setminus 0}$.

From now on, we assume (H7) and we select accordingly some eigenvalue $h$ of $H$, which is thus defined on $[0,T]\times T^*_{\setminus 0}$.
For $(\tau,x,\xi) \in [0,T]\times T^*_{\setminus 0}$, we denote by $\Pi^h(\tau,x;\xi)$ the unitary projector onto the kernel of $(H - h\,I)(\tau,x;\xi)$, and we introduce $Q^h(\tau,x;\xi) := (\Pi - \Pi^h)(\tau,x;\xi)$, the complementary projector inside $\Pi(\mathbb{C}^n)$. Assumption (H7) implies that the maps $h$ and $\Pi^h$ are $C^\infty$ on $[0,T]\times T^*_{\setminus 0}$. The field $H(\tau,\cdot)$ is in fact defined on the whole of $T^*$, and it is continuous on $T^*_{\setminus 0}$. However, when $p_0 > p$, it is possible that $H(\tau,\cdot)$ is not continuous on $T^*$, since the behaviour of $\Pi(\tau,\cdot)$ near $T^*_0$ is not known. The unitary projector onto the kernel of $|H_0(\tau,x) \equiv H(\tau,x;0)$ is denoted by $\Pi^h_0(\tau,x)$. The spectrum of $H(\tau,x;0)$ may have nothing to do with that of the matrices $H(\tau,x;\xi)$ for $\xi \neq 0$. This is the reason why the function $h$ has not been defined on $T^*_0$. However, when $p_0 = p$, both maps $H$ and $h$ may be continuously extended from $T^*_{\setminus 0}$ to $T^*$, in which case $h(\tau,x;0)$ may be defined without ambiguity. Before going further, we record the following result, which will be proved in the Appendix 5, Paragraph 5.4.

Lemma 2.4. Suppose (2.22) and that $p_0 = p = 1$. Then, the function $h$ is continuous on $[0,T]\times T^*$ and odd with respect to the variable $\xi$. In particular, it satisfies $h(\tau,x;0) = 0$, which is (2.32).

From now on, the phase $\varphi$ is required to satisfy the Cauchy problem (1.12), which has a smooth, $C^\infty$ solution locally in time. Up to shrinking $T \in \mathbb{R}^*_+$, the function $\varphi$ satisfies (2.6). Proposition 2.2 is proved. Solving (2.26) therefore reduces to imposing (1.12) and (2.33). The scalar, nonlinear evolution equation (1.12) can be interpreted as an eikonal equation corresponding to a long-time propagation ($\tau \sim 1$) of oscillatory quantities of the type $e^{i\varphi(\tau,x)/\varepsilon}$, polarized according to (2.33). One can consider the function $h(\tau,x;\xi)$ to be a long-time Hamiltonian associated with the eigenvalue $\lambda \equiv 0$. With this formulation, there are as many long-time Hamiltonians as there are eigenvalues (counted without multiplicity) in the spectrum of $H$; these are at most $p$. When $p = 1$, the discussion is easier: the matrix $H$ can be viewed as a scalar real-valued function, and we can talk about the long-time Hamiltonian. Besides, assumption (H7) is then automatically verified (with $\mu = 1$), and we have simply $\Pi^h \equiv \Pi$. In general, with $Q^h := \Pi - \Pi^h$, retain the relations $\Pi^h Q^h = Q^h \Pi^h = 0$ and $\Pi = \Pi^h + Q^h$. The application $(H - h\,I)(\tau,x;\xi)$ is linear and bijective when viewed as acting on the vector space $Q^h(\tau,x;\xi)(\mathbb{C}^n)$. It therefore has a partial (left and right) inverse, denoted by $[Q^h(H - h\,I)Q^h]^{-1}$, characterized through the identities $[Q^h(H-hI)Q^h]^{-1}\,[Q^h(H-hI)Q^h] = [Q^h(H-hI)Q^h]\,[Q^h(H-hI)Q^h]^{-1} = Q^h$.

2.4.6. Study of the harmonics. Recall that the phase $\varphi$ has been determined through (1.12). In the present dispersive context, the harmonics $m\varphi$ with $m \neq 1$ are not guaranteed to still be solutions to (1.12): nothing guarantees that the resonance relation (2.34) holds for them. Let us define the set $\mathcal{HA}$ of harmonics $m \in \mathbb{Z}$ satisfying (2.34), together with $m = 0$; this is (2.35). Due to (1.12), one has $1 \in \mathcal{HA}$. In (2.35), one also imposes $0 \in \mathcal{HA}$. This convention must be commented upon. Recall that $p_0 \geq p \geq 1$, meaning that $\det P^0(\tau,x;0) = \det\Lambda^0(\tau,x) = 0$, which implies that the trivial phase $\varphi \equiv 0$ is characteristic. Thus, it is natural to incorporate the mode $m = 0$ inside $\mathcal{HA}$. On the other hand, in the context of Lemma 2.4, the relation (2.34) is obvious for $m = 0$. Since $U^0(0,\cdot)$ is real-valued, assumption (2.13) requires that at time $\tau = 0$ both harmonics $m = -1$ and $m = 1$ have non-trivial contributions. If the matrices $S_j$ and $\Lambda$ are real-valued, one expects that oscillations in $e^{-i\varphi/\varepsilon}$ and $e^{i\varphi/\varepsilon}$ will propagate and interact due to the nonlinearity of the equation.
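To fix ideas, here is the shape the eikonal equation (1.12) and its rays presumably take (a hedged reconstruction: the sign convention is chosen to match the resonance expression $m\,\partial_\tau\varphi - h(\tau,x;m\nabla\varphi)$ used in Section 3, and is not quoted verbatim):

$$\partial_\tau\varphi(\tau,x) \;=\; h\big(\tau,x;\nabla_x\varphi(\tau,x)\big), \qquad \varphi(0,\cdot) = \varphi_0,$$

whose characteristics are the integral curves of the Hamiltonian flow of $h$,

$$\dot x(\tau) = -\,\nabla_\xi h(\tau,x;\xi), \qquad \dot\xi(\tau) = \nabla_x h(\tau,x;\xi).$$

Along these curves, the phase gradient is transported, and caustics correspond to the focusing of the projected rays $x(\tau)$; the harmonic $m\varphi$ is again characteristic precisely when $m\,\partial_\tau\varphi = h(\tau,x;m\nabla\varphi)$, which is how the set $\mathcal{HA}$ arises.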
Generically, even if $U^0_0(0,\cdot) \equiv 0$, a non-trivial average mode $U^0_0(\tau,\cdot) \not\equiv 0$ will therefore be produced for $\tau \in \mathbb{R}^*_+$. The question of the creation and propagation of that mode (and the others) is a delicate matter. To deal with it, we rely on the assumptions (HN3) and (HN4) stated in Section 1.3. Assumption (HN3) is automatically verified when $p = 1$: indeed, when $p = 1$, there is no spectral value $\mu \in \operatorname{spec} H$ with $\mu \neq h$, so there is nothing to check concerning (HN3).

2.4.7. Further polarization constraints. Recall that $\Pi^h_0(\tau,x)$ is the unitary projector onto the kernel of $|H_0(\tau,x)$. Define also $Q^h_0 := |\Pi_0 - \Pi^h_0$, together with the associated actions on Fourier series (as formal series), and introduce the corresponding projectors. In this section, we prove a proposition (Proposition 2.3) formulated in terms of these projectors. Combining Propositions 2.1 and 2.3, we can see that the mean value $U^0_0$ of the principal profile $U^0$ must be adjusted accordingly.

Proof. Let us go back to equation (2.26), which for the moment has been studied only in the case $m = 1$. Using (2.12) and the previous notation (2.31), one gets (2.39). If $m \notin \mathcal{HA}$, assumption (HN4) implies that the matrix $|H_m - m\,\partial_\tau\varphi\,I$ is invertible; in view of (2.12), the condition (2.39) then reduces to the vanishing of the corresponding component. On the opposite, if $m \in \mathcal{HA}\setminus\{0\}$, one gets $|H_m - m\,\partial_\tau\varphi\,I = |(H - h\,I)_m$, and (2.39) becomes the announced polarization condition. The case $m = 0$ must be dealt with separately. Taking into account the notation (2.36), the relation (2.25) can be made explicit; this enforces a compatibility condition (C1). Under condition (C1), one has the expected result on $U^0_0$. The remaining relation (2.37) comes from studying the equation $|Q\,\Gamma^{-1} \equiv 0$. □

Smoothness of the profiles. More is needed than just identifying the coefficients $|Q_m U^1_m$ through (2.37). To complete the analysis, one needs to give a meaning in $H^s([0,T]\times\mathbb{R}^d\times\mathbb{T};\mathbb{C}^n)$ to the profiles $U^j(t,x,\theta)$, and this requires understanding how the various operators introduced in this construction act on $H^s$. To deal with $|(QP^0Q)^{-1}$, we introduce the assumption (HN5), which is enough to ensure the boundedness of this linear map.

Remark 2.5. Assumption (HN5) amounts to the same thing as requiring the existence of a constant $c \in \mathbb{R}^*_+$ such that the minoration (2.42) holds for every $m \in \mathbb{Z}$. For $m \in \mathbb{Z}$ fixed, the corresponding minoration with a constant $c_m \in \mathbb{R}^*_+$ is a consequence of (H5), (H4) and (HN1). Thus, the problem lies with large values of $|m|$, where (2.42) may be difficult to check.

Remark 2.7. Propositions 2.1 and 2.3 imply that, at this stage, the phase is known, as well as a large part of the profile $U^0$ (it remains to find $|\Upsilon^h U^0$) and part of the profile $U^1$ (namely $|\Pi\,U^1$).

2.5. The transport equation. In this Section 2.5, we determine $|\Pi\,U^0$ and part of the expression $|\Pi\,U^1$. We start by translating the equation $F_m(\Gamma^0) \equiv 0$, which gives (2.46). Denote by $\tilde U^0_m$ the part of $U^0$ which is still unknown to us; we shall determine it by computing the equation it satisfies. Apply $|\Pi^h_m$ to (2.46) and use (2.37) to replace $|Q_m U^1_m$ accordingly. By developing the induced expression, we get the system of constraints (2.47), indexed by $m \in \mathcal{HA}$. The operator involved is anti-selfadjoint; hence the matrices $D^{ij}_m$ and $D^k_m$ are hermitian.
The first line in (2.47) is clearly compatible with energy estimates (it has the structure of a quasilinear symmetric system); however, that is much less apparent for the second line (due to the presence of $D_m$). That question is examined in the next Paragraphs 2.5.2 and 2.5.3.

2.5.2. Erasing the second-order terms. In fact, the influence of the second-order terms is reduced to zero, as is apparent in the following statement.

Lemma 2.5. For all $m \in \mathcal{HA}$, the second-order contributions in (2.47) vanish; this is (2.48).

Proof. This is an adaptation of arguments which are classical in diffractive nonlinear geometric optics, see for instance [7]. Let us explain how the general procedure can be adapted in the current context. One differentiates the relation $\Pi\,S^0_j\,\Pi \equiv 0$ in the direction $\xi_i$ and applies the projector $\Pi$ on both sides; this leads to (2.49). On the other hand, recalling that $Q \equiv I - \Pi$, one has a complementary relation, which one also differentiates in the direction $\xi_i$ and composes with $\Pi$ (right or left); this gives (2.50). Using (2.50) in order to recognize (2.49) gives rise to (2.48). □

It is clear that the matrix $S_{m0}$ is hermitian and that it is positive definite when restricted to $|\Pi^h_m(\mathbb{C}^n)$. On the other hand, the matrices $S_{mj}$ with $j \in \{1,\cdots,d\}$ are by construction hermitian on $|\Pi^h_m(\mathbb{C}^n)$. With the preceding conventions, the equation (2.47) becomes (2.51). The first line of (2.51) is a quasilinear symmetric hyperbolic system; thus, it is compatible with energy estimates in $H^s$. We can obtain more information about it under the simplified setting (2.52), which implies that only one eigenvalue $h$ of $H$ is at play and that $\Pi^h M \equiv \Pi^h \equiv \Pi \equiv M$ (here the constant $\mu$ is the one appearing in (H7)).

Lemma 2.6. Assume (2.52). Then, the energy (meaning the $L^2$ norm in $\theta$ of the profile $\tilde U^0_m$) is propagated along the group velocity associated with the Hamiltonian $h(\tau,x;\xi)$. This is due to the identity (2.53).

Proof. Let us differentiate (2.18) (considered for the index $j = k$) in the direction $\xi_j$. Relation (2.17) can be written, taking the adjoint, as $\Pi\,S^0_j\,(QP^0Q)^{-1} = i\,\Pi\,(\partial_{\xi_j}\Pi)$. One can use (2.52), (5.2) and (2.55) to simplify $S_{mj}$. In the resulting expression, the derivatives contained in $S^0(\tau,x,\partial_x)$ can act either on $|\partial_{\xi_j}\Pi_m$ or on $|\Pi_m$; we then use (2.16) and (5.5). Using (2.52), the relation $H\,\Pi \equiv \Pi\,H \equiv h\,\Pi$ can be differentiated in $\xi_j$ and composed on both sides with $\Pi$, using again (5.2). Replacing $\xi$ by $m\nabla\varphi$, one sees that the first two lines in $S_{mj}$ reduce to $-\,|\partial_{\xi_j}h_m\;|\Pi_m$. To recover (2.53), it is therefore enough to show that the last line in $S_{mj}$ vanishes. But separating, in the sum, the index couples $(i,k)$ with $i \leq k$ from those with $k \leq i$, that line is seen to vanish. Lemma 2.6 is proved. □

The resulting transport system (2.57) is supplemented by an initial condition (2.58). The equation (2.57) is a quasilinear hyperbolic system which can be viewed as acting on the functional space $|\Pi^h\,H^s([0,T]\times\mathbb{R}^d\times\mathbb{T};\mathbb{C}^n)$. In this framework, all operators involving derivatives are anti-selfadjoint. Moreover, $S_0$ is positive definite. It follows that the Cauchy problem (2.57)–(2.58) can be solved by applying the standard theory: we can find some $T \in \mathbb{R}^*_+$ and a unique solution $\tilde U^0 \in H^\infty([0,T]\times\mathbb{R}^d\times\mathbb{T};\mathbb{C}^n)$ to (2.57)–(2.58).

i) The determination of $\tilde U^0$ (and therefore of $U^0 = \Upsilon^0\,\tilde U^0$) has been performed in Paragraph 2.5.4.
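Concretely, under (2.52) and in view of Lemma 2.5 and of the identity (2.53), the first line of (2.51) should reduce to a transport operator of the schematic form (a sketch; the precise lower-order terms are not reproduced above):

$$\Big(\partial_\tau \;-\; \sum_{j=1}^d |\partial_{\xi_j}h_m\;\partial_j\Big)\,\tilde U^0_m \;+\; (\text{lower-order terms}) \;=\; 0,$$

so that integrating $|\tilde U^0_m|^2$ shows the $L^2$ mass travelling along the rays $\dot x = -\nabla_\xi h(\tau,x;m\nabla\varphi)$ — the same curves as in the eikonal sketch above — which is exactly the content of Lemma 2.6.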
At this stage, we can write the decomposition (2.59) of the profiles. Observe that, due to the spectral assumptions (HN3) and (HN4), the relevant maps can be partially inverted, as follows.

ii) The determination of $\breve U^1$. When $m \in \mathcal{HA}$, the relation (2.34), along with the definition of $h$, implies that the linear map $|H_m - m\,\partial_\tau\varphi\;|\Pi_m$ is not one-to-one on the vector space $|\Pi_m(\mathbb{C}^n)$. However, it has a right and left partial inverse $|(H - h\,I)^{-1}_m$. Applying $|(H - h\,I)^{-1}_m$ to (2.46), we can obtain all components $|Q^h(\Pi M\Pi)_m U^1_m$ with $m \in \mathcal{HA}$. On the other hand, when $m \notin \mathcal{HA}$, hypothesis (HN4) says that the matrix $|H_m - m\,\partial_\tau\varphi\;|\Pi_m$ is invertible on the whole space $|\Pi_m(\mathbb{C}^n)$; applying the corresponding inverse, we can get $|\Pi_m U^1_m$.

iii) The link between $|Q\,U^2$ and $\tilde U^1$. Applying the map $|(QP^0Q)^{-1}_m$ to (2.45) yields (for $k = 1$ in our case) the relation (2.60), where $K^1_m$ is known. When $m \in \mathcal{HA}$, this relation (2.60) does not allow one to conclude as to the value of $|Q_m U^2_m$ (because the term $\tilde U^1_m$ is still unknown).

2.6. The induction. To pursue the analysis, we resort to an induction procedure. Let us define the following property $(P_k)$, indexed by $k \in \mathbb{N}$:

i) the profiles $U^l$ are known for all $l \in \mathbb{N}$ with $l < k$;
ii) the function $|Q\,U^k$ is known;
iii) the function $\breve U^k$ is known;
iv) the relation (2.60) holds (with $K^k_m$ known) for all $m \in \mathcal{HA}$.

The property $(P_1)$ is exactly what has been obtained in Paragraph 2.5.5. Let us suppose that the properties $(P_l)$ hold for $l \leq k$; we shall show that it is possible to deduce step $(P_{k+1})$. This amounts to studying the following system of equations: $F_m\,\Gamma^k(\tau,x;U^0,\cdots,U^{k+2}) \equiv 0$ with $m \in \mathbb{Z}$. In other words, the equations under study are those of (2.61), where the contribution $F_m(B^k)$ is known and can be handled as a source term. The analysis of (2.61) takes place along the same lines as that developed in Paragraph 2.5, so we shall not write all the details, but rather point out the new aspects to be taken into account. First, the contribution of $U^{k+2}$ can be eliminated through the relation (2.60), as indicated in point iv) of $(P_k)$; there remains an expression involving $\tilde U^k_m$. In the third and fourth lines of (2.61), decompose $U^k_m$ as in (2.59). The information coming from ii) and iii) of $(P_k)$ allows us not to be concerned with the parts $|Q_m\,\breve U^k_m$ and $\breve U^k_m$. In fact, we have $U^k = \Upsilon^k\,\tilde U^k$ for some smooth (known) map $\Upsilon^k$. Finally, we get the system (2.62), where $\tilde H^k$ is known. Let us complete (2.62) with any initial data (2.63). The system (2.62) is obtained from (2.57) by a linearization procedure. It is still symmetric hyperbolic on the functional space $|\Pi^h\,H^s([0,T]\times\mathbb{R}^d\times\mathbb{T};\mathbb{C}^n)$; therefore, it is locally well-posed in time $\tau$. In other words, we can find some $T \in \mathbb{R}^*_+$ and a unique solution $\tilde U^k \in H^\infty([0,T]\times\mathbb{R}^d\times\mathbb{T};\mathbb{C}^n)$ to the Cauchy problem (2.62)–(2.63). Knowing $|Q\,U^k$, $\breve U^k$ and $\tilde U^k$, we can piece together $U^k$ through (2.59): point i) of $(P_{k+1})$ is verified. Line iv) of $(P_k)$ furnishes the relation (2.60) for the index $k$; since the profile $\tilde U^k$ has just been identified, we get assertion ii) of $(P_{k+1})$. Next, the right-hand side of the relation determining $\breve U^{k+1}$ is known, and the matrix on the left is partially invertible; by exploiting this fact, we gain access to $\breve U^{k+1}$, which is iii) of $(P_{k+1})$. Finally, we apply $|(QP^0Q)^{-1}_m$ to (2.61); grouping all known expressions inside $K^{k+1}_m$, we just recover (2.60) written with $k+1$, which is precisely point iv) of $(P_{k+1})$. To conclude, the properties $(P_l)$ with $l \leq k$ allow us to recover $(P_{k+1})$ by solving $\Gamma^k \equiv 0$. It is sufficient to stop at step $N-1$ to obtain (2.9).
The profiles $U^j$ with $0 \leq j \leq N+1$ are in $H^\infty$ if the initial data belong to $H^\infty$. Note that one can also work with restricted (large) regularity, but in that case one expects a loss of derivatives at each step of the procedure. In conclusion, one has the expansion announced in Theorem 1, where the remainder $R$ is a $C^\infty$ function of $(\varepsilon,\tau,x,\theta)$. Since the integer $N$ may be chosen arbitrarily, this implies that $u^a_\varepsilon$ is an approximate solution to (1.1) in the sense given in Theorem 1.

2.7. Exact solutions. In this Paragraph 2.7, we prove Theorem 3. We are looking, in diffractive times $\tau \sim 1$, for families $\{u_\varepsilon\}_{\varepsilon\in\,]0,1]}$ of solutions to the oscillatory Cauchy problem (1.13). The aim is to show that $u_\varepsilon$ remains close to $u^a_\varepsilon$, in the sense of (1.16). To this end, we decompose $QS$ into two parts:

$$QS(\varepsilon,\tau,x,u;\partial)\,u = QS^\ell(\varepsilon,\tau,x;\partial)\,u + QS^n(\varepsilon,\tau,x,u;\partial)\,u,$$

where $QS^\ell$ collects all (main) linear parts, while one finds in $QS^n$ all the other contributions, and in particular the nonlinear ones. In the following, we shall first concentrate on the easier situation, namely when the contribution $QS^n$ is linear ($\nabla_u QS^n \equiv 0$); we will state an improved version of Theorem 3 in that case. Paragraph 2.7.2 deals with the general case. Note that we shall only give the functional setting and the main estimates necessary in each case to conclude stability, as the arguments giving rise to the existence of an exact solution are very standard [4, 12, 13, 17, 18].

2.7.1. The linear case ($\nabla_u QS^n \equiv 0$). Since the matrices $\Lambda^0$ and $\Lambda^1$ are anti-hermitian, condition (H3) implies that the action of the operator $QS$ is compatible with $L^2$ energy estimates, uniformly in $\varepsilon \in\,]0,1]$. In other words, there is a constant $C$, independent of $\varepsilon \in\,]0,1]$, such that for any $u \in C^\infty(\mathbb{R}_+;H^\infty(\mathbb{R}^d;\mathbb{C}^n))$ and all times $\tau \in \mathbb{R}_+$, the energy inequality (2.64) holds. Notice that the advantage of the linear case is that the assumptions (HN1)–(HN5) are not required, and the matrix coefficients $S^0_j$ may depend on $x$. The existence of a local (in time) solution $u_\varepsilon$ is not a problem, since the equation is linear. Knowing (1.11), the estimate (1.14) can be obtained just by applying (2.64) to the equation satisfied by the difference $u_\varepsilon - u^a_\varepsilon$. This completes the proof of Theorem 2.

2.7.2. The nonlinear case. The framework is described in Theorem 3. The discussion here is classical and inspired from [12, 13, 15]. Due to the nonlinearity, one needs to control $u_\varepsilon$ and $\partial_j u_\varepsilon$ uniformly in the space $L^\infty([0,T]\times\mathbb{R}^d\times\mathbb{T};\mathbb{C}^n)$. To get around this difficulty, one uses the weighted Sobolev spaces $H^s_{\varepsilon^\iota}$ which were introduced at the level of (1.17). The choice $\iota = 2$ is sufficient to handle the penalization term $\varepsilon^{-2}\,\Lambda^0(\tau,x)$, because the weight compensates the singular factor when $|\alpha| = 1$. When estimating $\varepsilon^2\,\partial_k u_\varepsilon$, difficulties come from the matrices $S^0_j$: there is indeed a loss in powers of $\varepsilon \in\,]0,1]$ coming from the contributions involving $S^0_j\,\partial_k$, $k \in \{1,\cdots,d\}$. This is the reason why (1.15) is needed: it yields contributions of order $O(1)$ instead of $O(\varepsilon^{-1})$. Just use (1.1) and (1.2) in order to replace the time derivative $\varepsilon^2\,\partial_\tau$ accordingly. Even if the condition (1.15) is very restrictive, it does not imply that the matrix $P^0(\tau,x;\xi)$ is constant in $x \in \mathbb{R}^d$, since $\Lambda^0(\tau,x)$ may depend on $x$. The combination of the preceding arguments easily gives, for all $s \in \mathbb{N}^*$, uniform bounds in $H^s_{\varepsilon^2}$. Observe that the approximate solution $u^a_\varepsilon$ oscillates at frequency $\varepsilon^{-1}$, and not $\varepsilon^{-2}$ (as could be the case for an arbitrary polarization). On the other hand, the linearization of the system (1.1) along $u^a_\varepsilon$ gives rise to functions of $u^a_\varepsilon$ with $\varepsilon$ in factor.
These two remarks are crucial. They imply (for instance when $|\alpha| = 0$) the key bound (2.66). Another aspect of the analysis is to deduce $L^\infty$ bounds from $H^s_{\varepsilon^2}$ estimates on $u_\varepsilon$. For $s > \tfrac d2 + 1$, this point can be obtained by using suitable Sobolev injections. As usual [4, 12], the solution $u_\varepsilon$ is decomposed into $u^a_\varepsilon + \varepsilon^{2+d}\,w_\varepsilon$. The equation on $w_\varepsilon$ involves a source term which is of size $O(\varepsilon^{N-2-d})$ in $H^s_{\varepsilon^2}$. Since all nonlinear contributions are of the form $\varepsilon\,S^2_j(\varepsilon,\tau,x,u^a_\varepsilon + \varepsilon^{2+d}w_\varepsilon)$, and therefore can be uniformly controlled in the Lipschitz norm through (2.66), the equation is compatible with energy estimates in the space $H^s_{\varepsilon^2}$. The corresponding bound on $w_\varepsilon$ leads directly to (1.16). This ends the proof of Theorem 3.

3. Application to the propagation of Rossby waves. Rossby waves can be found in the ocean. They are due to the variations of the Coriolis force, and they propagate on very long time scales: for example, they can take months or even years to cross the Pacific. For the underlying physics, the reader can refer to [8, 11, 19, 21]. In Section 3.1, we present the various equations and scalings useful to our study, and we explain how to relate the various terms of the equations to the general model (1.1) studied in Theorem 1. In Section 3.2, we check that the linear Assumptions (H1)–(H8) of Theorem 1 are indeed satisfied. The eikonal equation is computed in Section 3.3. The nonlinear aspects linked with Assumptions (HN1)–(HN5) are considered in Section 3.4. Something special happens in the present situation: as explained in Section 3.5, due to transparencies, the expected nonlinear effects are not present. Combining the preceding information, we can prove Theorem 4.

3.1. The equations. The number of state variables is $n = 3$. The vector $\tilde u = {}^t(\tilde u_1,\tilde u_2,\tilde u_3) \in \mathbb{R}^3$ symbolizing those variables is decomposed into the pressure $\tilde u_1 \equiv \tilde p \in \mathbb{R}^*_+$ and the two velocity components $\tilde u_2 \equiv \tilde v_1 \in \mathbb{R}$ and $\tilde u_3 \equiv \tilde v_2 \in \mathbb{R}$. The velocity field is $\tilde v \equiv {}^t(\tilde v_1,\tilde v_2) \in \mathbb{R}^2$. The space dimension is $d = 2$, and the coordinates $(x_1,x_2) \in \mathbb{R}^2$ represent respectively the longitude and the latitude, in a Cartesian approximation. We are interested in describing the large-scale structure of the oceans, using an equation of compressible Euler type, (3.1), where $t \in \mathbb{R}_+$ denotes the fast time (a hedged sketch of the type of system meant is displayed after this paragraph). The number $\varepsilon \in\,]0,1]$ comes from writing the physical equations in nondimensionalized form; it is supposed to be very small. The function $f \in C^\infty(\mathbb{R};\mathbb{R})$ plays the role of a state law. The terms $F^s_j(\varepsilon,x)$ are the components of a field $F^s = {}^t(F^s_1,F^s_2,F^s_3)$ belonging to $C^\infty([0,1]\times\mathbb{R}^2;\mathbb{R}^3)$, which represents exterior forcing (like wind, for instance). Finally, the function $b(\varepsilon,x)$ satisfies (3.2). The contribution $\varepsilon^{-1}\,b\,\tilde v^\perp$ is a term of fast rotation, due to the influence of the Coriolis force. The choice $b_0(x) = \sin x_2$ is adapted to applications in oceanography. Recall that the choice $b_0(x) = x_2$, with limited validity in the vicinity of the equator, corresponds to the so-called beta-plane approximation. From now on, we restrict our attention to a connected domain $D \subset \mathbb{R}^2$ satisfying (1.20). When $b_0(x) = \sin x_2$, this restriction means that we avoid the equatorial zone $E := \{x\,;\,x_2 = 0\}$ while focusing on a region $D$ placed at mid-latitudes. Moreover, we suppose that the flow under study is close to a stationary solution $u^s(\varepsilon,x)$ to (3.1), which we choose to be of the form (3.3).
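A hedged sketch of the type of system meant by (3.1) — the arrangement of the pressure law, the forcing, and especially the powers of $\varepsilon$ are assumptions chosen only to make the role of the fast-rotation term visible, not the normalizations fixed in the text:

$$\begin{aligned}
&\partial_t\tilde p + (\tilde v\cdot\nabla_x)\,\tilde p + f(\tilde p)\,\nabla_x\!\cdot\tilde v = F^s_1(\varepsilon,x),\\
&\partial_t\tilde v + (\tilde v\cdot\nabla_x)\,\tilde v + \nabla_x\tilde p + \varepsilon^{-1}\,b(\varepsilon,x)\,\tilde v^\perp = {}^t\big(F^s_2,F^s_3\big)(\varepsilon,x),
\end{aligned}\qquad \tilde v^\perp := {}^t(-\tilde v_2,\tilde v_1).$$

The term $\varepsilon^{-1}\,b\,\tilde v^\perp$ is the fast Coriolis rotation; passing to the slow time $\tau = \varepsilon t$ converts it into the $\varepsilon^{-2}$ penalization appearing in (1.19).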
This implies selecting the source term $F^s$ accordingly. Up to a rescaling of the type $(x,\tilde u) \to (\lambda x,\lambda\tilde u)$ with $\lambda := f(p)^{-1}$, one can assume that $f(p) = 1$. One then changes the time variable to a slow time $\tau = \varepsilon t \in \mathbb{R}_+$, and finally one modifies the source term accordingly. Writing the expected solution $\tilde u$ in perturbative form around the stationary state, we find that the perturbation $u$ must solve the system (1.19). The next paragraphs are devoted to the proof of Theorem 4, which amounts principally to checking that the hypotheses made imply that the Assumptions of Theorem 1 are satisfied.

3.2. Assumptions (H1)–(H8). With the notation of the general system (1.1), one has of course here $F \equiv F^r$, $S^r_0(\varepsilon,\tau,x,u) \equiv \varepsilon\,I$, and explicit expressions for the other coefficients. We emphasize the fact that we are working on a model for Rossby waves by adding a superscript $r$ to all the operators. Since the functions $p^s$, $v^s_1$ and $v^s_2$ are given, Taylor-expanding $f$ and $\tilde p$ yields the required expansions. One has of course (1.2): just take $c = 1$. On the other hand, the condition (1.3) is due to the choice of $b$. Restriction (1.4) is a consequence of the construction of $F^r$: the nonlinearity only appears at order $\varepsilon^2$. One has $F^{r0} \equiv F^p(0,\cdot)$ and $F^{r1} \equiv (\partial_\varepsilon F^p)(0,\cdot)$. One can also compute the corresponding symbol.

• The form required in (H1) is a consequence of the preceding expansions.

• Consider (H5). Since the speed of propagation associated with the system (1.19) is clearly not uniformly controlled with respect to the parameter $\varepsilon \in\,]0,1]$, we have to explain why such a localization process is possible. The reason is that our WKB analysis involves quantities which are polarized according to the $0$ eigenvalue, for which some uniform (in $\varepsilon \in\,]0,1]$) finite speed of propagation is available (as can be seen by looking at the transport equations). In practice, we can select any domain $D$. Then, to guarantee (H5), it suffices to extend all coefficients by a constant outside $D$. This manipulation does not affect what happens in a domain of propagation contained in $D$.

• Since $p = 1$, the matrix $H$ can be identified with a scalar. There is only one eigenvalue of $H$, which is of constant multiplicity $1$: Assumption (H7) is obviously verified.

3.3. Description of the geometry. Rossby waves are, by definition, waves which are polarized along $X^r$. As announced in [3], one can identify their trajectories through semiclassical arguments as the integral curves of a Hamiltonian $h^r$. Of course, our WKB analysis detects the same trajectories. It also shows that the rays in question are quantitatively associated with an energy transport, up to damping and source terms that can be computed (in the spirit of Lemma 2.6). Since $p = 1$, the matrix $H^r(\tau,x;\xi)$ can be identified with a real scalar function $h^r(\tau,x;\xi)$. In order to find $h^r : T^*_{\setminus 0} \longrightarrow \mathbb{R}$, we shall simply compute $|h^r_1$ in terms of $\nabla\varphi$, and extrapolate the values of $h^r(\tau,x;\xi)$ by replacing everywhere $\nabla\varphi$ by $\xi$. Equation (2.19) for $m = 1$ can be solved explicitly by collecting a few identities. Since $S^{r1}_0 \equiv I$, the formula (2.21) giving $H$ leads simply to the expression (1.21) of $h^r$.

Remark 3.2. When $b_0$ does not depend on $x_1$ and when $v^s \equiv 0$, one recognizes the Hamiltonian $h^r(\tau,x;\xi)$ exhibited in [2] and [3].

3.4. The nonlinear assumptions (HN1)–(HN5). In this Paragraph 3.4, we go over the assumptions (HN1)–(HN5) in the framework of the system (1.19).

• We start with (HN1). The situation can be delicate. To understand why, look at what happens when $b_0(x) = \sin x_2$.
In this special case, one has $P^{r0}(\tau,x_1,0;0,0) \equiv 0$, hence $\dim\ker P^{r0}(\tau,x_1,0;0,0) = 3$ for all $x_1 \in \mathbb{R}$, while for $x_2 \neq 0$ one gets $\dim\ker P^{r0}(\tau,x_1,x_2;0,0) = 1$ for all $x = (x_1,x_2) \in \mathbb{R}\times\mathbb{R}^*$. Thus, there is a jump (from $1$ to $3$) in the multiplicity of the $0$ eigenvalue along the equatorial zone $E := \{x\,;\,x_2 = 0\}$. The purpose of the restriction (1.20) is precisely to avoid the related difficulties: basically, it requires a localization far from equatorial zones. Since $b_0$ is supposed to be non-zero on $D$, it may be extended by some non-zero constant outside $D$. Following the same localization argument as before, we can always assume that $b_0$ is bounded away from zero on $\mathbb{R}^2$. Under that assumption, the smooth vector field $X^r$ is, for all $(\tau,x,\xi) \in [0,T]\times\mathbb{R}^2\times\mathbb{R}^2$, a basis of $\ker P^r(\tau,x;\xi)$. In other words, we have $p = p_0 = 1$; in particular, Assumption (HN1) is satisfied.

• To be able to ensure assumption (HN4), one first needs to identify the set $\mathcal{HA}^r$ given by (2.34) and (2.35), using the explicit formula (3.5) obtained for $h^r(\tau,x;\xi)$. More precisely, we have to identify the elements $m \in \mathbb{Z}$ for which the expression $m\,\partial_\tau\varphi(\tau,x) - h^r(\tau,x;\,m\nabla\varphi(\tau,x))$ is equal to zero. One can distinguish the two following situations:

i) The phase $\varphi$ is constant on the level lines of the function $b_0$. This can happen in particular when we impose the condition (Hi) of Theorem 4. Formula (3.4) then gives access to $\varphi(\tau,\cdot) \equiv \varphi_0$. In that case, one has $\mathcal{HA}^r_i := \mathbb{Z}$. Since there is no $m \in \mathbb{Z}$ outside $\mathcal{HA}^r_i$, there is nothing to check concerning (HN4).

ii) The level sets of $b_0$ are a foliation of $\mathbb{R}^2$ (or of the domain $D$ where the solutions are localized) by curves on which $\varphi_0$ is monotone. More precisely, we require the condition (Hii) of Theorem 4. Notice that the phase $\varphi$ is $C^1$ on $[0,T]\times\mathbb{R}^2$ with $\nabla\varphi$ bounded, because of (1.8). In view of (3.4), the derivative $\partial_\tau\varphi$ is also bounded. So, up to shrinking $T$, we can deduce that the above expression does not vanish when $|m| \geq 2$. Hence the set of harmonics reduces to $\mathcal{HA}^r_{ii} := \{-1,0,1\}$. Now, combining (3.6), (3.7) and (3.8), we can obtain

$$\exists\,c \in \mathbb{R}^*_+\,;\quad c\,|m| \leq \big|m\,\partial_\tau\varphi - h^r\big(\tau,x;\,m\,\nabla\varphi(\tau,x)\big)\big|, \qquad \forall\,m \notin \mathcal{HA}^r_{ii}.$$

It follows that (HN4) is satisfied. Finally, since the two matrices $S^{r0}_1$ and $S^{r0}_2$ are constant, the preliminary (nonlinear) stability condition (1.15) is obviously satisfied.

3.5. About transparencies. As already explained in Paragraph 2.5.4, the transport equation on the component $\tilde U^0$ of the main profile is in general quasilinear, see (2.47). The nonlinearity comes from the contribution $NL$, which is defined in line (2.8), with $U^0$ given by (2.56). Lemma 3.1, invoked in the Introduction, states that in the present context these nonlinear contributions disappear.

Proof. This property is due to transparency relations, which we exhibit below. In the present case (1.19), denoting $U = {}^t(P,V)$ with $V = {}^t(V_1,V_2)$, and introducing $h(P) := f(p)\,P + \tfrac12\,f''(p)\,(p^s)^2$, we obtain explicit expressions for the various terms. In what follows, the discussion depends on a symbol $\star$, which may be $i$ or $ii$, depending on the choice of the set $\mathcal{HA}^r_i$ or $\mathcal{HA}^r_{ii}$. Recall the convention $U_m := F_m\,U$. To prove Lemma 3.1, it suffices to show that, for every choice of $m$ in $\mathcal{HA}^r_\star$ (with $\star$ being $i$ or $ii$) and every profile $U \in C^\infty([0,T]\times\mathbb{R}^2\times\mathbb{T};\mathbb{C}^3)$, we have

(3.9)  $|\Pi^r_m\;F_m\,NL^r\big(\tau,x,\,|\Pi^r U + G^{0,h^r} + G^{r0}\big) \equiv 0\,.$

Because $p = 1$, $\Pi^r$ is the unitary projector onto the direction $X^r$.
We find that there exist functions f^r_j(τ, x), with j ∈ {0, 1, 2}, in terms of which NL^r can be expressed. One notices that the nonlinear contribution inside (3.9) can be decomposed into a quadratic part (denoted Q^r) and a linear part (denoted L^r):

NL^r(τ, x, Π^r U + G^{0,h_r} + G^{r0}) = Q^r(τ, x, U) + L^r(τ, x) U.

Observe that the velocity component of Π^r U is polarized in the direction ∇ϕ^⊥. Simplifications follow when computing Q^r. One gets indeed

Q^r(τ, x, U) := f(p) P ∂_θP ^t(0, ∂_1ϕ, ∂_2ϕ).

Finally, we have to look at the remaining quantity, which vanishes as well. It means that all terms coming from NL^r are in fact linear in U. The transport equation (2.51) deals with the profile Ũ^{r0}_m ≡ U^{r0}_m, which can here be identified with the scalar coefficient α_m. It follows that equation (2.51) translates into a constraint on α_m. Lemmas 2.5 and 2.6 imply that the constraint in question is a linear transport equation on α_m, involving a damping coefficient c_m and a source term d_m. Integrating this equation with respect to the variable θ ∈ T, one recovers a general principle of geometrical optics, which states that the energy is propagated, up to the damping coefficient (c_m) and the source term (d_m), along rays resulting from the eikonal equation (3.4).

4. Application to the propagation of electromagnetic waves. Models coming from electrodynamics (Maxwell-Ampère, Lorentz, Bloch, · · · [17,24]) yield quasilinear symmetric systems involving some skew-symmetric penalization term and having λ ≡ 0 as an eigenvalue of high multiplicity. It is therefore natural to study in this context how to describe the long-time propagation of electromagnetic waves which are polarized along the eigenspace E_0 associated with λ ≡ 0. The waves which, inside E_0, correspond to conserved quantities (like the divergence, · · ·) often lead to a trivial geometry: the rays remain parallel even in diffractive times. What happens in the other directions depends on the model which is selected, together with the involved parameters, such as the inhomogeneity of the medium or the presence of forcing terms. For instance, the work of R. Sentis [23] on the interaction of three waves in plasma physics reveals the Boyd-Kadomtsev system, whose structure is very close to what has been studied in Section 3.1. In this Paragraph 4, we study another typical situation. We explain the mechanisms underlying the propagation in a ferromagnetic medium.

4.1. The equations. In ferromagnetic models [22,24], the relevant quantities are the electric field E(t, y), the magnetic field B(t, y) and the magnetization field M(t, y), which are functions from R × R³ into R³. The physical features at a point are described through the vector V(t, y) := ^t(E(t, y), H(t, y), M(t, y)) ∈ R⁹. The assumption (H5) can be obtained by selecting maps α and Ξ which are constant outside a compact set. Since the matrices S^f_j are constant, we recover (H3). Computing the relevant matrices, we recall (2.18), which amounts here to the same thing as Π^f S^{f0}_j Π^f ≡ 0. This information allows us to eliminate, when computing G^f, the contributions which (on the right-hand side) are polarized according to Π^f. The remaining terms have explicit block structures. The relation (2.18) allows us to remove the terms which point in the directions of Π^f(R⁹) (those involving derivatives of coefficients).
The contributions which are likely to persist are in fact those which are polarized according to the magnetic component (those involving derivatives of the directions). However, these terms are not detected by the matrices S^{f0}_j. This last property expresses some kind of weakness of the coupling between the electric and magnetic fields on the one hand, and the magnetization field on the other hand. Consequently, we find G^f ≡ 0. Of course, a strengthening of the coupling would lead to a very different conclusion. The matrices S^{f1}_j are all zero, so that P^f_1 ≡ Λ^f_1. It is easy to see that the terms involving Π^f_1 and Λ^{f1}_0 play no part when computing H^f; indeed, the corresponding contributions cancel. Other simplifications occur which are due to the way we have adjusted the stationary solution V^s_ε (a manner which is in fact imposed by the skew-symmetry of Λ); they furnish further cancellations. Finally, we obtain H^f ≡ 0, which means, as expected, that the geometry is trivial even in diffractive times (τ ≃ 1). Note again that we have selected here a very basic model, just to illustrate the type of discussion which can happen. By taking into account other aspects (more inhomogeneities, more forcing terms, · · ·), we can find h^f ≢ 0.

5. Appendix. In this Section 5, we shall first prove a lemma giving some general algebraic identities of frequent use in WKB analysis. Due to their generality, we choose to state the lemma in an abstract framework. Then we provide proofs of Lemmas 2.2 and 2.3.

5.1. Algebraic identities. We denote by P = P(υ) : R^d → R^d a linear mapping which depends smoothly on parameters chosen in an open subset Υ of R^g with g ∈ N*. We denote these parameters by υ = (υ_1, · · ·, υ_g) ∈ Υ ⊂ R^g. These operators are characterized by the property (5.1). In the context of this article, for instance, one can choose P equal to Π, Π_m or Q, and υ equal to τ, x or ξ. One denotes by ∂_j the differentiation in the direction υ_j. The result consists of the identities (5.2)-(5.5); in particular, for any couple of integers (j, k) ∈ {1, · · ·, g}², one has (5.5). Proof. To obtain (5.2) and (5.3), one just needs to apply ∂_j to (5.1). Then (5.4) follows. Similarly, (5.5) is a combination of (5.2) (written for j = k and composed on the left with ∂_j Π) and of (5.3). □

5.2. Proof of Lemma 2.2. By construction, we have G ≡ Π G Π. The fact that G corresponds to the action of some matrix of size p_0 × p_0 or p × p (depending on whether ξ ∈ T*_0 or not) is a direct consequence of (H4) and of (HN1). The regularity of G on [0, T] × T*\0 comes from that of Π, which itself follows from (H4). To see why the matrix G is Hermitian, one computes

G − G* = i Σ_{j=1}^{d} ( Π S^0_j (∂_j Π) Π + Π (∂_j Π) S^0_j Π ).

The function h is continuous in the variable ξ ∈ R^d and odd. In particular, it must be zero when ξ = 0. This is precisely (2.32).
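The displayed identities (5.1)-(5.5) are not reproduced in this excerpt. Assuming, as the examples Π, Π_m and Q suggest, that the characterizing property (5.1) is the projector identity P(υ)² = P(υ), the flavour of the identities the lemma collects can be recovered in two lines of standard computation (a sketch, not the paper's exact statement):

% Differentiating P(\upsilon)^2 = P(\upsilon) in the direction \upsilon_j gives
\partial_j P \;=\; (\partial_j P)\,P \;+\; P\,(\partial_j P) ;
% multiplying on the left and on the right by P then yields the classical consequence
P\,(\partial_j P)\,P \;=\; 0 .

Identities of this type explain, for instance, why a term like Π (∂_j Π) Π vanishes identically.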
17,614.8
2011-09-01T00:00:00.000
[ "Physics" ]
Skyrme RPA description of γ-vibrational states in rare-earth nuclei

The lowest γ-vibrational states with K^π = 2^+ in well-deformed Dy, Er and Yb isotopes are investigated within the self-consistent separable quasiparticle random-phase approximation (QRPA) based on the Skyrme functional. The energies E_γ and reduced transition probabilities B(E2)_γ of the states are calculated with the Skyrme force SV-mas10. We demonstrate the strong effect of pairing blocking on the energies of γ-vibrational states. It is also shown that the collectivity of γ-vibrational states is essentially determined by the fulfillment of the Nilsson selection rules in the corresponding lowest 2qp configurations.

Introduction

Modern self-consistent mean-field methods have demonstrated great success in the description of multipole giant resonances [1][2][3] but have so far made only modest progress in reproducing the lowest vibrational states in well-deformed medium and heavy nuclei. To our knowledge, there are only a few studies of this kind. They are performed in the framework of the quasiparticle random-phase approximation (QRPA): with the Gogny interaction for multipole states in 238U [4] and with Skyrme forces (SkM* [5] and SLy4 [6]) for K^π = 2^+ γ and 0^+ β states in rare-earth nuclei [7]. In the framework of this activity, we have recently performed a systematic Skyrme QRPA investigation of K^π = 2^+ γ states in rare-earth and actinide nuclei [8]. Both the early SkM* and the recent SV-bas [9] Skyrme parameterizations were used. The main attention was paid to i) the impact of the pairing blocking effect (PBE) on the properties of K^π = 2^+ γ states and ii) the explanation of the nuclear regions with a low and high collectivity of these states. It was shown that the PBE strongly affects the energies of 2^+ γ states but almost does not change their B(E2)_γ values. The domains with a high collectivity were conditioned by the fulfillment of the Nilsson selection rules [10] in the lowest 2qp configurations with K^π = 2^+. In this paper, we additionally check these results by using the Skyrme force SV-mas10 [9]. This force has a large isoscalar effective mass m_0/m = 1 and so should generate a mean field similar to the phenomenological Woods-Saxon one [11]. Numerous early studies of K^π = 2^+ γ states were performed within schematic QRPA with the Woods-Saxon single-particle (s-p) scheme [12,13]. In this connection, it would be interesting to see how well a similar scheme works within the present self-consistent QRPA procedure.

Method

The calculations have been performed within the separable QRPA [15] based on the Skyrme functional [1]. The method is fully self-consistent since i) both the mean field and the residual interaction are obtained from the same Skyrme functional, and ii) the residual interaction includes all terms of the functional as well as the Coulomb direct and exchange terms. The self-consistent factorization of the residual interaction dramatically reduces the computational effort for deformed nuclei while keeping a high accuracy of the method. A large configuration space is used. The 2qp excitations range up to 55-80 MeV and exhaust 95-98% of the energy-weighted sum rule. The equilibrium deformations are obtained by minimization of the total energy of the system. As seen from Fig. 1, the calculated deformation parameters β_2 are in good agreement with the experimental data.
The volume δ-force pairing is treated within BCS [16]. The problem is solved separately for the ground state Ψ_0 and the excited 2qp states, where â^+_j (α̂^+_j) creates a particle (quasiparticle) in the state j, Ψ_HF is the Hartree-Fock ground state, and u_k(ij) and v_k(ij) are Bogoliubov coefficients. In axial nuclei, the 2qp configurations with K^π = 2^+ are non-diagonal (i ≠ j) and so are composed of unpaired states. Since the pairing operates solely on nuclear pairs, the unpaired states i and j are blocked for the pairing process and enter the BCS equations as pure single-particle states. This is the so-called pairing blocking effect (PBE) [12,[17][18][19]].

Following early schematic QRPA studies [12,13], the PBE noticeably affects the K^π = 2^+ γ energies in even-even axial nuclei. Indeed, 2qp states form the QRPA configuration space. Moreover, the first 2qp states are the main contributors to the lowest QRPA excitations. So an accurate (with PBE) treatment of the first 2qp states is important for a correct description of the lowest QRPA solutions. Since the PBE mainly affects the 2qp energies [12,13], one may correct only them and keep the ground-state set of Bogoliubov coefficients unchanged. Being economical, this prescription takes into account the major PBE impact. Besides, it leaves the 2qp basis ortho-normalized and allows one to avoid problems with the conservation of the proton and neutron numbers. Of course, this approximate scheme makes the QRPA calculations somewhat inconsistent. However, since a fully consistent QRPA scheme with blocked pairs is still a demanding and unsolved problem, we prefer to use an approximate scheme rather than neglect the PBE altogether.

The PBE formalism for the volume δ-force pairing can be found elsewhere [8,20]. In the present calculations, we replace the first five 2qp energies by the PBE-corrected values E^PBE(ij) = E(ij) − E_0, where E_0 and E(ij) are the energies of the system in the ground and 2qp (ij) states.

Results and discussion

In Fig. 2, the results of our calculations for Dy, Er and Yb isotopes are exhibited. In the upper (middle) panels, the 2qp energies calculated with (without) the PBE are shown. The QRPA energies of K^π = 2^+ γ states are compared with the experimental data. In the bottom panels, the B(E2) values obtained in QRPA with and without PBE are presented. Figure 2 shows that the collectivity of the calculated 2^+ γ states decreases from Dy to Yb isotopes. In Dy, we have the largest collective shifts (the difference between the QRPA and first 2qp energies) and B(E2) values. In all the nuclei, the PBE considerably decreases the 2qp and QRPA energies. In Dy isotopes, this leads to a nice agreement with the experimental energies. The B(E2) is also well described. In Er and Yb, the PBE also noticeably improves the description of the 2^+ γ energies. However, here E_QRPA still remains considerably higher than E_exp, and the calculated B(E2) values are accordingly underestimated. Altogether, we see that the PBE significantly changes the QRPA energies E_QRPA but only slightly affects the B(E2) values.
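The displayed formulas for the quasiparticle operators and 2qp states are not reproduced in this excerpt; under standard BCS conventions they read schematically as follows (a generic sketch, not the paper's exact expressions):

% Bogoliubov quasiparticle operator built from particle operators
% (\bar{k} denotes the time-reversed partner of the state k):
\hat{\alpha}^{+}_{k} \;=\; u_k\,\hat{a}^{+}_{k} \;-\; v_k\,\hat{a}_{\bar{k}} ,
% non-diagonal K^{\pi} = 2^{+} two-quasiparticle excitation on the BCS ground state \Psi_0:
|ij\rangle \;=\; \hat{\alpha}^{+}_{i}\,\hat{\alpha}^{+}_{j}\,|\Psi_0\rangle , \qquad i \neq j .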
The next point to be clarified is the different collectivity of 2^+ γ states, which is maximal in Dy and minimal in Yb isotopes. This difference can be explained in terms of the Nilsson selection rules for E2(K=2) transitions in axial nuclei [10,12]. The rules read ΔN = 0, Δn_z = 0, ΔΛ = 2, where N is the principal quantum shell number, n_z is its fraction along the z-axis, and Λ is the projection of the orbital momentum onto the z-axis. As seen from Table 1, the selection rule Δn_z = 0 is the most important one. It is kept in 164Dy but not in 172Yb. As a result, the quadrupole matrix element f^{22}_{ij} in 164Dy is much larger than in 172Yb. Following our analysis, it is precisely the quadrupole matrix element for the lowest 2qp state that is decisive for the collectivity of the corresponding QRPA state (see more analytical and numerical arguments in [8,20]). A large f^{22}_{ij} leads to a high collectivity and vice versa. This finding is valid for all considered nuclei. Thus we get a simple recipe for predicting the (weak or strong) collectivity of the first QRPA state: it suffices to check the Nilsson selection rule Δn_z = 0 for the lowest 2qp state.

In conclusion, the present QRPA calculations for 2^+ γ states with the Skyrme force SV-mas10 confirm the main results [8] obtained with the forces SkM* and SV-bas. Namely, a robust description of the deformation in rare-earth nuclei is certified; a strong PBE impact on the 2^+ γ energies is demonstrated; and a simple explanation of the nuclear domains with high and low collectivity is verified. From a comparison of the present SV-mas10 and previous SV-bas [8] calculations, it is seen that these two Skyrme parameterizations give very close results.

Figure 1. Parameter β_2 of the axial quadrupole deformation in rare-earth and actinide nuclei. The values calculated with the Skyrme force SV-mas10 (full symbols) are compared with the experimental values [14] (open symbols).

Figure 2. (Color online) The lowest 2qp and QRPA energies as well as B(E2) values of 2^+ γ states in Dy (left), Er (center) and Yb (right) isotopes, calculated with the force SV-mas10. The 2qp (filled blue triangles) and QRPA (filled red circles) energies obtained without (a-c) and with (d-f) PBE are compared with the experimental data (filled black squares) [14]. In plots (g-i), the QRPA B(E2) values calculated without (empty blue diamonds) and with (filled red diamonds) PBE are given versus the experimental data (filled black squares) [14].
1,993.6
2016-01-19T00:00:00.000
[ "Physics" ]
Is the World Becoming a Better or a Worse Place? A Data-Driven Analysis

Is the World becoming a better or a worse place to live? In this paper, we propose a tool that can help to answer this question by combining a number of global indicators belonging to multiple categories. The proliferation of statistical data about various aspects of the World's performance may suggest that it should be "easy" to evaluate the overall success of the human enterprise on this planet. Moreover, it also points out the intrinsic importance of the selection of indicators. However, people have different values, biases, and preferences about the importance of various indicators, making it almost impossible to find an objective answer to this question. To address the variety and heterogeneity of available indicators and world views, we present the analysis of global World performance as a multi-criteria decision problem, making sure that the assessment method remains as transparent as possible. By dynamically selecting a set of indicators of interest, defining the weights that we attach to the various indicators and specifying the desired trends associated with each indicator, we make the assessment adaptive to individual values. We also try to deal with the inherent bias that may exist in the set of indicators that are chosen. As a case study, from the various data sets that are openly available online, we have selected several that are most relevant and easy to interpret in the context of the question in the title of the paper. We demonstrate how the choice of personal preferences, or weights, can strongly change the result. Our method also provides an analysis of the weight space, showing how results for particular value sets compare to the average and extreme (optimistic and pessimistic) combinations of weights that may be chosen by users.

Introduction

Is the World becoming a better or a worse place? Answering such a question from a scientific perspective is probably impossible, at least in general terms. This is primarily because we do not know what is "better" and "worse". What is better for one person may be worse for another. Additionally, as there are many aspects of life that can be considered, any "assessment" strongly depends on the selection of aspects. There is an unprecedented amount of statistical data, including local and global indicators, currently available. The internet has made such data available to everyone; mass and social media provide powerful channels for effective, although sometimes ambiguous, communication in which authentic and fake news, bias, interpretations, and misinterpretations are often indistinguishable. This also makes "objective" analyses more complicated. Like any statistical data, an indicator presents the unquestionable advantage of synthesizing a reality or a phenomenon into a single metric. This strength can become a weakness any time a given indicator is not considered in context, or when its approximations and limitations are not clear and not properly taken into account or communicated. There are clear examples of the misuse of indicators (e.g., [1]). Issues may be related to different aspects including, but not limited to, accuracy, original intent and extent. Governments are progressively embracing an open-data philosophy [2] and evidence-based policy [3], pushing further the development of a World of indicators [4]. Last but not least, indicators are extensively used in several relevant contexts to measure performance, even individual performance (e.g., in academia).
When indicators explicitly aim at a performance evaluation, they are often referred to as Key Performance Indicators. Looking at the World holistically, while a number of indicators seem to clearly show negative trends (e.g., climate change), others (such as those selected in a recent article in The Conversation (http://theconversation.com/seven-charts-that-show-the-world-is-actually-becoming-abetter-place-109307)) point to a much more optimistic vision of world evolution. All global indicators measure certain aspects of the World's performance. Each one does it from a single, unique and independent perspective. Likewise, a given category of indicators may provide a measure related to a certain aspect of life to support focused analysis. Throughout the paper, we generically refer to "World performance" without a clear or formal definition. Such terminology reflects the intrinsic difficulty of defining a transparent method, eventually supported by quantitative or qualitative metrics, to assess the different aspects of life as a single one according to a classic decision-making philosophy. The most common approaches to dealing with World performance and its definition are briefly discussed in Section 2.2. Assuming available and reliable data, the problem of measuring World performance depends on the selection of a number of indicators, on the weights associated with the indicators and on the wrapping algorithm that combines multiple heterogeneous measures into one. All phases are evidently very subjective, but while the choice of indicators/weights is relatively straightforward and easy to communicate, the wrapping algorithm may turn out to be quite complicated, confusing and non-transparent, and it may contain some implicit assumptions that are not obvious. In this paper, we propose a computational method inspired by classical MCDA techniques [5] that enables a systematic combination of semantically enriched heterogeneous indicators into integrated measures that can estimate the performance of the World, and provides a simple, yet consistent, interpretation of the results. We also propose a visualization technique that puts individual results in context and allows users to compare their output with some of the possible extreme evaluations.

Methodology and Approach

We aim at supporting the arbitrary and dynamic selection of key indicators. The resulting indicator framework is enriched by relatively simple semantics (such as the expected trends), which define how users interpret the meaning of particular indicators and how they want them to change. Indicators are combined to define a unique performance metric, whose interpretation takes into account the input dataset, the associated semantics and the input weights that define the "importance" of the different metrics in the analyses. We prefer to address the problem from a perspective very close to classical Multi-Criteria Decision Analysis (MCDA) [5]. That is because, regardless of formal specifications, understanding the performance of the World is intrinsically the result of a multi-criteria analysis. Design decisions taken at any stage to define the computational method, including indicator selection, semantic enrichment and the method used to associate weights with indicators, play a critical role.
Indeed, those decisions enable effective analysis and interpretation of a complex reality according to a data-driven approach; however, as they combine heterogeneous criteria, they are also likely to introduce significant simplifications, biases, and uncertainty. Our approach is based on adaptivity, in an attempt to facilitate analysis that considers data (with no constraint or limitation on the indicator selection) in context (weighted computation). In such a way, we hope that our technique can mitigate some of the bias that may affect the selection and the weighting of the different indicators.

Structure of the Paper

The introductory part of the paper is completed by Section 2, which provides an overview of background concepts and a brief explanation of the main design decisions. Section 3 provides a conceptual overview of the method, while Section 4 formally describes the computational method, including some examples of parameter tuning. Section 5 focuses on the description of the indicators selected to provide a significant proof-of-concept of the computational method. A number of computations based on this set are explained in Section 6. They include random experiments as well as computations adopting specific sets of weights. Current limitations are discussed in Section 7. Finally, as usual, the paper ends with a conclusions section that also outlines possible future work.

Related Work

This section provides an overview of the indicator characteristics that may be relevant to measuring World performance and a brief review of MCDA methods in the context of different disciplines.

Statistical Data and Indicators

As previously discussed, indicators are statistical data that capture or measure some kind of reality. For instance, indicators for human rights and global governance [6] are becoming more and more popular. The standardization of indicators in a given domain normally plays a key role; see, e.g., the ISO 14031 standard on environmental performance evaluation [7]. Generally speaking, most indicators are defined along the classical spatio-temporal dimensions and may be produced at different granularities. Depending on the resolution or granularity, although at a very theoretical level, we normally distinguish between global indicators, which refer to the World as a whole, regional indicators, which refer to macro geographic regions such as continents, and country-level and city-level indicators, which are associated with countries and cities respectively. More fine-grained indicators may be defined in the context of a specific application domain (e.g., urban planning [8]). The method proposed in this paper is completely independent of the kind of indicators chosen, and of their resolution. At a semantic level, the "meaning" of each indicator in the computational method is specified as an input parameter: the current model associates indicators with one or more classes and with a wished trend (Section 4). Indicator semantics are expected to be enriched in the near future, for example by defining the dependencies existing among the different indicators. For the experiments reported in this paper we have selected an indicator framework that includes only global indicators (Section 5).

How to Measure World Performance?

Assuming available and reliable data and a method to combine heterogeneous data, the problem of measuring World performance depends on the selection of indicators and, of course, on the associated weights.
Both phases are evidently very subjective and may also depend on the analysis context. While the selection of indicators provides at least some clear quantitative data to support data-driven analysis, the relevance assigned to the different categories or indicators may be volatile even in a very small community, as it reflects personal subjective opinions. Looking at indicator selection, a consistent and flexible approach is proposed by the portal Our World in Data (https://ourworldindata.org. Accessed: 20 March 2019.), which groups the different indicators by category and discusses and analyses their interdependence, pointing out possible relationships and links among the different indicators. We have adopted a subset of those indicators to perform the experiments reported in this paper (Section 5). The OECD (http://www.oecd.org. Accessed: 29 March 2019.) adopts a comparative approach to create a "Better Life Index" (http://www.oecdbetterlifeindex.org. Accessed: 29 March 2019.). This approach aims at comparing the different countries based on a set of indicators and weights defined by users. Our method is quite different in this sense, as our approach is supposed to be adaptive and completely agnostic with respect to the indicator set. Additionally, we want to provide a unique metric to understand World performance. A survey-based approach is followed to produce the World Happiness Report (http://worldhappiness.report/ed/2019/. Accessed: 29 March 2019.), which creates an indicator for "happiness" based on a number of criteria weighted by people. Countries are then ranked accordingly. Within our method, we definitely recognise the importance of establishing weights and proportions among indicators based on large-scale surveys that reflect the opinion of large communities. As this paper focuses exclusively on the computational method, we will perform such a survey as part of future work, or we will consider results from reputable sources that are performing similar research. That is, for example, the case of the United Nations, which is performing a large-scale survey (http://vote.myworld2015.org) on a number of macro-issues to understand which ones matter the most to people.

Multi-Criteria Decision Analysis (MCDA)

As briefly explained earlier in the paper, we have framed our research question as a multi-criteria analysis problem. MCDA techniques [5,9] have been defined, advanced and consolidated within different application domains during the past years. We consider MCDA a reasonable approach for the intent and extent of this work. First of all, the analysis criteria are an input for the target problem; therefore, different indicators may be selected by different users in different contexts. That makes it somehow "natural" to consider multi-criteria analysis, which is based on the combination of multiple criteria by definition. Furthermore, MCDA is intrinsically simple and may be easily adapted or particularised as a function of the application domain. In this specific case, simplicity allows us to optimize the trade-off between analysis capabilities and usability. At the same time, we can particularize the method to meet more specific requirements or to address expert needs in a given domain.

A Comparative Study

A concise discussion of the choice of the MCDA approach in comparison with other approaches is proposed below.
• Big Data.
Looking holistically at the research question and at current technological trends, the availability of large amounts of heterogeneous data and of significant computational capabilities intuitively points to techniques based on Big Data analytics [20]. Such an approach is already in use for scientific applications [21]. While the direct adoption of Big Data analytics would probably prove complex and expensive as a way to answer the research question addressed in this paper, its combination with MCDA [22] provides an interesting approach to prioritising association rules and identified patterns. The proposed method is based on a number of simplifications, as it considers the selected criteria (indicators) independently, without considering possible dependencies or other relationships among them.
• Fuzzy decision making. "It consists in making decisions under complex and uncertain environments where the information can be assessed with fuzzy sets and systems" [23]. As extensively discussed in [23], solutions based on fuzzy logic are extremely valuable. However, by definition, they are most effective in the presence of uncertainty. The decision to adopt indicators within our method, which are in most cases global indicators, leads to the adoption of criteria that are formally specified and measured, which considerably reduces the associated uncertainty. In such a context, the problem is more related to the criteria selection and the importance associated with them. We believe MCDA may be a relatively unbiased model and may better support the modelling of the target problem.
• Naturalistic decision making. This exciting approach relies on the capability to understand how people make decisions in real-world settings [24]. Despite the unquestionable advances of artificial intelligence techniques and automatic reasoning, it is currently not easy to develop a systematic and completely generic method based on the naturalistic approach. Genericity, by contrast, is absolutely guaranteed by MCDA.
• Semantic decision making. Increasing the representation capabilities by adopting formal semantics (i.e., ontologies) can significantly empower the decision-making process through automatic reasoning by inference [25]. However, ontology development is a critical task that requires specific skills. In this specific case, the ontology-based approach could prove less effective than MCDA in terms of usability in practice. Adopting knowledge graphs (e.g., [26]) presents the clear advantage of introducing visualizations. However, the interpretation of such networks may be very subjective, even if supported by network analysis techniques.
A more systematic analysis of the techniques previously introduced is reported in Table 1. Looking at the specific research question addressed in this paper, four different parameters have been considered:
1. Realistic/Data availability. This is an estimation of how realistic the adoption of a technique is to address the target problem, with a focus also on data availability.
2. Easy to apply/use. This second selection criterion is related to usability from a final user perspective.
3. Unbiased. It refers to the biases that the method may potentially introduce along the decision-making process.
4. Low cost. Cost and/or complexity.
We consider all the techniques reported as realistic approaches in practice. However, solutions based on MCDA and fuzzy logic can work directly on abstracted data (e.g., global indicators) with the support of limited semantics.
We believe that solutions adopting Big Data analytics might be unfriendly to non-expert users, especially in terms of result interpretation. The same holds for fuzzy decision making, which is, however, very effective for modelling uncertainties. If well designed, techniques based on a naturalistic, semantic or MCDA approach should be very easy to use. In terms of biases, we consider the naturalistic approach the most critical. Finally, fuzzy decision making and MCDA should be less complex and, therefore, less costly. Cost/complexity is indeed a serious concern for all the other techniques. Based on the analysis conducted, MCDA is considered the most adequate approach to adopt, as it is relatively simple, it can work directly on global statistical data with the support of minimal semantics, and it is easy to use and to generalise. Last but not least, in this specific case, a multi-criteria approach models the problem in a natural way and, therefore, may facilitate the interpretation of the results in context.

The Method in Concept

The proposed method aims at supporting data-driven analysis to answer the target research question. It takes into account a number of heterogeneous indicators in context, meaning that interpretations are a function of the semantics and the weights associated with the different indicators.

Key Features

The philosophy underlying the method is summarized by the following points:
• Uniform representation of data preserving original numerical ranges. Considering a given time range, we represent indicators uniformly as a percentage of variation with respect to the initial state (Section 4, Equation (2)). This allows an explicit representation of tendencies and trends in the target time range and preserves the original numerical differences existing among indicators.
• Explicit specification of semantics. Exactly like the weights, the semantics associated with indicators are an input for the method. At the moment, the semantics are limited to the specification of the expected trend for a given indicator.
• Tuning. The method is based on parameter tuning. Such a mechanism is represented in Figure 1. The non-tuned environment assumes the evaluation metric α, resulting from the linear combination of the considered indicators, which can assume positive or negative values; such values are associated with positive and negative performance respectively. Details about tuning and the notation adopted in the figure are provided later in the paper.
• Contextual interpretation of results. As extensively discussed in the final part of the paper (Section 6), answering the question addressed in this paper requires the contextual interpretation of results. The method provides an environment for analysis that relies on the numerical evaluation of performance, as well as on the estimation of the optimism/pessimism associated with a given set of weights.
The method is designed ad hoc to answer the specific research question in the title of the paper. It differs from traditional MCDA solutions as it introduces a number of mechanisms (neutral computation, parameter tuning, extreme computations) to allow interpretations in context. For instance, the combined use of the neutral computation and the extreme computations provides a clear understanding of user choices in terms of pessimism/optimism.

Using Tuning vs. Not Using Tuning

The method can be adopted either using or not using tuning.
Not Using Tuning

If tuning is not adopted, the proposed method is very sensitive to the selection of indicators and to the numerical differences possibly existing among indicators. Such a variant may be useful whenever the analysis is understood as completely data-driven and the impact of subjective interpretations on the result should, therefore, be very limited.

Using Tuning

The method adopting tuning aims at a more balanced analysis in which numerical differences among indicators are considered in context. Indeed, the neutral computation of the tuned version of the method is by definition very close to null values of the evaluation parameter (Figure 1) and, therefore, no actual conclusion is reached unless a clear decision on the relative relevance of the indicators is taken.

Input/Output

The input to the method is the following:
• Indicators. They are the criteria taken into account. For the examples provided in the paper we have adopted the indicators reported in Table 2 and discussed in Section 5. Overall, we have taken into account 17 different indicators from multiple categories, measured yearly in the time range 2000-2015. We have not considered more recent years because data is not available.
• Expected trends. An expected trend is associated with each indicator. The expected trends associated with the indicators considered in the paper are reported in Table 2.
• Weights. A weight is associated with each indicator. It represents the importance or relevance of a given indicator in the considered context according to the user. The input interface of our prototype is reported in Figure 2.
The method produces the following output:
• Metric to assess performance. The method adopts a unique metric for performance evaluation as a result of a combination of the different criteria, which are not visible at the output level. Such a metric is explained in Section 4.
• Supporting metrics. The method produces a number of supporting metrics (neutral computation and extreme computations) in order to facilitate interpretations in context. Such metrics are explained in detail in Section 4, and their use in practice is addressed in Section 6.
Examples of outputs are reported in Section 6.

Computational Method

This section aims at a comprehensive explanation of the proposed computational method. The indicator space will first be defined, with the introduction of a number of related key concepts. Then, the tuning of the method is addressed and some examples are provided. In order to facilitate understanding, all the symbols adopted are reported in Table 3 together with their formal definitions. The meaning of each symbol is explained in the next sections.

Indicator Space

The indicator space is defined by a matrix, each line of which represents one of m indicators as a time series. Each column presents the vector of indicators at a given time step. As these indicators can have different time granularities/resolutions, there is a concrete intrinsic risk of having to deal with a lack of data. In this version of the method we consider the coarsest common resolution, which allows a less precise analysis but does not introduce uncertainty. A mechanism to deal with lack of data and the consequent uncertainty will be the object of future work. As previously mentioned, we explicitly assume indicators defined in space and time. Matrix A refers to a generic space model S. However, as in the context of this study we only consider global indicators, we adopt the simplified notation A|_{S=World} = A.
The method implicitly targets heterogeneous indicators that are normally expressed in different units of measurement. In order to provide a uniform presentation of the indicator space, we consider indicators as a percentage of variation with respect to the initial state at time t = t_0 (Equation (2)). This converts A into Ā. We assume that each indicator is associated with a certain weight that stands for its relative importance. Each member of the weights vector W (Equation (4)) presents two different components, w_i and k_i(β_i). While the former is understood as the numerical weight associated with each indicator, namely the "importance" it has in a given computation, the latter expresses the preferred behaviour or semantics of an indicator, as in Table 4. As shown in the table, these semantics are defined as a function of the wished trend (β). As its name suggests, this input parameter defines the tendency that an indicator should have in order to contribute positively within the model. In this context, β is expressed by a constant value from a finite number of possibilities that, in this specification, may be "increasing", "decreasing", "stable" or combinations thereof, with the common meaning. For example, if we consider the crime rate as an indicator, it is reasonable to think that we would like it to decrease as much as possible. In that case, we would define β = DECREASING. The possible values of k_i for computations are defined by Table 4. In general, "decreasing" and "increasing" trends are relatively easy to identify. Other wished behaviours may be less obvious and subject to interpretation. For instance, we are recording an increasing population globally; depending on the context of analysis, the wished behaviour for such a kind of global indicator could be β = STABLE-DECREASING, meaning that further increases of population could lead to additional issues in terms of sustainability. Table 4 only reports the semantics that have been adopted for the extent and intent of this work. The semantics can be extended in different directions to achieve different goals. An exhaustive discussion is out of the scope of this paper and will probably be the object of future work. The neutral computation (Equation (6)) assumes all numerical weights w_i equal to the average of all possible values, w̄. Therefore, assuming weights in the range [0,..,10], w̄ is 5. From a semantic perspective, the neutral computation represents a computation that does not take the weights into account as an input. It plays a relevant role within this computational model, as it is adopted in the tuning phase (Section 4) as well as to support the interpretation of computational outputs (Section 6).

Tuning

Indicators exhibit numerical variations that may have a contextual meaning. For instance, minor variations of a given indicator might have strong implications, while a big variation of another could be mostly irrelevant. In order to assure a bias-free model of analysis, we design our protocol to be intrinsically adaptive with respect to the set of considered indicators. We achieve that by balancing potentially positive and negative numerical variations and letting users decide the most "dominant" patterns. Therefore, tuning is needed to set the base weights so as to compensate for the different scales of importance characterizing the different indicators. This calibration makes the method adaptive by normalizing and balancing the potential numerical impact of the different indicators, enforcing a kind of "fair" computation.
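As a concrete illustration of Equation (2) and of the weighted combination just described, here is a minimal Python sketch. The function and variable names (normalize, alpha, the two-indicator toy data) are ours, not the paper's, and the trend semantics are reduced to the two simplest cases of Table 4:

# Minimal sketch of the indicator normalization (Equation (2)) and of the
# evaluation metric alpha as a weighted linear combination of indicators.
# Names and toy data are illustrative assumptions, not taken from the paper.
import numpy as np

TREND = {"increasing": 1, "decreasing": -1}  # simplified k_i(beta_i) semantics

def normalize(A):
    """Convert each indicator row into % variation w.r.t. its initial value."""
    A = np.asarray(A, dtype=float)
    return 100.0 * (A - A[:, [0]]) / np.abs(A[:, [0]])

def alpha(A_bar, weights, trends):
    """alpha_t = sum_i w_i * k_i * abar_i(t); the neutral run uses w_i = 5."""
    k = np.array([TREND[b] for b in trends])
    return (np.asarray(weights, dtype=float) * k) @ A_bar

# Toy usage: two yearly indicators, one we wish to increase, one to decrease.
A = [[50.0, 55.0, 60.0, 66.0],  # e.g. share of population living in democracy
     [10.0, 12.0, 15.0, 20.0]]  # e.g. a negative indicator such as temperature anomaly
A_bar = normalize(A)
neutral = alpha(A_bar, [5, 5], ["increasing", "decreasing"])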
The tuning vector (W⃗) is defined by Equation (7). It is the set of non-null weights that minimizes the absolute distance between α = [α_0, α_1, ..., α_n] and a vector ∅_n of the same size containing only null values. In our representations (Figure 3 and Section 6) that is the distance between α and the x-axis. ∅_n represents computations of α in which positive and negative trends are completely balanced for all the time samples considered (α_i = 0, ∀i). By computing the tuning vector, we identify those default weights which nullify the effect of the different scales characterizing the different indicators. α|_min denotes the α that adopts the tuning vector. Three examples of tuning vector computation are reported in Figure 3. The first example assumes three indicators; the tuning vector is [2,8,1] and determines a distance of 18. The second example refers to four different indicators; the tuning vector is [9,9,1,1], while the distance is 641. Finally, in the last example we consider six indicators; the tuning vector is [4,8,1,1,1,1], with a distance of 488. When α is defined by adopting weights that are a function of the tuning vector, it is considered to be tuned. In the context of this work, we consider the weights of the tuned α as in Equation (8), where w_i represents the input weight and w⃗_i is the corresponding element of the tuning vector as previously defined.

Space Spectrum and Ideal Computations

Given a set of indicators and all possible combinations of allowed input weights k, α⁺_i|_k specifies the maximum value of the function in the considered range (Equation (9)), while α⁻_i|_k refers to the minimum value (Equation (10)). In general, those two extreme values reflect the most and least favourable combinations of weights in terms of performance for a specific set of target indicators. Therefore, we consider them ideal computations, which are, in this case, also boundaries. Indeed, at a generic time i, the condition in Equation (11) applies. All possible values of α within the boundaries are called the spectrum. A final remark: the boundaries for the tuned function α are different from the ones for the original function (Equation (12)). Figure 3 shows three different examples of tuning associated with 3, 4 and 6 indicators respectively. In each example the tuning parameters, including the tuning vector and the distance associated with α|_min, are reported too. The latter may be considered a kind of metric to quantify the "precision" of the tuning, as lower values reflect a better result. The samples used for tuning are represented by dark green lines, so the dark green area represents the spectrum of α. In the figure, the neutral computation before tuning and the average value of the samples adopted for tuning are represented by a light green line and a blue line. α|_min is the continuous red line very close to the horizontal axis. Finally, the ideal boundaries are shown for α before and after tuning as dark green and red dashed lines respectively.

Complexity and Cost

The proposed method takes as input the indicator set, together with a weight and a wished behaviour associated with each indicator. Assuming m indicators and n samples for each indicator, the cost in space is O(n·m + 2m), while the computation of the global metric α has a complexity of O(n·m). The tuning algorithm, which also defines the extreme computations, has a much higher cost, as all possible combinations of weights should be considered; a brute-force sketch is given below.
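A brute-force sketch of this search follows, reusing numpy, TREND and A_bar from the previous snippet; the names are again ours, and real use at m = 17 would rely on the randomized variant discussed next:

# Exhaustive sketch of the tuning-vector search (Equation (7)): find the
# non-null integer weights minimizing the distance between alpha and the
# zero vector. Feasible only for small m; the randomized variant samples
# p_hat < p**m weight combinations instead of enumerating all of them.
import itertools
import numpy as np

def tuning_vector(A_bar, trends, w_max=10):
    k = np.array([TREND[b] for b in trends])
    best_w, best_d = None, np.inf
    for w in itertools.product(range(1, w_max + 1), repeat=A_bar.shape[0]):
        d = np.abs((np.array(w) * k) @ A_bar).sum()  # L1 distance to the x-axis
        if d < best_d:
            best_w, best_d = np.array(w), d
    return best_w, best_d

w_tuned, dist = tuning_vector(A_bar, ["increasing", "decreasing"])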
Considering m indicators, n samples for each indicator and p possible values for the weights associated with each indicator, the complexity of the tuning algorithm is O(p^m · n · m). However, the tool has to be tuned just once, and not at every run. Additionally, the tool allows one to run a tuning algorithm that, instead of computing all possible weight combinations, just considers a number of random runs p̂ < p^m. This version of the algorithm provides an approximation of the optimal solution. The tuning accuracy increases as p̂ increases. The analysis of complexity and cost should be considered in the context of realistic numbers. In the case of global indicators, we can assume computations at a very small scale. For instance, in the examples presented in the paper, we have considered 17 indicators (m = 17) measured between 2000 and 2015 (n = 16) and integer weights between 0 and 10 (p = 11).

Indicator Framework

In this section we present the set of indicators that will be adopted as a case study. As will be briefly discussed, the intent of this selection is the definition of a number of perspectives, supported by appropriate indicators, on World performance evaluation.

Selection Criteria

We have designed this study implicitly referring to an open context in which well-known statistics from different data sources are considered together within a unique framework understandable by most people. The selection criteria can be summarized as follows:
• The indicator is freely available online.
• The semantics of the target indicator in terms of performance can be defined according to the model described in Table 4.
• Priority is given to well-known indicators that are expected to be, as far as possible, easy to understand for most people. In other words, we try to avoid indicators that are suitable only for experts in a given domain or that require some context to be properly understood.

Categories and Data Sets

The indicator framework adopts the categorization proposed by the prestigious portal Our World in Data (https://ourworldindata.org. Accessed: 20 March 2019.). From that same source we have selected 17 different indicators according to the criteria previously described, as reported in Table 2. At the moment, we have limited the semantics to desired trends. The target indicators have been converted as in Equation (2). They are represented by category in Figure 4. An extensive and critical discussion of these statistics is provided by the original source and is, therefore, out of the scope of this paper. In our opinion, it is quite evident that a multi-perspective analysis, although very interesting, becomes impossible in practice if not supported by a proper analysis method.

Visualizing the Framework: Parallel Coordinates

In order to consider the indicator framework as a whole, we have adopted parallel coordinates [61]. Figure 5 shows the target indicators. This representation has been produced using XDAT (a free parallel coordinates tool; https://www.xdat.org). Each line represents a year, while each dimension is associated with a different indicator. Indicators are represented as a percentage of variation with respect to the initial state (Equation (2)). For each indicator, we have highlighted the range of the variation. So, for example, the variation of the indicator Democracy (last dimension) is in the range between 0 and 19.4%.
Although multi-dimensional data visualization techniques may facilitate understanding the framework as a whole, they cannot support an actual and proper multi-criteria analysis, which requires the combination of indicators to produce metrics.

How Is the World Performing?

In this section, we propose a number of experiments and a discussion of the related computation outputs. The MCDA framework proposed in this paper is purely quantitative, as it is based on the combination of numerical indicators; here we focus on interpretations in context to provide a balance between quantitative and qualitative aspects. In the first experiment proposed, we have adopted random weights, while the second one is based on two real opinions. The main purpose of this section is to show examples of analysis and possible interpretations.

Some Random Experiments

By adopting the indicator framework previously proposed, we have performed a number of experiments in which the weights associated with the different indicators are randomly generated. As an example, we discuss two experiments that consist of 5 and 50 random samples respectively. The former is the typical size of a small group; for instance, students are often organized in small groups for discussions and/or to perform their assignments or projects. The latter is normally the size of a whole class at Master level. The average weights are reported in Figure 6. Regardless of the proportions, the first experiment presents on average a higher trust in the indicators (5/10 against 4.5/10), as shown by the red bars in the figure. It means that in the first experiment the selected set of indicators is considered more relevant by users for measuring World performance than in the second one. We consider the relevance of the selected indicators a key factor in understanding the analysis results in context. The neutral computation provides an understanding of the indicator framework assuming a neutral approach to the indicators' weights. As shown in the picture, the selected indicators are relatively balanced in the period 2000-2011, with some positive trend, but they take a clearly negative trend starting from 2011. Looking at the input, the clear factors determining such a trend are the temperature anomaly and the terrorist attacks, which have been dramatically increasing over the period. We believe that this is a realistic interpretation of a reality in which other factors (e.g., population living in democracy) point to a general improvement of life in many aspects but cannot compensate for such global phenomena. The method preserves numerical differences among indicators, as it does not adopt normalizations or other kinds of manipulations. In this specific case, the relevance of the temperature anomaly and of the terrorist attacks is directly reflected in the measures, and this anomaly reappears in the output.

Computation and Interpretations

Looking at single samples, they are distributed across a positive, neutral and negative range. The computation based on average values shows a negative performance since 2014 for the smaller experiment and since 2012 for the other one. Both experiments point out a negative trend. Despite that, the computation shows in this case a moderate optimism. Indeed, a much more pessimistic performance is possible in theory looking at that set of indicators (right-side charts) and, above all, the average computation shows better performance than the neutral computation.

Figure 7. Computation results related to 5 and 50 random samples.
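The random-weights experiment described above can be sketched in a few lines, reusing alpha and A_bar from the earlier snippets (the sample size and the seed are arbitrary assumptions):

# Sketch of the random-weights experiment: draw N integer weight vectors in
# [0, 10], compute the alpha trajectory of each, and summarize the spread.
import numpy as np

rng = np.random.default_rng(0)

def random_experiment(A_bar, trends, n_samples=50):
    runs = np.array([alpha(A_bar, rng.integers(0, 11, A_bar.shape[0]), trends)
                     for _ in range(n_samples)])
    return runs.mean(axis=0), runs.min(axis=0), runs.max(axis=0)

avg, worst, best = random_experiment(A_bar, ["increasing", "decreasing"])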
Computing Real Opinions

We report two additional experiments based on the weights from two different persons (Figure 8). The purpose of this experiment is to compute weights that reflect real opinions. Real opinions on a large scale, enabling statistically relevant experiments, will be collected in future research. Both persons show a fundamental lack of interest in economy-related indicators, focusing mostly on socio-environmental aspects. Regardless of the differences in the proportions proposed, overall one of the two relies much more on the proposed indicator set. According to those inputs, the computations show an evident negative trend for World performance (Figure 9), with a fundamental pessimism that is much more marked for the person who trusts the indicators the most. Again, the temperature anomaly plays an important role. In this case, it becomes the main driving factor, as it is considered a top priority (10/10) in both samples. Moreover, unlike the previous example, there is a total lack of interest in aspects that show a clearly positive trend, such as GDP and life expectancy. This determines a further decrease of performance. However, the actual difference between the two users is made by the terrorist attacks, which are not considered relevant at all by the second user.

Current Limitations

Current limitations can be summarized as follows:
• The method does not consider relatedness and redundancy among variables. As discussed in several contributions, e.g., in [62], dependencies and other relationships among variables may play a significant role in terms of correctness and accuracy. In this specific context, which assumes global indicators, we believe that the impact of such a limitation is very limited, probably null in most cases.
• Limited semantics. As previously discussed, the semantics associated with the different indicators are very limited and consist uniquely of a wished behaviour. That is a functional requirement rather than a proper semantics. As a consequence, the method is purely numeric and does not allow semantic analysis.
• The method does not deal with uncertainty. The kind of analysis performed is likely to face some uncertainty, such as lack of data and non-numerical indicators. The current development is explicitly oriented towards optimizing the trade-off between analysis effectiveness and usability.
• Tool implementation. Simplicity is considered the key factor in making the method usable in practice. This philosophy is reflected in the implementation of the tool, whose interface is designed to be suitable for any user. As a consequence, certain kinds of analysis that are technically supported (e.g., multi-dimensional) cannot be performed automatically but need the manual selection of the input and the generation of the associated output.
At a more practical level, the attempt to simplify a complex analysis in a way that can be suitable for most users carries an intrinsic risk. Indeed, the method adopts a unique metric resulting from the combination of the different criteria. Although the tuning mechanism makes the method adaptive to the input dataset, the method is still very sensitive to the weights associated with the different indicators. Generally speaking, all indicators are considered reliable and accurate, and it is very unlikely that users are experts in all the different areas. The method works properly and supports the decision-making process under the assumption that clear decisions are taken in input by the users weighting the indicators.
Conclusions and Future Work

We have proposed a method based on classical MCDA techniques that systematically combines heterogeneous indicators into a unique metric adopted to estimate the performance of the World according to a number of weighted selected criteria. As extensively discussed in the paper, such a method allows one to select an arbitrary set of indicators, to weight each indicator and to enrich it by associating relatively simple semantics. Such a performance metric provides a combined understanding of performance, whose interpretation takes into account the input dataset, the associated semantics and the input weights. The experiments performed show examples of analysis in context. First of all, from a simple analysis of the input weights, it is possible to infer an estimation of the relevance assigned by users to the considered indicators for answering the research question addressed in this paper. Furthermore, because of the adaptive approach, the interpretation of numerical computations is performed in terms of pessimism/optimism rather than in absolute terms. In conclusion, we consider the research question addressed in this paper hard to answer without a clear understanding of the context in which such a question is formulated. We believe that our adaptive approach can mitigate possible negative effects of purely data-driven analysis by providing a reasonable tool for analysis and interpretation in context. Future work will extend the computational method to explicitly model biases [63] and uncertainty [64]. Additionally, it should support lack of data and extended semantics (e.g., dependencies among indicators), as well as indicators with non-numerical values. We have not included those advanced features in the base version of the method proposed in the paper because, although they would probably extend the analysis capabilities, they add significant complexity. Such complexity would strongly affect the input parameters in such a way as to be suitable for experts only. Indeed, we believe that one of the strengths of our tool is its simplicity, which allows almost any user to select target indicators, to associate a wished trend and a weight with each of them, and to compute the result. We consider this delicate trade-off between analysis capabilities and usability a key issue for the practical application of the method. A survey on selected communities (i.e., university staff and students) as well as on a large scale will be conducted to estimate the weights associated with the different indicators within the different communities.
9,630.6
2019-12-20T00:00:00.000
[ "Computer Science", "Economics" ]
On the Parallel Design and Analysis for 3-D ADI Telegraph Problem with MPI In this paper we describe the 3-D Telegraph Equation (3-DTEL) solved with the Alternating Direction Implicit (ADI) method on the Geranium Cadcam Cluster (GCC) with the Message Passing Interface (MPI) parallel software. The algorithm is presented using the Single Program Multiple Data (SPMD) technique. The implementation is discussed by means of parallel design and analysis with a Domain Decomposition (DD) strategy. The 3-DTEL with the ADI scheme is implemented on the GCC cluster, with the objective of evaluating the overhead it introduces and its ability to exploit the inherent parallelism of the computation. Results of the parallel experiments are presented. The speedup and efficiency obtained in the experiments on different block sizes agree with the theoretical analysis. I. INTRODUCTION Parallel computing has greatly motivated research on the parallel design and analysis of the 3-DTEL on parallel cluster systems. Cluster applications have more processor cores to manage, and exploiting the computational capacity of high-end machines provides effective and efficient means of parallelism even as the challenge of providing effective resource management grows. It is a known fact that high-capacity computing platforms are expensive and are characterized by long-running, high processor-count jobs. The performance of message-passing programs depends on the parallel target machine and on the parallel programming model applied to achieve parallelism. In cluster machines with a large number of processing units, scalability becomes an important issue. Many programs from scientific computing have a large potential for parallelism that is best exploited in a programming model for mixed task and data parallelism, where the parallelism can be structured in the form of concurrent multi-processor tasks [21].
Developing parallel applications has its own challenges in the field of parallel computing. With reference to [11], there are theoretical challenges such as task decomposition, dependence analysis, and task scheduling. Then there are practical challenges such as portability, synchronization, and debugging. However, there exists an alternative and cost-effective way of achieving performance through the use of a loosely connected system of processors on a local area network [3]. Hence, for a global task, relevant data needs to be passed from processor to processor through a message-passing mechanism [20,15], since there is a greater demand for computational speed and the computations must be completed within a reasonable time period. A multi-processor task can be implemented on a subset of processors, and one of the advantages is based on the fact that, for many message-passing machines, communication costs are affected by the number of participating processors. Design and analysis of finite difference DD for the 2-D heat equation has been discussed in [23], and the parallelization of the 3-DTEL on a parallel virtual machine with DD [8] shows effective load scheduling over various mesh sizes, producing the expected inherent speedups. Parallel algorithms have been implemented for the finite difference method by [12], and [21,13] use the discrete eigenfunctions method with the AGE method on the telegraph equation problem. The theoretical properties of the 3-D ADI algorithm with the parallel design approach employing the SPMD model with DD are promising, but achieving performance as good as that of [7] in practice can be challenging. There is a trade-off between the reduction of the time required for an inherently sequential part of an algorithm and an increase in the number of iterations required to converge [2]. Previous work on the 3-D ADI scheme did not consider the parallel design approach to parallelism and the improvement of scalability. Writing SPMD programs using a standard message-passing library like MPI [13] requires the explicit administration of processors, which has a large user group. In this paper, we present support for the implementation of parallel design and analysis with a DD strategy. Our programming style allows the application programmer to specify the program organization in clear and readable program code. We present a detailed study of the parallel design and analysis of the 3-DTEL, solved with the ADI method on the GCC cluster with MPI. The SPMD model is employed with DD to enhance the overlap of communication with computation, which results in significantly improved speedup, effectiveness, and efficiency across varying mesh sizes as compared to [7]. Our results demonstrate the overlap of communication with computation, and the ability to use arbitrarily varying mesh-size distributions on the GCC to reduce memory pressure while preserving parallel efficiency. A further advantage of our platform is a specification mechanism based on a static distribution and an execution implementation. The rest of the paper is organized as follows. Section 2 presents related work. Section 3 introduces the model for the 3-DTEL and the 3-D ADI scheme. Section 4 introduces the parallel design and analysis. Section 5 presents the results of several experiments, which illustrate and evaluate the parallelization possible with our platform. Section 6 gives the conclusion. II.
RELATED WORK A work by [16] achieved configuration of MPI-based message-passing programs, and various other platforms for the application of telegraph and heat equations have been used in [7,8]. A description of an application-aware job scheduler that dynamically controls resource allocation among concurrently executing jobs was given by [22]. A framework called 'GridWay' for the adaptive execution of applications in Grids was described by [14]. Parallelization by time decomposition was first proposed by [18], with the motivation of achieving parallel real-time solutions, and the importance of loop parallelism and loop scheduling has been extensively studied [1]. The ADI method for Partial Differential Equations (PDEs) proposed by [19] has been widely used for solving the algebraic systems resulting from finite difference analysis of PDEs in several scientific and engineering applications. Works on parallel implementation of the 2-D telegraph problem on cluster systems have been done in [10,12]. In [12] the unconditional stability of the alternating difference schemes is similar to that of our scheme and shows that unconditional stability is useful for speedup and efficiency. Our implementation on the GCC platform has several aspects that differentiate it from the above. GCC is designed for applications running on distributed-memory clusters, and can dynamically and statically calculate partition sizes based on the run-time performance of the application. We use an efficient, stable algorithm which maps data using message passing over the GCC cluster. We evaluated our system using experimental results on speedup and efficiency for system utilization. Our approach is best suited to applications where data and computations are uniformly distributed across processors. III. THE MODEL PROBLEM We consider the second-order telegraph equation in 3-D:

u_tt + (a + b) u_t + ab u = u_xx + u_yy + u_zz,   (3.1)

where the constants are a = RC and b = GL. Let Dx, Dy, Dz and Dt be the grid spacings in the x, y, z and t directions, with Dx = Dy = Dz = 1/m, where m is a positive integer. We can solve (3.1) by extending the 1-D simple implicit finite difference method [21] for the telegraph equation to the 3-D case, which turns (3.1) into the scheme (3.2). Although this simple implicit scheme is unconditionally stable, its computational cost is extremely high. A. ADI Method on 3-DTEL We derive the ADI method for the 3-DTEL from the simple implicit finite difference method by using a general ADI procedure [6] extended to (3.1). The ADI method is a well-known method for solving PDEs. Its main feature is to sweep the directions alternately. In contrast to the standard finite-difference formulation, with only one iteration to advance from the nth to the (n+1)th time step, the formulation of the ADI method requires multilevel intermediate steps to advance from the nth to the (n+1)th time step. Equation (3.2) can be rewritten in operator form (3.3), where the identity operator I, the operators A_m, and the constants C_0 and C_1 are defined from the discretization; the starting value v is obtained by the extrapolation method. Then, splitting (3.3) by an ADI procedure as in [17], we get a set of recursion relations whose intermediate solutions yield the desired solution; keeping A_2 and A_3 on the left-hand sides of (3.14) and (3.16) gives the 3-D ADI algorithm summarized in Table 1.
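To make the sweep structure concrete, the following sketch applies an implicit tridiagonal solve along each of the three axes in turn. It is a minimal illustration of the alternating-direction idea only, assuming a generic factor (I - lam*delta^2) per direction with zero boundary values; it is not the paper's exact scheme (3.14)-(3.16).

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, lam):
    """One ADI-style update: solve (I - lam*delta^2) along x, then y, then z."""
    for axis in range(3):
        u = np.moveaxis(u, axis, 0)
        n = u.shape[0]
        a = np.full(n, -lam); a[0] = 0.0        # sub-diagonal
        c = np.full(n, -lam); c[-1] = 0.0       # super-diagonal
        b = np.full(n, 1.0 + 2.0 * lam)         # main diagonal
        for j in range(u.shape[1]):
            for k in range(u.shape[2]):
                u[:, j, k] = thomas(a, b, c, u[:, j, k])
        u = np.moveaxis(u, 0, axis)
    return u

u = adi_step(np.random.rand(8, 8, 8), lam=0.1)  # tiny demo mesh
```

Each sweep reduces the 3-D implicit step to many independent tridiagonal systems, which is what makes the method cheap per time step and amenable to the domain decomposition discussed next.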
IV. PARALLEL DESIGN AND ANALYSIS A. The Parallel Platform The Geranium Cadcam Cluster consists of 32 Intel Pentium dual-core processors at 1.73 GHz with 0.99 GB RAM each. Communication is through a fast Ethernet at 100 Mbits per second running Linux, located at the University of Malaya. The cluster has high memory bandwidth, with message passing supported by MPI [13]. The program is written in C and accesses MPI by calling MPI library routines. The platform runs computations on a varying set of mesh sizes. Performance on the platform concerns resource assessment and code placement on computing resources [5]. The 3-DTEL with the ADI scheme is implemented on the GCC cluster, with the objective of evaluating the overhead it introduces and its ability to exploit the inherent parallelism of the computation. We observed the scalability across varying numbers of processors and mesh sizes; to achieve speedup we need convergence in fewer than N iterations. B. Domain Decomposition The parallelization of the computations is implemented by means of a grid partitioning technique. The computing domain is decomposed into many blocks with reasonable geometries. Along the block interfaces, auxiliary control volumes containing the corresponding boundary values of the neighboring block are introduced, so that the grids of neighboring blocks overlap at the boundary. When the domain is split, each block is given an ID number by a "master" task, which assigns these sub-domains to "slave" tasks running on individual processors. In order to couple the sub-domains' calculations, the boundary data of neighboring blocks have to be interchanged after each iteration. The calculations in the sub-domains use the old values at the sub-domains' boundaries as boundary conditions. This may affect the convergence rate; however, because the algorithm is implicit, the block strategy can preserve nearly the same accuracy as the sequential program. The DD is used to distribute data between different processors; static load balancing is used to maintain the same number of computational points on each processor. The partitioning and load balancing are done in the pre-processing stage, requiring no extra storage when the parallel program is executed. Data parallelism originated the SPMD model [17]; thus, the finite difference approximation used in this paper can be treated as an SPMD problem. The same computation is performed on multiple data sets, and the multiple data are different parts of the overall grid. C. Parallel ADI with MPI We focus on computational domain partitions in implementing the parallel 3-DTEL ADI scheme on the GCC platform. We need to divide the dimensions into sub-domains, and there is no unique way of partitioning the domain of computation. Striking a balance between the implementation of the algorithm and the communication efficiency is paramount. The partitioning considered is the orientation of slices changing with the sweeps according to [4].
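The boundary-data interchange described in Section B above is the communication core of the scheme. Below is a minimal sketch of a 1-D slab decomposition with ghost-plane exchange between neighbouring ranks, written with mpi4py for brevity (the paper's program is in C; the mesh dimensions and slab layout here are our own illustrative assumptions):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical global mesh of N x-planes, split into contiguous slabs per rank.
N, NY, NZ = 120, 120, 6
local_n = N // size                          # assume N divisible by size
u = np.zeros((local_n + 2, NY, NZ))          # two extra ghost planes

def exchange_ghosts(u):
    """Swap boundary planes with left/right neighbours after each iteration."""
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    # first interior plane goes left; right neighbour's plane fills my right ghost
    comm.Sendrecv(u[1], dest=left, recvbuf=u[-1], source=right)
    # last interior plane goes right; left neighbour's plane fills my left ghost
    comm.Sendrecv(u[-2], dest=right, recvbuf=u[0], source=left)

exchange_ghosts(u)                           # run with: mpiexec -n 4 python halo.py
```

Sendrecv pairs each send with the matching receive in one call, avoiding the deadlock that two blocking sends between neighbours could cause; the non-blocking variant mentioned in the conclusion additionally overlaps this exchange with interior computation.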
Begin Sub-Iteration 1: after the x-sweeps, the orientation changes to the y or the z direction. In this process each processor owns three data domains, one for each direction. Implementing the parallel algorithm for solving (3.1) is based on an indication of the sweeping direction for each sub-domain. The sweeping direction of each sub-domain must be opposite to that of its neighbors. For example, we must use the left-to-right direction for odd sub-domains and the right-to-left direction for even sub-domains. When updating the start node of each sub-domain with (3.14) and (3.16), each processor of the parallel machine works only on its specific portion of the grid, and when a processor needs information from its nearest neighbor, a message is passed through the MPI message-passing library. For the best parallel performance, one would like optimal load balancing and as little communication between processors as possible. Considering load balancing first, one would like each processor to do exactly the same amount of work, so that no processor is idle. For the finite difference code, the basic computational element is usually the node; it makes sense to partition the grid such that each processor gets an equal number of nodes to work on. The second criterion is that the amount of communication between processors be made as small as possible. To minimize communication, the program must divide the domain in a way that minimizes the area of the touching faces of the different sub-domains. The number of processors that one processor has to communicate with also contributes additional communication time, because of the latency penalty for starting each new message. As a first step, we divide the spatial computational domain. D. Load Balancing With static load balancing, the computation time of parallel subtasks should be relatively uniform across processors; otherwise, some processors will be idle waiting for others to finish their subtasks. Therefore, the domain decomposition should be reasonably uniform. Better load balancing is achieved with the pool-of-tasks strategy, which is often used in master-slave programming [2]: the master task keeps track of idle slaves in the distributed pool and sends the next task to the first available idle slave. With this strategy, the processors are kept busy until there are no further tasks in the pool. If the tasks vary in complexity, the most complex tasks are sent to the most powerful processor first. With this strategy, the number of sub-domains should be relatively large compared to the number of processors. Otherwise, the slave solving the last sent block will force the others to wait for the completion of this task; this is especially true if this processor happens to be the least powerful in the distributed system. The block size should not be too small either, since the overlap of nodes at the interfaces of the sub-domains becomes significant. This results in a doubling of the computations of some variables on the interfacial nodes, leading to reduced efficiency. Increasing the block number also lengthens the execution time of the master program, which leads to reduced efficiency. E. Speedup and Efficiency A simple speedup analysis with reference to [2] produces a bound of the form

S(p) = p / (K (1 + r p)),

where r is the ratio of the time taken by coarse propagation to fine propagation over the same time interval, K is the number of iterations required for convergence, p is the number of processors, and communication overhead is ignored. The corresponding efficiency is E(p) = S(p)/p = 1/(K (1 + r p)); in the limit r p -> 0, the efficiency tends to 1/K.
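As a quick numerical illustration of this bound (the closed form above is our reading of the analysis in [2], so treat it as an assumption, as are the values of r and K), one can tabulate how the efficiency decays with the processor count:

```python
# Illustrative evaluation of the speedup model S(p) = p / (K * (1 + r*p)).
def speedup(p, r=0.01, K=2):
    return p / (K * (1 + r * p))

for p in (2, 4, 8, 16, 32):
    s = speedup(p)
    print(f"p={p:2d}  S={s:5.2f}  E={s/p:.3f}")   # E approaches 1/K as r*p -> 0
```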
The algorithm for the scheme is performed on a distributed-memory system of p processors, assuming that each processor initially stores n = N/p objects distributed over the entire physical domain. In the first iteration of the algorithm, the domain is decomposed into two sub-domains so that the difference between the sums of the weights of the two sub-domains is as small as possible. The same process is then applied to the two sub-domains in parallel, and the process is repeated recursively, for log p iterations. In other words, at the beginning of iteration i, the problem domain is already partitioned into 2^(i-1) sub-domains and the objects in each sub-domain are stored in a single group of processors. At the end of the iteration, each processor group is divided into two groups, and the corresponding sub-domain is divided into two sub-domains, with the objects of one sub-domain residing in one half of the processors and the objects of the other sub-domain residing in the other half.
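The recursive weighted bisection just described can be sketched compactly. The illustration below (our own, over a made-up 1-D weight array; the paper decomposes a 3-D grid) chooses each cut so that the weight sums of the two sides are as close as possible:

```python
# Sketch of recursive bisection: split a weighted 1-D domain into ~2^levels parts.
def bisect(weights, lo, hi):
    """Return the cut index minimizing |sum(left) - sum(right)| on weights[lo:hi]."""
    total = sum(weights[lo:hi])
    best, acc, cut = float("inf"), 0, lo + 1
    for i in range(lo + 1, hi):
        acc += weights[i - 1]
        diff = abs(2 * acc - total)        # |left - right| = |2*left - total|
        if diff < best:
            best, cut = diff, i
    return cut

def partition(weights, lo, hi, levels):
    """Recursively bisect for levels = log2(p) iterations; return sub-domain bounds."""
    if levels == 0 or hi - lo < 2:
        return [(lo, hi)]
    cut = bisect(weights, lo, hi)
    return (partition(weights, lo, cut, levels - 1)
            + partition(weights, cut, hi, levels - 1))

print(partition([3, 1, 1, 1, 2, 2, 1, 1], 0, 8, 2))   # 4 sub-domains for p = 4
```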
V. RESULTS AND DISCUSSION We consider a telegraph equation of the form (3.1). The results in the tables show that the parallel efficiency increases with increasing grid size for a given block number, and decreases with increasing block number for a given grid size. As the number of processors increases, the execution time decreases, but a point is reached where additional processors no longer have much impact on the total execution time. Hence, when the number of processors increases, balancing the number of computational cells per processor becomes a difficult task due to significant load imbalance. The increase in execution time for certain mesh sizes is due to the uneven distribution of the computational cells, and the execution time changes very little due to the influence of DD on performance in the parallel computation. A. Parallel Efficiency The speedup and efficiency obtained for various mesh sizes, from 70x70x6 to 210x210x6, and for various numbers of sub-domains with B = 50, are listed in Tables 2-4. In the tables we also list the wall (elapsed) time for the master task, T_W (this is necessarily greater than the maximum wall time returned by the slaves), and the master CPU time, T_M, all in seconds. The total CPU time is composed of three parts: the CPU time for the master task, the average slave CPU time for data communication, and the average slave CPU time for computation. The speedup and efficiency versus the number of processors are shown in Fig. 1 and Fig. 2, respectively, with the block number B as a parameter. We use non-blocking message passing for this communication stage to reduce computing time by allowing work to be done while communication is in progress. B. Numerical Efficiency The numerical efficiency includes the DD efficiency and the convergence-rate behavior. The DD efficiency accounts for the increase in floating-point operations induced by grid overlap at the interfaces and for the CPU time variation generated by the DD technique. In Table 5, we list the total CPU time distribution over various grid sizes and block numbers running on only one processor. From this table, the DD efficiency can be calculated, and the result is shown in Fig. 3. Note that the DD efficiency can be greater than one, even with one processor. Fig. 3 also shows that the optimum number of sub-domains, which maximizes the DD efficiency E_DD, increases with the grid size. The convergence-rate behavior, the ratio of the iteration number for the best sequential CPU time on one processor to the iteration number for the parallel CPU time on n processors, describes the increase in the number of iterations required by the parallel method to achieve a specified accuracy, as compared to the serial method. This increase is caused mainly by the deterioration of the rate of convergence with increasing numbers of processors and sub-domains. Because the best serial algorithm is generally not known, we take the existing parallel program running on one processor in its place. The question is then how the decomposition strategy affects the convergence rate. The results are summarized in Table 6 and Fig. 4, and in Table 7 and Fig. 5. It can be seen that the convergence rate decreases with increasing block number and increasing number of processors for a given grid size; the larger the grid size, the higher the convergence rate. VI. CONCLUSION The results presented in this paper show a study of the parallel design and analysis of the 3-DTEL ADI scheme with MPI. The objective was to present a design on the GCC for distributed computation. The system allows parallel overlapping of communication, avoiding unnecessary synchronization, and improves parallel convergence. In addition, the ease of use of our platform, compared to other approaches, shows negligible overhead with effective load scheduling over various mesh sizes, producing the expected inherent speedups. It was also confirmed that flexible scheduling for the overlapping communication is important, and this is easy with the SPMD model, as seen from the tables and figures. The computational results obtained clearly show the benefits of parallelization. The DD greatly influences the performance of the 3-DTEL ADI scheme on parallel computers. On the basis of the current parallelization strategy, more sophisticated models can be attacked efficiently. Similarly, we are interested in improving our algorithms and testing implementations on additional architectures. TABLE II. The wall time T_W, the master time T_M, the slave data time T_SD, the slave computational time T_SC, the total time T, the parallel speed-up S_PAR and the efficiency E_PAR for a mesh of 70x70x6, with B = 50 blocks and NITER = 100. TABLE III. The wall time T_W, the master time T_M, the slave data time T_SD, the slave computational time T_SC, the total time T, the parallel speed-up S_PAR and the efficiency E_PAR for a mesh of 120x120x6, with B = 50 blocks and NITER = 100. TABLE IV. The wall time T_W, the master time T_M, the slave data time T_SD, the slave computational time T_SC, the total time T, the parallel speed-up S_PAR and the efficiency E_PAR for a mesh of 210x210x6, with B = 50 blocks and NITER = 100. TABLE VII. The number of iterations needed to achieve a given tolerance of 10^-2 for a grid of 120x120x6.
4,877.6
2014-01-01T00:00:00.000
[ "Computer Science" ]
Origins of Systems Biology in William Harvey’s Masterpiece on the Movement of the Heart and the Blood in Animals In this article we continue our exploration of the historical roots of systems biology by considering the work of William Harvey. Central arguments in his work on the movement of the heart and the circulation of the blood can be shown to presage the concepts and methods of integrative systems biology. These include: (a) the analysis of the level of biological organization at which a function (e.g. cardiac rhythm) can be said to occur; (b) the use of quantitative mathematical modelling to generate testable hypotheses and deduce a fundamental physiological principle (the circulation of the blood) and (c) the iterative submission of his predictions to an experimental test. This article is the result of a tri-lingual study: as Harvey’s masterpiece was published in Latin in 1628, we have checked the original edition and compared it with and between the English and French translations, some of which are given as notes to inform the reader of differences in interpretation. Introduction In recent articles, we have both drawn attention to some of the historical antecedents of modern systems biology, notably in articles referring to Claude Bernard's Introduction à l'étude de la médecine expérimentale [1] and to Gregor Mendel's Versuche über Pflanzen-Hybriden [2], both published in 1865. The first is considered the founder of modern experimental medicine, while the second laid the ground for the development of genetics. We argued that both approached and unraveled the functioning of the biological systems they were studying through a highly relevant combination of experiment and modelling, which is the hallmark of systems biology. In this article we draw attention to the very important historical antecedent represented by the work of William Harvey (1578-1657). While there may be no generally accepted and simple definition of systems biology, many good expressions of its main features can be found in review articles and books ([3-18]; see also the paper by Saks et al. in this issue [19]), and it is almost universal to refer in some way to the concept of level and to the role of mathematics, whether they are combined in data-driven (bottom-up) or model-driven (top-down) approaches, or the middle-out (question-driven) research strategy that we favor. These two features appear prominently in the masterpiece of William Harvey, Exercitatio anatomica de motu cordis et sanguinis in animalibus, published in Latin in Frankfurt in 1628 (translated into modern English by Gweneth Whitteridge [20], and into French by Charles Richet [21]), where he reported his experimental work demonstrating the circulation of blood in animals. Identifying historical precedents for modern biological ideas and methods is important. A subject that neglects its roots fails to benefit from the insights and problems of our predecessors. It is also somewhat humbling to realize that, however enthusiastic we may be about the modern systems approach to biology, the approach is not entirely new. Moreover, claiming such antecedents as Harvey, Bernard and Mendel serves to encourage other biologists to view systems biology in a more favourable light. Identifying the biological level at which rhythm is generated and integrated Harvey first describes an experiment in which he seeks a lower level than the organ for the origin of the rhythmic activity of the heart.
He writes: "The heart of an eel and of certain other fish and animals, having been taken out of the body, beats without auricles. Furthermore, if you cut it in pieces, you will see the separate pieces each contract and relax, so that in them the very body of the heart beats and leaps after the auricles have ceased to move." Harvey could not, in his day, take this dissection further down to discover that the rhythmic mechanism was integrated at the level of individual cells (see [22], chapter 5), since the cell theory was formulated by Matthias Schleiden (1804-1881) and Theodor Schwann (1810-1882) two centuries later, based on observations with the microscope, introduced into practice in the life sciences by Anton van Leeuwenhoek (1632-1723) only after Harvey's death. However, he was the first to realise that rhythmicity was a property of the smallest structures he could discern. Demonstration of the circulation of the blood through a systems approach The discovery for which Harvey is best known is, of course, the circulation of the blood. Already in 1616, in his lecture notes Prelectiones Anatomiae Universalis [23], he wrote: "So it is proved that a continual movement of the blood in a circle is caused by the beat of the heart." It is perhaps less well known that this was the result of a careful series of observations and calculations subjected to an iterative process of modelling and experimental validation, which already has all the features of a systems biology inquiry. Such an inquiry typically comprises the following steps: formulate a general or specific question; define the components of a biological system and collect previous relevant datasets; integrate them to formulate an initial model of the system and generate testable predictions and hypotheses; systematically perturb the components of the system, experimentally or through simulation, and study the results; compare the responses observed to those predicted by the model; refine the model so that its predictions best fit the experimental observations; conceive and test new experimental perturbations to distinguish between the multiple competing hypotheses; iterate the process until a suitable response to the initial question is obtained [9,13]. In what follows, we examine how William Harvey goes through this multi-step process to address the general and fundamental question of the significance of the movements of the heart and the blood for the understanding of life and disease in animals. Critical assessment of previous data The introduction and the first seven chapters of his book are devoted to a critical assessment of previous work by the Greek philosophers and naturalists considered the fathers of medicine: Hippocrates of Cos (460 BC-370 BC), Aristotle (384 BC-322 BC), Erasistratus of Chios (304 BC-250 BC), and Galen of Pergamum (129-200), the founder of the medical practice in use until Harvey's time. As a result, he was able to assemble in a coherent manner a wealth of relevant information gathered by his predecessors, identifying uncertainties and contradictions in the description of the movement of the heart and the blood, and dismissing factual errors of observation or interpretation. While he cites them explicitly and extensively, he does not refer to the work of the Arabic polymath Ibn al-Nafis (1213-1288) or of Miguel Servet (1511-1553), who both described the pulmonary circulation independently, but remained unknown to him.
Their observations were most likely conveyed by his immediate predecessors, the anatomists Andreas Vesalius (1514-1564), who had been in contact with Servet, and Realdo Colombo (1510-1559), as they both worked for some time in Padua, where Harvey obtained his medical degree in 1602. Details on the life of Harvey can be found in Keynes [24] and Whitteridge [25], while his anatomical lectures have been completely translated in a bilingual (Latin and English) edition [23]. While in Padua, Harvey was influenced by Girolamo Fabrizi d'Acquapendente, or Fabricius (1537-1619), his teacher at the Faculty of Medicine, and became fascinated by the valves of the veins (already known to Erasistratus). He showed that these could pass the blood in one direction only, from which it followed that the blood that was taken out to the limbs and organs by the arteries had to return via the veins. The existence of the valves ensured that it could not be just an ebb and flow of fluid, as had been taught since antiquity. Formulation of a model and derivation of testable hypotheses In chapter eight of Exercitatio anatomica de motu cordis et sanguinis in animalibus (De copia sanguinis transeuntis per cor e venis in arterias, et de circulari motu sanguinis [26]), Harvey presents his model: the circular movement of blood (de circulari motu sanguinis) depends on the movements and pulsations of the heart. The model is based on a quantitative evaluation of the amount of blood passing through the heart, the veins and the arteries, and on the disposition of the valves in the heart and the blood vessels. In chapter nine (Esse sanguinis circuitum ex primo supposito confirmato [27]), he derives three hypotheses (suppositio) from his model, which he intends to demonstrate through experiments: in a brief period of time, the totality of the blood in the organism passes 1) from the veins into the arteries; 2) from the arteries into all body parts; and 3) from the body parts back to the heart through the veins. He states: "These proved, I think it will be clear that the blood circulates, passing away from the heart to the extremities and then returning back to the heart, thus moving in a circle" [28]. Quantitative assessment of experimental parameters He then proceeds with a numerical calculation, based on a quantitative estimation of the parameters and of their variation: the volume of blood in the heart, the volume of blood ejected from the left ventricle into the aorta at each contraction, and the number of contractions in half an hour or a day. He writes: "Then we may suppose in man that a single heart beat would force out either a half ounce, three drams, or even one dram of blood, which because of the valvular block could not flow back that way into the heart. The heart makes more than a thousand beats in half an hour, in some two, three, or even four thousand. Multiplying by the drams, there will be in half an hour either 3,000 drams, 2,000 drams, five hundred ounces, or some other such proportionate amount of blood forced into the arteries by the heart, but always a greater quantity than is present in the whole body." [29] and concludes: "On this assumption of the passage of blood, made as a basis for argument, and from the estimation of the pulse rate, it is apparent that the entire quantity of blood passes from the veins to the arteries through the heart, and likewise through the lungs."
[30] and: "But suppose even the smallest amount of blood be transmitted through the lungs and heart at a single beat, a greater quantity would eventually be pumped into the arteries and the body than could be furnished by the food consumed, unless by constantly making a circuit and returning." [31] Submission of the mathematical predictions to experimental tests Without the mathematics, the conclusion would not have been reached. But Harvey went even further: from the prediction of his calculation he proceeds to the key experiment. The calculation predicts that the body should empty itself of blood in half an hour if the blood is prevented from circulating: "This is also clearly to be seen by any who watch the dissection of living creatures, for not only if the great artery be cut, but, as Galen proves, even in man himself, if any artery even the smallest be cut, in the space of about half an hour, the whole mass of blood will be drained out of the whole body…" This is the iteration between theory and experiment that is essential to the success of a systems approach today, as it was already in Harvey's time. He summarizes at the beginning of chapter ten (Primum suppositum de copia pertranseuntis sanguinis e venis in arterias, et esse sanguinis circuitum ab obiectionibus vindicatur, et experimentis ulterius confirmatur [32]): "Whether the matter be referred to calculation or to experiment and dissection, the important proposition has been established that blood is continually poured into the arteries in a greater amount than can be supplied by the food. Since it all flows past in so short a time, it must be made to flow in a circle." [33] He then proceeds in a similar manner in chapters eleven (Secundum suppositum confirmatur [34]) and twelve (Esse sanguinis circuitum ex secundo supposito confirmato [35]) to demonstrate the second hypothesis, on the basis of observations and numerical calculations performed during experiments using ligatures and compressions, discussing their consequences in terms of medical practice: "If these things are so, we may very readily compute the amount of blood and come to some conclusion on its circular motion." [36] In chapter thirteen (Tertium suppositum confirmatur, et esse sanguinis circuitum ex tertio supposito [37]), Harvey endeavours to prove the third hypothesis: "This proposition will be perfectly clear from a consideration of the valves found in the venous cavities, from their functions, and from experiments demonstrable with them." He bases his argument on a series of experiments in which he details the consequences of ligatures and compressions exerted on arm veins, as illustrated in anatomical schemas, and supported once again by a numerical calculation: "By careful reckoning, of course, the quantity of blood forced up beyond the valve by a single compression may be estimated, and this multiplied by a thousand gives so much blood transmitted in this way through a single portion of the veins in a relatively short time, that without doubt you will be very easily convinced by the quickness of its passage of the circulation of the blood." [38] He concludes briefly in chapter fourteen (Conclusio demonstrationis de sanguinis circuitu [39]) on the demonstration of the circulation of blood, ending the first iteration of a typical systems biology approach: "It must therefore be concluded that the blood in the animal body moves around in a circle continuously, and that the action or function of the heart is to accomplish this by pumping. This is the only reason for the motion and beat of the heart."
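Harvey's half-hour computation is simple enough to restate as a few lines of arithmetic. In the sketch below the unit conversion and the total blood mass are our own illustrative figures (Harvey, writing before such data existed, argued only that the output must exceed any plausible total):

```python
# Harvey's order-of-magnitude argument restated with modern illustrative figures.
dram_in_grams = 3.9                  # one dram is roughly 3.9 g
beats_per_half_hour = 2000           # Harvey: "in some two, three, or even four thousand"
ejected_g = 1 * beats_per_half_hour * dram_in_grams   # one dram expelled per beat

total_blood_g = 5000                 # roughly the blood mass of an adult human
print(f"expelled in half an hour: {ejected_g:.0f} g; total blood: {total_blood_g} g")
# 7800 g > 5000 g: the heart expels more blood in half an hour than the body
# holds, so the same blood must return to the heart, i.e., it circulates.
```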
Circulation, circuit and capillaries While the central notion of circular movement is explicit from chapter eight on, it is worth pointing out that Harvey made a clear distinction between the anatomical structures and the action taking place within them (the movement of the heart and the circulation of the blood). This is apparent in his repeated use, in eight of the last nine chapter headings, of the Latin word circuitus, which has been translated rather loosely as "circulation" in both the English and French versions. As an outstanding anatomist, he was well aware that in order to allow "circulation" of the blood, the "circuit" had to be closed at the junction between the arteries and the veins. It is therefore worth pointing to his references to "venis capillaribus in paruas ramifications" (chapter fifteen) and "ultimae diuisiones capillares, arteriolae videantur" (chapter seventeen), which were wrongly translated as "tiny veins" and "terminal arteries" [20], giving the impression he had missed this important notion. The existence and role of capillaries in the circulation would be demonstrated only later, in 1661, by the Italian histologist Marcello Malpighi (1628-1694) when he examined blood vessels in frogs using the then recently invented microscope. Malpighi is also famous for giving his name to a number of anatomical structures in animals and insects, and was the first to report his experimental findings in scientific articles including a method section, as has become routine practice in modern science ever since. Conclusions: Harvey and the Conceptual and Ethical Foundations of Modern Science Harvey's achievements are all the more remarkable since they were performed when the basic concepts and methods that would form the bases for the development of modern science were just being established. Francis Bacon (1561-1626) published his Novum Organum in 1620, and René Descartes (1596-1650) his Discours de la Méthode in 1637, the same year in which he introduced the algebraic notation using Latin letters. It is therefore not surprising that all of Harvey's calculations are expressed literally. Despite these limitations, he was very much aware of the conceptual and practical aspects of his experiments, which were known to Descartes himself, as is shown in his responses to the criticisms of Jean Riolan Fils, of Paris, the chief medical doctor of Louis XIII's mother (Harvey himself was the doctor of two kings of England, James I and Charles I). Riolan, one of Harvey's severest critics on the circulation of the blood, wrote in his Encheiridium anatomicum et pathologicum: "That this circulatory movement may be more easily and more conveniently maintained, William Harvey, Englishman, Royal Physician, and author and discoverer of this movement of the blood, and John Waleus, professor of Leyden, who defends and vigorously upholds it, believe the blood to be taken through the lungs from the right to the left ventricle of the heart and deny its passage through the septum of the heart, and so they believe that in one or two hours all the blood passes through the heart and through the whole body. This I do not admit." Riolan was trying, valiantly but vainly, to reconcile strict Galenic teaching with Harvey's observations. "The resulting inconsistencies and contradictions Harvey was not slow to point out" [25].
In his Exercitationes duae anatomicae de circulatione sanguinis (Two anatomical exercises concerning the circulation of the blood) [21,24], published in 1649 in response to Riolan, Harvey states: "There is no science that derives only from a priori ideas, and there is no solid and certain knowledge that does not take its origin from our sense organs" (first dissertation) [45]; "But it is our senses, not accepted theories, dissection and not the dreams of imagination, that should teach us what is true or false" (second dissertation) [46]; "A man remarkable for his brilliant genius, René Descartes, whom I thank for the complimentary reference that he has made of me" (second dissertation) [47]; "But I think it a thing unworthy of a Philosopher and a searcher of the truth, to return bad words for bad words; and I think I shall do better and more advised, if with the light of true and evident observations I shall wipe away those symptoms of incivility" (second dissertation) [48]. Only two years later, using the same approach as for the study of circulation, he published Exercitationes de Generatione Animalium [49], which contributed to the foundation of modern embryology. It would be interesting to speculate on why some of the important features of the systems approach, particularly the use of mathematics and modelling, became neglected until recently. Factors that may have played a part include: the sheer difficulty of applying mathematics in biology; the lack of suitable means for solving the problems, which became tractable only after the invention of the digital computer in the second half of the 20th century; the rise of a positivist (reductionist) bias in biology from the 19th century onwards (many leading physiological journals actually excluded mathematical biology); and, most recently, the rise of molecular biology, with its tendency to avoid theory (except, very significantly, the central dogma of molecular biology). A full treatment of these and other factors would require a detailed historical analysis and will be the subject of a further article. In any event, almost four centuries after he published his masterpiece, the concepts and experimental principles that were laid out by William Harvey are some of the pillars on which several branches of the natural and engineering sciences have been flourishing. This common origin should facilitate the cross-fertilization of biology, following the quantitative footsteps of physics and engineering, thus enabling the extension of physiology into integrative systems biology. 31. « Mais, quelque petite que soit la quantité de sang qui passe par le coeur et les poumons, il y en a néanmoins bien trop pour que les aliments ingérés y puissent suffire, à moins que le sang ne revienne par les mêmes trajets. » [21]. 32. The first proposition, concerning the amount of blood passing from veins to arteries, during the circulation of the blood, is freed from objections, and confirmed by experiments [20]; La première hypothèse sur la circulation du sang, fondée sur la quantité de sang qui passe des veines dans les artères, est confirmée par des expériences ; et les objections qu'on lui avaient opposées sont réfutées [21]. 33.
« Jusqu'ici le calcul, les expériences, les dissections ont confirmé notre première hypothèse, que le sang passe continuellement dans les artères, et en trop grande quantité pour que les aliments y puissent suffire, en sorte que comme la totalité du sang passe en très peu de temps par le même endroit, le sang doit nécessairement revenir par les mêmes voies et accomplit un véritable circuit. » [21]. 34. The second proposition is proven [20]; Confirmation de la seconde hypothèse [21]. 35. That there is a circulation of the blood follows from the proof of the second proposition [20]; La confirmation de la seconde hypothèse démontre la circulation du sang [21]. 36. « Maintenant calculons la quantité de sang qui passe par les veines, et démontrons à l'aide de calculs le mouvement circulaire du sang. » [21]. 37. The third proposition is proven, and the circulation of the blood is demonstrated from it [20]; Confirmation de la troisième hypothèse, qui démontre la circulation du sang [21]. 38. « Calculez maintenant combien de sang vous aurez arrêté en mettant le doigt au-dessus de la valvule, et multipliez cette quantité par milliers ; vous verrez alors quelle grande quantité de sang passe ainsi dans cette petite portion de veine, en un temps aussi court, et je crois que vous serez bien convaincu de la circulation du sang et de la rapidité de son mouvement. » [21]. 39. Conclusion on the demonstration of the circulation of blood [20]; Conclusion de la démonstration de la circulation du sang [21]. 40. « Il faut donc nécessairement conclure que chez les animaux le sang est animé d'un mouvement circulaire qui l'emporte dans une agitation perpétuelle, et que c'est là le rôle, c'est là la fonction du coeur, dont la contraction est la cause unique de tous ces mouvements. » [21]. 41. The circulation of blood is confirmed by plausible methods [20]; La circulation du sang confirmée par les vraisemblances. 42. The circulation of the blood is supported by its implications [20]; La circulation du sang prouvée par les implications qu'elle entraîne [21]. 43. « Il y a encore des problèmes qui sont comme la conséquence de la vérité de la circulation. Ils ne sont point inutiles pour y faire croire et leur démonstration est comme un argument a posteriori. » [21]. 44. The motion and circulation of the blood is established by what is displayed in the heart and elsewhere by anatomical investigation [20]; Confirmation du mouvement et de la circulation du sang par ce que nous voyons dans le Coeur, et par les observations anatomiques [21]. 45. « Il n'y a pas de science qui ne dérive d'une idée a priori, et il n'y a pas de connaissance solide et sûre qui ne tire son origine des sens. » [21].
5,276.6
2009-04-01T00:00:00.000
[ "Biology", "Philosophy" ]
Analysis of Bidirectional Ballot Sequences and Random Walks Ending in Their Maximum Consider non-negative lattice paths ending at their maximum height, which will be called admissible paths. We show that the probability for a lattice path to be admissible is related to the Chebyshev polynomials of the first or second kind, depending on whether the lattice path is defined with a reflective barrier or not. Parameters like the number of admissible paths with given length or the expected height are analyzed asymptotically. Additionally, we use a bijection between admissible random walks and special binary sequences to prove a recent conjecture by Zhao on ballot sequences. Introduction Lattice paths, as well as their stochastic incarnation, random walks, are interesting and classical objects of study. Several authors have investigated a variety of parameters related to lattice paths. For example, Banderier and Flajolet gave an asymptotic analysis of the number of special lattice paths with fixed length in [2]. De Bruijn, Knuth, and Rice [4] analyzed the expected height of certain lattice paths, and Panny and Prodinger [14] determined the asymptotic behavior of such paths with respect to several notions of height. The particular class of lattice paths we want to analyze in this paper is defined as follows. Let (S_k)_{0<=k<=n} be a simple symmetric random walk, either on N_0 (with a reflective barrier at 0) or on Z. Then (S_k)_{0<=k<=n} is said to be admissible of height h if the random walk stays within the interval [0, h] and ends in h, i.e., S_k is in [0, h] for all 0 <= k <= n and S_n = h. It is called admissible if it is admissible of height h for some h in N. The probability that a random walk of length n is admissible of height h is written as p_n^{(h)} and q_n^{(h)} for random walks on N_0 and Z, respectively. Furthermore, the probabilities that a random walk is admissible at all are defined as p_n := sum_{h>=0} p_n^{(h)} and q_n := sum_{h>=0} q_n^{(h)}, respectively. Finally, an admissible lattice path is a sequence of integers realizing an admissible random walk. In a nutshell, this means that an admissible random walk is a non-negative random walk ending in its maximum. The definition is also visualized in Figure 1, where all admissible lattice paths of length 5 are depicted. There are three admissible lattice paths of height 3, and one each of height 1 and 5. Note that when considering random walks on Z, every lattice path has the same probability 2^{-n}. Admissible random walks on Z are enumerated by sequence A167510 in [11]. However, in the case of random walks on N_0, the probability depends on the number of visits to 0: if there are v such visits (including the initial state), then the path occurs with probability 2^{-n+v}. Note that by "folding down" (i.e., reflecting about the x-axis) some sections between consecutive visits to 0, or the section between the last visit and the end, 2^v lattice paths on Z can be formed, where the random walk is never farther away from the start than at the end. We will call such lattice paths extremal lattice paths; by construction, the number of extremal lattice paths of length n is given by p_n 2^n. To illustrate this idea of extremal lattice paths, all paths of this form of length 3 are given in Figure 2. One of our motivations for investigating admissible random walks originates from a conjecture in [16]. There, Zhao introduced the notion of a bidirectional ballot sequence: a 0-1 sequence is a bidirectional ballot sequence if every prefix and suffix contains strictly more 1's than 0's. The number of bidirectional ballot sequences of length n is denoted by B_n.
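For small n, these definitions can be checked by brute force. The following sketch (our own illustration) enumerates all step sequences of length 5 and recovers the five admissible lattice paths of Figure 1, with heights 1, 3, 3, 3 and 5:

```python
from itertools import product

def admissible_paths(n):
    """All +-1 step paths of length n that stay in [0, S_n] and end at S_n = max."""
    found = []
    for steps in product((1, -1), repeat=n):
        path = [0]
        for s in steps:
            path.append(path[-1] + s)
        if min(path) >= 0 and path[-1] == max(path):
            found.append(path)
    return found

paths = admissible_paths(5)
print(len(paths), sorted(p[-1] for p in paths))   # 5 [1, 3, 3, 3, 5]
# Dividing the count by 2^n gives q_n; here q_5 = 5/32.
```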
Bidirectional ballot sequences are strongly related to admissible random walks on Z. In fact, every bidirectional ballot sequence of length n + 2 bijectively corresponds to an admissible random walk of length n on Z: given an admissible random walk, every up-step corresponds to a 1, and down-steps correspond to 0. Adding a 1 both at the beginning and at the end of the constructed string gives a bidirectional ballot sequence of length n + 2. Therefore, bidirectional ballot walks may also be seen as lattice paths with a unique minimum and maximum. While we restrict ourselves to simple lattice paths (i.e., paths with steps of size 1), Bousquet-Mélou and Ponty introduce a more general class of so-called culminating paths in [3]. Akin to bidirectional ballot walks, culminating paths are lattice paths with a unique minimum and maximum; however, the lattice path has steps a and -b for fixed a, b > 0. They show that the behavior of these paths strongly depends on the drift a - b. In particular, for a = b = 1 (i.e., for bidirectional ballot walks) they determine the main term of the asymptotic expansion of B_n (cf. [3, Proposition 4.1]). In [16], Zhao also shows that B_n = Theta(2^n/n), states (without detailed proof) that B_n ~ 2^n/(4n), and conjectures a more precise asymptotic expansion. In this paper, we want to give a detailed analysis of the asymptotic behavior of admissible random walks. By exploiting the bijection between admissible random walks and bidirectional ballot sequences, we also prove a stronger version of Zhao's conjecture. In order to do so, we use a connection between Chebyshev polynomials and the probabilities p_n^{(h)} and q_n^{(h)}, which we explore in detail in Section 2. This allows us to determine explicit representations of the probabilities p_n and q_n, which are given in Theorem 2.5. The analysis of the asymptotic behavior of admissible random walks of given length shall focus in particular on the height of these random walks. In this context, we define random variables H_n and H~_n by P(H_n = h) := p_n^{(h)}/p_n and P(H~_n = h) := q_n^{(h)}/q_n. These random variables model the height of admissible random walks on N_0 and Z, respectively. Besides an asymptotic expansion for p_n and q_n, we are also interested in the behavior of the expected height and its variance. The asymptotic analysis of these expressions, which is based on an approach featuring the Mellin transform, is carried out in Sections 3 and 4, and the results are given in Theorems 3.5 and 4.2, respectively. Finally, Zhao's conjecture is proved in Corollary 4.5. Chebyshev Polynomials and Random Walks We denote the Chebyshev polynomials of the first and second kind by T_h and U_h, respectively, i.e., T_h(cos t) = cos(ht) and U_h(cos t) = sin((h+1)t)/sin(t). In the following propositions, we show that these polynomials occur when analyzing admissible random walks. As usual, the notation [z^n] f(z) denotes the coefficient of z^n in the series expansion of f(z). Proposition 2.1. The probability that a simple symmetric random walk (S_k)_{0<=k<=n} of length n on Z is admissible of height h is

q_n^{(h)} = 2 [z^{n+1}] 1/U_{h+1}(1/z)

for h >= 0 and n >= 0. Proof. We consider the (h+1) x (h+1) transfer matrix M_h, which has the following simple yet useful property: if w_{n,k} is the probability that 0 <= S_0, S_1, ..., S_n <= h and S_n = k, then the vectors w_n^{(h)} = (w_{n,0}, ..., w_{n,h}) satisfy the recursion w_n^{(h)} = M_h w_{n-1}^{(h)}. The initial vector is w_0^{(h)} = e_0 = (1, 0, ..., 0). Since we also want S_n = h, we multiply by the vector e_h = (0, ..., 0, 1) at the end to extract only the last entry w_{n,h}.
Cramer's rule yields the generating function sum_{n>=0} w_{n,h} z^n = (z/2)^h / det(I - z M_h). Comparing this with the recursion for the Chebyshev polynomials and checking the initial values, we find that 2^{h+1} det(I - z M_h) = z^{h+1} U_{h+1}(1/z), from which (2.1) follows by extracting the coefficient of z^n. An analogous statement holds for admissible random walks on N_0, with the sole difference that in this case the Chebyshev polynomials of the first kind occur. Proposition 2.2. The probability that a random walk (S_k)_{0<=k<=n} of length n on N_0 is admissible of height h is given by

p_n^{(h)} = 2 [z^{n+1}] 1/T_{h+1}(1/z)

for h >= 0 and n >= 1. Proof. For random walks with a reflective barrier at 0, the (h+1) x (h+1) transfer matrix B_h has a slightly different form: its first row reflects the barrier. By the same approach involving Cramer's rule as in the proof of Proposition 2.1, we find the generating function from the identity 2^h det(I - z B_h) = z^{h+1} T_{h+1}(1/z), which can be proved again by verifying that the same recursion holds for the Chebyshev-T polynomials and that the initial values agree. Remark 2.3. The coefficients of 1/T_h(1/z) have also been studied in [9]. There, the case of fixed h is investigated, whereas we mostly focus on the asymptotic behavior of sum_{h>=0} p_n^{(h)}. In the following theorem and throughout the rest of the paper, m will denote a half-integer, i.e., m in (1/2)N = {1/2, 1, 3/2, 2, ...}. While this convention may seem unusual, it simplifies many of our formulae and is therefore convenient for calculations. Theorem 2.5 provides the explicit representations (2.3) and (2.4) of these probabilities. Proof. We begin with the analysis of p_n. The probabilities are related to the Chebyshev-T polynomials by Proposition 2.2. It is a well-known fact (cf. [12, 22:3:3]) that these polynomials admit an explicit representation, which immediately yields an explicit series for 1/T_{h+1}(1/z). By applying Cauchy's integral formula, we obtain the coefficients of the factor Y(t) encountered in (2.5); we choose a sufficiently small circle around 0 as the integration contour. The expression sqrt(1-t) in this integral is simplified by a suitable substitution, and the new integration contour still winds around the origin once. Then, again by Cauchy's integral formula, expanding the factor (1+u)^{2n+h-1}/(1+u)^h into a series with the help of the geometric series and the binomial theorem, and using a binomial identity, the expression can be simplified so that, together with (2.5), an explicit formula is found; plugging it into (2.2) yields the representation of p_n^{(h)}. Combinatorially, it is clear that p_n^{(h)} = 0 for n and h of different parity, as only heights of the same parity as the length can be reached by a random walk starting at the origin. This can also be observed in the representation above. Assuming n and h of the same parity, we can write n - h = 2l, or equivalently (n-h)/2 = l. This gives us (2.3). For the second part, we consider the explicit representation of the Chebyshev-U polynomials; formula (2.4) is then obtained in the same way as (2.3). With explicit formulae for the probabilities p_n^{(h)}, we can start to work towards the analysis of the asymptotic behavior of admissible random walks.
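Proposition 2.1 can be verified numerically for small parameters. The sketch below (our own check) computes q_n^{(h)} both directly from the definition and by extracting coefficients of 1/U_{h+1}(1/z) via exact power-series inversion:

```python
from fractions import Fraction
from itertools import product

def cheb_u(k):
    """Coefficient list (ascending powers of x) of the Chebyshev polynomial U_k."""
    prev, cur = [Fraction(1)], [Fraction(0), Fraction(2)]
    if k == 0:
        return prev
    for _ in range(k - 1):
        nxt = [Fraction(0)] + [2 * c for c in cur]    # 2x * U_j
        for i, c in enumerate(prev):
            nxt[i] -= c                               # ... - U_{j-1}
        prev, cur = cur, nxt
    return cur

def q_chebyshev(n, h):
    """q_n^(h) = 2 [z^(n+1)] 1/U_(h+1)(1/z), via inversion of the power series."""
    P = list(reversed(cheb_u(h + 1)))                 # P(z) = z^(h+1) U_(h+1)(1/z)
    inv = [1 / P[0]]                                  # series of 1/P(z)
    for m in range(1, n + 1):
        s = sum(P[k] * inv[m - k] for k in range(1, min(m, len(P) - 1) + 1))
        inv.append(-s / P[0])
    return 2 * inv[n - h] if 0 <= n - h <= n else Fraction(0)

def q_bruteforce(n, h):
    """q_n^(h) directly from the definition, over all 2^n equally likely paths."""
    count = 0
    for steps in product((1, -1), repeat=n):
        walk = [0]
        for s in steps:
            walk.append(walk[-1] + s)
        if min(walk) >= 0 and max(walk) <= h and walk[-1] == h:
            count += 1
    return Fraction(count, 2 ** n)

assert all(q_chebyshev(n, h) == q_bruteforce(n, h)
           for n in range(8) for h in range(n + 1))
print("Proposition 2.1 checked for all n < 8")
```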
Admissible Random Walks on N_0 In this section, we begin to develop the tools required for a precise analysis of the asymptotic behavior of admissible random walks on N_0. Recalling the result of Theorem 2.5, we find that the half-integer representation of p_{2m-1}^{(h)} contains the shifted central binomial coefficient binom(2m, m - tau_{h,k}). Hence, for the purpose of obtaining an expansion for p_{2m-1} = sum_{h>=0} p_{2m-1}^{(h)}, analyzing the asymptotics of binomial coefficients in the central region is necessary. In the following, we will work a lot with asymptotic expansions. We write f(n) ~ sum_j c_j n^{-j} to mean that truncating the series at any index R yields an error of order O(n^{-R-1}) for all integers R > -L, even if the series does not converge; an asymptotic expansion in two variables is understood analogously. By expanding the logarithm into a power series, we can simplify this expression. We also use the expansion 1/(n +- alpha) = (1/n) sum_{i>=0} (-+ alpha/n)^i. By the symmetry of the binomial coefficient, the resulting asymptotic expansion has to be symmetric in alpha. Assembling all these expansions yields the asymptotic formula with the correction factor S(alpha, n) defined as in the statement of the lemma. Note that d_0 = 1, and thus the first summand of the series in (3.1) is 1, which gives c_{00} = 1. Summands where the exponent of alpha exceeds the exponent of 1/n only occur in the last series, with the maximal difference being induced by alpha^{4r}/n^{3r}; thus the corresponding coefficients vanish whenever the exponent of alpha exceeds that of 1/n by more than the factor 2/3. Together with |alpha| <= n^{2/3}, this implies the estimate for S(alpha, n). For |alpha| > n^{2/3}, we can use the monotonicity of the binomial coefficient to obtain the required bound. With tau_{h,k} = (h+1)(2k+1)/2, the total probability for a random walk of length 2m-1 on N_0 to be admissible is given by a double sum over h and k. The terms with tau_{h,k} > m^{2/3} can be neglected in view of the last statement of Lemma 3.1. Since the sum clearly contains O(m^{4/3}) terms, the error is at most O(m^{-1/2 + 4/3 + (2/3)(2J(L)+1) - (L+1)}) = O(m^{1/2 - L/9}). The exponent can be made arbitrarily small by choosing L accordingly. Finally, if we extend the sum to the full range (all integers h, k >= 0 such that h + 1 and 2m have the same parity) again, we only get another error term of order O(exp(-m^{1/3})), which can be neglected. In summary, we obtain the representation (3.4). This sum can be analyzed with the help of the Mellin transform and the converse mapping theorem (cf. [6]). In order to follow this approach, we will investigate more precisely those terms in (3.4) whose growth is not obvious. We are also interested in the expected height and the corresponding variance and higher moments of admissible random walks. Asymptotic expansions for these can be obtained by analyzing moments of the random variable H_n with P(H_n = h) := p_n^{(h)}/p_n, as stated in the introduction. For the sake of convenience, let us consider the r-th shifted moment E(H_{2m-1} + 1)^r. The asymptotic behavior of the denominator is related to the behavior of the sum from above, and fortunately, the behavior of the numerator is related to the behavior of the very similar sum over h, k >= 0 (with h + 1 and 2m of the same parity) carrying an additional factor (h+1)^r. The following lemma analyzes sums of this structure asymptotically. Proof of Lemma 3.2. If we substitute m = x^{-2}, the left-hand side of (3.5) becomes a typical example of a harmonic sum, cf. [6, §3], and the Mellin transform can be applied to obtain its asymptotic behaviour.
First of all, it is well known that the Mellin transform of a harmonic sum of the form $f(x) = \sum_{k \ge 1} a_k g(b_k x)$ can be factored as $\Lambda(s)g^*(s)$, where $\Lambda(s) = \sum_{k \ge 1} a_k b_k^{-s}$ [6, Lemma 2], provided that the half-plane of absolute convergence of the Dirichlet series $\Lambda(s)$ has non-empty intersection with the fundamental strip of the Mellin transform $g^*$ of the base function $g$. In this particular case, the Dirichlet series can be written down explicitly, and we now simplify it. For $s \in \mathbb{C}$ with $\Re(s) > 2j + 2 + r$, the sum converges absolutely because it is dominated by the zeta function. In view of the definition of the $\beta$ function, this simplifies to an expression involving a factor $\kappa_{2m}(s)$ that depends on the parity of $2m$; the Mellin transform of $f$ follows.

By the converse mapping theorem (see [6, Theorem 4]), the asymptotic growth of $f(x)$ for $x \to 0$ can be found by considering the analytic continuation of $f^*(s)$ further to the left of the complex plane and investigating its poles. The theorem may be applied because $\Lambda(s)$ has polynomial growth and $\Gamma(s/2)$ decays exponentially along vertical lines of the complex plane. We find that $f^*(s)$ has a simple pole at $s = 2j + 2 + r$, which comes from the zeta function in the definition of $\kappa_{2m}$. There are no other poles: $\beta$ is an entire function, and the poles of $\Gamma$ cancel against the zeros of $\beta$ (at all odd negative integers, see the earlier remark). The asymptotic contribution from the pole of $f^*$ does not depend on the parity of $2m$, as the respective residue of $\kappa_{2m}$ is $\tfrac{1}{2}$ in either case. Finally, the $O$-term in (3.5) comes from the fact that $f^*$ may be continued analytically arbitrarily far to the left in the complex plane without encountering any additional poles.

At this point, all that remains to obtain asymptotic expansions is to multiply the contributions resulting from Lemma 3.2 with the correct coefficients and contributions from (3.4).

Theorem 3.5 (Asymptotic analysis of admissible random walks on $\mathbb{N}_0$). The probability that a random walk on $\mathbb{N}_0$ is admissible can be expressed asymptotically as in (3.6), with leading constant $\sqrt{\pi/2} \approx 1.25331$. The expected height of admissible random walks is given by (3.7), with leading constant $2G\sqrt{2/\pi} \approx 1.46167$, where $G$ denotes Catalan's constant, and the variance of $H_n$ can be expressed as in (3.8), with leading constant $(\pi^3 - 32G^2)/(4\pi) \approx 0.33092$. Generally, the $r$-th moment is asymptotically given by (3.9). Moreover, if $\eta = h/\sqrt{n}$ satisfies $3/\sqrt{\log n} < \eta < \sqrt{\log n}/2$ and $h \equiv n \pmod 2$, we have the local limit theorem (3.10).

Remark 3.6. The fact that the two series in (3.10) and (3.11) that represent the density $\varphi(\eta)$ are equal is a simple consequence of the Poisson summation formula. We also note that the asymptotic behavior of the moments of $H_n$ readily implies that the normalized random variable $H_n/\sqrt{n}$ converges weakly to the distribution whose density is given by $\varphi(\eta)$ (see [7, Theorem C.2]). The local limit theorem (3.10) is somewhat stronger.

Proof. By assembling the contributions computed above (details of this computation as well as some numerical comparisons can be found at http://arxiv.org/src/1503.08790/anc/random-walk_NN.ipynb), an asymptotic expansion in the half-integer $m$ is obtained. Substituting $m = (n+1)/2$ then gives (3.6). The results in (3.7) and (3.8) are obtained by considering the shifted moments, making use of (3.4) and Lemma 3.2 again. Note that we have $\mathbb{E}H_n = \mathbb{E}(H_n + 1) - 1$, as well as $\mathbb{V}H_n = \mathbb{E}(H_n + 1)^2 - [\mathbb{E}(H_n + 1)]^2$. For higher moments, we only give the principal term of the asymptotics, which corresponds to the coefficient $c_{00}$ in (3.4), but in principle it would be possible to calculate further terms as well. It remains to prove (3.10).
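Since the displayed formulae of Theorem 3.5 did not survive extraction, the closed forms of the constants quoted above are reconstructions from their decimal values ($G$ is Catalan's constant). The following snippet confirms the three reconstructions numerically.

```python
# Numerical confirmation of the reconstructed constants in Theorem 3.5.
from mpmath import mp, pi, sqrt, catalan

mp.dps = 10
print(sqrt(pi / 2))                           # ~ 1.25331  (probability p_n)
print(2 * catalan * sqrt(2 / pi))             # ~ 1.46167  (expected height)
print((pi**3 - 32 * catalan**2) / (4 * pi))   # ~ 0.33092  (variance of H_n)
```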
To this end, we revisit the explicit expression (recall that we set $n = 2m-1$). First of all, we can eliminate all $k$ with $\tau_{h,k} > m^{2/3}$, since their total contribution is at most $O(m\exp(-m^{1/3}))$, as before. For all other values of $k$, we replace the binomial coefficient by its expansion according to Lemma 3.1. We are assuming that $\tau_{h,k} \le m^{2/3} = ((n+1)/2)^{2/3}$, which implies $hk^2/n = O(n^{1/3}/h)$. In view of our assumptions on $h$, this means that the error term is $O(n^{-1/6}\sqrt{\log n})$. Adding all terms with $\tau_{h,k} > m^{2/3} = ((n+1)/2)^{2/3}$ back only results in a negligible contribution that decays faster than any power of $n$ again, but we need to be careful with the $O$-term inside the sum, as we have to bound the accumulated error by the sum of the absolute values. The required bound can be seen, e.g., by approximating the sum by an integral (or by means of the Mellin transform again).

For $\eta \ge 1$, the sum is bounded below by a constant multiple of $\exp(-\eta^2/2)$ (as can be seen by bounding the sum of all terms with $k \ge 1$), which in turn is at least $\exp(-(\log n)/8) = n^{-1/8}$ by our assumptions on $\eta$. Thus the first term indeed dominates the error term in this case. If $\eta < 1$, we use the alternative representation (3.11), which shows that $\varphi(\eta)$ is bounded below by a constant multiple of $\eta^{-2}\exp(-\pi^2/(8\eta^2))$. This in turn is at least $\tfrac{1}{9}(\log n)\,n^{-\pi^2/72}$ by the assumptions on $\eta$, and since $\pi^2/72 < 1/6$, we can draw the same conclusion.

Remark 3.7. As stated in the introduction, the number $2^n p_n$ gives the number of extremal lattice paths on $\mathbb{Z}$, and thus, with the asymptotic expansion of $p_n$, we also have an asymptotic expansion for the number of extremal lattice paths on $\mathbb{Z}$ of given length.

This concludes our analysis of admissible random walks on $\mathbb{N}_0$. In the next section, we investigate admissible random walks on $\mathbb{Z}$.

4. Ballot Sequences and Admissible Random Walks on $\mathbb{Z}$

In principle, the approach we follow for the analysis of the asymptotic behavior of admissible random walks on $\mathbb{Z}$ is the same as in the previous section. However, due to the different structure of (2.4), some steps will need to be adapted. With the notation of Lemma 3.1, we are able to express $q_{2m-2}$ for a half-integer $m \in \tfrac{1}{2}\mathbb{N}$ with $m \ge 1$ as an analogous double sum.

In analogy to our investigation of admissible random walks on $\mathbb{N}_0$, we also want to determine the expected height and variance of admissible random walks. These are related to the random variable $H_n$ defined as before. To make things easier, we will investigate moments of the form $\mathbb{E}(H_n + 2)^r$; we are therefore interested in the asymptotic contribution of the corresponding sums, which is discussed in the following lemma.

Furthermore, in the introduction we illustrated that admissible random walks are strongly related to bidirectional ballot sequences. Since every bidirectional ballot sequence of length $n + 2$ corresponds to an admissible random walk of length $n$ on $\mathbb{Z}$, i.e., $B_n = 2^{n-2}q_{n-2}$, we are able to prove Zhao's conjecture that was mentioned in the introduction. In Figure 3 we compare the exact values of $B_n/2^n$ with the values obtained from the asymptotic expansion in (4.11).
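The correspondence $B_n = 2^{n-2}q_{n-2}$ can be sanity-checked by exhaustive enumeration for small $n$. The sketch below assumes the natural reading of admissibility inferred from the prefix/suffix conditions: after stripping the forced leading and trailing 1 of a bidirectional ballot sequence, the resulting ±1 walk never drops below 0 and finishes at its running maximum.

```python
# Brute-force check that B_n (bidirectional ballot sequences) equals the number
# of length-(n-2) walks with steps +-1 that stay >= 0 and end at their maximum.
from itertools import product

def is_bbs(bits):
    # Every nonempty prefix and every nonempty suffix has strictly more 1s than 0s.
    n = len(bits)
    pref = suf = 0
    for i in range(n):
        pref += 1 if bits[i] else -1
        suf += 1 if bits[n - 1 - i] else -1
        if pref <= 0 or suf <= 0:
            return False
    return True

def B(n):
    return sum(is_bbs(bits) for bits in product((0, 1), repeat=n))

def admissible_count(n):
    count = 0
    for steps in product((1, -1), repeat=n):
        pos, peak, valid = 0, 0, True
        for s in steps:
            pos += s
            peak = max(peak, pos)
            if pos < 0:
                valid = False
                break
        count += valid and pos == peak
    return count

for n in range(2, 13):
    assert B(n) == admissible_count(n - 2)
print("B_n = 2^(n-2) q_(n-2) verified for n = 2, ..., 12")
```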
5,301.8
2016-11-01T00:00:00.000
[ "Mathematics" ]
Structure and luminescence of DNA-templated silver clusters

DNA serves as a versatile template for few-atom silver clusters and their organized self-assembly. These clusters possess unique structural and photophysical properties that are programmed into the DNA template sequence, resulting in a rich palette of fluorophores which hold promise as chemical and biomolecular sensors, biolabels, and nanophotonic elements. Here, we review recent advances in the fundamental understanding of DNA-templated silver clusters (AgN-DNAs), including the role played by the silver-mediated DNA complexes which are synthetic precursors to AgN-DNAs, structure-property relations of AgN-DNAs, and the excited state dynamics leading to fluorescence in these clusters. We also summarize the current understanding of how DNA sequence selects the properties of AgN-DNAs and how sequence can be harnessed for informed design and for ordered multi-cluster assembly. To catalyze future research, we end with a discussion of several opportunities and challenges, both fundamental and applied, for the AgN-DNA research community. A comprehensive fundamental understanding of this class of metal cluster fluorophores can provide the basis for rational design and for advancement of their applications in fluorescence-based sensing, biosciences, nanophotonics, and catalysis.

Introduction

Metal "nanoclusters" are the smallest of nanoparticles, consisting of only 2 to 10^2 metal atoms and possessing remarkable properties which are very finely tuned by cluster size, shape, and charge. Bare metal clusters have been studied for decades in order to understand how single atoms with quantized energy levels transition into the continuous properties of bulk materials. 1 Because the majority of unprotected metal clusters are unstable at ambient conditions, fundamental studies of metal clusters previously necessitated interrogation under ultra-high vacuum, 2 which limited practical applications of these nanomaterials. This challenge has been overcome by the use of stabilizing ligands and supporting surfaces to bring metal clusters into the "real world" for applications such as catalysis, photonics, and electronics. 3 In the past two decades, advances in synthetic chemistry have produced a "zoo" of different stable metal clusters passivated by molecular ligands, with cluster sizes that can even be tuned to atomic precision for especially fine control of their emergent properties. 4

This review concerns an especially unusual type of ligand-stabilized metal cluster, the DNA-templated silver cluster (AgN-DNA), which combines the atomic precision of cluster science with the programmability of DNA nanotechnology. AgN-DNAs are relatively new entrants into the diverse zoology of metal clusters, with unique properties that arise from their polynucleic acid ligands. Following work by the Dickson group on silver clusters stabilized in dendrimers 5 and silver oxide films, 6 in 2004, Petty, Dickson, and co-authors reported formation of fluorescent silver nanoclusters exhibiting 400-600 nm electronic transitions by chemically reducing an aqueous mixture of single-stranded cytosine-rich DNA and AgNO3. 7 They then found that certain AgN-DNAs exhibit very bright fluorescence 8 and significant photostability and can be harnessed as biolabels.
9,10 Gwinn, et al., showed that the fluorescence colors of AgN-DNAs depend sensitively on nucleobase sequence and that AgN-DNAs prefer to form on single-stranded (ss) DNA rather than double-stranded (ds) DNA, 11 motivating the important role played by silver-nucleobase interactions in AgN-DNA formation. In the next few years, AgN-DNAs were shown to be effective sensors for toxic metal ions, 12 polynucleic acids, 13-15 and other biomolecules. 16 Together, these and other early studies generated considerable interest in harnessing DNA's sequence programmability for custom design of AgN-DNA fluorophores tailored for precise sensing, fluorescence microscopy of cells and tissues, and direct integration into DNA nanotechnology schemes. 17-19

The most remarkable characteristic of AgN-DNAs is their sequence-dependent fluorescence. By employing DNA template strands with wide-ranging nucleobase sequences, a diverse color palette of AgN-DNAs with fluorescence emission colors of 450 nm to 1000 nm has been developed 22,23 (Fig. 1A), with quantum yields as high as 93% (ref. 24) and Stokes shifts as large as 5893 cm⁻¹. 25 AgN-DNA fluorescence may be excited by at least two pathways, either directly at the cluster's size-, shape-, and charge-dependent excitation peak or universally via the DNA bases (Fig. 1B). 21,26 AgN-DNAs also exhibit unusual photophysics, 27 intriguing dark states which can be harnessed for background-free fluorescence microscopy, 28-31 light-up or color-switching behavior induced by various stimuli, 13,32-41 and catalytic activity. 42,43

Most well-studied ligand-stabilized metal clusters are protected by monolayers of small molecules such as thiolates 44 and phosphines, with sizes smaller than or comparable to the metal clusters themselves. 45 In contrast, AgN-DNAs and their less-studied counterparts, AgN-RNAs, 46 are protected by bulky polynucleic acids much larger than the silver cluster. The structure and properties of these and other metal clusters stabilized by large macromolecular ligands, including proteins 47 and dendrimers, 48 are less understood than for monolayer-protected clusters, in part because bulky ligands can obscure resolution of the cluster(s) and challenge crystallization, a necessary step for "solving" structure by X-ray crystallography. However, macromolecular ligands can also endow functionalities without the need for ligand exchange, adding a degree of versatility to applications of AgN-DNAs and other macromolecule-stabilized nanoclusters.

AgN-DNA synthesis is facile and is typically carried out by borohydride reduction of a solution of Ag+ and ssDNA in neutral pH aqueous solution (Fig. 1C). This method is robust to varying solution compositions, stoichiometries, and specific mixing/heating protocols. 7,8,11,49-52 In contrast to the simplicity of synthesis, achieving compositionally pure solutions of AgN-DNAs is more challenging because reduction forms a heterogeneous mixture of silver-bearing DNA products containing varying numbers of silver atoms, N_tot, and numbers of DNA strands, n_s. The majority of these products are nonfluorescent 53 and include clusters, Ag+-DNA complexes, and larger silver nanoparticles. 54,55 It is also possible for a given DNA template to stabilize multiple different emissive cluster species, 56 as has been observed for up to 25% of randomly selected DNA template sequences.
57 Due to characterization of as-synthesized AgN-DNAs without purification and/or due to fragmentation during mass spectrometry (MS), early reports underestimated AgN-DNA sizes 8,11 or found no correlation of fluorescence color with silver cluster size. 58 A lack of awareness of this heterogeneity continues to hinder accurate characterization of AgN-DNAs, and the assumption that the composition of AgN-DNAs is uncorrelated to the optical properties of these nanoclusters still persists. 59 The challenge of heterogeneity has been overcome by the use of reversed-phase high performance liquid chromatography (HPLC) 53,60 and size-exclusion chromatography (SEC) 61,62 to isolate a fluorescent AgN-DNA of interest prior to compositional and spectral characterization. Additionally, development of gentle electrospray ionization (ESI) MS now enables compositional analysis without fragmentation of the AgN-DNA product. 24,53,63 Using tandem HPLC-MS with in-line UV/Vis and fluorescence spectroscopy, Schultz, et al. determined the compositions of several fluorescent AgN-DNAs with fluorescence emission wavelengths, λ_em, ranging from green to near infrared (NIR), finding that these clusters contained N_tot = 10-21 Ag atoms stabilized by n_s = 1-2 copies of the templating DNA strand. 53 This ability to isolate and characterize compositionally pure solutions of AgN-DNAs has enabled numerous subsequent studies, leading to dramatic advances in our understanding of the structure-property relations of these nanoclusters, which we discuss in Section 3, and of their photophysical properties, which we discuss in Section 4.

This review focuses on the recent advances in fundamental understanding of AgN-DNAs, with a particular emphasis on the recent detailed studies of compositionally pure AgN-DNAs. We note that this review is timely because previous reviews which primarily focused on fundamental structure and properties 64-67 are several years old and do not discuss recent breakthroughs, including the first reported AgN-DNA crystal structures. 22,25,68,69 Readers may also find a comprehensive list of DNA sequence/structure and optical properties for a large number of AgN-DNAs by New, et al., 70 as well as previous reviews focused on the emerging applications of AgN-DNAs as sensors and biolabels. 64,71-75 Here, we summarize what is known about the connections among DNA sequence, AgN structure, and photophysical properties. We first review current understanding of the Ag+-mediated DNA base paired structures that are the synthetic precursors of AgN-DNAs (Section 2). Next, we discuss current models for the structures of AgN-DNAs, which have rapidly advanced due to detailed studies of compositionally pure AgN-DNAs and a few breakthrough crystal structures (Section 3). 22,25,68,69 Then we review current understanding of the excited state processes which lead to fluorescence in AgN-DNAs and the unusual dark states exhibited by AgN-DNAs (Section 4). We then discuss recent work to decode how DNA sequence selects AgN-DNA properties by combining high-throughput experimentation and machine learning (Section 5). Finally, we review work on merging structural DNA nanotechnology with AgN-DNAs for
ordered arrangement of these nanoclusters (Section 6) and comment on opportunities and challenges facing the field of AgN-DNA research (Section 7). It is our intent to provide a comprehensive and current picture of the properties of AgN-DNAs which is accessible to researchers from many backgrounds, in order to aid others in developing applications of these unique nanoclusters and to inspire new experimental and computational studies of their fundamental properties.

[Fig. 1 caption: (A) The fluorescence colors of AgN-DNAs, which are selected by DNA sequence, span a large spectral range from visible to NIR wavelengths and are correlated with cluster size. 20 (B) AgN-DNA excitation spectra exhibit a dominant peak in the visible to NIR spectral range as well as a UV excitation band corresponding exactly to the DNA template strand. Fluorescence spectra excited via the DNA bases (inset, purple) have the same lineshapes as spectra excited at the cluster's unique visible to NIR transition. 21]

2. Silver-mediated base pairing: precursors to AgN-DNAs

A complete understanding of AgN-DNA structure and sequence-dependent properties naturally begins with an understanding of Ag+-DNA complexation. This is because (i) AgN-DNAs are formed by chemical reduction of Ag+-DNA complexes, 76 (ii) high-resolution MS of HPLC-purified AgN-DNAs shows that usually about half of the silver atoms within AgN-DNAs remain cationic, 24 meaning that Ag+-DNA interactions play a key role in determining AgN-DNA structure, and (iii) Ag+-DNA interactions are highly sequence-dependent, 54,55 which may lead to the sequence dependence of AgN-DNA size and fluorescence properties. Here, we review recent advances in fundamental understanding of Ag+-nucleobase interactions and secondary structures of Ag+-DNA complexes, with a focus on properties relevant to the formation and sequence-dependence of AgN-DNAs. We note that this topic is a small part of the rich field of metal-mediated nucleobase pairing, an area of great interest as a route to expanded base-base interactions, DNA-based electronics, and sensing. We do not attempt to review this entire field here and point to excellent comprehensive reviews elsewhere on metal-mediated pairing of both natural and artificial bases 77-80 and in the specific case of Ag and Au for natural DNA. 81

2.1 Watson-Crick base pairing

The four natural nucleobases of DNA are adenine (A), cytosine (C), guanine (G), and thymine (T). In canonical Watson-Crick (WC) pairing of dsDNA in B form, which is the most common structure of DNA in vivo (Fig. 2A), two complementary DNA strands join by hydrogen bonds ("H-bonds") between A and T and between C and G, forming the familiar antiparallel double helix. C and G are held together by three H-bonds between the O2, N3 and N4 positions of C and the O6, N1 and N2 positions of G. In like manner, T and A are H-bonded through the O2 and N3 positions of T, and the N6 and N1 of A (Fig. 2C). This difference in the number of H-bonds between nucleobase pairs results in a weaker A-T WC bond as compared to C-G. The WC B-form double helix is further stabilized by hydrophobic stacking interactions between neighboring nucleobases. Additional less common DNA structures also exist, including WC-paired A-DNA 84 and Z-DNA 85 and Hoogsteen base pairing. 86 The extensive scientific understanding of DNA structure and thermodynamics has enabled the birth of DNA nanotechnology, which exploits DNA as a fundamental materials building block, 82 engineering DNA sequence to achieve self-assembled predefined shapes, 18,87 tuned colloidal interactions, 88-90 and molecular computation.
91,92

2.2 Ag+-nucleobase interactions of homobase strands

Silver cations (Ag+) are well-known to prefer binding to DNA nucleobases over the phosphate backbone at neutral pH. 93 (Hg2+ possesses a similar preference, 79,93-95 but its significant toxicity prohibits applicability.) This preference enables Ag+ intercalation into single base mismatches in WC-paired dsDNA, typically by interactions with nucleobase ring nitrogens. 93,96,97 Cytosine (C) is especially well-known for affinity to Ag+, and this has been harnessed to expand the interactions among DNA oligomers, enabling Ag+-paired C-C mismatches, 96 Ag+-folded i-motif secondary structures in C-rich DNA, 98 Ag+-crosslinked DNA hydrogels, 99 and DNA nanotubes. 100 More recently, the study of Ag+-mediated nucleobase pairing has been extended to consider DNA that is unconstrained by WC base pairs. These studies show that silver can completely rearrange canonical DNA structures, as opposed to simply intercalating within base pair mismatches. Here, we review these recent advancements to provide context for the sequence-property connections that govern AgN-DNAs (Section 5).

To understand how Ag+ complexes with DNA in the case where the DNA does not form WC base pairs, Swasey, et al., investigated interactions of Ag+ with homobase DNA strands. 54 After solvent-exchanging DNA oligomers to remove any residual salts from oligomer synthesis, DNA was mixed with AgNO3 in an aqueous solution of ammonium acetate, followed by thermal annealing at 90 °C. Resulting products were analyzed by high-resolution negative ion mode ESI-MS to determine absolute composition by resolving the isotopic distribution (discussed in Section 3.1). Fig. 3A shows the compositions of all observed products for 11-base homobase strands. While C is best-known for affinity to Ag+ and was shown by Ritchie, et al., to form Ag+-mediated duplexes, 52 G was actually found to associate the greatest number of Ag+, with order of affinity: G > C > A > T. While the 4 types of natural nucleobases all formed Ag+-bearing single homobase strands, Ag+ also mediates formation of homobase duplexes for C and G. When two different single homobase strands are mixed, Ag+ only mediates the heteroduplex A-Ag+-T, completely replacing the WC A-T duplex. Ag+ also disrupts WC-paired C-G duplexes to instead form C-Ag+-C and G-Ag+-G homobase duplexes. Fig. 3B summarizes all observed pairing between homobase strands. C-Ag+-C and G-Ag+-G homoduplexes are remarkably stable, with C6-Ag+-C6 and G6-Ag+-G6 homoduplexes remaining intact at 90 °C, while C6-G6 WC duplexes melt below 20 °C (Fig. 3C). 54 Quantum chemical calculations support greater stability of Ag+-mediated homoduplexes for C and G than for A and T. In the absence of steric factors, (base-Ag+-base)N duplexes have higher bond energies than (base-Ag+)N structures. Because C-Ag+-C and G-Ag+-G are nearly coplanar, with dihedral angles of 171.9° and 181.2°, respectively, while T-Ag+-T and A-Ag+-A are nonplanar, with dihedral angles of 140° and 101.6°, respectively, C-Ag+-C and G-Ag+-G homoduplexes are expected to be significantly more stable (Fig. 3D). The A-Ag+-T bond is also non-coplanar, but its stability could be explained by the difference in size between A and T, which still allows adenine stacking interactions. 54 The nucleobase sites with which Ag+ interacts differ from WC pairing.
Simulations by the Lopez-Acevedo group have determined that the pyrimidines C and T interact with Ag+ at the N3 position, 54,101 which is deprotonated for thymine, while the purines A and G coordinate with Ag+ at the N7 position. 54 These binding sites correspond to the Hoogsteen region (Fig. 2C). However, these positions might change depending on the other nucleobase of the Ag+-bridging bond, as is the case for the C-Ag+-G bond reported by Kondo, et al., where the interaction with the purine base is through the N1 position, which is deprotonated. 102

2.3 Ag+ mediates parallel strand orientation of highly stable homobase duplexes

Quantum chemical and hybrid quantum mechanics/molecular mechanics (QM/MM) calculations by the Lopez-Acevedo group predicted that, unlike the antiparallel strand orientation of natural WC duplexes where one 5′-3′ strand pairs to a complementary 3′-5′ strand, 82 C-Ag+-C and G-Ag+-G duplexes prefer a parallel orientation, with 5′ ends aligned. 101,103,104 These helical duplexes align Ag+ along the helix axis and are stabilized not only by Ag+-nucleobase interactions but also by novel interplanar H-bonds (Fig. 4A and B). 101,103,104 Calculated electronic CD spectra of C2-Ag+-C2 tetramers agree well with experimentally measured CD spectra, further supporting a parallel arrangement. 101 However, other experimental studies report varying behavior: one study of the conductivity of C-Ag+-C duplexes achieves antiparallel duplex formation of strands confined at their ends to a metal surface and a scanning probe tip. 105

A recent study of unconstrained homobase strands confirms the parallel duplex structure for C-Ag+-C and G-Ag+-G by utilizing Förster Resonance Energy Transfer (FRET) experiments to determine DNA strand orientation and ion mobility spectrometry (IMS) MS coupled with density functional theory (DFT) calculations to elucidate structure. 108 Variations in FRET efficiency between donor and acceptor dyes coupled to the ends of two DNA strands support parallel Ag+-paired C homobase duplexes and G homobase duplexes (Fig. 4C). This parallel orientation was further demonstrated by IMS-MS experiments coupled with DFT calculations of collision cross sections (CCS), which support high aspect ratios for both guanine and cytosine duplexes, consistent with rigid, wire-like structures (Fig. 4D). Based on CCS values and their agreement with calculated values, the G-Ag+-G duplex is found to be more rigid because its nucleobases form additional H-bonds with the phosphate groups in the backbone, whereas the C-Ag+-C duplex lacks these extra bonds and is more flexible.

2.4 Ag+-nucleobase interactions of mixed base strands

The vast majority of reported AgN-DNA nanoclusters are stabilized by DNA strands with mixed base sequences. To understand how heterobase strands recruit Ag+, Swasey and Gwinn examined ten noncomplementary 11-base DNA strands, determining the composition of Ag+-DNA complexes by ESI-MS (HPLC-MS was employed to analyze very heterogeneous samples). 55 Interestingly, strands with sequences formed by single-base "mutations" of C11 increase the distribution of the number of Ag+ attached to duplexes, and inclusion of mutations in G11 homobase strands can significantly increase the average number of Ag+, by up to 7 or 8 Ag+ per duplex (Fig. 5A). Both homobase and heterobase Ag+-mediated duplexes were found to be stable in various solution conditions, significant Mg2+ concentrations, and high concentrations of urea (a strong denaturant).
While the chemical structures adopted by these heterobase duplexes are not known, the differences in Ag+ recruitment have important implications for the origins of AgN-DNA sequence dependence, which we discuss in Section 5.

Kondo, et al., recently developed remarkable uninterrupted Ag+-DNA "nanowires" and solved their 3D structure, determining formation of consecutive Ag+-paired duplexes with antiparallel orientation. 102 The DNA strand used to form the Ag+ wires, GGACT(BrC)GACTCC, is a near-complement which forms a WC-paired homodimer with one C-C mismatch at room temperature in biologically relevant salt concentrations (determined using UNAfold software 109,110). C-Ag+-C, G-Ag+-G, T-Ag+-T, and C-Ag+-G bonds were observed in the nanowire, and interestingly, not all nucleobases in the strand participate in the principal linkage between strands. A's protrude outwards (Fig. 5B) and contribute to crystal packing through formation of AT-Ag+-A triplets and AA stacking interactions. Thanks to the near reversibility of the sequence used, and because A's do not participate in duplex bonding, most nucleobases are bonded to a like base in the partner strand, with only two C-Ag+-G pairs observed. Pairing between the two strands occurs with a one-position shift, enabling formation of nanowires up to 0.1 mm long. Despite the antiparallel orientation and the C-Ag+-G pairs, in which the G is bonded through the N1 position, the system clearly does not obey WC pairing because the main interaction sites lie in the Hoogsteen region. Furthermore, the propeller twist angles obtained are larger than in WC pairing, which can be explained by repulsions between the amino and carbonyl groups of opposite bases. 102

[Fig. 4 caption, continued: (C) Emission spectra of Ag+-mediated C20 and G15 duplexes labeled with donor (green dot) and acceptor (red dot) dyes at the 5′ end and 3′ end, respectively (orange curve), or with both dyes at the 3′ ends (blue), compared to emission of the donor-bearing strand alone (blue dotted curve). Excitation is at 450 nm, which directly excites the donor only. Significant quenching of donor emission with concomitant acceptor emission (high FRET efficiency) clearly demonstrates that Ag+-mediated pairing of homoduplexes arranges strands in a parallel orientation. 108 (D) DFT-optimized structures of Ag+-DNA duplexes of G20 and C20 compared to WC duplexes.]

Liu, et al., solved the 3D structure of another Ag+-paired mixed base strand, 5′-GCACGCGC-3′, which forms curved dimers attached by one G-Ag+-G bond and one C-Ag+-C bond, with parallel strand orientation (Fig. 5C). 106 In this structure, it is only like bases which participate in Ag+-mediated pairing, and these base pairs are less planar (Fig. 5C) than the nearly coplanar angles predicted by previous DFT calculations. 54 This suggests that mixed base strands can accommodate a wide range of Ag+-mediated base interactions beyond just linear wires. This 8-base sequence was also uncovered in an unrelated study using machine learning methods to design templates for AgN-DNAs with fluorescence emission in the 600 nm < λ_em < 660 nm window. 111 This surprising coincidence suggests that some AgN-DNAs are formed by chemical reduction of nontrivial Ag+-DNA complexes.

Very recently, the Kohler group reported evidence for a parallel-oriented Ag+-mediated duplex of C20 with significant "propeller" twist of the C-Ag+-C base pairs, as has been reported in the studies above.
This evidence was based on strong agreement between experimentally measured and calculated CD spectra. 113 The authors note that such twisting has been associated with reduced flexibility of DNA, 114 and this enhanced rigidity agrees with the past IMS studies of C-Ag+-C duplexes described in Section 2.3. 108

2.5 Relevance of Ag+-mediated base pairing for AgN-DNAs

As synthetic precursors of AgN-DNAs, 76,115 Ag+-DNA complexes are the scaffolds that reorganize into the cluster-stabilizing cage of an AgN and, at least in part, provide the Ag+ "fuel" to grow the AgN upon reduction. Early studies which found that AgN-DNAs do not form on completely dsDNA templates 11 have led to the false assumption that the AgN can always be confined within single-stranded regions of WC-paired DNA structures such as hairpins 116,117 or other dsDNA structures with ssDNA regions, 118 based on the assumption that WC DNA secondary structure is preserved in the presence of Ag+. The dramatic rearrangement of DNA homobase and heterobase strands by Ag+, together with the significant thermal and chemical stabilities of Ag+-mediated DNA duplexes, 54,55 call into question whether this assumption is accurate. It is more likely that Ag+ can invade and unravel WC dsDNA under appropriate conditions, rearranging secondary and tertiary structures which then further evolve upon chemical reduction. This has been suggested by several careful studies, 49,119-121 and Ag+ has also been shown to rearrange the well-known G-quadruplex structure 108 and i-motif structure. 113 Further studies will be needed to determine to what degree DNA secondary structure is preserved after AgN-DNA synthesis, especially when AgN-DNAs are incorporated into the larger DNA structures discussed in Section 6.

[Fig. 5 caption, continued: Protruding adenines foster assembly of multiple wires into 3D lattices. 102 Silver atoms are shown in gray and potassium atoms in purple. Image created from PDB ID 5IX7 with NGL Viewer. 112 (C) Structure of a dimer of 5′-GCACGCGC-3′ (orange, green) paired by two Ag+ (grey). The third Ag+ (bottom right of structure) supports supramolecular assembly of the structure during crystallization. 106 Image created from PDB ID 5XJZ with PyMOL.]

3. Structures of AgN-DNAs

Recent advancements have been enabled by compositionally pure AgN-DNAs isolated using HPLC 53 or SEC. 37,38,62 These techniques separate different DNA complexes by exploiting variations in size and polarity that are induced by different silver products on the DNA template strands. (Methods for isolating AgN-DNAs using HPLC have been reviewed in detail previously. 66) Purification prior to characterization is crucial because as-synthesized solutions contain multiple dark and fluorescent products, including Ag nanoparticles, AgN-DNAs, and Ag+-DNA complexes, as supported by LC-tandem MS. 24,57 Even though one would naively expect AgN-DNA properties to be similar in the as-synthesized and purified states, a recent report by Gambucci, et al., showed different rotational correlation times, indicating that synthesis fragments could be attached to the AgN-DNAs, e.g. by Ag+-mediated interactions.
122 Compositional analysis methods that only infer the average stoichiometry of the entire heterogeneous as-synthesized solution may misjudge the number of silver atoms within an AgN-DNA and cannot resolve the number of DNA strands n_s that stabilize a single cluster, and MS performed directly on as-synthesized samples makes it challenging to identify the fluorescent AgN-DNA of interest among the other products formed during synthesis. Here, we primarily review structural studies of HPLC-isolated AgN-DNAs with bright visible or NIR fluorescence, which have thus far been found to contain N_tot = 10-30 Ag atoms, 23,24,53,57,123 as opposed to earlier reports of dimers or trimers of Ag. 8,11 We then discuss other studies that focus on inference of the conformation of the DNA template strand(s) around the AgN. Unless indicated, all AgN-DNAs discussed are compositionally pure.

3.1 Mass spectrometry to determine AgN-DNA composition

Prior to the breakthrough crystallographic structures of AgN-DNAs solved in 2019, 22,69 efforts to discern AgN-DNA structure mainly employed correlations of experimentally measured absorption, excitation, and/or emission for AgN-DNAs of known composition with computational studies or simple models. These past studies do not provide the same level of structural detail as the recent crystal structures but do provide a more comprehensive picture of the structure-property relations of AgN-DNAs in general, with detailed studies on about 20 different HPLC-purified AgN-DNAs as compared to the smaller number of crystal structures currently available. 22,25,68,69

Since metal cluster size, charge, and geometry strongly determine properties, accurate characterization of composition is a key step towards building a fundamental understanding of metal clusters. 45 It is well-established that other ligand-stabilized metal clusters are only partially reduced because a fraction of the metal atoms in the cluster are bound to the surrounding ligands, and that the number of remaining effective valence electrons in the cluster core is a major determinant of the electronic properties of the cluster. 124-127 Partially oxidized AgN-DNAs were proposed by Ritchie, et al., based on the oxygen and chloride dependence of the fluorescence of C12-stabilized clusters. 128 An experimental method to not only count the numbers of silver atoms N_tot and DNA strands n_s in a purified sample of AgN-DNA but also to separate N_tot into neutral (N_0) and cationic (N_+) silver content can yield insights into how AgN clusters are ligated to the DNA and enable computational studies of their electronic properties. 126,127,129

High resolution mass spectrometry (HR-MS) is an ideal tool to achieve this goal because HR-MS can be used to determine both ion mass, M, and charge, Z, rather than just the ratio M/Z, by resolving the isotope pattern that arises due to natural variation in isotopic abundances of elements. Koszinowski and Ballweg determined the charge of an Ag6^4+-DNA by comparing the experimentally measured isotope pattern to the calculated distribution of this cluster. 63 To characterize the properties of fluorescent AgN-DNAs, this approach has been developed in conjunction with chromatographic purification by the Gwinn and Petty groups. 24,130 Because DNA is easily deprotonated, negative ion mode ESI-MS is suitable to resolve weakly bound, noncovalent DNA complexes 131,132 and has been used to size a variety of silver-bearing DNA complexes.
53-55,57,60,63,108 (While more sensitive, positive mode ESI-MS can oxidize encapsulated clusters during electrospray, hindering full determination of composition. 133) The mass spectrum of an AgN-DNA product may be collected either by tandem HPLC-MS (Fig. 6A) or by direct injection into the MS following prior purification.

Determination of N_0 and N_+ for an AgN-DNA from its mass spectrum is illustrated in Fig. 6. First, the charge state Z of an M/Z (mass to charge ratio) peak is determined by the spacing between adjacent peaks of the isotope pattern: these peaks are spaced by 1/Z (Z is defined as a positive integer). For example, Fig. 6B shows the 7− charge state (Z = 7; the minus sign is due to negative ion mode MS) of an Ag30-DNA, with individual isotope peaks separated by 1/7. 23 (The product shown in Fig. 6B has a charge of −7e, where e is the fundamental unit of charge.) The total charge of the complex corresponding to this M/Z peak is equal to the charge of the N_+ silver cations, eN_+, minus the charge of the n_pr protons removed from the DNA, e·n_pr, which must reach the total charge of −eZ:

−eZ = eN_+ − e·n_pr, i.e., n_pr = N_+ + Z.

Note that as the number of silver cations, N_+, increases, more protons must be removed from the complex to reach charge state Z. Then, because n_pr protons have been removed from the AgN-DNA complex, the measured total mass M (in amu) is:

M = n_s·m_DNA + N_tot·m_Ag − n_pr,

where m_DNA is the DNA template strand mass, n_s is the number of DNA strands in the complex, and m_Ag is the silver atom mass (the mass of a proton is treated here as 1 amu). In the case of well-resolved patterns, N_+ and N_0 may be determined by calculating the isotope distribution pattern for varying values of N_+, and thus n_pr, to determine the charge which best matches the isotope pattern (Fig. 6B). 24 If the signal is too low to precisely resolve the isotope pattern, the charge may be inferred by comparing Gaussian fits of the calculated and experimentally measured isotope patterns. 55 Using this method, Schultz, et al., determined that approximately half of the silver atoms within AgN-DNAs are cationic in nature. 24 HR-MS is advantageous for determination of n_s, N_0, and N_+ without ambiguity, provided that gentle enough ESI is applied. Inductively coupled plasma-atomic emission spectroscopy (ICP-AES) has also been used to determine the composition of purified AgN-DNAs, 37 although n_s cannot be determined by this method. This has led to underestimates of the sizes of AgN-DNAs with n_s > 1, which were later characterized by HR-MS. 24

3.2 Experimental evidence for elongated cluster geometry

The first experimental evidence that AgN-DNA cluster geometry differs from globular (or quasi-spherical) arose by comparing the absorption spectra of compositionally pure AgN-DNAs (whose N_+ and N_0 were determined by HPLC-MS) to the experimental and computed spectra of bare AgN in the gas phase which have similar numbers of effective valence electrons (equal to N_0). The electronic properties of ligand-stabilized metal clusters depend on the number of effective valence electrons in the cluster core, not only the total number of atoms N_tot, and these valence electrons can delocalize to form "superatomic" orbitals. 125 Thus, it is most appropriate to compare the properties of AgN-DNAs with bare silver clusters having like numbers of effective valence electrons. Due to ligation with the nucleobases, 24 not all Ag atoms in an AgN-DNA will contribute to the valence electron count.
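The two bookkeeping relations above can be wrapped into a small search routine. The sketch below is illustrative only: the strand mass and matching tolerance are hypothetical placeholders, and a real assignment would be confirmed against the full calculated isotope pattern rather than the average mass alone.

```python
# Sketch of composition assignment from negative-mode ESI-MS, using
#   n_pr = N+ + Z   and   M = n_s*m_DNA + N_tot*m_Ag - n_pr   (proton mass = 1 amu).
M_AG = 107.8682  # average atomic mass of silver, amu

def predicted_mass(n_s, m_dna, n_tot, n_plus, z):
    """Average mass of an AgN-DNA ion observed at charge state -Z."""
    n_pr = n_plus + z  # protons removed from the DNA to reach total charge -eZ
    return n_s * m_dna + n_tot * M_AG - n_pr

def candidate_compositions(m_obs, z, n_s, m_dna, max_n_tot=40, tol=0.5):
    """All (N_tot, N+) pairs consistent with the observed mass, within tol amu."""
    return [
        (n_tot, n_plus)
        for n_tot in range(1, max_n_tot + 1)
        for n_plus in range(n_tot + 1)
        if abs(predicted_mass(n_s, m_dna, n_tot, n_plus, z) - m_obs) < tol
    ]

# Hypothetical example: a dimeric template (n_s = 2, strand mass 3000 amu)
# hosting a 30-silver cluster with N+ = 18, observed at the 7- charge state.
m_obs = predicted_mass(n_s=2, m_dna=3000.0, n_tot=30, n_plus=18, z=7)
print(candidate_compositions(m_obs, z=7, n_s=2, m_dna=3000.0))
# -> [(30, 18)]; in practice the isotope pattern resolves any remaining ambiguity.
```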
To determine the effective valence electron count of an AgN-DNA, we subtract the charge of the cluster, N_+, from the total number of atoms in the cluster, N_tot, finding that the number of effective valence electrons is N_0 = N_tot − N_+.

Schultz, et al., found that the numbers and locations of peaks in the optical spectra of AgN-DNAs differ markedly from their globular bare cluster counterparts. Naked AgN with cluster sizes N = 2 to 20 exhibit globular geometries and absorption spectra with multiple UV transitions in the 3 to 5 eV spectral range. 134,135 In contrast, purified AgN-DNAs have much simpler spectra with single dominant peaks in the visible to NIR range (<3 eV), whose locations strongly depend on N_0, 24,53 and an additional UV absorption band due to the DNA ligand (Fig. 1B). 21 The energies of the visible to NIR absorbance peaks of AgN-DNAs with varying N_0 can be described by quantum chemical calculations by Guidez and Aikens for linear atomic chains of silver (Fig. 7A). 136 Based on these results and on the significant degree to which AgN-DNA emission is polarized, as observed by spectroscopy of single AgN-DNAs, a rod-like structure for AgN-DNAs was proposed by the Gwinn group. 24 Following this model, Ramazanov and Kononov used DFT-calculated electronic excitation spectra to argue that thread-like clusters show better agreement with experimental data than planar clusters. 137 A rod-like geometry is also supported by the magic N_0 numbers of AgN-DNAs. The energetic stability of many ligand-stabilized metal clusters can be described by the "superatom" model.

[Fig. 6 caption: In this illustration, the initial sample (yellow tube) is a mixture of products including multiple dark Ag-DNA complexes, one green-fluorescent AgN-DNA species, and one red-fluorescent AgN-DNA species. The as-synthesized AgN-DNA solution is injected into an HPLC outfitted with a core-shell C18 column for reverse-phase, ion-pair (IP) HPLC. Products are separated due to slight variations in column affinity with a water-methanol gradient and a triethyl ammonium acetate (TEAA) IP agent. By monitoring both absorbance at ~260 nm, which correlates to the absorbance of DNA, and fluorescence emission (e.g. UV-excited fluorescence 21), correlation of absorbance and fluorescence chromatogram peaks indicates elution of a fluorescent AgN-DNA species. We note that the chromatogram schematics are simplified for illustration; real chromatograms are more complex. 53 Products of interest can either be sized by in-line negative-ion mode ESI-MS or collected for subsequent ESI-MS. A mass spectrum for a previously studied 30-atom NIR-emissive product is shown in the bottom right. 23 Both monomeric and dimeric (labeled "D") products are visible, with the spacing of the isotopic peaks indicating the charge state of each product (labeled as a superscript of "D") for dimeric products. (B) Experimental mass spectrum of the Ag30-DNA product at the 7− charge state (labeled D^−7 in (A)), shown in black, with the calculated mass distribution (green bars) for a product with 2 DNA strands, N_0 = 12 Ag0, and N_+ = 18 Ag+. 23 The inset compares the experimental spectrum 23 with the calculated distribution for a product with no charged silvers (2 DNA strands and 30 Ag0), illustrating how the shift between the experimental and calculated isotopic distributions can be used to accurately determine the numbers of Ag0 and Ag+ in an AgN-DNA.]
The superatom model states that the effective valence electrons in the cluster core are characterized by an electronic shell structure, similar to the shell structure of atomic nuclei. 124,125 For spherical metal clusters, closed shells are expected for N_0 = 2, 8, ..., resulting in enhanced abundances of clusters of these sizes due to their significantly enhanced stabilities (the same behavior is observed for gas phase bare metal clusters 2). Copp, et al., performed a large-scale study of AgN-DNAs stabilized by ~700 different DNA templates, finding enhanced abundances of AgN-DNAs with even numbers of neutral silver atoms: green-emissive AgN-DNAs with N_0 = 4, red-emissive AgN-DNAs with N_0 = 6, and larger NIR-emissive N_0 = 10-12 AgN-DNAs (Fig. 7B); the spherical magic numbers of 2 and 8 were not especially abundant (Fig. 7C). This behavior is consistent with clusters that are significantly aspherical, for which additional energetic stability is primarily conferred by pairing of electron spins, resulting in enhanced stabilities for even values of N_0. 57

Chiroptical properties of AgN-DNAs have been well-modeled by a thread-like cluster structure. Because circular dichroism (CD) spectroscopy is extremely sensitive to specific geometrical structure and can be calculated using first-principles methods, CD allows a direct interface with theory. Swasey, et al., measured the CD spectra of four AgN-DNAs spanning the visible to NIR color palette. Quantum chemical calculations for bare atomic Ag chains with a chiral twist agree well with the experimental spectra. 138,139 Similarity between the CD spectra of AgN-DNAs and their unreduced Ag+-DNA precursors was also observed, pointing to the role played by the Ag+-DNA complex in dictating final cluster structure 138 (we note that recent studies suggest the AgN itself is not the cause of the CD signal observed for AgN-DNAs, but that the DNA-silver interaction of the intrinsically chiral DNA plays a crucial role in generating the chiroptical properties of these clusters 127).

Past studies have found that classical theories which describe collective electronic excitations of colloids, 140 such as Mie-Gans theory, 141,142 show surprising agreement with the optical properties of small metal clusters, 134,143 particularly for longitudinal plasmonic modes. 144 Copp, et al., examined whether AgN-DNAs can also be described by classical models, applying Mie-Gans theory to HPLC-purified AgN-DNAs with 400-850 nm cluster excitation wavelengths and numbers of effective valence electrons, N_0, determined by HR-MS, in order to elucidate the aspect ratios of these clusters. Application of Mie-Gans theory to this experimental data predicted prolate cluster geometry, with aspect ratios of 1.5 for N_0 = 4 up to ~5 for N_0 = 12. (The currently reported crystallographic structures for AgN-DNAs do not yet have determined charges, 22,25,68,69 so these aspect ratios remain unconfirmed by solved structures.)
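The reported correlation between N_0 and emission color can be summarized in a toy lookup. The bins below simply restate the trends from the ~700-template study cited above; they are an illustrative summary, not a predictive model.

```python
# Toy summary of the magic-number trends reported by Copp, et al.
def valence_electrons(n_tot, n_plus):
    """Effective valence electron count N_0 = N_tot - N+."""
    return n_tot - n_plus

def qualitative_color(n0):
    if n0 == 4:
        return "green-emissive"
    if n0 == 6:
        return "red-emissive"
    if 10 <= n0 <= 12:
        return "NIR-emissive"
    return "outside the reported magic-number bins"

# The Ag30 product of Fig. 6 (N_tot = 30, N+ = 18) has N_0 = 12: NIR-emissive.
print(qualitative_color(valence_electrons(30, 18)))
```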
AgN-DNAs with N_0 ≥ 6 displayed shifts in peak excitation wavelength dependent on the solvent dielectric, as is expected for a collective electronic excitation 145 and observed for larger metal nanoparticles; 146 such sensitivity may be useful for applications. The increase in peak excitation wavelength and extinction coefficient with increasing cluster core size N_0 (ref. 147) is a characteristic shared by the longitudinal collective electronic excitation sustained by rod-like metal clusters 136,144,148 and larger metal nanoparticles. 145,149,150 While the proper definition of a plasmon versus a collective electronic excitation at the cluster scale remains debated, other molecular-scale systems have also been shown to exhibit plasmon-like behavior. 151-154 The Sánchez group simulated toy-model AgN-DNAs with magic number sizes, 57 finding that a neutral silver cluster rod surrounded by nucleobase-bound Ag+ is generated when a partial charge is placed on the cluster. When excited, these clusters supported longitudinal plasmon-like modes. 129 Intriguingly, single AgN-DNAs studied at temperatures below 2 K exhibit surprisingly broad spectral linewidths. 155 For larger nanoparticles, surface plasmon resonance broadening is understood to arise from dephasing processes for multiple delocalized electrons, 156 but such effects are less well understood at the cluster scale. As silver cluster rods, AgN-DNAs may provide a unique platform to investigate these important questions. It remains to be determined whether the optical transitions in AgN-DNAs are collective or plasmon-like, and further experimental and theoretical studies are needed to reveal which models are most suitable to represent the behavior of AgN-DNAs.

3.3 X-ray and IR spectroscopy of solution-state AgN-DNAs

Several groups have applied X-ray spectroscopy, nuclear magnetic resonance (NMR), and infrared (IR) spectroscopy to probe the structures and silver-DNA interactions in purified AgN-DNAs. To interrogate the stoichiometry, oxidation state, ligand environment, and structure of a violet-absorbing AgN-DNA, Petty, et al. used ESI-MS, X-ray absorption near edge structure (XANES), and extended X-ray absorption fine structure (EXAFS) spectroscopy. 130 This dimly fluorescent cluster has absorbance peaked at 400 nm but converts into a NIR-emissive species upon perturbation of its DNA template strand. 33 MS data (Fig. 8A) and Ag L3-edge XANES spectra establish the SEC-purified violet cluster to be an Ag10^6+. CD spectra of this Ag10^6+ remain stable above 70 °C, pointing to the temperature stability of the DNA-Ag interaction (Fig. 8B). Ag K-edge EXAFS was used to probe the organization of Ag atoms and Ag-nucleobase interactions. The experimental EXAFS trace (black) was fitted to three individual scattering paths (Fig. 8C) to infer specific bond lengths and coordination numbers. Based on these results, the authors proposed an octahedral cluster structure (Fig. 8D). While creating an accurate model from EXAFS data is nontrivial given the vast number of possible geometries in such a complicated system, the model contains several structural elements later found in the crystal structure of an Ag16-DNA.

A related, altered cluster has the same oxidation state as the "violet" cluster above and can be reversibly converted by manipulation of the hairpin region of its template. The altered cluster is highly fluorescent and has red-shifted absorbance.
Using differences in the EXAFS data between the two clusters, the altered cluster is proposed to have a more extended and distinct metal-like core, presumably due to variations in coordination with the DNA ligand. These variations are supported by later studies using activated electron photodetachment MS. 121

Volkov and co-workers used X-ray photoelectron spectroscopy (XPS) to study an HPLC-purified AgN-DNA. 158 The oxygen spectra are similar with and without Ag+, supporting that Ag+ prefers to bind to nitrogen when no reducing agent has been added. For the purified AgN-DNA, binding of silver to oxygen atoms was present, suggesting that the interacting oxygens belong to the sugar moiety and/or phosphodiester bond. The crystal structures by Cerretani, et al., found Ag atoms bound to the phosphate group, confirming this observation. 25,68,69 In addition, Ag 3d core-level spectra were measured for various species containing both Ag(0) and Ag+. The 3d5/2 Ag peak shifts to higher binding energies (≈0.6 eV) as one goes from Ag(0) nanoparticles to AgN-DNAs to Ag+-DNA complexes (Fig. 9), supporting an AgN-DNA with a positive charge which is neither purely cationic nor fully reduced, in agreement with MS studies by others. 24,57,130

Schultz, et al., recently studied an HPLC-purified AgN-DNA 159 emissive at 670 nm with a previously measured high quantum yield of 0.75. 24 By combining analytical centrifugation with NMR and MS, it became apparent that despite HPLC isolation, the emissive product was a mixture of Ag15 and Ag16 species. Thus, even rigorous chromatographic separation may not always fully separate AgN-DNAs into compositionally pure solutions when two or more species have very similar compositions/conformations. IR spectroscopy combined with MD simulations provided insights into the DNA binding sites of Ag+. The experimentally measured IR spectra of the AgN-DNA and the bare DNA only show marked shifts between 1350 and 1500 cm⁻¹ after cluster formation (Fig. 10A). These shifts correspond to the nucleobases (Fig. 10B), not the phosphate backbone, confirming that the AgN ligates primarily to DNA through Ag-nucleobase interactions. 54,55

The Kohler and Petty groups very recently reported femtosecond time-resolved IR (TRIR) spectroscopic studies of two Ag10^6+ clusters stabilized by very similar 18-base DNA strands, C4AC4TC3XT3, where X represents either guanosine or inosine (an artificial nucleoside lacking the exocyclic C2-NH2 of natural guanosine). 160 These two DNA strands stabilize products with nearly identical spectra but dramatically differing quantum yields and fluorescence decay times, suggesting that the X nucleoside influences the excited state processes of the Ag10^6+. Following excitation of the clusters by a 490 nm femtosecond laser pulse, TRIR spectra are collected in the 1400-1720 cm⁻¹ range, corresponding to spectral features from the nucleobases. While individual nucleobases are excited in the UV, TRIR spectra show that 490 nm excitation of the clusters results in bleaching of the vibrational modes of select nucleobases, most notably cytosine. Thymine appears unperturbed by cluster excitation, supporting the many past studies which show that silver has low affinity for this nucleobase at neutral pH and suggesting that this base does not coordinate with the cluster.
Slight differences in the TRIR spectra of the X = G and X = I AgN-DNAs suggest that this method may enable precise probing of the electronic coupling of the AgN and surrounding nucleobases, a topic which remains poorly understood for AgN-DNAs. 160

3.4 Electron microscopy

Many reports of transmission electron microscopy (TEM) to characterize AgN-DNAs describe 2-20 nm particles, which have been attributed to the fluorescent clusters of interest. 161-170 However, due to the much smaller sizes of AgN-DNAs established by HPLC-MS and recent crystallographic studies (Section 3.5), it is highly likely that the particles observed in TEM are silver nanoparticles formed as byproducts during chemical synthesis.

3.5 Crystal structures of AgN-DNAs

The first reported AgN-DNA crystal structure, solved by Huard, et al., is of an Ag8 cluster. 22 Additional silvers not directly bound to the Ag8 promote crystal packing (blue spheres in Fig. 11). Ag-Ag distances in the pentameric core are ≈2.9 Å, comparable to Ag-Ag bond lengths in bulk silver. 174 In this cluster core, adenines interact with Ag via N1 and N6, whereas cytosines are coordinated through N3 and N4 (Fig. 11). The exocyclic nitrogens (N4, N6) are hypothesized to be deprotonated. The zipper region is characterized by C-Ag+-C base pairs with parallel strand orientation and twisted base pairs, as observed elsewhere. 108,113 Every Ag interacts with the N3 site of one cytosine on each strand (Fig. 11C), as established previously for AgN-DNAs stabilized by C12 strands 52 and for C-Ag+-C duplexes. 54,101 All base-Ag interactions have distances of ≈2.1 Å. Unlike the AgN-DNA studied by Schultz, et al., 159 significant interactions between adenines and Ag were found in the Ag8-DNA (Fig. 11D). It is notable that one Ag atom of the pentamer portion of the Ag8 is stabilized by a neighboring strand's adenine (Ag atom in orange in Fig. 11D), which may explain why the cluster could not be formed in solution without modifications to the DNA template strand. 22

Six crystal structures have also been reported by Cerretani, et al., for NIR-emissive Ag16-DNAs stabilized by DNA templates that differ by only one nucleobase. 25,68,69 The first reported crystal structure is for an Ag16-DNA stabilized by two strands of a DNA decamer, 5′-CACCTAGCGA-3′, previously identified by Copp, et al., 175 with an unusually large Stokes shift. 176 The second Ag16-DNA is stabilized by two copies of a 9-base sequence corresponding to removal of the A10 at the 3′-end of the decamer (Fig. 12A). Clusters formed on these two templates are nearly identical, and removal of the terminal A10 has no discernable impact on the wavelength of the absorbance peak but causes a slight redshift in fluorescence emission. 25 Similarly, mutations of position 5 in the DNA sequence allow one to produce and crystallize a similar NIR emitter. 68 The latter study also showed that certain nucleotide positions in the DNA sequence, while not relevant for binding to the AgN, could be mutated in order to promote or alter crystal packing interactions. This concept could, in the near future, enable re-engineering of DNA sequences to promote crystallization and determine the structure of the emissive AgN. 68 It also demonstrates that, especially when the AgN is stabilized by multiple strands, the 3D organization of the nucleotides is more relevant than the sequential 5′ to 3′ order. Unlike the Ag8-DNA investigated by Huard, et al., for which no NaBH4 reduction was performed before the crystallization process, the Ag16-DNAs were synthesized in aqueous solution and then HPLC-purified prior to crystallization.
The clusters comprise 16 Ag atoms with occupancy of 1, along with additional silvers with lower occupancy (Fig. 12A and B). All bases, except thymine and one of the adenines in position 2, interact with Ag atoms, with the thymine ensuring strand flexibility and promoting crystal packing interactions (Fig. 12C-G). Most of the Ag-Ag distances are between 2.7 and 2.9 Å, similar to or slightly shorter than twice the metallic radius of silver. Nevertheless, the cluster charge cannot be elucidated by these distances alone and, as mentioned previously, ample HR-MS data suggest that Ag N clusters are generally highly cationic in nature. 23,24,57,130 Similar to the crystal structure published by Huard, et al., 22 Ag atoms interact with cytosines via N3, and with adenines via N1. Interestingly, additional interacting sites were discovered, consistent with Schultz, et al.: 159 silvers coordinate O2 of cytosines, as well as N1, N7, and O6 of guanines, and N7 and the oxygens of the adenine phosphate group. Ag-N distances are 2.2-2.5 Å, mostly shorter than the Ag-O coordinate bond lengths of 2.4 Å to 2.9 Å. The Ag-N bond lengths (2.3-2.4 Å) suggest that G 9 is deprotonated at N1. 25,69
Some Ag N -DNA crystal structures contain Ag + which are not attached to the central cluster but do participate in non-WC base interactions and crystal packing. 22,25,68,69 It is possible that such "accessory" Ag + also exist in solution-phase Ag N -DNAs, as recently suggested by Gambucci, et al. 122 If sufficiently tightly bound, these Ag + would be counted by HR-MS as part of the Ag N -DNA but may not be part of the silver cluster itself and, thus, may not contribute significantly to the cluster's electronic properties. HR-MS results have not been reported for the Ag 8 reported by Huard, et al., 22 nor for the multiple Ag 16 species reported by Cerretani, et al., 25,68,69 so it remains unknown whether all of the accessory Ag + are present in solution. Studies which compare the MS-determined sizes of Ag N -DNAs with their crystallographic sizes are needed in order to probe the existence and role(s) of accessory Ag + in Ag N -DNAs and, more generally, to what degree HR-MS measurements of purified Ag N -DNA species can discern the size of the emissive cluster. It will also be important to clearly state the assumptions made when assigning the cluster size N of an Ag N -DNA, particularly in light of the aforementioned evidence that observed optical properties are strongly correlated to the numbers of neutral silver atoms N 0 determined by HR-MS and not necessarily the total silver atom number N tot .
Alternate possible cluster geometries and higher-order structures
We have primarily reviewed compositional and structural studies of Ag N -DNAs which are synthesized by chemical reduction and, notably, are stable under HPLC purification to enable accurate characterization. Smaller Ag 2 and Ag 3 clusters intercalated between base pairs of dsDNA can be synthesized by electrochemical means and exhibit ~300 nm fluorescence emission, 177-180 which supports a smaller size and/or different cluster geometry than the HPLC-purified Ag N -DNAs discussed here. The versatility of macromolecular cluster ligands like DNA may permit multiple classes of metal clusters of the same metal species, even for Ag N -DNAs synthesized by chemical reduction. Thus, cluster sizes and geometries other than the HPLC-purified ones discussed here likely exist but may be unstable under the solvent and high-pressure conditions required for chromatographic separation.
Conformation of the DNA template strand
In addition to cluster structure, the secondary/tertiary structures of the cluster's DNA templates are of interest. An understanding of this structure is also critical for schemes which integrate with DNA nanotechnology. 181 As before, we primarily review studies of purified samples or studies employing techniques which may yield isolated Ag N -DNA species, including microfluidic capillary electrophoresis, 49,182 gel electrophoresis, 59,116 and SEC. 38
The Petty group combined SEC and other analytical methods to discern tertiary structures of their developed Ag N -DNA sensors, which signal binding of DNA analytes by transforming nonfluorescent Ag 11 clusters on ssDNA templates into NIR-emissive clusters of twice the size. 38,183 SEC separates complexes by molecular size and shape, with larger products eluting more quickly. A difference in retention time between two products indicates differences in molecular size. To count the number of DNA strands, n s , which scaffold the NIR cluster, 10-thymine tails were appended to one end of the target DNA analyte. SEC shows that a 1 : 1 mixture of DNA analytes with and without tails splits the chromatogram into three peaks. This splitting can be interpreted as complexation of two DNA probes to form the NIR Ag N -DNA (Fig. 13A). This was one of the first demonstrations, apart from HR-MS, 53 of the formation of Ag N -DNAs stabilized by template strand dimers. 38 For a modified sensor scheme, alignment of thymine tails was further used to probe alignment of the two DNA strands stabilizing the NIR Ag N -DNA. 183 These clever experiments provide an alternate technique for inferring n s , which is especially useful for larger DNA complexes hosting an Ag N -DNA, which may not be stable even under gentle negative mode ESI-MS. Thymine tails were later used by Del Bonis-O'Donnell, et al., to separate a set of Ag N -DNA-based probes for Hepatitis A, B, and C in a single microcapillary electrophoresis protocol. 184
The Petty and Brodbelt groups recently determined and compared the binding sites of two different Ag 10 clusters to their DNA templates using activated electron photodetachment (a-EPD) MS. 121 One Ag 10 was stabilized by a 20-base strand which is single-stranded in the absence of silver (the subject of Fig. 8), 130 and the other by a 28-base strand which forms a hairpin in the absence of silver (Ag N -DNAs were studied without subsequent purification). 157 The DNA templates with and without Ag N were analyzed by a-EPD, using 193 nm irradiation to induce DNA fragmentation, followed by MS. Fig. 13B-E compares mass spectra of the fragmented DNA host strands with and without Ag N , showing that certain fragments are suppressed in the presence of the Ag N . The suppression of fragmentation for certain regions of the DNA templates was associated with binding of the nucleobases to the Ag N in these suppressed regions. For the ssDNA template, a remarkably short 4-base segment of CCTT was suppressed (Fig. 13C); in comparison to available crystal structures, 22,25,68,69 it is likely that this segment represents only part of the silver-ligated nucleobases. For the hairpin DNA template, a much longer 13-base segment was suppressed, most of which lies in the WC-paired hairpin stem in the absence of silver (Fig. 13E). This lends credence to the notion that Ag + can significantly reorganize DNA secondary structure. Future a-EPD studies could yield insights into other Ag N -DNAs.
Photophysical studies: probing excited luminescent and dark states of Ag N -DNAs
Compared to the currently growing understanding of Ag N -DNA structure, the luminescence process of Ag N -DNAs remains less well understood. Ag N -DNAs almost certainly luminesce through an allowed fluorescence-like process, as supported by 1-4 ns fluorescence decay times and quantum yields >0.1 for most purified Ag N -DNAs. 24,123,176,185,186 In contrast, phosphorescence-like emission of other metal clusters is characterized by much longer decay times and lower quantum yield values due to less-allowed or forbidden transitions. 4,187 However, the Ag N -DNA fluorescence process does differ from the simple Jablonski diagram of organic fluorophores. 188 Ag N -DNAs lack the characteristic vibronic shoulders of organic molecular fluorophores, 21,147 and their solvatochromic behavior is not well-described by the Onsager-based methods used to model many organic fluorophores. 189 Certain Ag N -DNAs retain surprisingly high quantum yields into the NIR, 190 while quantum yields of organic dyes diminish rapidly in this region. 191 Ag N -DNAs also have highly polarized excitation and emission due to well-defined transition dipole moments. 192 Finally, the process of indirect fluorescence excitation via the DNA bases, which produces the same color of fluorescence as direct excitation in the visible or NIR excitation band of the Ag N -DNA (Fig. 1B), 21,26 remains poorly understood. Here, we review spectroscopic studies of the photophysics of Ag N -DNAs, with a focus on purified Ag N -DNAs in more recent years. In order to ensure that measured photophysical properties are not affected by the presence of byproducts, such as Ag nanoparticles and nonfluorescent Ag N -DNAs, purification is essential to the preparation and analysis of these fluorophores.
Ultrafast studies of the Franck-Condon state
A limited number of experimental studies have probed the ultrafast dynamics that occur upon excitation of Ag N -DNAs to the initial excited state (Franck-Condon state). 26,27,193,194 Patel, et al., proposed the first phenomenological model describing the excitation process (Fig. 14), based on ultrafast transient absorption experiments performed on three unpurified red and NIR Ag N -DNAs (Fig. 15). 193 It was observed that a fraction of the population in the Franck-Condon state returned to the ground state with a time constant in the hundreds of fs, as seen by the ground-state recovery (Fig. 15A). Additionally, a rise component of similar time scale was attributed to formation of the emissive state. This emissive state then decays back to the ground state on a nanosecond timescale, as witnessed by the similar time-scales of the ground-state recovery and the typical ns fluorescence decay times measured by time-correlated single photon counting (TCSPC). 35,176,185 In addition, nanosecond transient absorption spectroscopy was performed (Fig. 15B).
[Fig. 15 caption: (A) Ground-state depletion kinetics; the depletion appears at negative ΔOD but is plotted as its absolute value, corrected for spectral overlap by subtracting the transient absorption contribution, based on the kinetics at 775 nm calibrated to the expected value from the peak curve fittings. The data were collected by exciting with a 100 fs Ti-sapphire laser at 1 kHz, then probing with a white-light continuum generated from the same laser; the excitation wavelength was tuned to the peak of the ground-state absorption. (B) Normalized femtosecond and nanosecond transient absorption spectra for Ag680; the sample was excited by 100 fs pulsed excitation, except for the long-delay-time curve, which was generated from excitation by a 7 ns pulsed laser. The dip in the spectrum around 800 nm is an instrumental artifact. 193 ]
Recently, Thyrhaug, et al., performed 2D electronic spectroscopy experiments 27 on a previously sized NIR-emitting Ag 20 -DNA. 53,196 Excitation into the Franck-Condon state led to ultrafast evolution of the Franck-Condon state into the emissive state, which then decayed on a nanosecond time-scale observed from TCSPC measurements. The transfer from the initially populated state to the emissive state occurred in about 140 fs, in line with the order of magnitude reported by Patel, et al. 193 Additionally, the Ag N -related absorption feature appeared to consist of two closely lying transitions, and a coherent excitation of both states occurred due to the short pulse width of the laser. Interestingly, for this particular Ag N -DNA, coherence was transferred to the emissive state and can be seen by oscillatory quantum beating features that dephased with a time constant of ~800 fs. Thus, after a few ps, all ultrafast processes were complete, and the only remaining process is the ~ns fluorescence. The dominant quantum beating mode frequency of 105 cm−1 is similar to Ag-Ag vibrational modes. 197
Dark states
The presence of a µs-lived dark state in Ag N -DNAs was first reported by Vosch et al. 8 Ag N -DNAs were immobilized in a polyvinyl alcohol (PVA) film and the fluorescence intensity was recorded as a function of time. Autocorrelations of the fluorescence intensity trajectories revealed µs blinking. A similar µs correlation time was observed by fluorescence correlation spectroscopy (FCS) in solution. FCS experiments are useful not only for determination of the decay time of the dark state and the quantum yield of dark state formation 198 but also for estimation of the molar extinction coefficient, by determining the number of emitters in a certain volume identified from a reference measurement. 8,199 While the exact nature of the dark state is nontrivial to determine, dark states have been reported in other studies 60,196,200 and may be common for most Ag N -DNAs. The quantum yields of dark state formation have been estimated to range from a few up to 25 percent. 8,60,196,201 When removal of molecular oxygen from the environment results in a lengthening of the dark state decay time, this is often a good indicator that the dark state is a triplet state. 202 For Ag N -DNAs, the DNA scaffold around the silver cluster might act as a physical barrier for this type of Dexter-type triplet state quenching, resulting in minimal or no effect of oxygen removal on the dark state decay time. 8 Richards, et al., demonstrated that the dark state formed by a primary excitation laser can be optically excited with a secondary NIR laser, resulting in depletion of this long-lived state and an overall increase in fluorescence intensity. 31 It was recently demonstrated that optical excitation of the dark state can transition the Ag N -DNA to the emissive state, resulting in optically activated delayed fluorescence (OADF). [28][29][30]195 This process is similar to typical reverse-intersystem crossing processes observed in organic dyes. 203,204
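As an illustration of the FCS-based dark-state analysis described above, the following sketch fits a simulated autocorrelation curve with the standard blinking-corrected model, G(τ) = G_diff(τ)·(1 − T + T·e^(−τ/τ_d))/(1 − T), where T is the dark-state population fraction and τ_d its decay time. This is a minimal sketch with illustrative numbers and a simplified diffusion term, not an analysis of the cited data.

```python
# Minimal sketch: extract dark-state fraction T and decay time tau_d from an
# FCS curve using the standard blinking correction. Parameters are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def fcs_model(tau, N, tau_diff, T, tau_d):
    g_diff = (1.0 / N) / (1.0 + tau / tau_diff)          # simplified diffusion term
    g_dark = (1.0 - T + T * np.exp(-tau / tau_d)) / (1.0 - T)
    return g_diff * g_dark

tau = np.logspace(-7, 0, 200)                            # lag times in seconds
g_meas = fcs_model(tau, N=5.0, tau_diff=1e-3, T=0.25, tau_d=1e-5)   # synthetic data
g_meas += np.random.default_rng(0).normal(0.0, 1e-4, tau.size)      # measurement noise

popt, _ = curve_fit(fcs_model, tau, g_meas, p0=[1.0, 1e-3, 0.1, 1e-6],
                    bounds=([0, 0, 0, 0], [np.inf, np.inf, 0.99, np.inf]))
print(f"N = {popt[0]:.2f}, tau_diff = {popt[1]:.1e} s, "
      f"T = {popt[2]:.2f}, tau_d = {popt[3]:.1e} s")
```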
OADF combined with time-gating provides background-free signal because the delayed fluorescence lies on the anti-Stokes side (shorter-wavelength side) of the secondary excitation laser, allowing any Stokes-shifted autofluorescence from the secondary laser to be suppressed with a short-pass filter in the detection path. Fig. 16 shows an example by Krause, et al., that demonstrates the OADF imaging concept. 30 Additionally, Krause, et al., showed that use of the secondary NIR laser only (blocking the primary excitation laser) yielded similar fluorescence which was linearly dependent on the excitation intensity. This process is termed upconversion fluorescence (UCF), 29,30 in analogy to the well-established upconversion processes in lanthanide-based emitters. 205
Emissive state
While one would expect a single emissive species to exhibit mono-exponential fluorescence decay, several HPLC-purified Ag N -DNAs with long DNA template strands (19-30 bases) exhibit multi-exponential fluorescence decay. 35,185,186 Because the solutions are purified prior to characterization and no shift is present in the steady-state emission as a function of excitation wavelength, a heterogeneous mixture of Ag N -DNA species can be excluded as the cause of this multi-exponential decay. The multi-exponential decay behavior can instead be explained by relaxation of the emissive state on a time-scale similar to the fluorescence decay. This effect, termed "slow" spectral relaxation, can be confirmed by time-resolved emission spectra (TRES) which show a gradual red-shift of the emission maximum on the nanosecond time scale (Fig. 17A). We note that the "slow" spectral relaxation is a minor part of the overall Stokes shift, with the majority of relaxation occurring on a timescale below the instrument response function (IRF). A consequence of "slow" spectral relaxation is that the average decay time increases as a function of emission wavelength (Fig. 17B). Furthermore, the decay-associated spectra (DAS) usually lead to spectra where the fastest decay time component tends to have positive amplitudes at shorter wavelengths and negative amplitudes (a rise) at longer wavelengths (Fig. 17C). 185 Only two processes can cause this effect: energy transfer or "slow" spectral relaxation. 188 Energy transfer can be excluded since, as stated above, there is no evidence for multiple independent emitters in the HPLC-purified Ag N -DNA solutions. Unlike small solvent molecules, which rearrange on picosecond timescales, the DNA template and its structurally bound water molecules require much longer, up to a few nanoseconds, to adapt to the new charge distribution of the Ag N -DNA in the emissive state. A similar effect was observed when a coumarin dye was embedded in an abasic site of dsDNA. 206 Emission spectral shifts could be observed from the femtosecond time scale up to tens of nanoseconds. Other parameters, e.g. changes to solvent viscosity or temperature, also affect the "slow" spectral relaxation. 176,185 If spectral relaxation occurs entirely within the time-scale of the IRF, the observed decay will be mono-exponential. This is the case for Ag N -DNAs stabilized by short, 9-10 base DNA strands, whose "slow" spectral relaxation is negligible at room temperature and in low viscosity solvents, 25,176,190 most likely because multiple short strands are more flexible and rearrange faster than one long oligomer. Spectral relaxation could be a useful tool to establish the rigidity of the DNA scaffold and its effect on the excited state of Ag N -DNAs. 35,51,185,186
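The signature of "slow" spectral relaxation in the decay fits can be made concrete with a small sketch: using the standard intensity-weighted average lifetime ⟨τ⟩ = Σa_i τ_i² / Σa_i τ_i, a fast component whose amplitude flips from positive (decay) on the blue edge to negative (rise) on the red edge produces an average decay time that grows with emission wavelength, as in Fig. 17B and C. The amplitudes and lifetimes below are illustrative, not fitted values from the cited studies.

```python
# Illustrative sketch: wavelength-dependent average decay time from biexponential
# fit parameters, <tau> = sum(a_i*tau_i^2) / sum(a_i*tau_i).
import numpy as np

def avg_lifetime(amps, taus):
    amps, taus = np.asarray(amps, float), np.asarray(taus, float)
    return np.sum(amps * taus**2) / np.sum(amps * taus)

tau_fast, tau_slow = 0.8, 3.5   # ns, hypothetical fit components
for label, a_fast in [("blue edge", +0.6), ("peak", +0.1), ("red edge", -0.3)]:
    mean_tau = avg_lifetime([a_fast, 1.0], [tau_fast, tau_slow])
    print(f"{label:9s}: a_fast = {a_fast:+.1f}  ->  <tau> = {mean_tau:.2f} ns")
```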
Excitation and emission transition dipole moments
Another interesting spectroscopic feature of Ag N -DNAs is their parallel excitation and emission transition dipole moments. Hooley, et al., employed defocused wide-field microscopy to investigate the transition dipole moments of a C 24 -templated Ag N -DNA immobilized in PVA. 192 By defocusing a common wide-field image, the emission of a single emitter displays a bilobed shape that depends on the orientation of its emission transition dipole. In order to determine both excitation and emission transition dipoles simultaneously, defocused wide-field microscopy was combined with rotation of the polarized excitation light. Then, the intensity of each emitter is directly correlated to the excitation efficiency of the Ag N -DNA. Maximum emission intensity was observed when the excitation light was aligned with the emission transition dipole, indicating that the excitation and emission transition dipole moments lie along a similar direction. The Vosch group has also observed further evidence of the alignment of excitation and emission transition dipole moments by time-resolved anisotropy measurements. 122,176,190 Three different Ag N -DNAs, two NIR-emitting and one red, displayed limiting anisotropy values close to 0.4, which indicates that the excitation and emission transition dipole moments are parallel (one example in Fig. 18).
Coherent two-photon excitation
Patel, et al., first reported two-photon excitation (800 to 1000 nm range) of Ag N -DNAs in 2008, in a study of four non-purified Ag N -DNAs with emission maxima at 620 nm, 660 nm, 680 nm and 710 nm. 207 For the 660 nm, 680 nm and 710 nm emitters, the two-photon emission exhibited quadratic dependence on excitation intensity, as expected, and one-photon and two-photon fluorescence decay times were similar, indicating that emission occurred from the same emissive state. The one- versus two-photon excitation spectra of the 620 nm emitter indicated that the cross-section maxima occurred at different wavelengths. The reported two-photon cross-sections ranged from 33 900 to 50 000 GM, roughly two orders of magnitude higher than typical organic fluorophores (e.g. 210 GM at 840 nm for Rhodamine B). 208 Yau, et al., reported a two-photon cross-section of ~3000 GM at 800 nm and quadratic dependence of emission on excitation intensity for an unpurified 650 nm emitter. 194 This 650 nm emitter was made by first creating an Ag N -DNA using ssDNA, followed by addition of an excess of a complementary strand with a guanine-rich section. Because few studies have probed two-photon excitation of Ag N -DNAs, future investigations on purified Ag N -DNAs could shed light on the origin of the very high two-photon cross-sections.
Informed design: decoding the sequence-color connection for Ag N -DNAs
The fascinating sequence dependence of Ag N -DNAs results from the nucleobase-specific interactions of DNA with silver (Section 2). The ability of DNA sequence to select for the sizes and optical properties of metal nanoclusters has attracted great interest due to the promise of highly customized fluorophores. 199,209 To date, it is likely that thousands of different DNA template strands have been reported, corresponding to Ag N -DNAs with wide-ranging fluorescence colors, Stokes shifts, quantum yields, chemical yields, photostabilities, and chemical stabilities. 70 Yet the connection between DNA sequence and Ag N -DNA properties has remained obscure.
Most studies select Ag N -DNAs by experimentally testing small numbers of DNA template strands rich in C or G. 32,46,128 One large-scale study by the Dickson group used DNA microarrays to identify fluorescent Ag N -DNAs, but only a few of the DNA template sequences were reported. 199 To fully realize Ag N -DNAs as programmable materials, it is crucial to "decode" the connection between DNA sequence and Ag N -DNA properties. Rational design of Ag N -DNAs is especially challenging due to the astronomical number of possible DNA template sequences and the complex connection between Ag N -DNA color and DNA sequence. Ag N -DNA templates are typically 10-30 base oligomers. Because there are 4^L distinct L-base sequences of the four natural nucleobases, a 30-base Ag N -DNA template must be chosen from 4^30 (~10^18) possible sequences. While in some cases subtle sequence changes can dramatically shift fluorescence, 65,169 in other cases different DNA sequences can stabilize Ag N -DNAs with the same emission wavelength. 57 To make matters more complex, some DNA sequences can stabilize different types of fluorescent Ag N clusters, 57 with the yields of each cluster species possibly depending on synthesis method and/or Ag:DNA stoichiometry. First-principles computational methods have not yet matured sufficiently to model the structures of realistic Ag N -DNAs, let alone their accurate electronic properties. Small-scale studies of DNA sequences with constrained patterns 119,169,196,[210][211][212] have been useful for developing a few Ag N -DNAs with well-controlled properties but are limited in their applicability to the majority of reported Ag N -DNAs. Here, we review large-scale experimental studies of the Ag N -DNA sequence-color connection for 10^3 DNA strands, in which machine learning enables predictive design and provides new physical insights.
Large-scale studies of sequence dependence
The combinatorial nature of DNA makes data science well-suited to study how DNA sequence selects Ag N -DNA properties. Copp, et al., have pioneered high-throughput experiments together with supervised machine learning (ML) to understand how DNA sequence selects for Ag N -DNA fluorescence emission and to predict new templates for optimized Ag N -DNAs. The methods described here have uncovered Ag N -DNAs which are the subjects of later detailed studies. 23,69,176,190 For readers seeking to learn more about ML, we recommend a tutorial review by Domingos 213 and a review of ML for soft matter by Ferguson. 214
To train a ML algorithm to output Ag N -DNA fluorescence properties (or whether any fluorescent product can be stabilized) given an input DNA template sequence, one must first amass a data library connecting DNA sequence to Ag N -DNA fluorescence spectra for hundreds to thousands of sequences. This data cannot be mined from the literature because (i) synthesis and characterization methods vary widely, prohibiting isolation of the effects of DNA sequence from other experimental parameters, and (ii) while ~75% of DNA sequences are unsuitable for templating fluorescent Ag N -DNAs, these "negative" DNA sequences are rarely reported. 57 The absence of negative sequences from the literature is problematic because effectively learning what makes a suitable DNA template for brightly fluorescent Ag N -DNAs also requires knowledge of what does not make a suitable template.
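The sequence-space arithmetic quoted above is easy to verify; the short computation below reproduces the ~10^18 figure for 30-base templates.

```python
# Number of distinct L-base DNA templates built from the four natural bases.
for L in (10, 16, 20, 30):
    print(f"L = {L:2d}: 4**L = {4**L:.3e} sequences")
# L = 30 yields ~1.15e18, the "astronomical" design space quoted above.
```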
To enable ML for Ag N -DNAs, Copp, et al., developed high-throughput Ag N -DNA synthesis and characterization in well plate format using robotic liquid handling followed by rapid fluorimetry, 57 via universal UV excitation of all Ag N -DNA products through the nucleobases (Fig. 19 part I). 21 Because fluorimetry is performed one day, one week, and four weeks after synthesis, this data set allows ML studies to focus only on time-stable Ag N -DNAs. Experiments are normalized using a well-studied Ag N -DNA control, 24,35,122,185 for direct comparison of fluorescence wavelength and intensity among all data in the library. To date, we have reported on >3000 distinct DNA template sequences, most 10 bases long, for Ag N -DNA synthesis in 10 mM NH 4 OAc aqueous solution of neutral pH. 23,111,175,215
Effective ML requires appropriate choice of "feature vectors," which are the parameterizations of training data provided as inputs to the ML classifier(s). For Ag N -DNAs, feature vectors should represent the salient properties of a DNA sequence which determine how sequence is mapped onto Ag N -DNA fluorescence. Because these properties are not well-known (otherwise ML would be unnecessary), this feature engineering process is a critical step in the ML workflow 213 and has led to new physical insights into Ag N -DNAs.
Early work used training data for 684 randomly generated 10-base DNA sequences to learn to predict Ag N -DNA fluorescence brightness given an input template strand sequence. 215 Using integrated fluorescence intensity, I int , as a metric of brightness, sequences with the top 30% of I int values were defined as "bright" and the bottom 30% of I int values defined as "dark." Then, a ML algorithm called a support vector machine (SVM) was trained to distinguish bright and dark sequences (Fig. 19 part II). It was found that the SVM most accurately predicted a sequence's class if feature vectors were engineered to quantify the occurrence of certain DNA subsequences called "motifs," which were identified by bioinformatics approaches to be correlated with one class but not the other. 216 The resulting trained SVM's classification accuracy was 86%, as determined by cross-validation (a process which trains on most of a training data set and reserves a small ~10% portion as a "test set" to assess SVM performance on data which the ML classifier has not yet encountered). New DNA templates for bright Ag N -DNAs were designed using the bright-correlated motifs as building blocks and then screened by the trained SVM to choose those predicted as most likely to be bright. Of the designed DNA templates, 78% stabilized bright Ag N -DNAs, as compared to 30% of the initial random sequences. 215 This early work pointed to the role of certain DNA base motifs in stabilizing Ag N -DNAs, which agreed with later findings that not all DNA bases coordinate the Ag N . 22,25,69 While predicting Ag N -DNA fluorescence intensity increases the likelihood of selecting fluorescent Ag N -DNAs three-fold, this simple method also prefers red-fluorescent Ag N -DNAs over green Ag N -DNAs. 215
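The bright/dark workflow described above can be summarized in a short, hypothetical sketch: each template is featurized by counts of candidate motifs, an SVM is trained on the two classes, and accuracy is estimated by cross-validation. The motifs, sequences, and labels below are invented stand-ins (using scikit-learn), not the mined motifs or data of the actual study.

```python
# Hypothetical sketch of motif-count featurization + SVM + cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

MOTIFS = ["CCC", "GGG", "CCA", "TTT", "AAC"]    # invented candidate motifs

def featurize(seq):
    # feature vector = number of (overlapping) occurrences of each motif
    return [sum(seq[i:i + len(m)] == m for i in range(len(seq) - len(m) + 1))
            for m in MOTIFS]

# invented training data: (10-base template, 1 = "bright", 0 = "dark")
data = [("CCCACCCTAA", 1), ("CCACCCGGTA", 1), ("TTTGATTTCA", 0), ("GTTTATTTGA", 0),
        ("CCCTACCCAC", 1), ("ATTTCGTTTA", 0), ("ACCCACCATA", 1), ("TTATTTGTTC", 0)]
X = np.array([featurize(s) for s, _ in data])
y = np.array([label for _, label in data])

clf = SVC(kernel="rbf", C=1.0)
print("cross-validation accuracy:", cross_val_score(clf, X, y, cv=4).mean())
```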
It is ideal to instead predict both brightness and color from an input DNA sequence. To achieve this, Copp, et al., used physically motivated Ag N -DNA classification based on the known correlation between Ag N -DNA color and cluster size. The multi-modal distribution of Ag N -DNA fluorescence colors in the visible spectrum was shown to arise due to the magic numbers of these clusters: Ag N -DNAs in the 500-570 nm abundance peak have N 0 = 4 neutral Ag atoms, while Ag N -DNAs in the 600-670 nm abundance peak have N 0 = 6 (Fig. 20A). 57 Because "Green" and "Red" Ag N -DNAs have distinct cluster sizes, there is likely a fundamental difference between the template sequences for these two cluster sizes. To learn to distinguish between DNA sequences based on cluster structural differences, a training data set of ~2000 10-base DNA sequences was separated into four color classes: the three shown in Fig. 20A ("Very Red" is defined as the high-wavelength histogram shoulder, which may signal a different cluster structure) and a "Dark" class similar to the one previously defined. 215 Because the numbers of sequences in these classes are unequal, with far more Dark sequences than Green sequences (Fig. 20B), it is critical to apply subsampling to balance the classes prior to ML, ensuring training on equal numbers of sequences from each class. 217,218 Feature vectors were then constructed using DNA motif mining to identify color-correlated motifs, followed by feature selection 219 to reduce the list of selected motifs to those most important for classification; this critical step reduces problems which can arise from overfitting. We note that both data balancing and feature selection should generally be applied when using ML for real-world materials systems.
Because SVMs are inherently binary classifiers, a "one-versus-one" approach was used to distinguish the four color classes. Six different SVMs were trained to discriminate between the six possible pairs of classes (cross-validation scores, which represent the accuracy of classification, are given in Fig. 20C). To experimentally test the performance of the trained classifiers, new DNA template sequences were designed for the two least abundant classes, Green and Very Red. First, color-correlated DNA motifs for the desired class were selected from a probability distribution weighted by intensity and placed into an initially empty DNA sequence. Second, designed candidate DNA templates were screened by the trained SVMs to estimate the probability of falling within the desired color class. Finally, the templates corresponding to the top 180 probabilities were selected for experimental testing. With this method, the likelihood of selecting a Very Red Ag N -DNA increased by nearly 330%, and the likelihood of selecting a Green Ag N -DNA increased by >80%. 175 This method was later modified to enable design of Ag N -DNA templates of any strand length, and it was found that training data of only 10-base sequences still enabled effective prediction of Ag N -DNA color for other lengths of DNA templates, up to the maximum 16-base length tested (Fig. 20D). 111 This suggests that there exist certain DNA motifs which are selective of cluster type, and thus color, for a range of DNA template lengths, making ML design approaches for Ag N -DNAs much more promising. We note that thus far, all Ag N -DNAs stabilized by DNA templates of <19 bases have been found to be "strand dimers" which contain two template strands per cluster; 23,24,53,57 it is possible that longer DNA templates, which have not been designed by ML, may have somewhat different DNA sequence rules for Ag N -DNA color selection.
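The class-balancing step is simple to sketch: subsample every color class down to the size of the rarest class before training, so that the classifier is not biased toward the abundant Dark class; scikit-learn's SVC then performs the pairwise one-versus-one classification internally. The feature vectors and class counts below are invented for illustration.

```python
# Hypothetical sketch of subsampling to balanced classes + one-versus-one SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def balance_classes(X, y):
    # keep an equal, randomly chosen number of samples from each class
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()
    idx = np.concatenate([rng.choice(np.flatnonzero(y == c), n, replace=False)
                          for c in classes])
    return X[idx], y[idx]

X = rng.normal(size=(300, 6))                       # invented feature vectors
y = np.concatenate([np.zeros(180), np.ones(40),     # unbalanced labels,
                    2 * np.ones(50), 3 * np.ones(30)]).astype(int)  # 0=Dark..3=Very Red

Xb, yb = balance_classes(X, y)
clf = SVC(decision_function_shape="ovo").fit(Xb, yb)  # pairwise (one-versus-one) SVMs
print("balanced class counts:", np.bincount(yb))      # -> [30 30 30 30]
```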
In addition to improving design efficiency, ML provides key insights into how DNA sequence selects silver cluster size, and thus fluorescence wavelength. Fig. 21B shows the average base composition of the motifs identified by feature selection to be most predictive of Dark, Green, Red, and Very Red sequences. 175 To summarize, thymines are strongly correlated with no fluorescence. Adenines show a preference for smaller and Green Ag N -DNAs, while guanines, particularly consecutive guanines, are correlated with long-wavelength fluorescence (associated by MS with clusters containing more Ag atoms). Cytosines are strongly selective for fluorescence brightness but less selective of color than A or G.
To better understand these correlations, we compare to HR-MS studies of DNA-Ag + complexes (Section 2), which are the precursors of Ag N -DNAs prior to reduction by NaBH 4 . Fig. 3A shows the distribution of Ag + attached to single DNA homobase strands or pairs of strands, and Fig. 5A shows the same distribution for Ag + -mediated dimers of C or G strands with central base mutations. 54,55 Because homo-thymine strands only weakly associate with Ag + , thymine-rich DNA sequences may be unsuitable (at neutral pH) to host fluorescent Ag N due to (i) too few Ag atoms recruited prior to reduction, resulting in insufficient silvers to form a cluster, and/or (ii) little to no coordination with the cluster. The greater occurrence of T's in green Ag N -DNA templates further supports this notion, since these clusters are smaller in size and may require fewer nucleobase coordination sites. Adenine homobase strands bind to a few Ag + , which may support formation of smaller N 0 = 4 clusters with green emission. In comparison to A and T, C- and G-rich homobase strands can form Ag + -mediated duplexes with ~1 Ag + per base pair, providing more Ag atoms during cluster growth and supporting nucleobase-silver binding in the Ag N -DNA. Interestingly, duplexes of G homobase strands with a single central A, C, or T base mutation can harbor ~60% more Ag + than G homobase polymers with no mutation. This significant increase in Ag + attachment as compared to C-rich strands, and the structural differences in the DNA secondary/tertiary structures supported by IMS-MS of these strands, 108 could explain why consecutive G's are strongly associated with Very Red Ag N -DNAs. 175
6. Supra-cluster assembly: towards applications in photonics and sensing
Structural DNA nanotechnology harnesses DNA as a programmable building block for self-assembled nanostructures. 220 It is promising to combine sequence-controlled Ag N -DNAs with DNA nanotechnology for the realization of precise metal cluster arrays, which could be envisioned as functional sensors and photonic devices. These achievements will require robust strategies to effectively embed Ag N -DNAs into larger WC-paired architectures. Here, we review efforts to harness DNA self-assembly for multi-Ag N -DNA organization (many groups have incorporated single Ag N -DNAs within WC-paired DNA structures to build biomolecular sensors, 13,14,33,37,73,183 which were recently reviewed elsewhere 74 ).
O'Neill, et al., first reported decoration of a DNA nanostructure with multiple Ag N -DNAs. 100 A mixture of green and red clusters was synthesized onto ssDNA hairpin protrusions programmed into DNA nanotubes (Fig. 22A). Without hairpins, the nanotubes did not foster cluster growth, consistent with early findings that dsDNA is an unsuitable Ag N -DNA template. 11
The authors noted that Tris buffers typical of DNA self-assembly schemes were unsuitable for chemical synthesis of fluorescent Ag N -DNAs; this incompatibility is commonly faced in supra-cluster assembly of Ag N -DNAs. Orbach, et al., demonstrated Ag N -DNA synthesis on mm-scale DNA wires with hairpin protrusions. 118 The resulting fluorescence colors depended on salt concentration, pointing to the complexity of controlled cluster synthesis on complex DNA scaffolds. The authors then incorporated Ag N -DNA-stabilizing hairpins into a hybridization chain reaction (HCR), with wire formation occurring only after addition of an additional DNA strand (Fig. 22B). Ag N -DNAs have also been incorporated into DNA hydrogels. 221
Only two works have assembled purified Ag N -DNAs in order to approach atomic precision over cluster size in multi-cluster assemblies. Schultz, et al., developed DNA "clamps" for dual-color Ag N -DNA pairs which exhibited Förster resonance energy transfer (FRET) between donor and acceptor Ag N -DNAs. 222 DNA clamps were designed by appending complementary tails of A and T bases 13 to the templates for a green-emissive Ag N -DNA donor (Fig. 23A) and a red-emissive Ag N -DNA acceptor (Fig. 23B), which have a 6 nm Förster radius. 188 After HPLC purification of the individual Ag N -DNAs, various geometries of clamps were formed by WC pairing. For clamps where donor and acceptor were held within <6 nm, donor excitation produced acceptor emission (e.g. Fig. 23C), with >60% FRET efficiency estimated by donor quenching (Fig. 23D) and assuming no isolated donor is present 186 (use of an excess of acceptor increased the likelihood that all donors were in the paired state). FRET could be repeatedly cycled by heating and cooling, corresponding to cyclic melting and reforming of the DNA clamp (Fig. 23E). The clamp design is somewhat general and was demonstrated with a different acceptor cluster. Notably, Schultz, et al., found that HPLC purification was essential to observing FRET due to the low chemical yield of Ag N -DNA synthesis; without purification, very few clamps contain both donor and acceptor clusters. 222 Recently, Zhao, et al., observed FRET between donor and acceptor Ag N -DNAs without prior purification by synthesizing Ag N -DNAs within surfactant reverse micelles with 5-10 nm diameters. 223 By this method, a fraction of the micelles contained both donor and acceptor clusters confined together within a "nanocage" whose size is of the length scale of the Förster radius of the pair. This method also enabled spectroscopy-based measurement of the micelle diameter in agreement with more laborious cryo-electron microscopy, suggesting that Ag N -DNA-based FRET may be a promising route to size measurement of biological "nanocage" structures. 223
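The donor-quenching estimate used above follows from two standard relations: the FRET efficiency E = 1 − F_DA/F_D (donor emission with and without acceptor) and the Förster dependence E = 1/(1 + (r/R0)^6). A worked sketch with illustrative fluorescence values and the quoted R0 = 6 nm:

```python
# Worked sketch: FRET efficiency from donor quenching, then donor-acceptor
# distance from the Forster relation. F_D and F_DA are illustrative values.
R0 = 6.0                        # nm, Forster radius quoted for the clamp pair
F_D, F_DA = 1.00, 0.35          # donor emission without / with acceptor

E = 1.0 - F_DA / F_D            # efficiency from donor quenching (assumes all
r = R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)   # donors paired, as in the cited work)
print(f"E = {E:.2f} -> r = {r:.1f} nm")    # ~0.65 and ~5.4 nm here
```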
Following HPLC purication of the Ag N -DNA, multi-cluster assembly occurs by WC pairing of linker and docker strands (Fig. 24B). Ag N -DNAs with atomically selected sizes are unperturbed aer binding to the nanotubes, as supported by unchanged spectral shapes aer assembly (Fig. 24C). Future studies are needed to conrm labelling efficiency. The method is general to multiple sizes of Ag N -DNAs and linker sequences 200 and could generalize to many types of DNA scaffolds, for precise control over both cluster geometry and orientation. Recently, Yourston, et al., thoroughly studied Ag N -DNAs formed on RNA nanorings with DNA "arms." 224 Because in situ synthesis was used to decorate arms with Ag N -DNAs, it is uncertain how many Ag N were harboured on a given nanoring. Interestingly, placement of the ssDNA region on which Ag N presumably formed affected not only uorescence spectra of the clusters, indicating variations in size/shape and possibly rigidity, but also time stability: clusters formed within the nanoring were much more time stable, perhaps due to enhanced protection from redox reactions which can blue-shi Ag N -DNA emission over time. 35,225 Studies such as these will be important for assessing the practicality of DNA-based Ag N -DNA arrays as functional materials. The heterogeneous mixture of products and low chemical yield of Ag N -DNA synthesis can prohibit precise Ag N -DNA arrays by direct synthesis onto a DNA nanostructure. Additionally, we pointed out in a previous section that one should also not a priori assume that the envisioned WC base pairing of the DNA nanostructure will be maintained once silver is introduced. Assembly methods which instead rely on WC pairing aer purication have their own limitations due to the limits of purity aer HPLC and due to labelling efficiency of the DNA nanostructure by binding of Ag N -DNA linkers to each docker site; this may be overcome by adding an excess of Ag N -DNAs. Much more work is required to realize precise cluster arrays by either method. Future directions and challenges Signicant recent progress has been made in understanding the structure-property relations of Ag N -DNAs and achieving their rational design. These advances were enabled by new experimental and computational strategies to purify and size Ag N -DNAs, to select new DNA templates for especially uorescent Ag N -DNAs, and to crystallize Ag N -DNAs for structure determination, as discussed in this review. Here, we discuss outstanding challenges in this eld and areas of especial promise, which we hope will catalyze new research directions in this important eld. Near-infrared emissive Ag N -DNAs Nearly all reported Ag N -DNAs exhibit l em in the 500-750 nm range. 57 The most well-studied NIR Ag N -DNAs have been developed by Petty and coauthors. 15,33,60,183,196 High-throughput studies by Copp, et al., uncovered additional NIR emissive clusters, 175 and the Vosch group has characterized several of these recently discovered NIR Ag N -DNA, including one with an unusually large Stokes shi 176 and one with an impressively high 73% quantum yield. 190 These quantum yields are competitive with organic uorophores, making Ag N -DNAs promising for development of biolabels in the NIR tissue transparency windows. 191 Until recently, only two Ag N -DNAs with l em > 800 nm were reported, 196 and it was assumed that NIR Ag N -DNAs are inherently rare compared to their visibly emissive counterparts. 
However, because Ag N -DNA studies employ UV-Vis optimized photodetectors commonly used for spectroscopy in the chemical and biological sciences, for which sensitivity is poor above ~800 nm, it is possible that many NIR-emissive Ag N -DNAs have simply gone undetected. Swasey and Nicholson, et al., developed a custom NIR well plate reader equipped with an InGaAs detector to search for NIR fluorophores in high throughput (Fig. 25A). 226 Using this tool to scan ~750 Ag N -DNA samples, 161 previously unidentified NIR-emissive Ag N -DNAs were uncovered (Fig. 25B). This huge abundance of NIR products was unexpected because the scanned Ag N -DNAs were stabilized by randomly selected DNA template sequences 57 or by oligomers previously designed for visible fluorescence. 175,215 Among the newly discovered Ag N -DNAs were the longest-wavelength emissive Ag N -DNA to date, with 999 nm peak fluorescence emission 23 (Fig. 25C), and the largest Ag N -DNA to date, an Ag 30 with 12 Ag 0 and 18 Ag + (Fig. 25D). A directed search using the informatics methods described in Section 5 is highly promising for the discovery of additional NIR Ag N -DNAs.
Ag N -DNA photophysics
Both experimental and computational efforts are needed to further our understanding of the fluorescence process in Ag N -DNAs, including the nature of the initial excited state, the relaxation process(es) leading to the origins of the Stokes shifts for these emitters, and the roles of both the Ag N and the surrounding nucleobases in governing excited state properties. While a zoo of Ag N clusters stabilized by different ligands has been described in the literature, their optical properties largely differ from the distinctive features of Ag N -DNAs described in Section 4. Zeolite-stabilized Ag N clusters display mainly strong UV absorption bands, with emissive excited-state decay stretching from picoseconds to the microsecond range. 227,228 Similarly, the Mak group recently reported an intriguing octahedral silver cluster with 95% fluorescence quantum yield and microsecond-scale fluorescence decay times caused by thermally activated delayed fluorescence. 229 Such microsecond-scale fluorescence decay times have not yet been observed for purified Ag N -DNAs; only microsecond-lived dark states of Ag N -DNAs have been reported.
The unusual rod-like geometry of HPLC-stable Ag N -DNAs, 24 which has been confirmed in recent crystal structures of NIR Ag N -DNAs, 25,68,69 makes Ag N -DNAs particularly interesting experimental systems for the study of collective electronic excitations in molecular-like materials. 148,[151][152][153] With only N 0 = 4-12 effective valence electrons in the Ag N -DNAs characterized thus far by HR-MS, 23,24,57 Ag N -DNAs lie well below the atomic size identified as the onset of plasmonic excitations in monolayer-protected gold clusters. 230 However, the high aspect ratios of some identified Ag N -DNAs 147 may make certain Ag N -DNAs better approximated as atomic silver rods, which computational studies have shown to exhibit plasmonic-like excitations. 136,139,144 Future studies probing the ultrafast excited state dynamics of Ag N -DNAs are needed to better understand whether, or to what degree, collective electronic excitations are involved in the luminescence process of Ag N -DNAs.
Another feature of Ag N -DNA photophysics which remains poorly understood is the exact nature of the UV excitation process, which for the case of pure Ag N -DNA solutions leads to the same fluorescence spectral shapes as visible/NIR excitation (Fig. 1B). 21
Due to DNA's complex and elegant excited state dynamics, 231 it is possible that DNA imbues the Ag N with similar properties. Berkadin, et al., have computationally examined the UV excitation process in Ag N -DNAs using molecular dynamics (MD) to simulate thread-like silver clusters in a DNA duplex, followed by DFT-based tight binding to calculate the electronic dynamics of the relaxed structure. Interestingly, UV excitation results in a net negative charge transfer to the cluster, due to promotion of electrons from the localized π states of the DNA to the cluster. 232 Such simulations performed on the recently reported crystal structures would be of great interest. 22,25,68,69 Furthermore, recent experiments by the Kohler group on Ag + -nucleobase complexes are also promising for enhancing our understanding of this aspect of Ag N -DNA photophysics, 113,233 with their very recent study finding evidence for an extremely long-lived, ~10 ns excited state in a C 20 -Ag + -C 20 duplex.
Rational sensor design
Many chemical and biomolecular sensing schemes employing Ag N -DNAs have been developed, such as NanoCluster Beacons, 32,39 ratiometric sensors, [234][235][236] and microRNA sensors 14,119 (a more complete list is given in a past review 74 ). Designing these sensors is extremely challenging, and designs may not generalize because the silver clusters are not confined only to the expected regions of a probe. 234 Further, the mechanisms underlying the function of these sensors remain uncertain in most cases, although color and brightness changes are likely due to restructuring of the Ag N -DNAs. 34,62,138 Recent efforts have focused on strategies to improve the sensitivity and selectivity of Ag N -DNA sensors, such as by addressing the low chemical yield of these clusters. 237 Due to the complexity of designing Ag N -DNA sensors, we propose that high-throughput experimentation combined with machine learning approaches may be a useful path forward. The Yeh group has recently pioneered high-throughput screening of Ag N -DNA sensors, which may significantly expedite progress in this area. [238][239][240]
Purified Ag N -DNAs could also serve as sensitive nanophotonic sensors. The photophysical properties of pure Ag N -DNAs have been shown to exhibit sensitivity to temperature, 176,185 refractive index, 147,189 and viscosity. 176 Combined with advancements in the ability to pattern DNA nanostructures with Ag N -DNAs, it may be possible to design sensors which colocalize Ag N -DNAs with analytes of interest for nanoscale measurements.
Non-natural polynucleic acids as cluster templates
While DNA has been well-studied as a template for silver clusters, and RNA to a lesser extent, 46 much less is known about the suitability of non-natural polynucleic acids to template silver clusters. 241 Because RNA is less flexible than DNA, it has been noted that RNA may be less suitable as a scaffold for Ag N if significant flexibility of the oligomer ligands is required for a given cluster geometry. 69 Synthetic polynucleic acids could expand cluster structures and geometries, enhance stabilities, and imbue added functionalities. In addition to the four natural nucleobases, numerous artificial nucleobases have well-studied affinity for silver and other metals. 77-80 A large range of fluorescent nucleotide analogues is currently available, with more under active development.
[242][243][244] These bases could shift the universal UV excitation peak into the blue region of the visible spectrum, and FRET experiments could help elucidate the energy transfer processes in Ag N -DNAs and even reveal the distances and proximity of selected nucleobases to the Ag N cluster. Also of interest are chemical modifications developed for therapeutics to reduce enzymatic nucleotide digestion, 245 which have been reported to template Ag N -DNAs, 246 and other backbone modifications which would influence ligand conformation and, therefore, the possible stabilized cluster geometries. Future studies are needed in this promising area.
Uncharacterized toxicity and biocompatibility of Ag N -DNAs
Ag N -DNAs are often touted as nontoxic and biocompatible fluorophores, 64,65,70-74 but very few studies have established these properties. 247,248 While Ag + is certainly less toxic than other heavy metals that compose luminescent nanoparticles such as quantum dots, 249 it is also a common environmental metal pollutant. 250 Ag nanoparticles can break down in the body in a range of different ways, resulting in toxicities due either to the nanoparticles themselves or to Ag + and silver salts. Ag nanoparticles are also finding use as anti-cancer therapeutics, 251 adding further complexity to our understanding of the toxicity of Ag N -DNAs. In-depth studies of the toxicities of Ag N -DNAs and their uptake and possible clearing from tissues and organisms are needed to advance their applications in the biomedical sciences and to ensure environmentally responsible use and disposal.
Enhancing stability
While Ag N -DNAs are often touted as extremely photostable and/or chemically stable, degradation of Ag N -DNAs in biologically relevant solutions and in the presence of living cells is a significant hindrance to their practical use in bioimaging. 252 To overcome this challenge, Jeon, et al., encapsulated Ag N -DNAs within silica nanoparticles, significantly increasing the cluster chemical stability. 253 The encapsulated Ag N -DNAs can also be used to monitor the stability of their silica nanoparticle hosts in various biological media. 254 In situ synthesis of Ag N -DNAs within DNA hydrogels can improve photostability, likely by shielding clusters from oxidative degradation. 255 Lyu, et al., reported significantly enhanced chemical stability and increased cellular uptake of Ag N -DNAs modified by cationic polyelectrolytes. 252
Conclusions
Ag N -DNAs lie at the unique intersection of metal cluster science and DNA nanotechnology, combining the atomic precision of ligand-stabilized metal clusters with the sequence programmability of DNA nanomaterials. Their photophysical properties also provide a window into the regime between the behavior associated with single small molecules and the behavior associated with nanoparticles. For these reasons, the study and engineering of Ag N -DNAs are both extremely challenging and extremely promising. Here, we have reviewed recent advances in the fundamental understanding of these nanoclusters, with a focus on studies of purified Ag N -DNAs with chromatographically selected sizes. The latest (and evolving) findings on Ag N -DNA structure and the nature of the DNA-silver interaction have been discussed. Photophysical studies, particularly of purified Ag N -DNAs, have been summarized.
The current understanding of how DNA sequence selects for cluster size and optical properties has been reviewed, as have emerging methods for predictive design of Ag N -DNAs and their larger organization into multi-cluster arrays. We provide perspectives on emerging areas of interest and significant unanswered questions related to these fluorescent clusters in the hopes of stimulating researchers to explore these fascinating nanomaterials.
Conflicts of interest
There are no conflicts to declare.
Flexures for Kibble balances: minimizing the effects of anelastic relaxation
We studied the anelastic aftereffect of a flexure being used in a Kibble balance, where the flexure is subjected to a large excursion in velocity mode, after which a high-precision force comparison is performed. We investigated the effect of a constant and a sinusoidal excursion on the force comparison. We explored theoretically and experimentally a simple erasing procedure, i.e. bending the flexure in the opposite direction for a given amplitude and time. We found that the erasing procedure reduced the time-dependent force by about 30%. The investigation was performed with an analytical model and verified experimentally with our new Kibble balance at the National Institute of Standards and Technology employing flexures made from precipitation-hardened Copper Beryllium alloy C17200. Our experimental determination of the modulus defect of the flexure yields 1.2×10−4. This result is about a factor of two higher than previously reported from experiments. We additionally found a static shift of the flexure's internal equilibrium after a change in the stress and strain state. These static shifts, although measurable, are small and deemed uncritical for our Kibble balance application at present. During this investigation, we discovered magic flexures that promise to have very little anelastic relaxation. In these magic flexures, the mechanism causing anelastic relaxation is compensated for by properly shaping and loading a flexure with a non-constant cross-section in the region of bending.
Introduction
Work led by T.J. Quinn, C.C. Speake, and colleagues at the International Bureau of Weights and Measures (BIPM) in the late 1980s and early 1990s popularized flexures for use in weighing instruments [1,2]. The team at the BIPM showed that flexures [3] are superior to knife edges, the traditional choice for the main and auxiliary pivots in beam balances and mass comparators. Two planes come together in a single line to form a knife edge. However, zooming in to a cross-sectional view, the meeting point is not a sharp corner but rather a curve with a finite radius [4]. This has two negative consequences affecting the location of the rotation axis for such a pivot. (1) As the knife edge moves, it effectively rolls, subtly altering the lever-arm ratio of the beam and hence changing its sensitivity and ultimately limiting the performance of the balance. (2) The Hertzian contact between the knife and the flat and, hence, the radius of curvature of the knife, depends on the load supported by the pivot. And, as Quinn [2] notes, anelasticity and plastic deformations, as well as friction in this contact, cause zero-point drift of the balance. It is believed that these effects contributed to the non-stationarity of the data obtained with the third-generation Kibble balance at the National Institute of Standards and Technology (NIST) [5,6].
Replacing the central knife with a flexure prevents the problems discussed above, as the flexure can be designed to have a well-defined rotation axis. A drawback of flexures, which Quinn, Speake, and coworkers were aware of and have investigated thoroughly, is the anelastic relaxation, which is also known as the anelastic aftereffect.
The simplest model of a flexure can be thought of as a perfectly elastic spring in parallel with a lossy spring, usually described as a spring in series with a viscous element, referred to as a dashpot, see Fig. 1.
The combination of a spring and a dashpot forms what is known as a Maxwell unit, and when this Maxwell unit is combined in parallel with an ideal spring, it constitutes the Maxwell model in material science. A material that obeys the Maxwell model is called a Maxwell solid. When a sudden strain is applied at t = 0 to a Maxwell solid, a significant fraction of the stress response is immediate, but the Maxwell unit contributes an exponentially decaying stress with a time constant that is given by the ratio of the viscosity to the elastic modulus of the Maxwell unit.
In mass comparators, this anelastic aftereffect can be kept small by avoiding large excursions of the flexure. This can be achieved by limiting its mobility with mechanical hard stops that prevent the balance mechanism from deflecting significantly during mass placement [2].
For Kibble balances, however, the story is different: large motions cannot be avoided, because two measurement modes are required, force and velocity mode [7]. The former is similar to weighing with a mass comparator, with the difference that the weight of a test mass is compared directly to a magnetic force produced by a coil. In the latter, the current-to-force coefficient of the coil is determined by sweeping the coil through the magnetic field while simultaneously measuring the induced voltage and the coil's velocity.
While it is not necessary to use the same mechanism for moving the coil and weighing the mass, it is advantageous to do so. This observation was made by Kibble and Robinson [8] and shall be abbreviated as Kibble-Robinson theory (KRT). Interestingly, the first two Kibble balances ever built [9,10], one at the National Physical Laboratory (NPL) in the United Kingdom, the other at the National Institute of Standards and Technology in the United States of America, followed the fundamental principles of KRT, i.e., using the same mechanism for weighing and moving, before the principle was known as such. The balances at NPL and NIST used a beam and a wheel mechanism, respectively. Both were supported by a central knife edge since large motions were necessary. In both experiments, the travel of the coil aligned with the axis of local gravitational acceleration was several centimeters, requiring rotation about the central pivot of several degrees, and flexures were not deemed suitable for this application.
Although flexure-based weighing cells have been used in Kibble balances, most of these designs ignore KRT, because a different mechanism is used to move the coil than to perform the precise weighing. Examples include the Kibble balances built in Switzerland, Korea, and at the BIPM [11][12][13]. Not taking advantage of KRT may come at a performance cost: a balance that outperforms the original NPL balance designed by Kibble and Robinson still needs to be built. Additionally, a larger measurement uncertainty is not the only penalty for these designs. The requirement for multiple mechanisms increases the mechanical complexity and escalates the engineering, manufacturing, and operating costs.
A flexure-based Kibble balance that obeys KRT is the panacea for this field. Hence, it is not surprising that there is renewed interest in this topic. Researchers at the Physikalisch-Technische Bundesanstalt [14], NPL [15], and NIST [16] are actively working on such systems, and the BIPM Kibble balance [17] is switching to a single flexure mechanism for moving and weighing as well. Hence, it is time to revisit the anelastic relaxation of flexures and its applicability and limitations in this use case.

After exercising the mechanism through the large deflections required for velocity mode, anelastic drift from the flexures [2] limits the accuracy obtainable from weighing in typical ABA weighing schemes (A = mass off, B = mass on). Ideally, this effect needs to be characterized and compensated for to allow for accuracy in weighing and systematic studies. An erasing or compensation procedure shall be established to provide the means for the correct operation of such mechanisms. Erasing procedures to reduce the influence of mechanical hysteresis are well known to researchers familiar with the Kibble balance. For example, erasing procedures are needed in knife-edge balances, where plastic deformation is the leading cause of hysteresis. The procedure is typically a sinusoidal motion with decaying amplitude and is used to reduce errors caused by disturbances during mass exchange [18,19]. Such procedures are time consuming, especially if they need to be carried out after each mass exchange.

In the following three sections, we present fundamental investigations into the anelasticity of the flexure mechanism for the new Kibble balance at NIST [16]. The theoretical work can be seen as the relevant extension of existing models of anelasticity at low frequencies to describe the modes of operation in the Kibble balance. These models are presented in connection with experimental results from a working prototype of the new mechanism. Measurements of the relaxation force from the flexures in servo-controlled position feedback after defined disturbances are shown.

Review of anelasticity in the literature

Anelasticity and anelastic relaxation in polycrystalline metals appeared in the literature in the late 19th century, notably in the work of Boltzmann (1874) [20] and Wiechert (1893) [21]. Wiechert seems to have been the first to apply a generalized relaxation model to metals to describe an anelastic solid. His model consists of an ideal spring in parallel with multiple parallel Maxwell units. This model, which shall be called the generalized Maxwell model throughout this manuscript, was later used in configurations with a finite number, n, of parallel Maxwell units, e.g., in works by Nowick and Berry (1972) [22] and by Beilby et al. (1998) [23], up to a continuum of relaxation stages in the work of Quinn et al. (1992), to model low-frequency anelastic relaxation of polycrystalline metals [24].

One hundred years after Boltzmann and Wiechert, anelasticity and the search for materials with low internal loss garnered new interest, driven by precision measurements such as beam balances for mass comparison [4], torsion balances for accurate determinations of the Newtonian gravitational constant G [25], and thermal-noise-limited test mass suspensions for gravitational wave antennas [27].
In the 1950s, Callen et al. [28] formulated the fluctuation-dissipation theorem, linking internal damping to thermal or Brownian noise in mechanical suspensions, which counts as one of the fundamental limits to the sensitivity of high-precision mechanical detectors [27]. In the context of G, Kuroda [29] pointed out a possible measurement bias due to the anelastic effect in the time-of-swing method.

Experiments to measure anelastic damping can be classified into dynamic and static approaches. The former observes the ringdown of a free oscillation at the resonance frequency of the mode under investigation, using the logarithmic decrement or the mechanical quality factor, Q, to quantify the modulus defect in a material directly [30,31]. Using an inverted pendulum, for example, anelastic effects in the material can be shown very clearly and investigated at various frequencies [32,33]. For these experiments, other sources of damping, for example gas pressure forces [34], have to be reduced as much as possible by performing the measurements in high or ultra-high vacuum.

Beilby et al. [23] presented an interesting method to characterize open-loop static relaxation based on photoelastic measurements of the optical birefringence of glass ceramics under a change in stress. The anelastic aftereffect was measured in a static experiment in optical borosilicate-crown glass (BK7) and fused silica after a sharp increase in stress had been applied and released. Beilby's report also includes dynamic measurements obtained by exciting the eigenmode. Several practical aspects of the work of Quinn, Speake, and coworkers [24,25,30,31] provide important foundations for the work presented in this manuscript, which is based on observations on flexures of similar metallurgic composition and temper. Speake and Quinn noticed that in such polycrystalline metal flexures, with minimal notch thicknesses down to at least 50 µm, damping results mainly from bulk effects and not from surface effects [24]. The latter are more significant for high-Q materials such as sapphire or glass ceramics [35,36] in fibers with a large surface-to-volume ratio.

Figure 1. The Maxwell model. An ideal spring with an elastic modulus, E0, is parallel to a Maxwell unit. The Maxwell unit consists of an ideal spring with elastic modulus, E1, in series with a viscous element. The viscous element, known as a damper or dashpot, produces a velocity-dependent stress. The proportionality factor is given by the material coefficient of viscosity, ν1. The quotient of ν1 and E1 yields a time constant, τ1. The stress, σ, is applied in the vertical direction, producing a strain, ε, denoting the relative length extension from the original spring length L.

Physics of the Maxwell model

In this section, we review the physics of the Maxwell model; see Fig. 1. The stress of the damper, σ = ν1 ε̇, depends proportionally on the derivative of the strain ε with respect to time. The proportionality constant is given by the material coefficient of viscosity, ν1. The quotient ν1/E1 has the dimension of time and is abbreviated as τ1. For the Maxwell unit, the total strain is the sum of the strains produced by the spring and the damper; the same is true for the time derivative of the strain. Hence, we obtain the following differential equation for the Maxwell unit,

ε̇ = σ̇1/E1 + σ1/ν1.

Performing the Laplace transformation yields

σ̆1 = E1 (τ1 s/(1 + τ1 s)) ε̆.

We use the accent ˘ to denote the Laplace transform of the corresponding variable.
The ideal spring has a stress-strain relationship of σ0 = E0 ε. The strain for the ideal spring is the same as for the Maxwell unit, while the stresses add, i.e.,

σ̆ = σ̆0 + σ̆1 = (E0 + E1 τ1 s/(1 + τ1 s)) ε̆,

which can be simplified to the response function of the Maxwell model,

r̆(s) = σ̆/ε̆ = E0 + E1 τ1 s/(1 + τ1 s). (4)

Response to two simple stimuli

The response function, Eq. (4), allows us to calculate the time-domain response σ(t) to any stimulus ε(t) by calculating the inverse Laplace transform of r̆ε̆. In the next few sections, we investigate different stimuli. The total duration of all stimuli is τs. The first stimulus (subscript 1) is given in the time domain by

ε1(t) = εa for 0 ≤ t ≤ τs, and ε1(t) = 0 otherwise. (5)

In the Laplace domain, the above stimulus is

ε̆1 = εa (1 − e^(−τs s))/s.

The inverse Laplace transform of the product r̆ε̆ for t ≥ τs yields the stress response,

σ1(t) = −εa E1 (1 − e^(−τs/τ1)) e^(−(t−τs)/τ1).

The desired outcome of the calculation is the stress after the strain stimulus, which can be obtained by shifting the time axis using t = τs + t*. Then, the stress in the new time variable is

σ1(t*) = σa1 e^(−t*/τ1), with σa1 = −εa E1 (1 − e^(−τs/τ1)), (8)

consistent with Eq. (2) in [24]. The imprinted stress decays with a time constant τ1. The absolute initial amplitude, |σa1|, grows with τs; σa1 converges to −εa E1 for τs → ∞. The top panel of Fig. 2 shows the development of the stress as a function of time during and after the application of stimulus one. After the stimulus, the stress decays from a negative value to zero with a time constant τ1. The negative value appears as the strain is suddenly released; see t* = 0 in Fig. 2.

We are not interested in a static deflection for the Kibble balance but instead in a sinusoidal deflection, as the coil moves up and down in the magnetic field. To make the result comparable to the result above, we model a second (subscript 2) strain stimulus as

ε2(t) = εa sin(2πn t/τs) for 0 ≤ t ≤ τs, and ε2(t) = 0 otherwise. (10)

Here n is the integer number of periods the mechanism moves through in the time interval τs. The motion given by Eq. (10) starts and ends at ε2 = 0, which is a reasonable assumption for the Kibble experiment, where the velocity measurement is usually sandwiched between two force-mode measurements, for which the mechanism is controlled to the equilibrium position, ε = 0. In the s domain, this stimulus is

ε̆2 = εa ω (1 − e^(−τs s))/(s² + ω²), with ω = 2πn/τs.

The resulting stress for t* > 0 still relaxes as σ2(t*) = σa2 e^(−t*/τ1), but the amplitude is given by

σa2 = εa E1 (ωτ1/(1 + (ωτ1)²)) (1 − e^(−τs/τ1)).

The lower panel of Fig. 2 shows the stress as a function of time for a stimulus with n = 1. It can be seen that, for the same signal duration, the absolute value of the stress at t* = 0 is smaller than that for a static deflection. The sign of the stress is also different from that of the constant excursion. The attenuation of the imprinted stress of a sinusoidal motion relative to a constant excursion with the same amplitude is captured by the unitless ratio ξ, given by the negative ratio of σa2 to σa1, which computes to

ξ = −σa2/σa1 = (2πn τ1/τs)/(1 + (2πn τ1/τs)²). (13)

We note that ξ ≤ 1/2, which makes intuitive sense, because only half the time is spent straining the flexure to either side. The maximum, ξ = 1/2, is reached at τs = 2πn τ1.

Figure 3 shows ξ as a function of τs/τ1 for different integer numbers, n, of periods in the signal duration, τs. The maxima are clearly visible. On either side of the maxima, ξ drops like τs/τ1 or its inverse.

The consequence of this observation is that the sinusoidal motion attenuates the relaxation effects of the Maxwell units with τ1 ≫ τs/(2πn) and τ1 ≪ τs/(2πn). Hence, in the Kibble balance with sinusoidal coil motion, the Maxwell units of concern (largest ξ) have a ratio τs/τ1 of order 2πn. Fortunately, as discussed in the next section, the elastic energy stored in these units can be erased most effectively.
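To make the algebra above concrete, the following minimal Python sketch integrates the Maxwell-unit dynamics for both stimuli and compares the resulting post-stimulus amplitudes with the closed-form attenuation ratio of Eq. (13) as reconstructed here. All parameter values are illustrative, not taken from the experiment.

```python
import numpy as np
from scipy.integrate import solve_ivp

E1, tau1 = 1.0, 100.0        # illustrative Maxwell-unit parameters (arbitrary units)
tau_s, n = 1920.0, 1         # stimulus duration (s) and number of sine periods
eps_a    = 1.0               # strain amplitude

def post_stimulus_stress(strain):
    """Stress of one Maxwell unit just after the stimulus ends (t = tau_s+)."""
    # State x = strain of the dashpot: dx/dt = (eps - x)/tau1,
    # and the unit's stress is sigma1 = E1*(eps - x).
    rhs = lambda t, x: [(strain(t) - x[0]) / tau1]
    sol = solve_ivp(rhs, (0.0, tau_s), [0.0], max_step=1.0)
    return E1 * (0.0 - sol.y[0, -1])   # total strain is back to zero for t > tau_s

const = lambda t: eps_a                              # Eq. (5)
sine  = lambda t: eps_a * np.sin(2*np.pi*n*t/tau_s)  # Eq. (10)

sigma_a1 = post_stimulus_stress(const)
sigma_a2 = post_stimulus_stress(sine)

w_tau = 2*np.pi*n*tau1/tau_s
print("xi numeric :", -sigma_a2/sigma_a1)
print("xi analytic:", w_tau/(1 + w_tau**2))          # Eq. (13); never exceeds 1/2
```

Rewriting the dynamics in terms of the dashpot strain avoids differentiating the discontinuous stimulus, so the ODE solver handles the hard step of Eq. (5) without special treatment.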
The boxcar eraser

In statistics and signal processing, a boxcar average is a convolution of a time-domain signal with a rectangular aperture [26]. Similarly, the erasing procedure discussed here is a rectangular motion given by two parameters: the duration, τr, and the amplitude of the motion, i.e., the strain, εr1.

In general, an erasing procedure is a prescribed motion that is performed after the initial stimulus with the goal of minimizing the stress after both motions are completed. We assume that the durations of the stimulus and the erasing procedure are τs and τr, respectively. For the rectangular stimulus, an erasing procedure can be found such that σ(t) = 0 for t > τs + τr.

We define a new stimulus in the time domain, labeled ε̃1 (the tilde denotes the case with erasing). In the time domain, it has the following functional form:

ε̃1(t) = εa for 0 ≤ t ≤ τs, ε̃1(t) = εr1 for τs < t ≤ τs + τr, and ε̃1(t) = 0 otherwise. (15)

The strain amplitude εr1 is the erasing amplitude, which depends on the erasing time, τr. The dependence can be found as follows. The Laplace transform of Eq. (15) is

ε̃̆1 = [εa + (εr1 − εa) e^(−τs s) − εr1 e^(−(τs+τr) s)]/s.

From the inverse Laplace transform of the product of the stimulus with the response function we obtain, after shifting the time axis, this time by t* = t − τs − τr, again an exponential decay, identical in form to Eq. (8), although with an amplitude

σ̃a1 = E1 [εa e^(−(τs+τr)/τ1) + (εr1 − εa) e^(−τr/τ1) − εr1].

This amplitude can be made zero by choosing

εr1 = −εa (1 − e^(−τs/τ1))/(e^(τr/τ1) − 1). (19)

The absolute value of the ratio of εr1 to εa is shown in Fig. 4. In the limit τ1 → ∞, we obtain

lim εr1 = −εa τs/τr.

The erasing amplitude is inversely proportional to the ratio of the erasing time to the stimulus time. In other words, in this limit the relaxation can be erased with a signal that is opposite but otherwise integrally identical to the stimulus. For shorter erasing procedures, the erasing amplitude needs to be scaled up correspondingly. Another special case of Eq. (19) to consider is τr = τs. Then,

εr1 = −εa e^(−τs/τ1).

This equation says that the erasing stimulus needs to be opposite to the original stimulus, and its size is identical to how much the Maxwell unit has been charged after τs. For shorter times, τr, the absolute magnitude of the erasing amplitude becomes larger, and vice versa. Experimentally, both extremes are of limited utility. (1) There is no point in using erasing times much longer than 3τ1, because a similar effect can be obtained by just waiting for the decay according to Eq. (8). (2) Short erasing times require large motion amplitudes that may not be available experimentally. From the discussion above and Fig. 4, it becomes apparent that it is much harder to erase Maxwell units with large time constants.

Figure 4. The absolute value of the ratio of εr1 to εa as a function of the ratio of erasing time to stimulus time, for different time constants, τ1. For a sinusoidal stimulus, the vertical axis of the graph has to be multiplied by ξ, given by Eq. (13).

The same formalism can be applied to the sinusoidal stimulus, and, not surprisingly, the results are similar; the erasing amplitude is simply scaled by −ξ, i.e.,

εr2 = −ξ εr1 = ξ εa (1 − e^(−τs/τ1))/(e^(τr/τ1) − 1).

After the combined stimulus and eraser, the stress produced by the flexure is zero; see Fig. 5.
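The closed-form erasing amplitude can be checked numerically. The sketch below, using the same illustrative parameters and dashpot-state integration as before, applies a boxcar eraser with the amplitude of Eq. (19) as reconstructed above and confirms that the single Maxwell unit retains essentially no stress afterwards.

```python
import numpy as np
from scipy.integrate import solve_ivp

E1, tau1 = 1.0, 100.0
tau_s, tau_r, eps_a = 1920.0, 480.0, 1.0   # tau_r = tau_s/4, illustrative

# Eq. (19): erasing amplitude that nulls the stress of this Maxwell unit
eps_r1 = -eps_a * (1 - np.exp(-tau_s/tau1)) / (np.exp(tau_r/tau1) - 1)

def strain(t):                  # boxcar stimulus followed by boxcar eraser, Eq. (15)
    if t <= tau_s:
        return eps_a
    if t <= tau_s + tau_r:
        return eps_r1
    return 0.0

rhs = lambda t, x: [(strain(t) - x[0]) / tau1]    # dashpot-strain dynamics
sol = solve_ivp(rhs, (0.0, tau_s + tau_r), [0.0], max_step=1.0)

sigma_after = E1 * (0.0 - sol.y[0, -1])  # stress just after the eraser ends
print(sigma_after)                        # ~0: the unit has been erased
```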
Figure 6. The generalized Maxwell model. The main element is perfectly elastic with a modulus E0. In parallel to it is an infinite number of Maxwell units, each with a spring of elastic modulus E1 but different viscosity values, νj. Hence, each unit has a different time constant, τj = νj/E1.

The generalized Maxwell model

Previously, we discussed the stress relaxation behavior of the Maxwell model. While this model is conducive to building intuition and deriving simple equations, it falls short of reproducing the behavior of a realistic material. A better model is an ideal spring in parallel with many Maxwell units having a distribution of time constants, as shown in Fig. 6. This model is known as the generalized Maxwell model and is sometimes also called the Maxwell-Wiechert model. We follow in the footsteps of Speake and Quinn, who have discussed such models in the literature [2,25,37]. First, let us assume we have n discrete Maxwell units, each with an elastic modulus, E1, and a viscosity, νj, yielding a time constant, τj = νj/E1. Hence, the response function is given by

r̆(s) = E0 + E1 Σj τj s/(1 + τj s).

By replacing the sum with an integral with lower limit τl and upper limit τu and using a distribution function f(τ), we obtain

r̆(s) = E0 + E1 ∫ f(τ) (τ s/(1 + τ s)) dτ.

Note that the distribution function is normalized such that its integral from τl to τu is one. Following [2], a good candidate is

f(τ) = 1/(τ ln(τu/τl)).

With this distribution function, the density of Maxwell units decreases for longer time constants, a reasonable assumption that is found in many mechanisms that produce 1/f noise. The response function simplifies to

r̆(s) = E0 + (E1/ln(τu/τl)) ln((1 + τu s)/(1 + τl s)).

The second summand of the equation above is shown as a Bode plot in Fig. 7. For comparison, the equivalent term for the Maxwell model with an effective time constant of √(τu τl) is also shown. Qualitatively, both Bode plots look similar and show high-pass behavior. However, in the region of interest, for angular frequencies between 1/τu and 1/τl, the curves differ. More importantly, the generalized Maxwell model has about 10 dB more gain at low frequencies.

The quotient σ̆/ε̆ describes the transfer function from strain to stress (the modulus). The imaginary part is the portion responsible for the anelastic effect. The literature [2,37] introduces the relative modulus defect, which we abbreviate with η, as the imaginary part of the modulus of the generalized Maxwell model. Using s = iω, with i being the imaginary unit, the value of the modulus defect at the geometric middle of the angular frequency range, ω = (τl τu)^(−1/2), can be found to be approximately

η ≈ (E1/E0) π/(2 ln(τu/τl)). (29)

As indicated in Fig. 7, the imaginary part of the anelastic response stays nearly constant for frequencies ranging from approximately 1/τu to 1/τl. Hence, a frequency-independent modulus defect at the level given by Eq. (29) is assumed in the literature [24,25,37], and we continue this tradition.

Reasonable assumptions for τl and τu are 10 s and 5000 s, respectively. With these time constants, π/(2 ln(τu/τl)) evaluates to 0.253, which is given by the green dotted line in the lower panel of Fig. 7.

With the response function at hand, the anelastic relaxation after application of the rectangular strain given by Eq. (5) can be found. Substituting the time scale t = τs + t*, such that t* = 0 right after the applied strain, the stress is

σ3(t*) = −(εa E1/ln(τu/τl)) [Ei(−t*/τl) − Ei(−t*/τu) − Ei(−(t*+τs)/τl) + Ei(−(t*+τs)/τu)], (30)

where Ei denotes the exponential integral, given by Ei(x) = −∫ from −x to ∞ of (e^(−t)/t) dt. Equation (30) has previously been given, in a slightly different form, as Eq. (14a) in [24].
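A quick numerical cross-check of Eq. (30) as reconstructed here: the sketch below evaluates the closed form with scipy's exponential integral and compares it against a brute-force sum over logarithmically spaced Maxwell units weighted by f(τ) = 1/(τ ln(τu/τl)). Parameter values are illustrative.

```python
import numpy as np
from scipy.special import expi   # expi(x) is the exponential integral Ei(x)

eps_a, E1 = 1.0, 1.0
tau_l, tau_u, tau_s = 10.0, 5000.0, 1920.0
c = 1.0 / np.log(tau_u / tau_l)

def sigma3(t):                   # Eq. (30), reconstructed closed form
    return -eps_a * E1 * c * (expi(-t/tau_l) - expi(-t/tau_u)
                              - expi(-(t+tau_s)/tau_l) + expi(-(t+tau_s)/tau_u))

def sigma3_sum(t, n=20000):      # brute force: many discrete Maxwell units
    tau = np.logspace(np.log10(tau_l), np.log10(tau_u), n)
    w = (c / tau) * np.gradient(tau)            # f(tau) d(tau) integration weights
    return -eps_a * E1 * np.sum(w * (1 - np.exp(-tau_s/tau)) * np.exp(-t/tau))

for t in (10.0, 100.0, 1000.0):
    print(t, sigma3(t), sigma3_sum(t))          # the two agree closely
```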
The boxcar eraser

In this section, we study the effect of the boxcar eraser given by Eq. (15), replacing εr1 with εr3, for the generalized Maxwell model. In this case, the response for time t* > 0 is

σ̃3(t*) = E1 [εa G(t* + τs + τr) + (εr3 − εa) G(t* + τr) − εr3 G(t*)], with G(a) = [Ei(−a/τl) − Ei(−a/τu)]/ln(τu/τl). (32)

Equation (32) is plotted for different ratios εr3/εa and for the specific values τr = τs/4 and τs = 1920 s in Fig. 8. The boxcar eraser does not completely remove the anelastic relaxation. It is also interesting to note that for long t*, i.e., t* > τs, the remaining stress is independent of the erasing amplitude and is dominated by the initial excursion. Since perfect erasing is not attainable, we aim to find the best possible erasing procedure. To measure the effectiveness of the boxcar eraser, we define a figure of merit, fom, given by Eq. (33). Our goal is to minimize fom. Figure 9 shows a heatmap of fom as a function of τr/τs and εr3/εa. One can see a curved valley. Two points are marked in the valley. The plots to the top and the right of the heatmap show the figure of merit along the horizontal and vertical dashed lines. Both marks are at the minimum along the respective dimensions. Note that the minimum marked with the blue circle is much lower than that marked with the green square. The diamond on the top right marks the point where εr3 = 0. This point is equivalent to no erasing and only entails waiting.

The stress as a function of t* for the three locations in the heatmap is shown in Fig. 10. While simply waiting achieves the best figure of merit, it also takes the longest time, almost the same as the original excursion. The boxcar eraser can hasten the process. In just about a quarter of the time, a reasonable reduction in the figure of merit can be achieved.

Figure 10. The stress after the erasing procedure using the boxcar eraser. The curves correspond to the three marked parameter sets in Fig. 9.

Effect on the measurement

Previously, we calculated the drift in the stress produced by a linear flexure after it has been exercised sinusoidally. Here, we would like to apply this knowledge to a Kibble balance, and in order to achieve this goal, three layers of mechanics need to be revisited: (1) the basic equilibrium mechanics of the flexure-based balance; (2) the calculation of the stiffness of a flexural element; (3) the effect of one or more Maxwell units on the stiffness. Most of the details discussed below can be found in the literature [30,37]. For convenience, we reiterate the main findings using notation coherent with the rest of the article. The potential energy of the system in the small-angle approximation is given by Eq. (34). The restoring force imposed at an end flexure as a function of z is the negative derivative of the energy with respect to z. By taking a second derivative with respect to z, the linear spring constant can be calculated via K = −∂F/∂z = −∂F/∂φ · ∂φ/∂z. We find, in Eq. (36), that the spring constant has two principal contributions: a gravitational component, given by the first two terms, and a flexural component, given by the last two terms. Only the latter suffers from loss and anelasticity. Note that in the following, the symbol s describes the position along the neutral fiber axis of the flexural element, starting at the fixed end, and not the frequency in the Laplace domain. The use of the symbol s is typical for these bending problems.

Flexural stiffness

Table 1. The characteristic dimensions for the elliptical notch contour of the main flexure (shown) and the circular contour of the end flexures of the beam balance. A contour of an elliptical flexure is shown in Fig. 13.
Here, the angular stiffness, κ, of a loaded flexure as a function of load, geometry, and elastic modulus (assumed constant) is calculated. Following the approach of [38], which uses the nonlinear equations of large deflections in bending for flexures of arbitrary shape under tensile load, the following two relationships hold:

dM(s)/ds = F_w,0 sin θ(s) and dθ(s)/ds = r(s), with r(s) = M(s)/(E Iz(s)), (37)

where the second moment of area, Iz(s), can vary along the flexure length parameterized by s. The variables used in the calculation are shown in Fig. 12.

The differential equations above can be solved for θ(s). The bending of the flexure in the x, z coordinate system is obtained using

dz(s)/ds = −cos θ(s) and dx(s)/ds = sin θ(s). (39)

This system of ordinary differential equations can be solved using numerical tools, such as the shooting method [39]. We have a Python [40] code that allows us to calculate arbitrarily shaped flexures. The end flexures can be approximated by the same calculation by reversing the boundary conditions relative to the ones shown in Fig. 12, which is a valid assumption for the long pendulum lengths, of tens of centimeters, of the suspended masses, m_p [1]. Now, κ can be extracted with the energy method, e.g., [30,37], from the second derivative of the elastic energy with respect to the deflection angle θ0 at the free end, i.e., Eq. (40). In general, κ can only be obtained numerically. The data describing the geometries of the two types of flexures used in the experiment are given in Tab. 1. The numerically obtained values of κ are shown as a function of mass load in Fig. 14. Additionally, the values of κ for the actual loads are printed in Tab. 2, and a cross-sectional drawing of the flexure is shown in Fig. 13. Next, we need to understand the contribution of the time-dependent elastic modulus; a numerical sketch of the stiffness calculation follows below.

Time-dependent elastic modulus

The Maxwell model shows anelastic relaxation. Here, we investigate how such a change in E affects κ. Therefore, we separate the elastic modulus into time-independent and time-dependent parts,

E = E0 + ΔE, (41)

where ΔE depends on time and is much smaller than E0. Since ΔE ≪ E0, we can Taylor expand κ_el:

κ_el(E) ≈ κ_el(E0) + (dκ_el/dE) ΔE. (42)

Equation (42) also defines the static and time-dependent parts of κ_el. The derivative of κ_el with respect to E can be obtained by numerically evaluating κ_el for discrete values of E around E0. Since it is an important quantity, we name it the modulus sensitivity, because it describes the sensitivity of κ to the elastic modulus. We abbreviate it as

S_mod = dκ_el/dE. (43)

The elastic modulus for the calculation of the elastic stiffness κ was corrected from a plane-stress to a plane-strain assumption [43] by multiplication with 1/(1 − ν²), where ν is Poisson's ratio, which is 0.3 for copper beryllium. The results of these calculations for the two types of flexures used in the experiment below are shown in Fig. 14. For an unloaded flexure, S_mod = κ0/E0, indicated by the dotted horizontal lines in Fig. 14. The higher the load, the smaller the derivative. The relative fraction of the anelastic effect can thus be diluted by loading the flexure. For Kibble balance design, we advocate for the largest possible anelastic dilution, i.e., very heavily loaded flexures.
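The shooting-method solution mentioned above can be sketched compactly. In the sketch below, the thickness profile h(s) is a hypothetical stand-in (a simple notched shape), not the actual elliptical contour of Tab. 1, and the load, angle, and dimensions are illustrative. The idea: integrate Eq. (37) from the fixed end with a guessed root moment, adjust that guess until the free-end angle matches the imposed θ0, then estimate κ with the energy method and S_mod by finite differences.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

L  = 6e-3                       # flexure length, m (illustrative)
b  = 20e-3                      # flexure width, m (illustrative)
Fw = 15.0 * 9.81                # tensile load, N (15 kg, illustrative)
theta0 = np.deg2rad(1.0)        # imposed free-end angle, rad

def h(s):                       # hypothetical thickness profile, thinnest mid-flexure
    return 50e-6 + 200e-6 * (2.0*s/L - 1.0)**2

def Iz(s):                      # second moment of area of the rectangular section
    return b * h(s)**3 / 12.0

def solve_bending(M0, E):
    # Eq. (37): dtheta/ds = M/(E*Iz), dM/ds = Fw*sin(theta); state y = [theta, M]
    rhs = lambda s, y: [y[1] / (E * Iz(s)), Fw * np.sin(y[0])]
    return solve_ivp(rhs, (0.0, L), [0.0, M0], dense_output=True, rtol=1e-9)

def kappa(E):
    # Shooting: choose the root moment M0 so that theta(L) equals theta0 ...
    M0 = brentq(lambda M: solve_bending(M, E).y[0, -1] - theta0, 0.0, 1e-2)
    sol = solve_bending(M0, E)
    # ... then energy method: U = int M^2/(2 E Iz) ds, and kappa ~ 2U/theta0^2
    s = np.linspace(0.0, L, 2000)
    th, M = sol.sol(s)
    U = np.trapz(M**2 / (2.0 * E * Iz(s)), s)
    return 2.0 * U / theta0**2

E0 = 128e9                                           # Pa, value used in the article
dE = 1e-3 * E0
S_mod = (kappa(E0 + dE) - kappa(E0 - dE)) / (2*dE)   # Eq. (43) by finite differences
print("kappa(E0) =", kappa(E0), "N m/rad,  S_mod =", S_mod, "m^3/rad")
```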
The use of dilution in precision mechanical experiments is not unheard of. Most notably, Quinn, Speake, and coworkers [41] used gravitational dilution by adding a lossless gravitational spring in their experiment to determine the gravitational constant G. More recently, Pratt and coworkers [42] used tension in a ribbon to achieve dissipation dilution. In both of these examples, a lossless element is added in parallel to a lossy spring. The concept of anelastic dilution discussed here is different, because it minimizes S_mod without adding lossless restoring elements; see Sec. 6.

The modulus defect

Starting from the groundwork laid out in the three preceding subsections, the modulus defect can be calculated from the experimental observations. The experiments discussed below measure the force on the balance, F, as a function of z. From that, we can calculate a spring constant K = F/z. It is given by the gravitational and perfectly elastic terms in Eq. (36). The remaining time-dependent part, ΔK = ΔF/z, can easily be converted to Δκ = L² ΔK; see Eq. (36). According to Eq. (42), Δκ = ΔE S_mod, and a combination of the last two equations yields

η = ΔE/E0 = L² ΔK/(S_mod E0), (44)

where the numerically calculated values of the derivative, S_mod, appear in the denominator. The numerical values for S_mod used here are shown in Tab. 2.

In the experiment, we can only test the combination of flexures. And just as the spring constants add in Eq. (36), so do the modulus sensitivities. As shown in Tab. 2, the modulus sensitivity of the whole balance (last row in the table) is dominated by the end flexures. The two end flexures contribute 93.8% of the total S_mod. So, the experimental results given below are primarily a statement about the end flexures. The end flexures dominate because they are shorter and thicker than the main flexure, and they define the region of bending more rigidly, due to a larger notch effect than in the main flexure, leading to negligible anelastic dilution. Small anelastic dilution values are typically not desired for Kibble balance experiments. For the investigation presented here, however, a small dilution is favorable, because it emphasizes the anelastic behavior, which is the point of this article.

Previous measurements of the modulus defect

Several measurements of the modulus defect of copper-beryllium alloy C17200, often in hardened tempers, can be found in the literature. Tab. 3 presents an overview of these results. Measurements were performed in bending and torsion, yielding comparable results. The earliest measurement produced a high result, which was subsequently explained by clamping losses. One study found a dependence on load.
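For concreteness, the conversion chain from a measured time-dependent force to a modulus defect can be written out numerically. All numbers below are made up for illustration; the real values sit in Tabs. 2 and 4.

```python
# Illustrative conversion following Delta K = F_A/z_a, Delta kappa = L^2 * Delta K,
# eta = Delta kappa/(S_mod * E_0)  (Eq. (44); every number here is hypothetical).
L_arm = 0.1          # lever arm, m (hypothetical)
F_A   = 2.0e-6       # fitted relaxation-force amplitude, N (hypothetical)
z_a   = 5.0e-3       # original excursion, m (hypothetical)
S_mod = 1.0e-12      # modulus sensitivity, m^3/rad (hypothetical)
E_0   = 128e9        # elastic modulus, Pa

delta_K     = F_A / z_a            # N/m
delta_kappa = L_arm**2 * delta_K   # N m/rad
eta         = delta_kappa / (S_mod * E_0)
print(f"eta = {eta:.2e}")
```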
Magic flexures

For the flexures investigated here, the modulus sensitivity, S_mod = dκ/dE, decreases with increasing load, as shown in the lower panel of Fig. 14. The question that remains is whether it is possible to build a flexure whose modulus sensitivity is zero. We call such flexures magic, because they should have very little anelastic relaxation. Flexures that do not reach S_mod = 0 but have a minimum as a function of the suspended load, F_w,0, we call almost magic. While Fig. 14 is the result of a numerical solution of the differential equation, Eq. (37), we strive to understand intuitively the mechanism that makes certain flexures magic. To this end, we study the bending of a flat flexure of length L with constant cross-section b × h under an applied torque. The coordinate of the flexure, normalized by x(L), is shown as a function of s in Fig. 15 for two load cases and for two different values of the elastic modulus. A larger elastic modulus moves the bending upward (to smaller s).

Imagine now that the cross-section of the flexure is engineered such that the area moment of inertia, Iz, gets smaller as the region where the flexure bends moves up in reaction to an increase of E. Then, the increase in E can be counteracted by a decrease in Iz, and the total stiffness may stay the same. Clearly, a delicate balance of these two effects is necessary to make the stiffness independent of E. But even if the effects do not cancel exactly, the dependence of κ on E can be made small.

The parameter that helps tune a flexure to its magic condition is the load, F_w,0, on the flexure. As shown in Fig. 15 for a flat flexure, changing the load has the opposite effect of changing the modulus: a larger load moves the bending down the flexure (to larger s). In other words, the bending point in a flexure can be moved by changing the load.

While it is instructive to explain the emergence of magic with a flat flexure, a magic flexure can only be achieved by engineering the area moment of inertia, Iz(s), to change as a function of s. For these cases, however, analytical solutions to the differential equations governing the bending are either elusive or mathematically so complex that most insight is lost. The only way forward is a numerical calculation. Here, we use the code and mathematics described in the previous section to calculate the main flexure.

Fig. 16 shows the deflection as a function of the load on the flexure, given by the weight, mg, of a point mass at the end of the flexure, with mass values, m, ranging from 0 kg to 50 kg. Similar to the behavior of the flat flexure shown in Fig. 15, the bending point moves down the flexure as the mass load increases. The unloaded flexure bends approximately in the middle of the flexure, s = 5 mm, where the flexure is thinnest. As the mass load increases, the bending point moves down (increasing s), where the flexure becomes thicker and hence stiffer.

The bending is, of course, distributed throughout the flexure. To indicate where the bending occurs, the normal stress is shown in the lower panel of Fig. 16; the maximum of the (bending) stress marks the region of strongest curvature. The complement to Fig. 16 is Fig. 17, where the deflection and the stress of the same flexure are plotted for different values of E0 under a constant load of 15 kg. We observe the opposite: as E increases, the bending moves towards the middle of the flexure, where it is thinnest and bending happens at a lower torque.
With this combination, it is possible to find a load where the stiffness of the flexure does not change as E increases or decreases slightly, as is the case in the model of anelastic relaxation. Usually, an increase in E would increase κ; but as the location of bending moves towards a thinner part of the flexure, κ simultaneously decreases. By choosing the right load, these two effects can be fine-tuned such that κ is independent of E, i.e., a vanishing S_mod.

To put this into practice, we calculate κ and S_mod for the main flexure for loads up to 100 kg. The results are shown in Fig. 18. To verify the Python code, several points were also calculated using commercial finite element analysis (FEA) software. While there are small differences between the two calculations, which we attribute to negligible numerical inaccuracies in both methods, the overall trends agree. Both calculations predict the existence of a magic flexure. The main flexure becomes magic at a load of 56 kg.

Much more work needs to be done to fully understand magic flexures. We believe they have low loss, i.e., high Q, even at low frequencies within the bandwidth of the characteristic frequencies of the generalized Maxwell solid (Fig. 7), and very little anelastic relaxation, which is our focus in this article. Hence, Kibble balances might be a perfect application for magic flexures. The investigation presented in this section is theoretical and numerical in nature, and a careful experimental investigation is unfortunately outside the scope of this work. We hope to dive into this exciting research in the years to come, after the Quantum Electro-Mechanical Metrology Suite (QEMMS) [16] has been built and qualified.

Experimental evidence

The goal of the measurements is to understand the anelastic relaxation and to verify the effectiveness of the boxcar eraser. Here, we seek to arrive at a measurement sequence that reduces the effect of the anelastic relaxation on a classical ABA measurement [46], i.e., mass-off, mass-on, and mass-off, to below 2 nN. Furthermore, we would like to compare our obtained value of the modulus defect to published values.

Description of the apparatus

For the experiments discussed below, the QEMMS Kibble balance was modified to consist of only a single beam, i.e., the guiding mechanism and the wheel were disconnected. With this disconnection, the system becomes much simpler, and only the main flexure and the two end flexures are under investigation with this setup.
The beam balance mechanism, a position detector for feedback operation, and an electromagnetic actuator are housed in a vacuum chamber kept at 0.15 mPa. A photo of part of the apparatus is shown in Figure 19. This mechanism is used to suspend and guide a coil and can further be used as a high-precision force comparator. The salient aspects of the design are outlined in [16]. Most important for this work is that the flexure elements (main and end) are made from hardened (TF00 temper) copper-beryllium alloy C17200 (containing about 1.8% to 2% beryllium).

In such a modular design (aluminum frame and beam, but copper-beryllium flexures), it is crucial to prevent stress gradients at the interface of the flexure and the connecting members, which, if poorly designed, give rise to additional damping, anelastic aftereffect, and possibly also static hysteresis [30]. To avoid clamping directly on flexible elements, the flexures are monolithic, i.e., the thin part and the interface piece bolted onto the trusses are machined from a single coupon of material.

The main flexure was wire-cut from bulk material, which was solution heat treated and then precipitation hardened to provide a TF00 temper. It has an elliptical notch contour (see Fig. 13) with the dimensions given in Tab. 1. Each end flexure is a cube that has two pairs of perpendicular flexures intersecting at a single point. These cubes were used for the coil suspension in the NIST-4 Kibble balance [44]. The cube is visible in Fig. 19; it has a side length of 19.05 mm.

A block diagram of the control loop is shown in Fig. 20. For the data reduction and analysis, two quantities are important: the position of the balance, z_M, and the force, F, required to control the balance to z_M. Here, z_M is the vertical position of a mass suspended from the left (in Fig. 19) end flexure. The force is obtained by multiplying the coil current, i_C, by the geometric factor, Bl. The coil current is measured as a voltage drop over a calibrated resistor. The geometric factor has been calibrated to three digits by weighing a known mass.

Sources of measurement uncertainties

The main sources of measurement and analysis uncertainties are summarized below:

(i) Fit uncertainty, mostly due to the defined start of t*. We fit a perfect mathematical description to an imperfect experiment.

(ii) Since velocity mode was not yet available, we calibrated Bl with a standard mass whose value was known with a relative uncertainty of 1 × 10−3. We assign this number to the relative uncertainty of the force measurements.

(iii) The measurements discussed below were made with very little current in the coil, as the instrument was balanced to better than 300 µN, leading to two benefits:
• Thermal effects due to ohmic heating in the coil were negligible.
• Magnetic hysteresis is not a concern at these small currents, up to 4 µA at static holding.

(iv) Thermal effects due to evacuation and room-temperature fluctuations affect the measurements on long time scales. To account for these, a linear drift is removed from the measurements during data processing.

(v) Similarly, ground tilt can shift the center of mass and cause long-term drift in the holding current required by the balance. As with thermal effects, tilt produces a 1/f effect and is subtracted by linear fitting to the data.

(vi) The flexure clamps are designed to act on rigid pieces of metal, never on flexing elements.
(vii) When deflecting the balance mechanism from one position to another, there is a trade-off between the duration of the movement and the overshoot past the commanded position due to the large inertia of the structure. The overshoot is of order < 2% and lasts less than 10 s. Hence, such overshoots only trigger the very short time constants of a generalized Maxwell solid and are negligible for the long time constants that we are interested in; see Fig. 3.

(viii) The coil current is supplied by wires attached to the balance mechanism. To minimize parasitic elastic and hysteretic effects produced by these wires, they are very thin (AWG 50, about 25 µm in diameter, including insulation). They are routed parallel to the flexing axis of the main pivot. The wire stiffness is further reduced by using extra wire, about 50 mm, which is coiled into a spring shape. The wires running from the wheel to the coil are also wound into springs, are long, and do not move horizontally by more than 1.5 mm for the largest imposed displacements of the mechanism.

Despite the limitations discussed above, the measurement results provide valuable information about the anelastic relaxation.

Verifying the generalized Maxwell model

The first series of measurements is summarized in Fig. 21. For the day-long experiment, the balance position, z, was controlled to ±2 mm, ±5 mm, and ±10 mm. In each position, the beam was held for 32 minutes. After that, the force required to maintain the balance at z = 0 was measured for three hours.

Two steps of post-processing were applied to the data. First, a constant balance drift of −1.3 µN h−1 was subtracted from all measurements. The experimental run was started within 24 hours of pumping the system to vacuum. The temperature change caused by the evacuation led to the aforementioned drift, which was numerically taken out. Usually, after a few days in vacuum, the drift has vanished.

Second, each trace was shifted vertically such that the fit result converges to F = 0 for t → ∞. Subtracting these offsets declutters the plot and shows clearly that the relaxation amplitude is proportional to the initial deflection. This processing step hides one observation, though: for the large amplitudes, i.e., ±10 mm, the equilibrium position of the balance changes by as much as 3.5 µN. These irreversible changes were investigated separately and are discussed in detail in Sec. 8.

The black lines in Fig. 21 are fits to Eq. (45), using F_A as a prefactor; a least-squares fit was used to fit the function to the data. The functional form of this equation, with the exception of the prefactor (and hence the unit), is identical to Eq. (30). The only free parameter is the amplitude F_A; the time constants were fixed to τl = 10 s and τu = 5000 s, the values common in the literature, e.g., in [2,24], to avoid obtaining a spuriously better fit, without physical meaning, from a combination of unphysical values. From F_A, the modulus defect, η, can be obtained by combining Eq. (44) with ΔK = F_A/z_a; it is found to be

η = F_A L²/(z_a S_mod E0), (46)

where z_a is the original excursion.

A close inspection of Fig. 21 leaves us with several observations. (1) The anelastic aftereffect can take a long time: the plot shows three hours after the excursion, and the anelastic effect has not completely decayed. (2) Eq. (45) is a reasonable fit to the measured traces; for the fits used in Fig. 21, τl and τu have been fixed to 10 s and 5000 s, respectively. (3) The size of the anelastic aftereffect is roughly proportional to the original excursion, consistent with Eq. (45), which is proportional to z_a. Tab. 4 gives the fitted amplitudes for the four measurements.
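A sketch of the one-parameter fit described above, using the functional form of Eq. (30)/(45) as reconstructed earlier, with the time constants pinned to the literature values. The synthetic data below merely stand in for the measured force traces.

```python
import numpy as np
from scipy.special import expi
from scipy.optimize import curve_fit

tau_l, tau_u, tau_s = 10.0, 5000.0, 1920.0   # time constants fixed, as in the article

def relax(t, F_A):
    """Eq. (45): anelastic relaxation with the amplitude F_A as the only free parameter."""
    c = 1.0 / np.log(tau_u / tau_l)
    return F_A * c * (expi(-t/tau_l) - expi(-t/tau_u)
                      - expi(-(t+tau_s)/tau_l) + expi(-(t+tau_s)/tau_u))

# Synthetic "measurement": true amplitude 2 uN plus noise (illustrative only)
t = np.linspace(5.0, 3*3600.0, 500)
F = relax(t, 2.0e-6) + np.random.normal(0.0, 5e-8, t.size)

popt, pcov = curve_fit(relax, t, F, p0=[1e-6])
print(f"F_A = {popt[0]:.3e} N  +/- {np.sqrt(pcov[0, 0]):.1e} N")
```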
The calculated modulus defects are shown in Tab. 4. While there is considerable scatter in the data, the average is η = 1.2 × 10−4 ± 2.7 × 10−5, where we used the sample standard deviation as the uncertainty. This number is a factor of two to three larger than the results previously reported in the literature; see Tab. 3. Given the difficult nature of precisely measuring the modulus defect, this agreement is satisfactory. Furthermore, it can be concluded that the mechanics are not dominated by clamping loss, because clamping loss would lead to an amplitude-dependent modulus defect [30], which was not detected.

Verifying the boxcar eraser

Here, the effectiveness of the boxcar eraser is tested. The experiment was conducted in the same way as before: the balance was held for 32 minutes each at the z positions ±2 mm, ±5 mm, and ±10 mm. After this time, the balance was controlled to the negative value of the original setpoint and left there for 8 minutes. After that, it was controlled to 0, and the force was measured.

The results are shown in Fig. 22. To allow for an easy comparison to the traces without erasing, the original fits (black lines in Fig. 21) are shown here as dashed lines in corresponding colors. After the boxcar eraser, the amplitudes are much smaller, and the integral between the curve and the F = 0 line is much smaller. The new amplitudes, F_B, are given in Tab. 4. One can see that, on average, the relaxation amplitude is a bit more than a quarter of the original amplitude. Note the minus sign in the table: after erasing the flexure, the sign of the anelastic relaxation changed, and the residual forces were opposite to what they would have been without the erasing procedure.

Figure 22. The balance is stressed in an equivalent manner as in Fig. 21. This time, after holding the original excursion for 32 minutes, the balance is controlled to the negative excursion and held there for 8 minutes; see the top graph in the inset. The colored data points show the measured force on the balance for three hours after erasing. The black lines are fits to Eq. (32). The dashed colored lines are the fits to the data without the erasing procedure, i.e., the black lines in Fig. 21.

Irreversible effects

Analyzing the results of the measurements discussed in the preceding section, we noticed that, especially after large displacements, the flexure did not come back to the same zero position, even after a long time. This effect is known and is another reason why large excursions of flexures are generally avoided. It is as if the spring had been reset after a large excursion. In the Kibble balance, such a reset of the spring after velocity mode is generally not a problem, as long as the spring stays consistent between the subsequent mass-on and mass-off measurements. However, the mass manipulation for the mass-on and mass-off measurements also induces disturbances to the balance, and it is important to know whether these disturbances are large enough to cause a systematic bias.
At this point, we do not know the precise mechanism that causes the irreversible effects. Many causes are discussed in the literature, such as dislocation pinning [45], work hardening, clamp effects, lattice-internal stick-slip effects, and more.

Despite not knowing the physical mechanism, we can measure the irreversible effect and discuss how much it affects the Kibble experiment. Fig. 23 shows the results of three such measurements. For each measurement, the balance was held at a given excursion for 30 s and then controlled back to zero, where the force required to hold the balance at zero was recorded. The drift-corrected force as a function of excursion is shown in Fig. 23. Three data sets were taken: two with larger excursions, from 1 mm to more than 10 mm, and one set with excursions from 0.1 mm to 1 mm. Besides a small discontinuity between the set with smaller excursions and the two with larger excursions, probably caused by the drift subtraction, the data follow a power law, F_irrev ∝ z_a^β, with an exponent of β = 1.73. For a 50 µm excursion, the irreversible effect contributes a force of 2 nN ± 1.5 nN. The figure also shows, for comparison, the effect caused by the modulus defect, which is proportional to the excursion; see the dotted red line. For small displacements, the anelastic effect immediately after bending, i.e., at t* = 0, is much larger than the irreversible effect. A 50 µm excursion causes an anelastic aftereffect of 0.2 µN, about 100 times larger than the irreversible effect. Fortunately, as has been discussed in this article, the anelastic effect decays away and can be mitigated with an erasing procedure.
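The power-law characterization can be reproduced with a simple log-log regression; the synthetic points below mimic the excursion range of Fig. 23 and are not the measured data.

```python
import numpy as np

# Synthetic excursions (m) and irreversible forces (N), illustrative only
z = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2])
F = 2e-9 * (z / 50e-6) ** 1.73 * np.random.lognormal(0.0, 0.1, z.size)

# ln F = beta * ln z + ln a, so the slope of the log-log fit is the exponent
beta, log_a = np.polyfit(np.log(z), np.log(F), 1)
print(f"beta = {beta:.2f}")                          # ~1.73
print(f"F(50 um) = {np.exp(log_a) * (50e-6)**beta:.2e} N")
```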
Conclusion

The purpose of this article is to explore the use and limitations of flexure-based Kibble balances, with the hope of breaking ground for the widespread usage of flexures in the Kibble balance, similar to what the seminal paper [2] did for conventional balances. To achieve this goal, we have reviewed the basic considerations necessary to understand the effect of anelastic relaxation on the Kibble balance measurement. With the Laplace transformation, it is possible to calculate the aftereffect of a sinusoidal excitation on the stress produced by the flexure. To mitigate the anelastic aftereffect on the Kibble balance measurement, an erasing procedure can be employed after velocity mode. The simplest erasing procedure is the boxcar eraser: holding the flexure at a specific deflection for a given time. A numerical calculation of the boxcar eraser was performed, and pairs of optimal deflection amplitude and hold time were found.

The theoretical framework has been supported by measurements on a simple beam balance with a total of three flexures: one main and two end flexures. After straining the balance, the stress followed the relaxation predicted by theory. The application of a simple boxcar eraser reduces the relaxation stress significantly and consistently with the theory. After erasing, the largest excursion left an initial stress amplitude in the balance of approximately 2.5 µN. This amplitude can be suppressed by a filter in the data analysis, such as the one discussed in [46], which can reduce the effect of drift, for realistic measurement times, to 2.4 × 10−5 times the initial amplitude. Combining these two numbers leaves a systematic bias of 0.36 nN for the Kibble balance experiment. This number is much smaller than the 2 nN that were initially budgeted for the anelastic uncertainty of the Quantum Electro-Mechanical Metrology Suite, the new Kibble balance under construction at NIST.

From the measurements, a modulus defect of the hardened copper-beryllium alloy C17200 with TF00 temper can be calculated. We find it to be η = 1.2 × 10−4 ± 3 × 10−5, slightly larger than the values previously reported in the literature.

While investigating the anelastic relaxation, irreversible effects, most prominent after large excursions, were found. After a large excursion, the force required to servo the balance to the null position changed. Such effects are not unusual and were also seen previously in a Kibble balance with a knife edge, NIST-3 [6], in agreement with a statement made in [24]. At this point, we have not found a convincing explanation for the irreversible effect. A systematic investigation showed that the force change has a power-law dependence on the excursion, with an exponent of 1.73.

The irreversible effect after excursions in velocity mode will be common to all weighings and should not cause a systematic bias. A possible systematic bias can, however, occur through the irreversible effect caused by balance excursions during mass exchanges. Since the excursions for the mass-on and mass-off measurements have different signs, a systematic bias will remain. However, the excursion during mass exchange should be small for a well-tuned Kibble balance, and the irreversible effect should then be at the single-digit nanonewton level, as shown in Fig. 23. Thus, it is not deemed to be a limit on accuracy for the present performance requirements of the QEMMS.

Lastly, while thinking about these flexures and the physics that describes them, we had the idea of a magic flexure: a flexure geometry that promises to have very little anelastic relaxation. Unfortunately, at this point, we do not have the resources to investigate the idea experimentally. A preliminary numerical investigation, however, implies that it is possible to build flexures with very low loss. The key idea is to shape the cross-section of the flexure such that the region of maximal bending moves to a thinner part as the elastic modulus increases, compensating for the increase in the elastic modulus.

Flexures are amazing elements for precision mechanics, and it is time for their widespread adoption in the main and auxiliary pivots of Kibble balances. We firmly believe flexures are the best choice for pivots in high-precision mechanical experiments.
Figure 2. Development of the stress (blue dashed line) for two strain stimuli. The top panel is for a static deflection that was held for a time τs. The bottom panel is for a sinusoidal stimulus: over the same time, one period with identical strain amplitude was applied to the flexure. The region of interest is t* > 0, which is shown in the insets. Note that the vertical axes of the insets have the same extent, showing that |σa2| < |σa1|/2.

Figure 3. The attenuation factor, ξ, shows the ratio of the stress amplitude after a sinusoidal motion versus a constant excursion of the same amplitude. The value of ξ is plotted for different ratios of signal duration, τs, to relaxation time, τ1. The integer n gives how many complete periods are in the signal duration.

Figure 5. Development of the stress (blue dashed line) for the same two stimuli shown in Fig. 2. This time, an erasing pulse of duration τr follows immediately after the stimulus. Here, the erasing pulse lasts a quarter of the original stimulus, τr = τs/4. A zoomed plot of the strain for t* > 0 is not necessary because the stress produced by the flexure is then zero.

Figure 7. Bode plot of the anelastic part of the response function, given by ρ := (σ̆/ε̆ − E0)/E1. The solid line gives the response function for a generalized Maxwell model with time constants distributed between τl = 10 s and τu = 5000 s; see Eq. (4). The dash-dotted line shows the response function for the Maxwell model with a time constant that corresponds to the geometric mean of the limits of the distribution, i.e., τ1 = √(τl τu). The inset in the top panel gives the Nyquist plot of the two responses. The response of the generalized Maxwell model is vertically compressed compared to that of the Maxwell model. The lower plot shows the imaginary part of the response normalized by E1.

Figure 8. Stress given by Eq. (32) for the generalized Maxwell model after applying the boxcar eraser. Here, τr = τs/4 and τs = 1920 s. The lines of different colors show different erasing amplitudes, ranging from 0 to −εa/2. The main plot shows the data on a logarithmic time scale. The inset shows the same data on a linear time scale.

Figure 9. The figure of merit defined by Eq. (33) as a color-coded plot versus εr3/εa on the horizontal axis and τr/τs on the vertical axis. The plot on the right gives the figure of merit as a function of τr/τs for εr3/εa = −0.25. The plot on the top shows the figure of merit as a function of εr3/εa for τr = τs/4. The square and circular markers denote the minima along these sectional lines. The global minimum occurs at τr = 0.95 τs and εr3 = 0. It is marked with a black diamond.

Figure 11. A diagram of an equal-arm balance beam suspended by a main flexure, with the local gravitational acceleration, g, pointing in the negative z direction. All flexures are shown in red. To simplify the model, the end flexures are assumed to deflect by φ, which is a good assumption for thin flexures.

Figure 12. Parameters of a flexure with a load suspended in tension at the free end. The local gravitational acceleration, g, points in the negative z direction. The symbols with subscript 0 are values at the flexure end (s = L).
Figure 13. Contour of a general elliptical flexure. The dimensions in the figure are chosen to match our main flexure. Note the different scales on the horizontal and vertical axes. The parameter s is the distance along the neutral fiber axis of the flexure. The flexure is upside-down symmetric, and the parameter h(s) is the thickness of the flexure. The top and bottom surfaces are given on the vertical axis. The two ellipses defining the flexure are shown with dotted lines. The minor and major axes of the upper ellipse are indicated by two dash-dotted lines. A circular contour, employed for our end flexures, is a special case in which the minor and major axes are equal.

Figure 14. Top plot: results of Eq. (40), using the numerical solutions of the coupled differential equations in Eq. (37) as input. The points are calculated as a function of m, where F = mg. Bottom plot: by slightly varying the elastic modulus in the calculation shown in the top panel, the modulus sensitivity, S_mod, can be obtained. It is plotted as a function of load. The calculation was performed for the main flexure (circles) and the end flexure (squares).

Figure 15. Bending of a flat flexure under a torque of 1 mN m. The dimensions of the flexure were arbitrarily chosen to be L = 6 mm, h = 50 µm, and b = 20 mm. The analytical solution to the differential equation for flexures of constant cross-section can be found, e.g., in [37]. To emphasize the region where the flexure bends, the curves are normalized to the values x(L), which are different for the four cases. They are, in order of the legend, 0.17 µm, 0.09 µm, 0.19 µm, and 0.10 µm.

Figure 16. Normalized deflection (upper graph) and normalized normal stress (lower graph) of the main flexure, numerically calculated from the differential equation system in Eq. (39) under variation of the suspended load, F_w,0 = mg. The angle θ0 was imposed as a boundary condition at the free end, and the calculation was performed with E0 = 128 GPa.

Figure 17. Normalized deflection (upper graph) and normalized normal stress (lower graph) of the main flexure, obtained from our Python code by changing the elastic modulus E0. As before, the angle θ0 was imposed at the free end as a boundary condition. A load with m = 15 kg is suspended from the flexure.

Figure 18. The spring constant, κ, and the modulus sensitivity, S_mod, as a function of load for the main flexure, for a larger range of loads than in Fig. 14. The orange circles are obtained by numerically solving the differential equation for the given flexure shape. The green triangles were obtained with commercial finite element analysis software. Besides small numerical inaccuracies, the general trends agree, and, more importantly, both methods find S_mod = 0 mm³/rad for m ≈ 56 kg. At this load in tension, the flexure has a safety factor of 3.5 to yield, and even with a bending angle of θ0 = 7°, a safety factor of ≈ 2 is present.

Figure 19. Photograph of one end flexure through a window of the evacuated vacuum chamber. The tapered piece of aluminum to the left is the structural part of the beam (mass m_b) connected to the main flexure, and the threaded rod connects to a dummy mass (mass m_p). The total suspended load is m_b + 2m_p ≈ 15 kg. The top right shows a render of a monolithic end flexure with two perpendicular rotation axes, and the bottom right shows conceptually that the section of the beam balance (see Figure 11) framed by the red box is shown in the photograph.
Figure 20. Block diagram of the signals in the experiment. The host PC provides the control setpoint, z_SP, to a proportional-integral-derivative (PID) controller on a field-programmable gate array (FPGA), which compares it with the actual position reading from a quadrature-encoded (AquadB) interferometer signal. The output of the controller is a voltage, u_PID, connected to a current source board, which directs a current, i_C, through the coil and a calibrated resistor. A digital voltmeter (V) measures the current as a voltage drop over the known resistor, and the coil generates a known force, F_C, in an electromagnetic setup with a permanent magnet rigidly fixed to the experiment's frame. This force acts upon the mechanism and can keep the mechanism in a feedback-controlled null state. The mechanism displacement, z_M, from the ideal position, z_SP, is monitored.

Figure 21. The anelastic relaxation for six different excursion amplitudes measured with the simple beam balance. The measurement procedure is shown in the inset. The top graph of the inset shows the balance excursions in mm. Each excursion is held for 32 minutes. After the balance has been servoed to zero, the force required to hold the balance at zero is recorded for 3 hours, indicated by the shaded regions. The corresponding data points are plotted as the colored data points in the main plot. The black lines are fits to Eq. (45) using F_A as a prefactor.

Figure 23. Points with error bars denote the irreversible effect measured after an excursion of the balance given by the horizontal axis in µm. The different colors and symbols of the points denote three different data sets. Each excursion was held for 30 s. The solid black line is a power-law fit to the data, and the shaded area gives the 1-σ uncertainty band of the fit line. For comparison, the red dotted line gives the magnitude of the anelastic aftereffect at t* = 0, which for all excursions shown here is much larger than the irreversible effect.

Table 2. The characteristic stiffness and sensitivity to the elastic modulus of the main flexure and the end flexures for the loads given in the last column. The main flexure contributes once and the end flexures twice to the total.

Table 3. Published values of the modulus defect, ΔE/E0, for CuBe flexures, sorted by year of publication.

Table 4. Amplitude of the anelastic relaxation. The original excursion is given in the first column. The amplitude of the anelastic relaxation is labeled F_A. The column with the header η gives the modulus defect; it is obtained using Eq. (46). Applying the boxcar eraser reduces the amplitude to F_B. The ratio of the reduction is given in the last column.
Screening Life Cycle Assessment of Tall Oil-Based Polyols Suitable for Rigid Polyurethane Foams: A screening Life Cycle Assessment (LCA) of tall oil-based bio-polyols suitable for rigid polyurethane (PU) foams has been carried out. The goal was to identify the hot-spots and data gaps. The system under investigation is three different tall oil fatty acid (TOFA)-based bio-polyol syntheses with a cradle-to-gate approach, from the production of raw materials to the synthesis of TOFA-based bio-polyols at a pilot-scale reactor. The synthesis steps that give the most significant environmental footprint hot-spots were identified. The results showed that the bio-based feedstock was the main environmental hot-spot in the bio-polyol production process. Future research directions have been highlighted.

Introduction
Most industrial polymers are presently produced from fossil resources, which are non-renewable because they cannot be replenished at a rate comparable to the exploitation rate [1]. Moreover, the use of fossil resources has put the polymer industry under pressure due to environmental and sustainability issues [2][3][4]. Environmental considerations have been one of the main driving forces stimulating the development of bio-based polymers [5,6]. Bio-based polymers represent a wide and highly diverse group of products, where the environmental profile is highly dependent on the feedstock used and is thus case-specific [3,6]. This stresses the necessity of carrying out a case-specific Life Cycle Assessment (LCA) to determine the environmental profile of a bio-based polymer. LCA is the leading approach to assess the environmental performance of bio-based products and materials [2,7]. According to the International Organization for Standardization (ISO) 14040 series, LCA is a standardized technique for assessing the potential environmental aspects associated with a product or service by compiling an inventory of relevant inputs and outputs, evaluating the potential environmental impacts associated with those inputs and outputs, and interpreting the results of the inventory and impact phases in relation to the objectives of the study [8,9]. It has become one of the main methods to inform developers, policymakers and the public about the potential environmental impacts of developed products and technologies. Moreover, it is important to evaluate the possible environmental impact of novel bio-based products, production technologies and approaches while the developed process is still at a low technology-readiness level (TRL). Most of the environmental issues of a product or process are determined at their design and early technology development phase [10]. Different pathways are present to synthesize polymers from bio-based feedstock, one of them being polymerization of bio-based monomers [11]. Polyurethane (PU) polymers are produced using this pathway. PU polymers present a broad spectrum of materials that are produced to meet the needs of various applications, from the automotive industry, building and construction, to appliances. The present study builds on previously published work by our group:
• TOFA epoxidation using ion exchange resin [38];
• chemical synthesis of TOFA-based bio-polyols, their characterization and chemical structure, and the development of rigid PU foam thermal insulation from the said bio-polyols and their characterization (published by our group's researcher, Kirpluks et al.) [37].
This study aims to perform a screening LCA of the cradle-to-gate environmental impact of three different TOFA-based bio-polyols that have been demonstrated as suitable for the development of rigid PU thermal insulation based solely on tall oil polyols. The main goal can be subdivided into two subgoals: (1) to identify which synthesis steps give the most significant environmental footprint hot-spots, and (2) to identify major gaps in the available inventory data.

Materials and Methods
The study generally follows the provisions set out in ISO 14040 and ISO 14044 [8,9]. The analysis was performed using SimaPro 9.0 software by Pré Consultants; Ecoinvent v3.5 was used for background processes.

Description of the Production Processes
TOFA-based bio-polyol synthesis is a two-step synthesis. The first step is epoxidation of the double bonds (C=C) present in the tall oil using peracetic acid, which is formed in situ from acetic acid (CH3COOH) and hydrogen peroxide (H2O2) in the presence of an ion exchange resin catalyst; this is followed by an epoxy ring-opening reaction and subsequent esterification of the epoxidized TOFA (ETOFA). Both reactions are scaled up and carried out in a pilot-scale reactor with a volume of 50 L. A detailed description of the bio-polyol synthesis and characterization, and of the rigid PU foam thermal insulation developed solely from these bio-polyols, can be found in our previous papers by Abolins et al. [38] and Kirpluks et al. [37].

Step 1: TOFA Epoxidation
For the epoxidation reaction, the molar ratio C=C/CH3COOH/H2O2 was 1/0.5/1.5, and the catalyst Amberlite IR-120 H was used at 20% of the TOFA mass. The calculated amounts of acetic acid and of the ion exchange resin Amberlite IR-120 H, as the catalyst, were added to the TOFA and heated to 40 °C. Then, the calculated amount of H2O2 solution was added to the reactants dropwise over 30 min, maintaining the temperature at no more than 60 ± 2 °C. After the addition was completed, the reaction mixture was stirred at 1000 rpm, and the stirring was continued at a temperature not higher than 60 ± 2 °C. The epoxidation was considered complete when the epoxy group content reached a plateau. The unpurified epoxidized TOFA was washed with warm distilled water to remove acetic acid and H2O2 residue. The ion exchange resin was removed from the reactor. Afterwards, the purified and still moist epoxidized TOFA was concentrated by evaporation at 60 °C under vacuum down to 15 mbar to remove excess moisture. As a result, ETOFA was obtained as a precursor for polyol synthesis.

Step 2: Bio-Polyol Synthesis
From the previously synthesized ETOFA, three different polyfunctional alcohols, namely trimethylolpropane (TMP), triethanolamine (TEOA) and diethylene glycol (DEG), were used for polyol synthesis. The following acronyms were used for each polyol: ETOFA/TMP, ETOFA/TEOA and ETOFA/DEG. Polyol synthesis was performed using an epoxy ring-opening reaction, which was carried out at 120 °C for 2 h, and a subsequent esterification reaction until the acid value of the mixture decreased below 10 mg KOH/g. After the oxirane ring-opening reaction, temperatures were raised to 180 °C, 200 °C, and 200 °C in the case of ETOFA/TEOA, ETOFA/TMP and ETOFA/DEG, respectively. The molar ratios between reagents in the polyol synthesis were 1:1; the catalyst (LiOCl4) was added at 0.5% of the ETOFA mass. The given polyol synthesis process does not require additional purification and/or filtration steps.
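To make the charge calculation for Step 1 concrete, the sketch below turns the stated molar ratio (C=C/CH3COOH/H2O2 = 1/0.5/1.5) and the 20% catalyst dosing into reagent masses. It is a minimal illustration, not the authors' procedure: the double-bond content of TOFA and the strength of the H2O2 solution are assumed placeholder values that a real calculation would take from the iodine value and the reagent specification.

```python
def epoxidation_charge(m_tofa_kg, cc_mol_per_kg=3.5, h2o2_wt_frac=0.35):
    """Reagent amounts for TOFA epoxidation at the molar ratio
    C=C : CH3COOH : H2O2 = 1 : 0.5 : 1.5, with Amberlite IR-120 H
    dosed at 20% of the TOFA mass.

    cc_mol_per_kg : assumed double-bond content of TOFA (mol C=C / kg)
    h2o2_wt_frac  : assumed strength of the hydrogen peroxide solution
    """
    M_ACOH, M_H2O2 = 60.05e-3, 34.01e-3          # molar masses (kg/mol)
    n_cc = m_tofa_kg * cc_mol_per_kg             # mol of C=C to epoxidize
    m_acoh = 0.5 * n_cc * M_ACOH                 # kg of glacial acetic acid
    m_h2o2 = 1.5 * n_cc * M_H2O2 / h2o2_wt_frac  # kg of H2O2 solution
    m_cat = 0.20 * m_tofa_kg                     # kg of ion exchange resin
    return {"acetic_acid_kg": m_acoh, "h2o2_solution_kg": m_h2o2,
            "catalyst_kg": m_cat}

print(epoxidation_charge(10.0))  # charge for a hypothetical 10 kg TOFA batch
```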
Characterization of the Developed TOFA-Based Bio-Polyols
The main characteristics of the synthesized TOFA-based bio-polyols, such as hydroxyl value, viscosity, acid value, moisture content, density, number-average functionality and average molar mass, are summarized in Table 1. The ETOFA/DEG bio-polyol showed the lowest viscosity (Table 1) due to its low OH group functionality, which reduced the amount of hydrogen bond formation between polyol moieties. The low hydroxyl value of the ETOFA/DEG polyol is still sufficient for it to be used in rigid PU foam formulation development. The other two bio-polyols, ETOFA/TMP and ETOFA/TEOA, have much higher OH values, 390 ± 15 mg KOH/g and 500 ± 15 mg KOH/g, respectively. The viscosity of the ETOFA/TMP and ETOFA/TEOA bio-polyols is relatively high, which can be explained by the hydrogen bonding and high functionality. The relatively high viscosity might be a challenge at the industrial scale; thus, the developed bio-polyols would be more suitable as crosslinking reagents in rigid and soft PU formulations. Moreover, the ETOFA/TEOA bio-polyol also exhibits autocatalytic properties due to the tertiary amine group in its chemical structure [37]. This would allow lowering or eliminating the use of amine-based catalysts in PU formulations.

Goal and Scope Definition
The purpose of this work is to perform a screening of the environmental performance of different TOFA-based bio-polyols. The performed LCA is based on a cradle-to-gate approach, from the production of raw materials to the synthesis of TOFA-based bio-polyols at a pilot-scale reactor. The assessment is based on experimental data from the following synthesis methods developed at the Latvian State Institute of Wood Chemistry:
• TOFA epoxidation using ion exchange resin [38];
• production of three different TOFA-based bio-polyols [37].
The system boundary of TOFA-based bio-polyol production is depicted in Figure 1. The use and end-of-life phases are not included in the system boundaries, as they would remain the same as in the case of PU based solely on petrochemical polyols. In this case, the introduction of a bio-based building block for PU development does not affect the end-of-life phase. In this study, the functional unit (FU) selected was 1 kg of TOFA-based bio-polyol, capable of being used to make rigid PU thermal insulation foams with properties comparable to conventional PU foams. The presented bio-polyol production is a two-step process in which the first step is oil epoxidation using ion exchange resin, followed by ring-opening with a polyfunctional alcohol.

Life Cycle Inventory, Limitations and Assumptions
The Life Cycle Inventory (LCI) data for the foreground system (i.e., raw materials, water, chemicals, energy consumption) was based on primary data measured in the laboratory during the two-step bio-polyol synthesis at a pilot-scale reactor, as described in Section 2.1. In this case, the different synthesis steps (Figure 1) carried out during polyol synthesis were aimed only at the production of TOFA-based bio-polyols and no co-products were obtained; thus, no allocation procedure was required. Background processes used in the LCA model are based on the Ecoinvent v3.5 database. The background system considers the production of raw materials (TOFA and chemicals) and the energy used in the different production stages, as well as wastewater treatment (see Figure 1). To determine the energy consumption of the synthesis, a Hobo Onset U12-006 data logger with CTV-A sensors was used.
The detailed description of the installation and calculations is given in our previous paper by Fridrihsone et al. [39]. For electricity use, the Latvian electricity grid was chosen from the Ecoinvent v3.5 database. The standard ISO 14040 (2006) requires that all of a study's limitations be transparently defined and discussed, and clearly and adequately identified in accordance with the study's aim and scope [8]. One of the main limitations of LCA is incomplete or unreliable process-specific data. The assumptions that were made to address the limitations are presented below:
• The production of the catalyst LiOCl4 was not available in Ecoinvent v3.5; NaOCl4 was used as a proxy.
• The production of polyfunctional TMP is not available in Ecoinvent v3.5. The production of TMP is assumed to be similar to the production of other polyols [40]; pentaerythritol was used as a proxy.
• The production of the ion exchange resin Amberlite IR-120 H was not available in Ecoinvent v3.5; a cation exchange resin dataset was used as a proxy.
• The potential environmental impacts of the chemicals' packaging materials are not included and are assumed to be negligible.
• The transportation of raw materials to the production site is not considered. These operations are deemed marginal, indicating that transport aspects of the chemical industry are minor to the LCA analysis as a whole [41].
The LCI data for TOFA-based bio-polyol production at a pilot-scale reactor is given in Table 2.

Life Cycle Impact Assessment
Life Cycle Impact Assessment (LCIA) is a mandatory LCA step in which the collected LCI in- and out-flows are translated into potential environmental impacts throughout the product or process life cycle [6]. In this research, two single-issue LCIA methods were used to convert input and output data into environmental impact categories. The cumulative energy demand (CED) was applied to investigate the non-renewable energy use involved in the production of the bio-polyols; the LCIA was carried out using Cumulative Energy Demand V1.11 (CED). The IPCC 2013 GWP 100a method, which is based on data released by the Intergovernmental Panel on Climate Change (IPCC), was also chosen. This method expresses the emissions of greenhouse gases generated, in kilograms of CO2 equivalent, over a time horizon of 100 years. Furthermore, the ReCiPe 2016 Endpoint (H) V1.03/World (2010) H/A method was used at the endpoint level to screen the most impacted endpoint categories.

Results and Discussion
The interest in bio-based materials is growing mainly due to concerns over greenhouse gas (GHG) emissions and non-renewable energy use in the industrial sectors. GHG emissions and non-renewable energy use are important parameters to characterize the performance of a bio-based product in comparison to its petrochemical counterpart; however, the potential environmental impacts are diverse [41][42][43].

CED as a Screening Impact Indicator
CED can be used as a good proxy indicator for environmental performance while having the lowest data requirements [44]. The CED of a product or process represents the direct and indirect energy use in units of MJ throughout the life cycle [45]. CED takes into account primary energy use, both renewable and non-renewable, and energy flows intended for both energy and material purposes [46]. This is an important aspect as, in general, the LCI phase of an LCA is very laborious and time consuming [6,47].
Moreover, there might be challenges with data availability. Figure 2 presents the CED results for the developed TOFA-based bio-polyols. The lowest CED was for the ETOFA/TEOA polyol with 141 MJ/kg polyol, followed by the ETOFA/TMP polyol with 148 MJ/kg and lastly the ETOFA/DEG polyol with 158 MJ/kg. The resulting CED for all three TOFA-based bio-polyols is similar, as are the synthesis processes. The slight differences in CED are due to the synthesis temperature of the second synthesis step: the ETOFA/TEOA bio-polyol is produced at the lowest temperature of 180 °C, while the other two bio-polyols are synthesized at 200-205 °C in conjunction with a longer synthesis time. In addition, this difference is attributed to the amount of ETOFA in each bio-polyol, as ETOFA is the product of the first synthesis step. In Table 1, the TOFA content in the developed bio-polyols is reported. The use of fossil fuel energy resources for energy production is largely responsible for the depletion of fossil resources and global warming [48,49]. The total CED is composed of non-renewable CED (NRCED) (fossil, nuclear, non-renewable biomass) and renewable CED (biomass, wind, solar, geothermal, water). The NRCED comprised 61% of the total CED for the ETOFA/TEOA bio-polyol and 59% for ETOFA/TMP and ETOFA/DEG. Fossil energy contributed 91% of the total NRCED. The high percentage of NRCED is due to TOFA: crude tall oil distillation is an energy-intensive process in which non-renewable energy resources are used. The renewable CED is ~40% for the bio-polyols; the renewable biomass category is its main contributor. As mentioned, TOFA is a product of crude tall oil distillation. Crude tall oil is a co-product of the kraft pulping process, in which pulp is produced from renewable softwood pine trees [36]. The main contributors to the total CED of the TOFA-based bio-polyols are as follows:
• ETOFA/TEOA: 67.9% is attributed to ETOFA, 24.2% to TEOA, 7.6% to electricity and 0.2% to the catalyst;
• ETOFA/TMP: 68.7% is attributed to ETOFA, 16.8% to the TMP proxy, 14.5% to electricity and 0.03% to the catalyst;
• ETOFA/DEG: 69.7% is attributed to ETOFA, 15.9% to DEG, 23.6% to electricity and 0.2% to the catalyst.
ETOFA is the outcome of the first synthesis step, TOFA epoxidation with ion exchange resin. In the LCI phase of the present study, the Ecoinvent v3.5 dataset was used for TOFA modelling; it is a global dataset representing 100% of the US stand-alone tall oil distillation capacity in 2011, which includes five sites, and more than 90% of the European capacity [50]. The specific polyfunctional alcohols were chosen to obtain the desired properties of the synthesized products. For the ETOFA/TMP bio-polyol, the production of TMP was not available in Ecoinvent v3.5; the production of TMP is assumed to be similar to the production of other polyfunctional alcohols [40]. Depending on the bio-polyol and its synthesis pathway, the contribution of electricity is from 8% to 24%. The catalyst has a negligible influence on the bio-polyol synthesis impacts, as its contribution is below 0.5%. However, it must be noted that a proxy rather than a full Ecoinvent dataset was used to model the catalyst; thus, its contribution might be underestimated. In the present study, the aim was to identify the hot-spots of the bio-polyol production process, and clearly ETOFA production is the largest hot-spot; thus, in future studies, a separate LCA should be carried out analyzing the hot-spots of the TOFA epoxidation process.
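The contribution lists above can be turned into absolute energy demands per kilogram of polyol with a few lines of code. The sketch below simply multiplies the reported totals by the reported shares; any rounding inconsistencies in the quoted percentages are inherited from the text.

```python
# Total CED (MJ per kg polyol) and contribution shares quoted in the text.
ced_total = {"ETOFA/TEOA": 141.0, "ETOFA/TMP": 148.0, "ETOFA/DEG": 158.0}
shares = {
    "ETOFA/TEOA": {"ETOFA": 0.679, "alcohol": 0.242, "electricity": 0.076, "catalyst": 0.002},
    "ETOFA/TMP":  {"ETOFA": 0.687, "alcohol": 0.168, "electricity": 0.145, "catalyst": 0.0003},
    "ETOFA/DEG":  {"ETOFA": 0.697, "alcohol": 0.159, "electricity": 0.236, "catalyst": 0.002},
}

for polyol, total in ced_total.items():
    # Absolute contribution of each input, in MJ per kg of polyol.
    parts = ", ".join(f"{k}: {total * v:.1f} MJ" for k, v in shares[polyol].items())
    print(f"{polyol} ({total:.0f} MJ/kg) -> {parts}")
```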
There are several ways to perform epoxidation, from the conventional Prileshajev epoxidation process to more environmentally sound approaches, such as ion exchange resin (as in the present study) and chemo-enzymatic epoxidation. It would be valuable to perform a comparative LCA of different TOFA epoxidation pathways to assess their environmental performance. Moreover, it would be valuable to do so in conjunction with a techno-economic feasibility study: for a new synthesis method to be applied in industrial production, it has to offer several advantages in comparison to the conventional one, such as better performance, environmental benefits, and a cheaper or equivalent price.

Life-Cycle Impact Assessment: IPCC 2013 GWP 100a
Figure 4 summarizes the distribution of GHG emissions associated with the production of the TOFA-based bio-polyols. The total amount of emissions for the ETOFA/TEOA bio-polyol is 4.09 kg CO2 eq/kg bio-polyol, the lowest, while for the ETOFA/TMP and ETOFA/DEG bio-polyols the total emissions are 4.48 and 4.77 kg CO2 eq/kg bio-polyol, respectively. While the emissions associated with ETOFA production are 55%-57% in all cases, the amount attributed to the polyfunctional alcohols varies notably. The polyfunctional alcohol used for ETOFA/DEG bio-polyol synthesis contributed 17% to the total GHG emissions; for ETOFA/TMP the number was slightly higher at 21%; and TEOA production for ETOFA/TEOA bio-polyol synthesis contributed 31% of the total GHG emissions of the said bio-polyol. The electricity contribution varies depending on the bio-polyol synthesis pathway: for ETOFA/TEOA the contribution is 13%, while for the ETOFA/TMP and ETOFA/DEG bio-polyols the amount attributed is higher, at 23% and 26%, respectively. The GHG emissions associated with the catalyst are below 0.005%. The LCI was based on bio-polyol production in a 50 L pilot reactor, which resulted in high-quality primary data for the bio-polyol synthesis inputs and outputs. It must be kept in mind that large-scale production of the bio-polyols would yield different results owing to differences in the production setup. In the present study, TOFA produced in Europe was used for polyol synthesis; thus, in an "ideal" LCA of the developed TOFA-based products, the TOFA production would be modelled from data provided by the specific tall oil refinery. This would give a better representation. For example, Cashman et al. reported that the non-renewable energy demand for cradle-to-gate crude tall oil distillation products is higher in the United States than in Europe, as are the GHG emissions [36].

Life-Cycle Impact Assessment: ReCiPe 2016 Endpoint (H) V1.03/World (2010) H/A
The environmental impacts at the ReCiPe endpoint level are aggregated into three types of damage: human health, ecosystems and resources. The aggregated environmental effect, expressed in normalized and weighted mPt, is the ReCiPe score. The ReCiPe results at the endpoint level reveal that, for all three TOFA-based bio-polyols, the largest environmental impact falls in the human health category with 86%, followed by ecosystems with a 12% impact. The resources category formed only 2% of the total ReCiPe score. The largest contributor to the human health category is ETOFA synthesis, with 60.4% for the ETOFA/DEG bio-polyol, 61.5% for the ETOFA/TEOA bio-polyol and 58.7% for the ETOFA/TMP bio-polyol. Further research on the impacts of ETOFA synthesis should be carried out.
For the ecosystems category, the largest contributor was ETOFA, with a contribution of around 80% for all polyols. Depending on the bio-polyol type, the contribution of the polyfunctional alcohol used for epoxide ring-opening in the second synthesis step was 12.5% for the ETOFA/TEOA bio-polyol, 9.2% for ETOFA/TMP and 6.9% for the ETOFA/DEG bio-polyol. The electricity used for synthesis contributed 6.3%, 11.6% and 12.7%, respectively. For the resources endpoint category, half of the total impact came from ETOFA for all TOFA-based bio-polyols; however, the impact of the polyfunctional alcohol used differed significantly. TEOA contributed 41.7% of the resources category impact, while for the ETOFA/TMP bio-polyol the contribution of the polyfunctional alcohol was 27.5%, and DEG contributed 25.5% to the total score of the resources category. The contribution of electricity also differed, from 10.7% for the ETOFA/TEOA bio-polyol to 23.5% for the ETOFA/DEG bio-polyol. The ReCiPe endpoint results show that tall oil epoxidation with ion exchange resin is the main contributor to all endpoint impact categories. A separate study should be carried out in which the ETOFA process is analyzed from the LCA viewpoint.

Conclusions
A screening LCA of TOFA-based bio-polyols suitable for rigid PU foam development was performed in this study. The study had some limitations, as the data about bio-based feedstock production were obtained from the Ecoinvent database. However, bearing in mind these limitations, clear conclusions can be drawn:
• the bio-based feedstock was the main environmental hot-spot in the bio-polyol production process;
• the other large environmental hot-spots are the polyfunctional alcohols used for the ring-opening and the use of electricity for the bio-polyol synthesis;
• the impact of the catalyst is negligible;
• efforts to improve the environmental performance of the bio-polyols should focus on the production phase of the ETOFA.
Several future research directions have been identified. A separate LCA study should be performed on the epoxidation process of the TOFA to identify the environmental hot-spots of this bio-polyol synthesis stage. It would be valuable to compare different epoxidation processes to see which yields the lowest environmental impact. In future studies, it would also be very useful to perform the LCA of the bio-polyols with data from the specific distillery where the TOFA was sourced.
In vivo immune interactions of multipotent stromal cells underlie their long-lasting pain-relieving effect
Systemic infusion of bone marrow stromal cells (BMSCs), a major type of multipotent stromal cell, produces pain relief (antihyperalgesia) that lasts for months. However, studies have shown that the majority of BMSCs are trapped in the lungs immediately after intravenous infusion, and their survival time in the host is inconsistent with their lengthy antihyperalgesia. Here we show that the long-lasting antihyperalgesia produced by BMSCs required chemotactic signaling such as CCL4 and CCR2, interactions with the monocyte/macrophage population, and BMSC-induced monocyte CXCL1. The activation of central mu-opioid receptors related to CXCL1-CXCR2 signaling plays an important role in BMSC-produced antihyperalgesia. Our findings suggest that the maintenance of antihyperalgesia can be achieved by immune regulation without actual engraftment of BMSCs. For therapeutic uses of BMSCs other than structural repair and replacement, more attention should be directed to their role as immune modulators and the subsequent alterations in the immune system.

Animal models
All surgical procedures were performed under pentobarbital sodium (50 mg/kg i.p.) anesthesia. For the rat tendon ligation (TL) model, TL was achieved via an intraoral approach as described 5 . On the left intraoral site, a 5-mm-long incision was made posterior-anteriorly, lateral to the gingivobuccal margin in the buccal mucosa, beginning immediately next to the first molar. The tendon of the anterior superficial part of the rat masseter muscle was gently freed and tied with two chromic gut (4.0) ligatures, 2 mm apart. The chronic constriction injury of the infraorbital nerve (CCI-ION) mouse model was produced according to Wei et al. 54 . A 5-7-mm-long incision was made along the gingivobuccal margin in the buccal mucosa. The ION was freed from surrounding connective tissues. At 3-4 mm from where its branches emerge from the infraorbital fissure, the ION was loosely tied with two chromic gut (4.0) ligatures, 2 mm apart.

Behavioral testing
All behavioral tests were conducted under blind conditions. Mechanical sensitivity of the orofacial region was assessed as described elsewhere 5,45 . A series of calibrated von Frey filaments was applied to the skin above the injured tendon or the corresponding contralateral side. An active withdrawal of the head from the probing filament was defined as a response. Each von Frey filament was applied 5 times at intervals of 5-10 seconds. The response frequencies [(number of responses/number of stimuli) × 100%] to a range of von Frey filament forces were determined and a stimulus-response frequency (S-R) curve plotted. After a nonlinear regression analysis, an EF 50 value, defined as the effective von Frey filament force (g) that produces a 50% response frequency, was derived from the S-R curve (Prism, GraphPad) 5 . A leftward shift of the S-R curve, resulting in a reduction of EF 50 , occurred after TL. This shift of the curve indicates the development of mechanical hypersensitivity, i.e., the presence of mechanical hyperalgesia and allodynia, since there was an increase in response to suprathreshold stimuli and a decreased response threshold for nocifensive behavior.

BMSC procedures
BMSCs were obtained from donor rats as described 5 . The rats were sacrificed with CO 2 , and both ends of the tibiae, femurs and humeri were cut off with scissors.
A syringe fitted with an 18-gauge needle was inserted into the shaft of the bone and the bone marrow was flushed out with culture medium (alpha-modified Eagle medium, Gibco, Carlsbad, CA, USA; 10% fetal bovine serum, Hyclone, Logan, UT, USA). The bone marrow was then mechanically dissociated and the suspension passed through a 100-µm cell strainer to remove debris. The cells were incubated at 37°C in 5% CO 2 in tissue-culture flasks (100 × 200 mm) (Sarstedt, Nümbrecht, Germany), and non-adherent cells were removed by replacing the medium. When the cultures reached 80% confluence, the cells were washed with PBS and harvested. Cell numbers were determined with a hemacytometer. For intravenous administration, 1.5 × 10 6 cells (1.5 M) in 0.2 ml PBS were slowly injected into one tail vein of the anesthetized animal over a 2-minute period using a 22-gauge needle. The properties of the expanded cells were assessed by flow cytometry with conventional markers 5 . Cryopreserved human primary BMSCs (hBMSC) were purchased (KT-002, RoosterBio, Inc., Frederick, MD, USA). The hBMSC vial from liquid nitrogen was immediately thawed in a 37°C water bath. Cells were removed from the water bath once only a small bit of ice remained. Cells were aseptically transferred into a 50-ml centrifuge tube, and 4 ml of culture media were slowly added. Cells were centrifuged at 200 × g for 10 min and the supernatant was carefully removed without disturbing the pellet. The cells were then resuspended in 45 ml of culture media, mixed well and seeded into three T75 vessels. After reaching 90% confluence, the cells were passed into six 10-cm culture plates. The cells were collected after reaching more than 90% confluence, and 1.5 M cells were infused into the animals.

Western blot
For detecting the immunoreactivity with near-infrared fluorescence using the Odyssey Infrared Imaging System (Odyssey®CLx, LI-COR, Lincoln, NE), 50 µg protein samples were denatured by boiling for 5 min and loaded onto 4-20% Bis-Tris gels (Invitrogen). After electrophoresis, proteins were transferred to nitrocellulose membranes. The membranes were blocked for 1 h with Odyssey Blocking Buffer and then incubated with primary antibodies diluted in Odyssey Blocking Buffer at 4°C overnight, followed by washing with PBS containing 0.1% Tween 20 (PBST) three times. The membranes were then incubated for 1 h with IRDye800CW-conjugated goat anti-rabbit IgG and IRDye680-conjugated goat anti-mouse IgG secondary antibodies (LI-COR) diluted in Odyssey Blocking Buffer. The blots were further washed three times with PBST and rinsed with PBS. Proteins were visualized by scanning the membrane with the 700- and 800-nm channels. The loading and blotting of the amount of protein was verified by reprobing the membrane with anti-β-actin and with Coomassie blue staining.

Immunocytochemistry
Rat blood monocytes (2 × 10 6 cells/well) or rat BMSCs (5 × 10 5 cells/well) were cultured on a Nunc™ Lab-Tek™ 8-well Chamber Slide System (Thermo Fisher Scientific). One day after the seeding of cells, the culture medium was removed, and the cells were washed 3 times with PBS and fixed with 4% paraformaldehyde in PBS for 20 min at RT. Cells were washed 3 times with PBS, permeabilized with 0.1% Triton X-100 in PBS for 20 min at RT, and then blocked with 2% BSA in PBS for 30 min at RT. After washing 3 times with PBS, monocytes or BMSCs were incubated with the relevant primary antibodies overnight at 4°C. All primary antibodies were diluted with 2% BSA in PBS.
After washing three times with PBS, cells were incubated with the appropriate secondary antibodies IgG-Cy2 and/or IgG-Cy3 (1:500) for 2 h at RT. After washing, cultures were mounted using ProLong® Gold Antifade Reagent with DAPI (Cell Signaling Technology).

Immunohistochemistry
Rats were deeply anesthetized with pentobarbital sodium (100 mg/kg, i.p.) and perfused transcardially with 4% paraformaldehyde in 0.1 M phosphate buffer at pH 7.4. The same block of caudal brainstem tissue as that used for western blot was removed, post-fixed, and transferred to 25-30% sucrose (w/v) for cryoprotection. Free-floating tissue sections (30-μm thick) were incubated with the relevant antibodies with 1-3% relevant normal sera, and single- or double-labeling immunofluorescence was performed. Double-labeling immunofluorescence was performed with Cy2 and Cy3 (1:500, Jackson ImmunoResearch) or Alexa Fluor 488 (1:500, Invitrogen Molecular Probes) and Alexa Fluor 568 (1:500) after incubation with the respective primary antibodies. Biotin-SP donkey anti-rabbit IgG (1:600, Jackson ImmunoResearch) and streptavidin Alexa Fluor 568 conjugate (1:600, Invitrogen Molecular Probes) were also used in some experiments. Control sections were processed with the same method except that the primary antibodies were omitted or adsorbed by the respective antigens. Tyramide signal amplification (TSA) was used for double immunofluorescence of CXCR2 with MOR, NeuN, Iba-1 and GFAP.

ELISA
A rat CXCL1 Quantikine ELISA (R&D Systems) was performed on cerebrospinal fluid collected from medium- or PRI BMSC-treated TL rats according to the manufacturer's protocol.

RT 2 PCR Array
Total RNAs from primary and 20P BMSCs, as well as from peripheral blood monocytes derived from BMSC- or culture medium-treated rats, were characterized in triplicate using the Rat Inflammatory Cytokines & Receptors RT 2 Profiler TM PCR Array (PARN-011A, Qiagen) following the manufacturer's protocol. Total RNAs were extracted using the mirVana™ miRNA Isolation Kit (Applied Biosystems). One microgram of purified total RNA was used to prepare first-strand cDNA. The array was probed according to the manufacturer's instructions in the StepOnePlus TM System (Applied Biosystems). Gene expression data were analyzed with the web-based RT 2 Profiler PCR Array Data Analysis software, which performed ΔΔCt-based fold-change calculations from the uploaded threshold cycle data (http://pcrdataanalysis.sabiosciences.com/pcr/arrayanalysis.php?target=upload).

Proteome Profiler TM Array
A rat cytokine antibody array (ARY008, R&D Systems) was performed on serum isolated from naïve, medium- or PRI BMSC-treated TL rats according to the manufacturer's protocol. The serum (0.2 ml) was added to the array membranes coated with 29 specific cytokine antibodies. The membranes were incubated at 4°C overnight. Array images were collected and analyzed using the LI-COR Odyssey Infrared Imaging System. The relative protein levels were obtained by subtracting the background staining and normalizing to the positive controls on the same membrane. We paid special attention to the level of CXCL1, and the results for the other cytokines are not shown.

RNAi
Ccl4 shRNA (Accession …)

Peritoneal cell isolation
Twenty milliliters of minimum essential medium (MEM) alpha (Gibco) with 10% fetal calf serum was injected i.p. 30 minutes before isolation. Peritoneal cells were collected with a 3-ml syringe, plated on culture dishes, and kept at 37°C overnight.

Monocyte isolation
Heparinized rat blood was overlaid on Ficoll solution (Ficoll-Paque Plus, GE Life Sciences) and … (TreeStar, Inc., Ashland, OR).
The specific staining was measured from the cross point of the isotype control with the specific antibody graph.

Brainstem microinjections 5
Rats were anesthetized with 2-3% isoflurane in a gas mixture of 30% O 2 balanced with 70% nitrogen and placed in a Kopf stereotaxic instrument (Kopf Instruments, Tujunga, CA). A midline incision was made after infiltration of lidocaine (2%) into the skin. A midline opening was made in the skull with a dental drill for inserting an injection needle into the target site. The coordinates for the rostral ventromedial medulla (RVM) were: 10.5 mm caudal to the bregma, at the midline, and 9.0 mm ventral to the surface of the cerebellum 66 . Microinjections were performed by delivering drug solutions slowly over a 10-min period using a 500-nl Hamilton syringe with a 32-gauge needle. The injection needle was left in place for at least 15 min before being slowly withdrawn. The wound was closed and the animals were returned to their cages after recovering from anesthesia. For histological verification of the injection site, 30-µm coronal brainstem sections were stained with Neurotrace TM 500/525 Green fluorescent Nissl Stain (Invitrogen) (1:500 for 20 min).

Supplemental Figure Legends
Supplementary Fig. 1. a,b. Effect of a neutral opioid receptor antagonist, 6-β-naltrexol, on the BMSC-produced attenuation of mechanical hypersensitivity. a. BMSCs attenuated hyperalgesia in rats after tendon ligation injury (TL). EF 50 , the von Frey filament force (g) that produces a 50% response, was the measure of mechanical sensitivity. Primary BMSCs were infused i.v.
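As a companion to the behavioral testing described in the methods above, the sketch below illustrates one way an EF 50 could be extracted from stimulus-response data. The Hill-type sigmoid is an assumption standing in for whatever nonlinear regression model Prism applies, and the data points are placeholders for illustration, not measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def sr_curve(force, ef50, slope):
    """Hill-type stimulus-response curve: response frequency (%) vs force (g).
    This functional form is an assumption; the paper only states that a
    nonlinear regression (Prism, GraphPad) was used to derive EF50."""
    return 100.0 * force**slope / (ef50**slope + force**slope)

# Placeholder von Frey forces (g) and response frequencies (%) -- not real data.
forces = np.array([0.4, 1.0, 2.0, 4.0, 6.0, 10.0, 15.0])
freqs  = np.array([5.0, 10.0, 30.0, 55.0, 70.0, 90.0, 95.0])

(ef50, slope), _ = curve_fit(sr_curve, forces, freqs, p0=[4.0, 2.0])
print(f"EF50 = {ef50:.2f} g (slope = {slope:.2f})")
```

A post-injury leftward shift of the fitted curve shows up directly as a smaller EF 50.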
Fabrication of hollow polymer microstructures using dielectric and capillary forces
Electric Field Assisted Capillarity is a novel one-step process suitable for the fabrication of hollow polymer microstructures. The process, demonstrated to work experimentally on a microscale using polydimethylsiloxane (PDMS), makes use of both the electrohydrodynamics of polymers subject to an applied voltage and the capillary force on the polymers caused by a low contact angle on a heavily wetted surface. Results of two-dimensional numerical simulations of the process are discussed in this paper for the special case of the production of microfluidic channels. The paper investigates the effects of altering key parameters, including the contact angle with the top mask, the polymer thickness and air gap, the permittivity of the polymer, the applied voltage and geometrical variations, on the final morphology of the microstructure. The results from these simulations demonstrate that the capillary force caused by the contact angle has the greatest effect on the final shape of the polymer microstructures.

Introduction
Electric field assisted capillarity (EFAC) is a novel process for fabricating enclosed and hollow polymer microstructures (Chen et al. 2012; Tonry et al. 2015). It is an extension of electrohydrodynamic induced patterning (EHDIP), which is also known as LISA, or lithographically induced self-assembly. With EHDIP, polymers above the glass transition temperature can be manipulated to mimic the patterning on a top mask by use of electric fields (Wu and Chou 2005). This process stems from an instability in the surface of a polymer subject to a uniform electric field generated, for example, by using a flat top electrode subject to an applied voltage. It was later shown that, if the spatial distribution of this electric field was configured by shaping the top electrode, the polymer would replicate this shape (Chou et al. 1999). The patterned polymer could then be hardened by thermal or UV curing and keep the shape of the original electrode. The EFAC process, presented in Fig. 1, is an extension of EHDIP: the initial stage (labels a and b in the figure) is the same as in EHDIP. With EFAC the top electrode is also a heavily wetted surface, which causes the polymer to be subject to a large capillary force, through the Lippmann effect, when it reaches the top mask (label c). This effect forces the polymer to coat the top electrode, forming a shell a few micrometers thick (label d). The EFAC process is a single-step encapsulation process that can be used to manufacture enclosed hollow microstructures, such as complex microchannels for microfluidic applications, waveguides for fiber-optical communication systems and focusing lenses (e.g. for LEDs) (Chen et al. 2012; Tonry et al. 2015). This article concentrates on the simulation of the formation of microchannels. There are two traditional methods for producing polymer microchannels. The first method, chemical etching, is based on processes used in microfabrication in the microelectronics industry. It uses chemical reactions to remove a sacrificial layer, forming a hollow microstructure (Peeni et al. 2006). This process relies on state-of-the-art control of the chemical reactions involved and a deep understanding of microfluidics. Membrane-assisted micro-transfer molding is the second method, whereby a membrane is bonded to microchannels which have been molded using soft lithography (Unger et al. 2000).
This process requires two stages: firstly molding the channels and secondly capping them with a thin membrane. The EFAC process has advantages over both of these methods, as it is an electromechanical process rather than a chemical one and it is also a single-step process. It is therefore likely to be cheaper than both methods currently in use. Current experimental work has shown that this process works for a variety of shapes, even producing complex structures such as microchannels and capsules, as shown later on, as well as combinations of these structures (Unger et al. 2000). The relatively few studies carried out on this promising process leave a large uncertainty over the length of time taken to manufacture hollow structures, and more experimental research needs to be done; however, an initial upper limit of 1 h for shaping and curing has been demonstrated (Chen et al. 2012). The model described in this paper was developed to further investigate the key operational parameters of the process and how it could be extended to industrial scale.

Description of the process
EFAC makes use of the electrohydrodynamic and surface tension properties of liquid polymers. Experiments on the process so far have all used the low-surface-tension PDMS polymer, though the process should work using other polymers of similar surface tension. The process is presented schematically in Fig. 1 and explained here in more detail. The experimental setup consists of a patterned top electrode and a bottom electrode. A low-surface-tension liquid polymer (PDMS) is coated onto the bottom electrode. Optional spacers can be used to separate the two electrodes so that there is an air gap between the top electrode and the polymer surface. For thermally curable polymers, a hotplate is used to heat the polymer above the glass transition temperature (a). The electric field is generated by applying a potential difference across the two electrodes, which destabilizes the thin polymer film and induces the microstructure growth upward towards the top electrode. The shape of the mask causes the electric field to be higher at the interface under the lower parts of the mask; due to the higher electric field in these regions, the charge density at the surface is higher, making the electric force greater than at the surface under the higher parts of the mask. This variation in force causes the molten polymer to flow up in the regions closest to the mask. The electric field becomes even larger in these regions, accelerating the process (b). The fluid eventually reaches the top mask, where the capillary force becomes dominant due to a very low contact angle on the heavily wetted top mask. With a low contact angle, this capillary force is sufficient to cause the material to flow completely around the mask (c). After the hollow microstructures are formed, the polymer is cured and then released from the top electrode. Although not represented in the schematic, the formed hollow microstructures present a curved lower surface due to the system reaching equilibrium between surface tension and air pressure. The process also works with UV-cured polymers; in that case a hotplate is not necessary and the solidification of the polymer is carried out using UV flood exposure.

Numerical model
The numerical model used to simulate the process was built with the COMSOL TM Multiphysics 4.3b software package, using the electrostatics and laminar phase field modules.
The laminar phase field module solves the fluid flow and the position of the air/polymer interface, while the electrostatics module resolves the electric field. The laminar phase field module in COMSOL solves the Navier-Stokes equations combined with a diffuse-interface phase field model to simulate the multiphase flow. Due to the small length scales of microchannels and the high viscosities of polymers, the Reynolds number is assumed to be low and the flow laminar. The equations were solved using the finite element solver within COMSOL and the PARDISO direct solver, which is a parallel direct solver for sparse matrices.

Fig. 1 Schematic of the EFAC process

The diffuse-interface phase field model (Yue et al. 2004) in COMSOL describes the interface based on the mixing energy of the fluid. This is in contrast to level set models, which use an artificial smoothing function to describe the fluid at the interface. The adoption of this model has two key benefits: the curvature at the interface is obtained directly from the method, simplifying surface tension calculations, and viscoelastic flows can be directly included in the mixing energy. This method separates the Cahn-Hilliard equation into two coupled Helmholtz equations that are solved to describe the surface in terms of the free energy:

∂φ/∂t + u·∇φ = ∇·( (γλ/ε_pf²) ∇ψ ),  ψ = −∇·(ε_pf² ∇φ) + (φ² − 1)φ,

where φ is the phase field variable, ψ the phase field helper variable, u the velocity, λ the mixing energy density, which sets the length scale of a volume element, γ the mobility, and ε_pf the capillary width; the surface tension coefficient enters through λ and ε_pf. Flow is solved using the laminar Navier-Stokes equations:

ρ(∂u/∂t + (u·∇)u) = ∇·[ −pI + µ(∇u + (∇u)ᵀ) ] + f,  ∇·u = 0.

The electric field is calculated by solving Poisson's equation for the voltage, and the electric field is then calculated from the gradient of the voltage, E = −∇V. The force f at the interface is a result of the dielectric forces (Landau and Lifshitz 1984):

f = −(ε₀/2) E² ∇ε_r,

where f is the force per unit area of the interface, E the electric field, ρ the density and ε the permittivity. The material properties, given in Table 1, are for PDMS and are based on those used in experiments (Chen et al. 2012). A separation distance of 10 µm has been chosen between the bottom of the top electrode and the top surface of the deposited polymer. Although the viscosity value of the polymer is already high, an increased viscosity has been used to damp the flow and ensure convergence; this is due to the large forces concentrated around the interface. The simulations presented here are for a top electrode with rounded corners for the grooves of the pattern, as shown in Fig. 2, along with the boundary conditions. The mesh of Fig. 2a represents a single channel, and a symmetry plane has been included. Rather than using sharp-angled corners, which can lead to singularities in the electric field equation, the corners have been smoothed. For simplicity it is also assumed that the air and polymer are both incompressible and the temperature is constant, as these would have little effect on the solutions. In addition, the viscoelastic nature of the liquid polymer has been neglected, as we are only interested in the final shape of the polymer and an artificially high viscosity is already used. Figure 3 shows the simulated morphological change of the liquid polymer as it is electrostatically pulled towards the top electrode, for a contact angle of 20° between the polymer and the surface of the top electrode. The force on the fluid is greater under the lower sections of the mask due to the increased electric field; this causes the polymer to flow upwards at these points (a).
When it reaches the top mask (b), the surface tension causes the fluid to flow around the mask (c), finally reaching a steady state and coating the mask (d). This is the general evolution of the flow for complete cases. The final stage of the simulation is shown in Fig. 3d; steady state is, however, reached at around 37 ms. Figure 4 demonstrates the evolution of the electric field over the same time period. Figure 5 shows the influence of contact angles ranging from 10° to 25° on the resulting manufactured hollow microstructures. A lower contact angle causes a thicker shell to form. Figure 5c shows that, if the contact angle is too large, the capillary force becomes too small to fully encapsulate the hollow microstructures. There is a large variation in the thickness of the polymer, with the top of the cap and the sides being thinner than the corner. The larger air gap creates a deeper channel, and an angular-bottomed channel is formed instead of a rounded one as in the previous case. A layer of polymer remains on the bottom electrode, as this is also a wetted surface. Figure 6b also shows the formation of a microchannel with a thicker initial polymer for an air gap of 10 µm. The same thin surface is kept on the surface of the top electrode, but a much deeper channel is produced compared to the case of a thin initial polymer. Dimensions are shown for both figures, as the geometry is different to the original case.

Influence of the permittivity value of the polymer
The influence of the value of the permittivity of the polymer was also investigated, as shown in Fig. 7. Three cases were run: one with a polymer with a very low relative permittivity of ε_r = 1 (the same as air) and two other cases with relative permittivity values of 2.5 and 5. A permittivity of 1 is a useful verification case, as no change in permittivity at the interface gives no dielectric force, and so the polymer surface should remain at its initial position. Indeed, Fig. 7a shows that the interface does not deviate from its initial condition. Comparing the influence of the choice of permittivity on the final shape of the microstructure, as shown in Fig. 7b and c, highlights that the final shape is minimally dependent on the permittivity of the polymer. Figure 7d shows the influence of all the different permittivity values used, with the lines representing the surface of the polymer. These results demonstrate that changing the permittivity of the polymer has a limited effect on the final shape. If the permittivity is too low, there will not be enough force to overcome surface tension. With increasing permittivity values, the bubble inside the polymer changes shape, but only marginally. The speed of the initial stage is also increased with a higher permittivity due to a larger dielectric force.

Fig. 6 Influence of the air gap (a) and polymer thickness (b) on the formation of the hollow microstructure. Dimensions are in µm.

Figure 8 shows the influence of the voltage applied across the top electrode on the resulting microstructure, with applied voltages of 100 V, 250 V, 500 V and 1000 V. Conditions for the absence of electrical breakdown between the top electrode and the surface of the polymer were not verified. In the first case, shown in Fig. 8a, the dielectric forces at the interface are unable to overcome the surface tension forces, and so the surface of the polymer only exhibits a small deflection.
The most apparent effect is the increasing speed of the process with increasing voltage: the polymer touches the bottom of the top electrode within 31 ms, 3.2 ms and 0.8 ms for voltages of 250 V, 500 V, and 1000 V, respectively. From an industry perspective, the voltage is a key operating parameter to enable a high-throughput process. Figure 8b-d also show that the final shape is altered in a similar fashion as when changing the permittivity values. This is to be expected from the dielectric force Eq. (4), as increasing the voltage increases the interfacial dielectric forces. Note also that, for the highest voltage case, a bubble remains in the left-hand corner of the mask. This represents a defect, indicating an upper limit on the process speed for maximum process yield. Figure 9 shows the effect of small changes to the corner of the electrode shape. Three corner radii of 2 µm, 3 µm and 5 µm were investigated. Although the variation between these cases is relatively small, the change in the local electric field is significant enough to alter the direction of the interfacial forces such that, for sharp corners, the top electrode is unable to create a fully enclosed structure. Figure 10 presents a comparison of the model to an electron micrograph image taken of the capsules created experimentally. Further information on the experimental setup can be found in Chen et al. (2012). Such structures have the advantage of offering, over a small area, … The experimental capsule shows tears at the top and at the side, as highlighted by the dark circles. This loss of mechanical integrity is consistent with a narrowing of the cap in the numerical model at earlier time steps. Based on the modelling results, the hole at the top of the capsule is likely due to insufficient material, whereas the side breakage seems to be a function of the mask shape. The cut-apart capsules demonstrate a thicker surface compared to the model. However, this is likely due to the increased surface tension force caused by the corners of the capsules; a fully 3D model would be required to fully investigate this phenomenon. No such thickening was seen for the long microchannel, as shown in Fig. 11. These microstructures form a similarly thin layer of only a few microns thickness, as in the numerical model. The curved base of the channel, as demonstrated in the model, is also seen.

Conclusions
A numerical model has been developed using the multiphysics simulation software package COMSOL TM . The simulation model allowed the capture of the key features of the EFAC process, such as the fabrication speed, the morphology of the fabricated part and the failure modes that can result from poor process parameters and material properties. This multiphysics model incorporated dielectric forces coupled with free-surface flow algorithms. The model allowed the effects of key process parameters and material properties on the EFAC process to be investigated. This paper explored the general behaviour of key process parameters including:
• initial thickness of the deposited polymer,
• relative permittivity of the polymer,
• air gap between the polymer and the top electrode,
• shape of the top electrode,
• contact angle between the liquid polymer and the top electrode,
• voltage applied across the two electrodes.
Of these process parameters, this study suggests that a poor contact angle and electrode shape are the most likely to cause process failure (i.e. not forming enclosed structures). Fabrication speed can be controlled by the voltage.
However, if the process is too quick then bubbles can form in the structure. The study also indicates that the shape of the bottom half of the microstructures can be controlled by altering the initial polymer deposited and the air gap between the polymer and the electrode. Though there is limited experimental data to validate against, the model demonstrates the same behavior as the experiments. Here we are looking at the behavior of the process within a parameter space to provide a critical understanding that can be used to inform future experimental work, which has already demonstrated the feasibility of this process at the laboratory scale. By understanding and then optimizing these parameters, the EFAC process could potentially be used in high-volume manufacturing, significantly reducing the cost of microchannels and enclosed polymer microstructures.

Fig. 9 Influence of the radius of the corner of the top electrode: a radius = 2 µm, b radius = 3 µm and c radius = 5 µm.
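Two quick back-of-envelope checks round off the parameter study; neither is part of the COMSOL model. The first compares the dielectric and capillary pressures using the uniform-field estimate E = V/d and the Young-Laplace relation (the PDMS surface tension, meniscus radius and contact angle are representative assumed values, not the paper's inputs); the second fits the reported touch-down times (31, 3.2 and 0.8 ms at 250, 500 and 1000 V) to a power law t ∝ V^(−n).

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

# --- Order-of-magnitude check of the two competing interfacial pressures ---
def dielectric_pressure(V, d, eps_r):
    """Electrostatic pull ~ (1/2)*eps0*(eps_r - 1)*E^2, with the
    uniform-field estimate E = V / d across the air gap."""
    return 0.5 * EPS0 * (eps_r - 1.0) * (V / d) ** 2

def capillary_pressure(gamma, theta_deg, r):
    """Young-Laplace pressure 2*gamma*cos(theta)/r for a meniscus of radius r."""
    return 2.0 * gamma * np.cos(np.radians(theta_deg)) / r

# Representative assumed values: 250 V over a 10 um gap, eps_r = 2.5 for PDMS,
# surface tension ~0.02 N/m, 20 deg contact angle, 5 um meniscus radius.
print(f"dielectric pressure ~ {dielectric_pressure(250.0, 10e-6, 2.5):7.0f} Pa")
print(f"capillary  pressure ~ {capillary_pressure(0.02, 20.0, 5e-6):7.0f} Pa")

# --- Power-law fit of the reported touch-down times against voltage ---
V = np.array([250.0, 500.0, 1000.0])    # applied voltages (V)
t = np.array([31e-3, 3.2e-3, 0.8e-3])   # touch-down times (s)
slope, intercept = np.polyfit(np.log(V), np.log(t), 1)
print(f"fitted exponent n = {-slope:.2f}")  # ~2.6, steeper than the n = 2
                                            # suggested by the f ~ E^2 force law
```

With these numbers the capillary pressure is comparable to or larger than the electrostatic one, consistent with the finding that the contact angle dominates the final shape.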
Continuous and Pulsed Quantum Control †: We consider two alternative procedures which can be used to control the evolution of a generic finite-dimensional quantum system, one hinging upon a strong continuous coupling with a control potential and the other based on the application of frequently repeated pulses onto the system. Despite the practical and conceptual difference between them, they lead to the same dynamics, characterised by a partitioning of the Hilbert space into sectors among which transitions are inhibited by dynamical superselection rules. We …

Introduction
Consider a quantum system with a finite-dimensional Hilbert space H, whose evolution U(t) = e^(−itH) is generated by the Hamiltonian H. We are interested in some protocols which dynamically induce a partition of H into superselection sectors H_µ = P_µ H, in the sense that if the system is initially in some state belonging to one of the superselection sectors, i.e. |ψ⟩ ∈ H_µ, it will remain in that sector during its evolution, |ψ(t)⟩ ∈ H_µ, as shown pictorially in Figure 1. More precisely, given a complete set of orthogonal projections {P_µ} satisfying

P_µ P_ν = δ_µν P_µ,  Σ_µ P_µ = 1,  (1)

we want to engineer an effective dynamics generated by the block-diagonal Hamiltonian

H_Z = Σ_µ P_µ H P_µ.  (2)

This evolution is a manifestation of a Quantum Zeno Dynamics (QZD), a generalisation of the quantum Zeno effect [1], consisting in the freezing of the state of a quantum system when it is subject to frequent measurements aimed at ascertaining whether it is still in its initial state. In the case of non-selective measurements onto multi-dimensional subspaces H_µ = P_µ H, a non-trivial evolution can take place inside each subspace, generated by the Hamiltonian (2), with P_µ being the measurement projections. In this context the superselection sectors H_µ are called quantum Zeno subspaces (QZSs) [2].

Figure 1. A pictorial representation of the partitioning of the Hilbert space H into QZSs H_µ = P_µ H. If the system is in a given QZS at the initial time t_0, it will evolve coherently in this subspace and will never make a transition to the other QZSs.

Strong Continuous Coupling
The first protocol consists in adding to the Hamiltonian H a strong coupling to a control potential V, so that the dynamics is generated by a total Hamiltonian H_K = H + KV, where K > 0 is the coupling strength. As K grows to infinity, the evolution generated by H_K is equivalent to a QZD, with the QZSs determined by the eigenprojections of the control potential V. This result is expressed formally in Theorem 1, where we also bound the error between the actual evolution of the system and the controlled evolution when K is large but finite.

Theorem 1. Let H and V be Hermitian operators acting on a finite-dimensional space, and let {P_µ} be the eigenprojections of V, so that V = Σ_µ η_µ P_µ with η_µ ≠ η_ν for µ ≠ ν. Then, defining H_Z as in Equation (2), we have

e^(−itH_K) = e^(−itKV) e^(−itH_Z) + O(1/K)

as K → ∞. (Here and in the following the notation O(x) will stand for an operator A(x) depending on the real parameter x such that ‖A(x)‖ ≤ C|x| for x sufficiently small and non-vanishing, and for some positive constant C.) The proof of the theorem makes use of an adiabatic theorem [3][4][5].

Pulsed Decoupling
The second protocol consists in the application of periodic pulses to the system, implemented by an instantaneous unitary transformation U_kick applied to the evolving state at time intervals t/n, as shown in Figure 2a.
The idea at the basis of this procedure (and of the proof of Theorem 2) can be understood by looking at each step as an effective "rotation" of the Hamiltonian (see Figure 2b), so that the global effect over the whole time interval (0, t) is to average out of the Hamiltonian the off-diagonal part with respect to the eigenprojections of the unitary kick [3,6]. This result is expressed formally in Theorem 2.

Theorem 2. Let H be a Hermitian operator on a finite-dimensional Hilbert space H, and let U_kick be a unitary operator with the spectral decomposition U_kick = Σ_μ e^{-iλ_μ} P_μ, with λ_μ ≠ λ_ν for μ ≠ ν. Then, defining H_Z as in Equation (2), we have

(U_kick e^{-i(t/n)H})^n = U_kick^n e^{-itH_Z} + O(1/n)

as n → ∞.

Example: Four-Level System. As a particular example, consider a four-level system, where H = C^4, and a Hamiltonian H inducing Rabi transitions between adjacent levels (this scheme is very similar to that implemented in [7]):

H = Ω_1 (|1⟩⟨2| + |2⟩⟨1|) + Ω_2 (|2⟩⟨3| + |3⟩⟨2|) + Ω_3 (|3⟩⟨4| + |4⟩⟨3|).

Such a Hamiltonian determines a time oscillation of the populations (see Figure 4a) P_k(t) = |⟨k| e^{-itH} |1⟩|^2. Using Theorem 1, we can now show that it is possible to decouple levels |1⟩ and |2⟩ from |3⟩ and |4⟩ with a strong coupling between |3⟩ and |4⟩, V = |3⟩⟨4| + |4⟩⟨3|. The eigenprojections of this potential are P_0 = |1⟩⟨1| + |2⟩⟨2| and P_± = (1/2)(|3⟩ ± |4⟩)(⟨3| ± ⟨4|), so that the Zeno Hamiltonian, H_Z = P_0 H P_0 + P_+ H P_+ + P_- H P_-, is block-diagonal with respect to the QZSs H_0 = span{|1⟩, |2⟩} and H_± = span{|3⟩ ± |4⟩}. The situation is pictorially represented in Figure 3. Figure 4b shows the behaviour of the occupation probabilities P_k(t) in the strong-coupling regime: we can see oscillations between states |1⟩ and |2⟩, which belong to the same QZS, but the probability of a transition towards the states |3⟩ and |4⟩ vanishes, since they do not belong to the initial QZS. The same result can be obtained by using instead the protocol considered in Theorem 2, with e.g. a unitary kick whose eigenprojections coincide with P_0 and P_±. (Figure 4. Populations P_k with Ω_1 = Ω_2 = Ω_3 ≡ Ω, (a) without the control potential (K = 0) and (b) with the control potential turned on, K = 100Ω.) In this example we have considered a particular Hamiltonian H generating the evolution of the system to be controlled. Note, however, that there are no assumptions on the structure of the Hamiltonian in our theorems, which are therefore valid in completely general situations, as long as we consider finite-dimensional quantum systems. Author Contributions: All authors have contributed equally to this paper.
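The four-level example above is easy to reproduce numerically. The following is a minimal sketch (not the authors' code): H and V are the operators written above, while Ω = 1 and the time grid are illustrative assumptions.

```python
# Minimal check of the strong-coupling protocol on the four-level example.
import numpy as np
from scipy.linalg import expm

Omega = 1.0
H = np.zeros((4, 4))
for k in range(3):                          # Rabi couplings between adjacent levels
    H[k, k + 1] = H[k + 1, k] = Omega
V = np.zeros((4, 4))
V[2, 3] = V[3, 2] = 1.0                     # V = |3><4| + |4><3|

def populations(K, times):
    """P_k(t) = |<k| exp(-i t (H + K V)) |1>|^2 for k = 1..4."""
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                           # initial state |1>
    return np.array([np.abs(expm(-1j * t * (H + K * V)) @ psi0) ** 2
                     for t in times])

times = np.linspace(0.0, 10.0, 100)
free = populations(0.0, times)              # K = 0: population reaches |3>, |4>
zeno = populations(100.0 * Omega, times)    # K = 100*Omega: leakage suppressed
print(free[:, 2:].max(), "->", zeno[:, 2:].max())
```

With the strong coupling switched on, the maximal population found outside the initial Zeno subspace drops by orders of magnitude, mirroring the behaviour of Figure 4b.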
1,419.6
2019-06-24T00:00:00.000
[ "Physics" ]
A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but that it contains additional intensity and return number information, which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP) model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.

INTRODUCTION Vehicle detection on urban roads is a crucial task in automatic traffic monitoring and control, environmental protection and surveillance applications (Yao et al., 2011). Besides terrestrial sensors such as video cameras and induction loops, airborne and spaceborne data sources are frequently exploited to support the scene analysis. Some of the existing approaches rely on aerial photos or video sequences; however, in these cases it is notably challenging to develop a widely applicable solution for the recognition problem, due to the large variety of camera sensors, image quality, seasonal and weather circumstances, and the richness of the different vehicle prototypes and appearance models (Tuermer et al., 2010). The Light Detection and Ranging (LiDAR) technology offers significant advantages for handling many of the above problems, since it can jointly provide an accurate 3-D geometrical description of the scene and additional features about the reflection properties and compactness of the surfaces. Moreover, LiDAR measurements are much less sensitive to weather conditions and are independent of the daily illumination. On the other hand, efficient storage, management and interpretation of the irregular LiDAR point clouds require algorithmic methodologies different from standard computer vision techniques.

LiDAR based vehicle detection methods in the literature generally follow either a grid-cell- or a 3-D point-cloud-analysis-based approach (Yao and Stilla, 2011). In the first group of techniques (Rakusz et al., 2004, Yang et al., 2011), the obtained LiDAR data is first transformed into a dense 2.5-D Digital Elevation Model (DEM); thereafter, established image processing operations can be adopted to extract the vehicles. On the other hand, in point cloud based methods (Yao et al., 2011), the feature extraction and recognition steps work directly on the 3-D point clouds: in this way we avoid losing information due to projection and interpolation; however, the time and memory requirements of the processing algorithms may be higher.
Another important factor is related to the types of measurements utilized in the detection. A couple of earlier works combined multiple data sources, e.g. (Toth and Grejner-Brzezinska, 2006) fused LiDAR and digital camera inputs. Other methods rely purely on geometric information (Yao et al., 2010, Yang et al., 2011), emphasizing that these approaches are independent of the availability of RGB sensors and of the limitations of image-to-point-cloud registration techniques. Several LiDAR sensors, however, provide an intensity value for each data point, which is related to the intensity of the given laser return. Since in general the shiny surfaces of car bodies result in higher intensities, this feature can be utilized as additional evidence for extracting the vehicles.

The vehicle detection techniques should also be examined from the point of view of object recognition methodologies. Machine learning methods offer noticeable solutions, e.g. (Yang et al., 2011) adopts a cascade AdaBoost framework to train a classifier based on edgelet features. However, the authors also mention that it is often difficult to collect enough representative training samples; therefore, they generate more training examples by shifting and rotating the few training annotations. Model based methods attempt to fit 2-D or 3-D car models to the observed data (Yao et al., 2011); however, these approaches may face limitations in scenarios where complex and highly various vehicle shapes are expected.

We can also group the existing object modeling techniques by whether they follow a bottom-up or an inverse approach. The bottom-up techniques usually consist in extracting primitives (blobs, edges, corners etc.), and thereafter the objects are constructed from the obtained features by a sequential process. To extract the vehicles, (Rakusz et al., 2004) introduces three different methods with similar performance results, which combine surface warping, Delaunay triangulation, thresholding and Connected Component Analysis (CCA). As main bottlenecks here, the Digital Terrain Model (DTM) estimation and appropriate height threshold selection steps critically influence the output quality. (Yao et al., 2010) applies three consecutive steps: geo-tiling, vehicle-top detection by local maximum filtering, and segmentation through marker-controlled watershed transformation. The output is a set of vehicle contours; however, some car silhouettes are only partially extracted and a couple of neighboring objects are merged into the same blob. In general, bottom-up techniques can be relatively fast; however, the construction of appropriate primitive filters may be difficult or inaccurate, and in the sequential workflows the errors of earlier steps propagate to the later ones. Inverse methods, such as Marked Point Processes (MPPs) (Benedek et al., 2012, Descombes et al., 2009), assign a fitness value to each possible object configuration; thereafter, an optimization process attempts to find the configuration with the highest confidence. In this way complex object appearance models can be used, and it is easy to incorporate prior shape information (e.g. only searching among rectangles) and object interactions (e.g. penalize intersection, favor similar orientation). However, a high computational need is present, due to searching in the high-dimensional population space. Therefore, applying efficient optimization techniques is a crucial need.
In this paper, we propose an MPP based vehicle detection method with the following key features. (i) Instead of utilizing complex image descriptors and machine learning techniques to characterize the individual vehicle samples, only basic radiometric evidence, segmentation labels, and prior knowledge about the approximate size and height of the vehicle bounding boxes are exploited. (ii) We model interaction between the neighboring vehicles by prescribing prior non-overlapping, width similarity and favored alignment constraints. (iii) Features exploited in the recognition process are directly derived from the segmentation of the LiDAR point cloud in 3-D. However, to keep the computational time tractable, the optimization of the inverse problem is performed in 2-D, following a ground projection of the previously obtained class labels. (iv) During the projection of the LiDAR point cloud to the ground (i.e. to a regular image), we do not interpolate pixel values with missing data, but include in the MPP model the concept of a pixel with unknown class. In this way we avoid possible artifacts of data interpolation.

POINT CLOUD SEGMENTATION The input of the proposed framework is a LiDAR point cloud L. Let us assume that the cloud consists of l points: L = {p1, . . ., pl}, where each point p ∈ L is associated with geometric position, intensity and echo number parameters, as detailed in Table 1. The point cloud segmentation part consists of three steps. First, a density-based clustering technique is adopted to remove clutter points (i.e. points not belonging to connected regions, like most reflections from walls), and vegetation is filtered out by using return number information. Let us denote by Vϵ(p) the ϵ-neighborhood of p:

Vϵ(p) = {r ∈ L : ||r − p|| < ϵ},

where ||r − p|| marks the Euclidean distance of points r and p. Then, using |Vϵ(p)| for the cardinality of a neighborhood, a point p is labeled as clutter if |Vϵ(p)| < τV, where the ϵ and τV threshold parameters depend on the point cloud resolution and density. For efficient neighborhood calculation, we need to divide the point cloud into smaller parts by making a nonuniform subdivision of the 3-D space using a k-d tree data structure.
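A sketch of the density-based clutter filter just described, using SciPy's k-d tree; the array layout and the concrete ϵ and τV values are illustrative assumptions, since the paper ties both thresholds to the cloud's resolution and density.

```python
# Density-based clutter removal over an (l, 3) point array.
import numpy as np
from scipy.spatial import cKDTree

def filter_clutter(points, eps=0.5, tau_v=8):
    """Keep points whose eps-neighbourhood contains at least tau_v points."""
    tree = cKDTree(points)                  # k-d tree for fast range queries
    # query_ball_point returns, per point, the indices within distance eps
    counts = np.array([len(idx) for idx in tree.query_ball_point(points, eps)])
    keep = counts >= tau_v                  # |V_eps(p)| >= tau_V -> not clutter
    return points[keep], ~keep              # filtered cloud and a clutter mask
```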
For estimating the vegetation, we utilize the fact that trees and bushes cause multiple laser returns: a point p is labeled as vegetation if its echo number is greater than one. Note that this step removes some points of car and building edges where the echo number is bigger than one, but we experienced that these regions do not significantly corrupt the vehicle separation process. We denote by Lcv ⊂ L the points labeled as clutter or vegetation.

Second, we identify the ground points by estimating the best plane P in the cloud L \ Lcv using the RANSAC-based algorithm of (Yang and Foerstner, 2010). This technique selects in each iteration three points randomly from the input cloud, and calculates the parameters of the corresponding plane. Then it counts the points in L \ Lcv which fit the new plane and compares the obtained result with the last saved one. If the new result is better, the estimated plane is replaced with the new candidate. The process is iterated till convergence is obtained. Note that since the ground is usually not planar over a greater area, large point clouds should be divided into smaller segments, and the ground plane is estimated within each segment separately. Next, we label a point p ∈ L \ Lcv as ground iff d(p, P) < τP, where d(p, P) denotes the distance of point p from plane P, and the τP threshold depends on the geometric accuracy of the LiDAR data. We denote by Lgr = {p ∈ L : µ(p) = ground} the set of ground points.

Third, for the remaining points in L \ (Lcv ∪ Lgr), a floodfill-based segmentation algorithm is propagated, which aims to detect the large connected building roofs. We mark the points selected by the algorithm with the label 'roof', and compose the set Lrf = {p ∈ L : µ(p) = roof}. Meanwhile, the points of the remaining blobs of the cloud are labeled as vehicle candidates if their height coordinate is less than the maximal vehicle height:

µ(p) = vehicle iff p ∈ L \ (Lcv ∪ Lgr ∪ Lrf) AND zp < hmax. (1)

To make the tuning of hmax less critical for the process, we used an overestimation of the possible vehicle heights. In this way we exclude obvious outliers, such as traffic lights, while further false points in the vehicle candidate set (denoted by Lvl) should be eliminated in a later step. Finally, points in L not clustered yet are merged into the clutter class.

After the 3-D segmentation process, we stretch a 2-D pixel lattice S (i.e. an image) onto the ground plane, where s ∈ S denotes a single pixel. Then, we project each point of L⋆ = Lgr ∪ Lvl ∪ Lrf, i.e. each LiDAR point with a label of ground, vehicle or building roof, to this lattice. This projection results in a 2-D class label map and an intensity map, where multiple point projections to the same pixel are handled by a point selection algorithm, which gives higher precedence to vehicle point candidates. On the other hand, the projection of the sparse point cloud to a regular image lattice results in many pixels with undefined class labels and intensities. In contrast to several previous solutions, we do not interpolate these missing points, but include in the upcoming model the concept of an unknown label at certain pixels. In this way, our approach is not affected by the artifacts of data interpolation. Let us denote by χ(s) ⊂ L⋆ the set of points of L⋆ projected to pixel s. After the projection (Fig. 2), we distinguish vehicle, background and undefined classes on the lattice as follows: ν(s) = vehicle if χ(s) contains a vehicle candidate point, ν(s) = background if χ(s) is non-empty but contains no vehicle point, and ν(s) = undefined if χ(s) = ∅.
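The projection and per-pixel classification just given can be sketched as follows; the point layout, the lattice resolution, and the integer class codes are assumptions made only for illustration.

```python
# Ground-plane projection producing the nu(s) class map and g(s) intensity map.
import numpy as np

VEHICLE, BACKGROUND, UNDEFINED = 1, 0, -1

def project_to_lattice(xy, labels, intensity, shape, cell=0.25,
                       vehicle_label=1):
    """xy: (n, 2) ground coordinates; labels/intensity: per-point arrays."""
    nu = np.full(shape, UNDEFINED, dtype=int)
    g_sum = np.zeros(shape)
    g_cnt = np.zeros(shape, dtype=int)
    cols = (xy[:, 0] / cell).astype(int).clip(0, shape[1] - 1)
    rows = (xy[:, 1] / cell).astype(int).clip(0, shape[0] - 1)
    for r, c, lab, inten in zip(rows, cols, labels, intensity):
        if lab == vehicle_label:
            nu[r, c] = VEHICLE              # vehicle points take precedence
        elif nu[r, c] == UNDEFINED:
            nu[r, c] = BACKGROUND           # ground/roof points -> background
        g_sum[r, c] += inten
        g_cnt[r, c] += 1
    # g(s) = 0 where undefined, otherwise the average projected intensity
    g = np.where(g_cnt > 0, g_sum / np.maximum(g_cnt, 1), 0.0)
    return nu, g
```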
Note that for easier visualization, in Figs. 1 and 2 we have distinguished the pixels of roof (red) and ground (blue) projections, but during the next steps we consider them as parts of the background class. We also assign to each pixel s an intensity value g(s), which is 0 if ν(s) = undefined; otherwise we take the average intensity of the points projected to s. In the following part of the algorithm, we work purely on the previously extracted label and intensity images. The detection is mainly based on the label map, but additional evidence is extracted from the intensity image, where several cars appear as salient bright blobs due to their shiny surfaces.

MARKED POINT PROCESS MODEL The inputs of this step are the label and intensity maps over the pixel lattice S, which were extracted in the previous section. We will also refer to the input data jointly by D. We assume that each vehicle from top view can be approximated by a rectangle, which we aim to extract with the following model. A vehicle candidate u is described by five parameters: cx and cy center coordinates, eL and el side lengths, and θ ∈ [−90°, +90°] orientation (Fig. 3(c)). The vehicle population of the scene is described by a configuration of an unknown number of rectangles, which is realized by a Marked Point Process (MPP) (Descombes et al., 2009). Note that by replacing the rectangle shapes with parallelograms, the "shearing effect" of moving vehicles may also be modeled (Yao and Stilla, 2011), but in the considered test data this phenomenon could not be reliably observed.

Denote by ω an arbitrary object configuration {u1, . . ., un} in Ω. We define a neighborhood relation ∼ in H: u ∼ v iff the distance of the object centers is smaller than a threshold. The neighborhood of u in configuration ω is defined as Nu(ω) = {v ∈ ω | u ∼ v} (hereafter, we ignore ω in the notation). Taking an inverse modeling approach, an energy function ΦD(ω) is defined on the object configuration space, which evaluates the negative fitness of each possible vehicle population. Thereafter, we search for the configuration estimate which exhibits the Minimal Energy (ME), i.e. the ω ∈ Ω minimizing ΦD(ω). ΦD(ω) can be decomposed into subterms defined on the neighborhoods of each object in ω, so that the whole configuration energy is obtained by accumulating the local neighborhood-energies ΨD(u, Nu) over the objects u ∈ ω. These neighborhood-energies are constructed by fusing various data terms and prior terms, as introduced in detail in the following subsections.

Data-dependent energy terms Data terms evaluate the proposed vehicle candidates (i.e. the u = {cx, cy, eL, el, θ} rectangles) based on the input label or intensity maps, but independently of the other objects of the population. The data modeling process consists of two steps. First, we define different features f(u) : H → R which evaluate a vehicle hypothesis for u in the image, so that 'high' f(u) values correspond to efficient vehicle candidates. In the second step, we construct φ_d^f(u) data-driven energy subterms for each feature f, by attempting to satisfy φ_d^f(u) < 0 for real objects and φ_d^f(u) > 0 for false candidates. For this purpose, we project the feature domain to [−1, 1] with a monotonically decreasing function: φ_d^f(u) = Q(f(u), d_0^f). (2) Observe that the Q function has a key parameter, d_0^f, which is the object acceptance threshold for feature f: u is acceptable according to feature f iff f(u) ≥ d_0^f. We used four different data-based features. To introduce them, let us denote by Ru ⊂ S the pixels of the image lattice lying inside the u vehicle candidate's rectangle, and by T_u^up, T_u^bt, T_u^lt, and T_u^rt the upper, bottom, left and right object neighborhood regions, respectively (see Fig. 3). The feature definitions are listed in the following paragraphs.

The vehicle evidence feature f_ve(u) expresses that we expect several pixels classified as vehicle within Ru:

f_ve(u) = (1/|Ru|) Σ_{s ∈ Ru} 1{ν(s) = vehicle},

where |Ru| denotes the cardinality of Ru, and 1{.} marks an indicator function: 1{true} = 1, 1{false} = 0.
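The feature-to-energy mapping can be illustrated with a small sketch. The paper's exact Q function is not reproduced here: any monotonically decreasing map onto [−1, 1] that crosses zero at the acceptance threshold d_0^f behaves as described, so a shifted tanh is used as a stand-in, while f_ve follows the definition above.

```python
# Data-term sketch: a stand-in Q function plus the vehicle-evidence feature.
import numpy as np

VEHICLE = 1  # class code used in the nu(s) label map (assumption)

def q_term(f_value, d_f0, slope=8.0):
    """phi_d^f(u): negative iff the feature exceeds its acceptance threshold."""
    return float(np.tanh(slope * (d_f0 - f_value)))

def f_ve(label_map, rect_mask):
    """Vehicle evidence: ratio of vehicle-labelled pixels inside R_u."""
    inside = label_map[rect_mask]           # rect_mask: boolean mask of R_u
    return float((inside == VEHICLE).mean())

# A candidate is acceptable w.r.t. f_ve when its data term is negative:
# phi = q_term(f_ve(label_map, rect_mask), d_f0=0.6)
```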
The external background feature f_eb(u) measures whether the vehicle candidate is surrounded by background regions:

f_eb(u) = min2nd{ β(T_u^up), β(T_u^bt), β(T_u^lt), β(T_u^rt) },

where β(T) denotes the background filling ratio of region T, and the min2nd operator returns the second smallest element among the background filling ratios of the four neighboring regions: with this choice we also accept vehicles which connect on at most one side to other vehicles or undefined regions.

The internal background feature f_ib(u) prescribes that within Ru only very few background pixels may occur:

f_ib(u) = 1 − (1/|Ru|) Σ_{s ∈ Ru} 1{ν(s) = background}.

Demonstration of the f_ve, f_eb and f_ib feature calculation can be followed in Fig. 3(e). Finally, the intensity feature f_it(u) provides additional evidence for image parts containing high intensity regions (see Fig. 3(b) and (f)):

f_it(u) = (1/|Ru|) Σ_{s ∈ Ru} 1{g(s) > Tg},

where Tg is an intensity threshold. After the feature definitions, the data terms φ_d^f(u) can be calculated with the Q function by appropriately fixing the corresponding d_0^f parameters for each feature.

Prior terms In contrast to the data-energy functions, the prior terms evaluate a given configuration on the basis of prior geometric constraints, but independently of the input label and intensity maps. We used three types of prior terms in the model, realizing non-overlapping, width similarity and alignment (weak & strong) constraints between different objects. First, we have to avoid configurations which contain multiple objects in the same or strongly overlapping positions. Therefore, we measure an overlapping coefficient I(u, v), which penalizes intersection between different object rectangles (see Fig. 4(a)), and derive the overlapping term by accumulating I(u, v) over the neighbors v ∈ Nu. Second, to prevent merging contacting vehicles into the same object candidate, we penalize rectangles with significantly different width (el) parameters in local neighborhoods (Fig. 4(b)); we set Tl to half of the average vehicle width. Third, we favor configurations in which objects in a local neighborhood are aligned, i.e. they form regular lines or rows. This effect can often be observed with parking cars, or with vehicles waiting at crossroads or in traffic jams. Note that the alignment assumptions cannot be used as hard constraints, since we should always expect some irregularly oriented vehicles in the scene. However, we propose to reward object groups meeting the alignment criterion in two ways. On one hand, we moderately favor configurations in which the orientation of u is similar to that of most of its neighbors, and moderately penalize the opposite case (Fig. 4(c)), with a small weight 0 < γθ. We used Tθ = 40° and γθ = 0.1. On the other hand, we strongly favor configurations in which the central point of u (denoted by c(u)) is close either to the major (l_v^M) or to the minor (l_v^m) axis line of its neighbors v ∈ Nu. We consider here cases when vehicles park or run parallel or perpendicular to the road side. The corresponding energy term is obtained as follows: φ_p^at(u, Nu) = 0 if |Nu| < Nmin; otherwise it rewards candidates whose normalized axis distances fall below the thresholds TM and Tm, where ζM(u, v) (resp. ζm(u, v)) is the normalized distance of c(u) from l_v^M (resp. l_v^m), as shown in Fig. 4(d). TM and Tm depend on the resolution of the lattice, and we used Nmin = 4. Note that the fulfillment of the axis alignment criterion is not necessary; however, if it is satisfied, we give further rewards, as explained in the next section.
Integration of the energy components As introduced before, the data energy terms provide different feature-based conditions for the acceptance of the vehicle candidates, while the prior terms penalize or favor given configurations based on preliminary expectations about geometry and object interactions. In general, we prescribe that the vehicles satisfy each of the four feature constraints from Sec. 3.1 (i.e. all data energy subterms are negative); therefore, we derive the joint data term (first row of (3)) by the maximum operator, which is equivalent to the logical AND in the negative fitness domain. However, if the axis distance criterion is satisfied (φ_p^at(u, Nu) > 0.5), we are less strict regarding the data terms, and only investigate the internal and external background energy parts (see eq. (4)). Finally, we use the remaining prior energy functions as additive terms in ΨD (second row of (3)). Based on these arguments, the local object energies ΨD(u, Nu) combine a ΥD term, which is responsible for considering or ignoring the f_it and f_ve features depending on φ_p^at, with the additive prior terms.

OPTIMIZATION We estimate the optimal object configuration by the Multiple Birth and Death Algorithm (Descombes et al., 2009) as follows. Initialization: start with an empty population ω = ∅. Main program: set the birth rate b0, initialize the inverse temperature parameter β = β0 and the discretization step δ = δ0, and alternate birth and death steps. 1. Birth step: Visit all pixels on the image lattice S one after another. At each pixel s, if there is no object with center s in the current configuration ω, choose birth with probability δb0. If birth is chosen at s, generate a new object u with center [cx(u), cy(u)] := s, and set the eL, el and θ parameters randomly. Finally, add u to the current configuration ω. 2. Death step: Consider the actual configuration of objects ω = {u1, . . ., un} and sort it by decreasing values of the data term. For each object u taken in this order, compute ∆Φω(u) = ΦD(ω \ {u}) − ΦD(ω), derive the death rate as

dω(u) = δ·aω(u) / (1 + δ·aω(u)),  with aω(u) = exp(−β·∆Φω(u)),

and remove u from ω with probability dω(u). Convergence test: if the process has not converged yet, increase the inverse temperature β and decrease the discretization step δ with a geometric scheme, and go back to the birth step. Convergence is obtained when all the objects added during the birth step, and only these, have been killed during the death step.

EVALUATION AND PARAMETER SETTINGS We evaluated our method on four aerial LiDAR data sets (provided by Astrium GEO-Inf. Services, Hungary), which were captured above dense urban areas. For accurate Ground Truth (GT) generation, we have developed an accessory program with a graphical user interface, which enables us to manually create and edit a GT configuration of rectangles, which can be compared to the output of the algorithm. As for parameter settings, the data term thresholds were set based on a limited number of training samples (around 10% of the vehicles in each test set), using Maximum-Likelihood strategies similar to (Benedek et al., 2012). The prior term parameters, which prescribe the significance of the object interaction constraints, must be determined by the user: our applied values have been given in Sec. 3.2. Regarding the MBD optimization settings, we followed the guidelines of (Descombes et al., 2009).
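A schematic of the Multiple Birth and Death loop described in the Optimization section; energy evaluation and object generation are abstracted behind callables, the death rate follows the formula above, objects are visited in arbitrary rather than sorted order, and all numeric defaults are illustrative.

```python
# Sketch of the Multiple Birth and Death optimizer.
import math
import random

def multiple_birth_death(propose, delta_phi, n_sites, b0=0.1,
                         beta0=50.0, delta0=1.0, cooling=0.96, rounds=300):
    """propose(): a random object; delta_phi(omega, u): Phi_D(omega \\ {u}) - Phi_D(omega)."""
    omega, beta, delta = [], beta0, delta0
    for _ in range(rounds):
        # Birth step: each lattice site may spawn an object with prob. delta*b0
        n_born = sum(random.random() < delta * b0 for _ in range(n_sites))
        omega += [propose() for _ in range(n_born)]
        # Death step (the paper visits objects sorted by decreasing data term)
        survivors = []
        for u in omega:
            a = math.exp(-beta * delta_phi(omega, u))  # > 1 if removal helps
            if random.random() >= delta * a / (1.0 + delta * a):
                survivors.append(u)
        omega = survivors
        beta /= cooling      # geometric cooling: raise the inverse temperature
        delta *= cooling     # and shrink the discretization step
        # (the paper stops once exactly the newborn objects die in a round)
    return omega
```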
To perform quantitative evaluation, we have measured how many vehicles are correctly or incorrectly detected in the different test sets, by counting the Missing Objects (MO) and the Falsely detected Objects (FO). These values are compared to the Number of real Vehicles (NV), and the F-rate of the detection (the harmonic mean of precision and recall) is also calculated. For comparison, we have selected a bottom-up grid-cell-based algorithm from (Rakusz et al., 2004), called later DEM-PCA, which consists of three consecutive steps: (1) height map (or Digital Elevation Model) generation by ground projection of the elevation values in the LiDAR point cloud, with missing data interpolation; (2) vehicle region detection by thresholding the height map, followed by morphological connected component extraction; (3) rectangle fitting to the detected vehicle blobs by Principal Component Analysis. Some qualitative results are shown in Fig. 5, and the quantitative evaluation is provided in Table 2. The proposed MPP model surpasses the DEM-PCA method by 7% regarding the F-rate, due to the fact that our method results in significantly fewer false objects. We can also observe in Fig. 5 that the vehicle outlines obtained with the MPP model are notably more accurate.

CONCLUSION This paper has proposed a novel MPP based vehicle extraction method for aerial LiDAR point clouds. The efficiency of the approach has been tested on real-world LiDAR measurements, and its advantages versus a reference method have been demonstrated. The authors would like to thank Astrium GEO-Information Services, Hungary, for the test data provision. This work was supported by the Hungarian Research Fund (OTKA #101598), the APIS Project of EDA and the i4D Project of MTA SZTAKI. The second author was partially funded by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.

(Figure 1: Workflow of the point cloud filtering, segmentation and projection steps. Figure 3: Demonstration of the (a)-(b) input maps, (c) the object rectangle parameters and (d)-(f) the data term calculation process. Figure 4: Demonstration of the prior constraints used in the proposed model. Figure 5: Comparison of the detection results of the DEM-PCA model and the proposed MPP method, for the point cloud segment marked as Set#1 in Table 2; circles denote missing or false objects. Table 2: Numerical comparison of the detection results obtained by the DEM-PCA and the proposed MPP models; the Number of Vehicles (NV), Missing Objects (MO) and False Objects (FO) are listed for each data set and in aggregate.)
5,530
2012-07-20T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Application of Metabolomics in the Study of Natural Products Abstract LC–MS-based metabolomics could have a major impact on the study of natural products, especially of their metabolism, toxicity and activity. This review highlights recent applications of the metabolomics approach to the study of the metabolites and toxicity of natural products, and to the understanding of their effects on various diseases. Metabolomics has been employed to study the in vitro and in vivo metabolism of natural compounds, such as osthole, dehydrodiisoeugenol, and myrislignan. The pharmacological effects of natural compounds and extracts, including osthole and nutmeg extracts, were determined using metabolomics technology combined with disease models in animals. It has been demonstrated that metabolomics is a powerful technology for the investigation of xenobiotics-induced toxicity; the metabolism of triptolide and its hepatotoxicity are discussed. LC–MS-based metabolomics has great potential for assessing the druggability of natural products, and its application in the field of natural products should be broadened in the future. Graphical Abstract

Introduction Compounds which are derived from natural sources, e.g., plants, animals and microorganisms, are defined as natural products [1]. Many natural products show pharmacological or biological activities. Furthermore, natural products are also an important source of inspiration for the development of potential novel drugs [2]. Therefore, understanding the activity mechanisms of natural products is necessary. In the past few decades, the activity mechanisms of natural products have witnessed extensive study with the application of modern technology, including metabolomics. However, some natural products are harmful to mammals in spite of their pharmacological or biological activities, such as triptolide (TP). Therefore, understanding the toxicity mechanisms of natural products is necessary for their clinical applications. Metabolomics is an area of investigation that measures changes in small molecules downstream of the genome and proteome, capturing the terminal alterations of endogenous metabolites [3]. The most common platforms used for metabolomics include gas chromatography–mass spectrometry (GC-MS), liquid chromatography–mass spectrometry (LC-MS), and nuclear magnetic resonance (NMR). Each platform has its advantages and disadvantages [4]. Compared with the complicated sample preparation (e.g., derivatization) of GC-MS and the limited sensitivity of ¹H NMR, the ease of sample preparation and high sensitivity make LC-MS among the most widely used platforms in metabolomics. In the past few years, metabolomics has been used to study the metabolic maps of natural products and natural products-induced activity and toxicity [5]. Firstly, metabolomics can be used to identify the metabolic map derived from natural products, allowing unbiased determination of metabolites in biofluids (serum, bile, and urine), or in extracts from tissues or excreta (feces). Secondly, metabolomics has been successfully introduced into the activity and toxicity studies of natural products. A large number of studies have been performed employing metabolomics to study the activity and toxicity of natural products.

Metabolic Map of Osthole Osthole is widely distributed in Angelica pubescens and Cnidium monnieri (L.) Cusson, and shows therapeutic effects on hyperglycemia [27], non-alcoholic fatty liver disease [28], and cancers [29].
Recently, the complete metabolic profiling of osthole was elucidated using ultra-performance liquid chromatography electrospray ionization quadrupole time-of-flight mass spectrometry (UPLC-ESI-QTOFMS)-based metabolomics. Forty-one osthole metabolites were identified and structurally elucidated in vitro and in vivo, of which twenty-three were novel metabolites (Fig. 3). Novel metabolites discovered by UPLC-ESI-QTOFMS-based metabolomics are shown by red arrows (Fig. 3). CYP screening showed that CYP3A4 and CYP3A5 were the primary enzymes contributing to the metabolism of osthole [5]. Hydroxylation, hydrogenation, demethylation, dehydrogenation, glucuronidation, and sulfation were the major metabolic pathways for the metabolism of osthole [5].

Metabolism Map of DDIE DDIE is a major benzofuran-type neolignan in Myristica fragrans Houtt., which shows various biological activities, including inhibition of hepatic drug metabolism enzymes [30] and anti-bacterial activities [31]. Metabolomics revealed that a total of thirteen DDIE metabolites were characterized, including seven novel metabolites [15]. Recombinant CYP screening showed that CYP1A1 was the primary enzyme contributing to the metabolism of DDIE [15]. In addition, demethylation and a ring-opening reaction were the major metabolic pathways for the metabolism of DDIE [15].

Metabolism Map of MRL MRL is a bioactive 8-O-4′-neolignan in Myristica fragrans Houtt., which can decrease the production of nitric oxide induced by lipopolysaccharide [32]. UPLC-ESI-QTOFMS-based metabolomics revealed that a total of twenty-three MRL metabolites (nineteen newly identified) were determined both in vivo and in vitro [12] (Fig. 4). Hydroxylation and demethylation were the major metabolic pathways in vitro and in vivo, respectively. Recombinant CYP screening showed that CYP3A4 and CYP3A5 were the primary enzymes responsible for the metabolism of MRL [12]. These results provided important information on the metabolism of 8-O-4′-neolignans in Myristica fragrans Houtt. [12].

Biological Activity of Osthole Earlier studies reported the protective effects of osthole on various metabolic diseases, such as hyperlipidemia and fatty liver [28,33]. To determine the potential endogenous metabolites influenced by treatment with 40 mg/kg osthole, mouse serum was analyzed by the approach of UPLC-ESI-QTOFMS-based metabolomics. In the loading scatter S-plot of orthogonal projections to latent structures discriminant analysis (OPLS-DA), two endogenous metabolites, lysophosphatidylethanolamine (LPE) 18:2 and LPE 22:6, contributed to the separation of the osthole-treated group from the vehicle group (Fig. 5). Further targeted metabolomics found that more lysophosphatidylcholines (LPCs) and LPEs were significantly decreased 3 and 24 h after treatment with 40 mg/kg osthole [14]. These results indicated that osthole can cause decreases in plasma LPC and LPE levels, which might be involved in its potential beneficial effects on adipogenesis [14].

Biological Activity of DDIE DDIE exhibited various anti-bacterial activities in a previous study [31]. The levels of endogenous metabolites that were affected by DDIE treatment were examined by UPLC-ESI-QTOFMS-based metabolomics. Two top increased ions, 2,8-dihydroxyquinoline and its glucuronide, were found in the S-plot of OPLS-DA [15].
This evidence suggested that DDIE might exert its pharmacological effects through regulating the gut microbiota, which might contribute to its anti-inflammatory and anti-bacterial activities [15] (Fig. 6). 2,8-Dihydroxyquinoline and its glucuronide are related to the gut microbiota; in previous studies, both 2,8-dihydroxyquinoline and its glucuronide were significantly elevated in mouse urine after tempol treatment [34].

Effect of Nutmeg on Colon Cancer Nutmeg is the seed of Myristica fragrans Houtt., which shows various therapeutic effects in gastrointestinal disorders [37]. (Fig. 2: Chemical structures of some natural products whose metabolic maps have been evaluated using metabolomics.) In a previous study, nutmeg could protect against dextran sulfate sodium-induced colitis in mice [38]. Furthermore, nutmeg was known to exhibit anti-microbial activity against Helicobacter pylori and Escherichia coli [39,40]. These results provided a hint that nutmeg might prevent colon cancer via its anti-microbial potential. UPLC-ESI-QTOFMS-based metabolomics revealed the accumulation of uremic toxins (such as cresol sulfate, cresol glucuronide, indoxyl sulfate, and phenyl sulfate) in colon cancer [41]. Uremic toxins can be generated by a disordered gut microbiota, and their accumulation was associated with elevated interleukin-6 (IL-6) levels [41]. Anti-microbial nutmeg treatment attenuated the levels of uremic toxins and decreased IL-6 levels in colon cancer (Fig. 7). This study suggested that modulation of the gut microbiota using nutmeg or other dietary interventions might be effective for the treatment of colon cancer [41].

Hepatoprotective Effect of Nutmeg The aqueous extract of nutmeg was shown to protect rats against isoproterenol-induced hepatotoxicity in a previous study [42]. (Fig. 3: Metabolic map of osthole obtained using metabolomics; metabolites marked with an asterisk (*) represent observed isomers, and novel metabolites discovered by UPLC-ESI-QTOFMS-based metabolomics are shown by red arrows. Reproduced with permission from [14].) Macelignan from Myristica fragrans Houtt. can protect against cisplatin-induced liver injury through the activation of c-Jun N-terminal kinase (JNK) [43]. Furthermore, the antioxidant ability of Myristica fragrans Houtt. is related to the inhibition of lipid peroxidation and superoxide free radical activity [44]. These results provided a hint that nutmeg might have hepatoprotective activity. A thioacetamide (TAA)-induced acute liver injury model in mice was used to explore the mechanism of the protective effects of nutmeg extract. UPLC-ESI-QTOFMS-based metabolomics revealed that treatment with nutmeg led to the recovery of a series of LPCs and acylcarnitines disrupted by TAA exposure [45]. Gene expression analysis demonstrated that the protective effect of nutmeg was achieved by the activation of peroxisome proliferator-activated receptor alpha (PPARα). Nutmeg could not protect against TAA-induced liver injury in Ppara-null mice, suggesting that its protective effect was dependent on PPARα (Fig. 7) [45]. Furthermore, the neolignan MRL from nutmeg also showed protective effects against TAA-induced liver damage. This study indicated that lignan compounds were the bioactive ingredients of nutmeg [45].
Metabolomics Explores the Toxicity of Natural Products As a key component of systems biology, metabolomics plays an increasingly important role in the mechanistic elucidation of the toxicity of natural products, and can be used to determine (I) the toxic metabolites and (II) the altered endogenous metabolites. (I) The potential toxic metabolites of natural products include GSH-conjugated metabolites [5] and N-oxide metabolites [46]. (II) The altered endogenous metabolites might include bile acids, acylcarnitines, lipids, amino acids, long chain fatty acids, and dicarboxylic acids. Metabolomics can be used to evaluate hepatotoxicity [47], cerebral lesions [48], and nephrotoxicity [49], and to evaluate the safety of natural products, such as noscapine and Butea monosperma extract (Table 1) [50,51]. Using this powerful technology, the toxicity and safety of various natural products have been evaluated, such as noscapine [50], bupleurotoxin [48], celastrol [52], TP [5], and various extracts of natural products (Table 1 and Fig. 8).

Hepatotoxicity of TP TP and triptonide (TN) are two of the main bioactive ingredients isolated from Tripterygium wilfordii Hook. f. Although the two compounds have similar chemical structures, their toxicities are different. It was reported that severe hepatotoxicity can be induced by TP in animals and humans [53], but such toxicity was not induced by TN in animals [54]. The activities of aspartate transaminase (AST) and alanine transaminase (ALT) in the serum were dramatically increased in the TP-treated group at a dose of 1 mg/kg, while AST and ALT levels were not changed in the TN-treated group at the same dose [5]. In order to understand the differences between TP- and TN-induced hepatotoxicity, the metabolic pathways of TP and TN were compared in human liver microsomes (HLM) by UPLC-ESI-QTOFMS-based metabolomics. Twenty-five drug metabolites were identified by metabolomics for both TP and TN, and eight were found to be novel. Metabolomics showed that although hydroxylation and demethylation were the major metabolic pathways for both TP and TN, there were significant metabolic differences between them [5]. Furthermore, TP showed a significantly lower metabolic rate in liver microsomes than TN. This study reveals that the hydroxyl group at C-14 in the molecular structure of TP plays an important role in TP-induced hepatotoxicity (Fig. 9) [5].

Hepatotoxicity of Noscapine Noscapine is a phthalideisoquinoline alkaloid isolated from opium, which has been clinically used as an efficient cough suppressant [55]. In recent years, the inhibitory potential of noscapine towards the growth of various tumors, including glioblastoma, colon cancer, and non-small cell lung cancer, has been demonstrated using in vitro and in vivo methods [56,57]. The safety of noscapine was still controversial because its methylenedioxyphenyl group had been widely accepted as an important structural alert [50]. Some studies suggested that noscapine was potentially carcinogenic [58]. Therefore, the purpose of the present study was to investigate the bioactivation of noscapine. UPLC-ESI-QTOFMS-based metabolomics was used to analyse the in vitro incubation mixtures and the urine and feces samples from mice treated with noscapine [50]. In vitro GSH trapping revealed the existence of an ortho-quinone reactive intermediate (Fig. 10). However, the reactive intermediate of noscapine was not discovered in vivo. The GSH, AST, ALT, and alkaline phosphatase (ALP) levels obtained from noscapine-treated mice did not show significant alterations.
All these results indicated that noscapine did not induce hepatotoxicity in mice, and they provide important information for the development of noscapine for anti-tumor therapy because of its safety [50].

Conclusions Metabolomics has been defined as "the comprehensive and quantitative analysis of all metabolites", and it can be used for the study of natural products' druggability, including the metabolites derived from natural products, the metabolic changes induced by natural products, and the toxicity related to natural product exposure. In the past decade, natural products studies have been greatly aided by the use of LC-MS-based metabolomics. In the future, metabolomics can give more valuable information in natural products research, especially when combined with isotope tracers, genetically modified mice, and systems biology analyses. First, the combination of metabolomics with stable isotope tracers can be successfully used both to study the metabolism of natural products and to monitor metabolite flux in vivo and in vitro. Second, genetically modified mice are also valuable tools for understanding the role of specific genes in the xenobiotic metabolism and the activity and toxicity mechanisms of natural products. Furthermore, metabolomics integrated with other omics, such as genomics and proteomics, can serve as an effective tool for investigating the mechanisms of natural products' toxicity and activity. Metabolomics technology is also widely used for drug discovery and development. Importantly for the pharmaceutical industry, the technology has advanced to the point where hundreds of endogenous and exogenous metabolites can be found in urine, plasma, and tissue extracts. Metabolomics has the potential to make a powerful impact in preclinical drug development studies, including the identification of new targets, the elucidation of the mechanisms of action of new drugs, the development of safety and efficacy profiles, and the characterization of the absorption, distribution, metabolism, and excretion (ADME) of new drugs. More importantly, toxicity is a leading cause of attrition at all stages of the drug development process. Metabolic profiling has the potential to identify toxicity early in the drug discovery process, which can save time and money for pharmaceutical companies.
3,175.2
2018-06-29T00:00:00.000
[ "Medicine", "Chemistry" ]
The Camouflage Machine: Optimising protective colouration using deep learning with genetic algorithms The essential problem in visual detection is separating an object from its background. Whether in nature or human conflict, camouflage aims to make the problem harder, while conspicuous signals (e.g. for warning or mate attraction) require the opposite. Our goal is to provide a reliable method for identifying the hardest and easiest to find patterns for any given environment. The problem is challenging because the parameter space provided by varying natural scenes and potential patterns is vast. Here we successfully solve the problem using deep learning with genetic algorithms and illustrate our solution by identifying appropriate patterns in two environments. To show the generality of our approach, we do so for both trichromatic and dichromatic visual systems. Patterns were validated using human participants; those identified as the best camouflage were significantly harder to find than a widely adopted military camouflage pattern, while those identified as most conspicuous were significantly easier than other patterns. Our method, dubbed the 'Camouflage Machine', will be a useful tool for those interested in identifying the most effective patterns in a given context.

Introduction Colouration has been an exceptionally useful phenotype for biological research, from genetics and development to ecology and evolution 1. However, moving beyond colour per se to patterning, it is also a difficult phenotype to characterise. While a colour can be represented in a relatively low-dimensional space based on spectral characteristics, photoreceptor sensitivities or psychophysical measurements 2, a pattern (a visual texture) is a high-dimensional attribute 3,4. The problem of characterisation is particularly acute when the interest is in a colour pattern shaped by the perception of signal receivers as well as by ecology, as will be the case for camouflage or biological signals, because a single colour pattern may need to be represented in multiple perceptual spaces. Yet characterisation is just the starting point for an even greater problem if we wish to evaluate the effectiveness of a colour pattern for a visual function such as camouflage or signalling: the difficulty of searching a high-dimensional space for the optimal solution. Here, we show how residual deep neural networks (RDNNs) 5, combined with genetic algorithms, can be harnessed together with classical psychophysical techniques to search a high-dimensional spatiochromatic space for optimal camouflage and signalling patterns. To help understand the depth of the problem, take the study of animal camouflage. Research has typically experimentally tested a small set of pattern types relevant to a specific functional hypothesis, or has identified ecological correlates of extant patterns, i.e. patterns seen in nature 6,7,8,9. Such studies necessarily omit possible patterns that evolution has not realised because of phylogenetic or developmental constraints, and so cannot identify the influence (if any) of such constraints. Furthermore, without comparison to the optimal pattern(s), it is hard to identify the extent to which an observed pattern is subject to trade-offs with other functions such as thermoregulation or UV-protection 10,1. Defining a framework that could effectively characterise patterns, and realistically evaluate these patterns in terms of their visibility in a given context, would be an exceptionally useful research tool.
Not only would this provide insight into animal camouflage, but it would also allow assessment of whether the signals that animals use to display, variously, their qualities to mates or their unprofitability to predators are optimised for conspicuity. These may be subject to trade-offs that render maximal conspicuity suboptimal and/or favour tuning of the signal to particular receivers at particular distances 11,12,13,14. In the human domain, our method may be useful in the development of bespoke camouflage for specific contexts, in maximizing the visibility of warning signs, or in helping to reduce visual clutter due to infrastructure. The main purpose of this paper is to propose and test a new method that can identify effective patterns for a given environment. Depending on context and requirements, the method is applicable for finding patterns that will be effective either for camouflage or for being highly conspicuous. Historically, methods used to evaluate patterns have tended to be based on binary comparison or on measuring detection speed and accuracy, typically on computer screens. This is useful if there are only a few patterns to compare, but if the aim is not to constrain the space of possible patterns artificially, then this approach is inadequate. Our method gathers data, using human participants, on a subset of the parameter space and then uses residual deep neural networks 5 to interpolate between samples, predicting the difficulty of finding empirically untested patterns. To make the method highly applicable to real-world scenarios, we constructed naturalistic stimuli and, for realism, projected them on a screen large enough to fill the visual field. We used backgrounds taken from photographs of both temperate forest and scrub desert, with foreground occlusion layers and targets inserted into the scenes using blue screening ("chroma key"), a method commonly employed in the film industry. We were also keen that the textures on the targets had biological plausibility. To achieve this, we used two-component reaction-diffusion equations. These systems, originally proposed by Alan Turing 15,16, consist of semi-linear parabolic partial differential equations capable of creating a vast array of textures, including the camouflage patterns of animals 17,18. Textures were colour-mapped using one colour (represented as an RGB triplet) for each of the two components, creating two-colour, natural-looking patterns. Besides the human visual system, which is trichromatic, we also tested simulated dichromatic stimuli, because most mammals are dichromats 19. Targets in our main experiments were constructed using nine dimensions (three for each of the two colours and three for texture), resulting in a parameter space containing a total of 6.18 × 10^17 possible patterns. Since our parameter space was so large, we were unable to exhaustively or randomly select targets with sufficient diversity. Therefore, we implemented a genetic algorithm (GA) to optimise the colour and texture parameters, based on participants' responses trial by trial, for the hardest or easiest to see stimuli 20. Our first three experiments were pilot experiments conducted to validate the genetic algorithm using an increasing number of optimised dimensions: the first experiment optimised for targets with single trichromatic colours; the next experiment tested the optimiser with greyscale reaction-diffusion textures; and the final pilot optimised for two colours, but using a fixed pattern.
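The trial-by-trial genetic optimisation can be sketched as follows, assuming a population of nine-dimensional parameter vectors (two RGB triplets plus three texture parameters, normalised to [0, 1]) scored by measured reaction time; the selection, crossover and mutation operators shown are generic illustrative choices rather than the exact ones used in the experiments.

```python
# One generation of a GA over colour/texture parameter vectors.
import random

def evolve(population, fitnesses, mutation_rate=0.1, sigma=0.05,
           maximise=True):
    """fitnesses are reaction times; maximise=True searches for camouflage
    (slow detection), maximise=False for conspicuity (fast detection)."""
    def tournament():
        a, b = random.sample(range(len(population)), 2)
        better = a if (fitnesses[a] > fitnesses[b]) == maximise else b
        return population[better]
    children = []
    for _ in range(len(population)):
        mum, dad = tournament(), tournament()
        # Uniform crossover followed by clipped Gaussian mutation
        child = [m if random.random() < 0.5 else d for m, d in zip(mum, dad)]
        child = [min(1.0, max(0.0, g + random.gauss(0.0, sigma)))
                 if random.random() < mutation_rate else g for g in child]
        children.append(child)
    return children
```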
Our hypothesis was that, over experimental generations, the reaction times to targets would gradually increase or decrease, depending on whether targets were optimised for camouflage or conspicuity, respectively. Analysis using General Linear Mixed Models showed support for a working GA and we proceeded with our main experiment. The main experiment followed a 2x2 design with two types of backgrounds (temperate forest or semi-arid desert) and two colour vision conditions (trichromatic or simulated dichromatic); examples of the stimuli are illustrated in Figure 1. The results of the main experiment were used to train RDNNs, which we then used to predict reaction times (a measure of difficulty) for a far greater number of patterns than had been observed by the human participants. A final experiment was conducted to validate the method.

Results Genetic algorithm optimises for camouflage or conspicuity. The three pilot experiments confirmed that the genetic algorithm was capable of optimising target colour and texture for both concealment and high visibility. General linear mixed models (GLMMs) showed that trials became significantly harder over time when optimising for concealment, while optimising for visibility yielded easier to find targets (Table 1). The effects of trial number on log-transformed reaction times were analysed by fitting general linear mixed models using the lme4 package 21 in R 22. Nested models were compared using the change in deviance on removal of a term. A positive estimate coupled with a significant p-value suggested that targets became harder to see over the course of the experiment, while negative estimates indicated that targets became easier to see. It should be noted that estimates and standard deviations are presented as log-transformed reaction times. For example, an estimate of 1.86e-4 indicates that the target in the final trial was approximately one second harder to find than the target in the initial trial. In the main experiment, the optimiser produced significantly harder/easier results according to its settings (see Table 1, experiments 4a-d), except in the dichromat desert condition optimised for easiest to see targets (p = 0.5321); we address this in the discussion below. Networks were trained with all of the samples collected from the main experiment. To provide a measure of accuracy in our predictions (an estimate of standard error), we created 100 bootstraps of our networks for each of the four conditions. The bootstrap method is a test or metric that uses random sampling with replacement. It allows the assignment of accuracy measures, defined here in terms of variance, and is particularly useful when the value of interest is, as in the present case, a complicated function 24. By averaging the bootstrapped networks' predictions, we calculate both a data-dependent smoothing of the reaction time function and an estimate of our certainty in that estimate. Each network was trained on a random sample of 90% of the data and validated with the remaining 10%. Artificial observers were created to predict pattern difficulty. Predicting the full parameter space poses a computational challenge due to the vastness of the space. We therefore created 100 "artificial observers" based on each of the 100 models, using a similar genetic algorithm to the one discussed above. The artificial observers were used to generate 1,000 optimised samples each. Averaged reaction times for the combined 100,000 samples were obtained using all 100 models.
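The reaction-time trend analysis described above can be transcribed from the paper's R/lme4 workflow into Python's statsmodels as a sketch; the data-frame layout (columns rt, trial, participant) and the random-intercept structure are assumptions.

```python
# Mixed-model trend test on log reaction times (statsmodels analogue of lme4).
import numpy as np
import statsmodels.formula.api as smf

def trend_over_trials(df):
    """df needs columns: rt (seconds), trial (int), participant (id)."""
    df = df.assign(log_rt=np.log(df["rt"]))
    full = smf.mixedlm("log_rt ~ trial", df,
                       groups=df["participant"]).fit(reml=False)
    null = smf.mixedlm("log_rt ~ 1", df,
                       groups=df["participant"]).fit(reml=False)
    dev_change = 2.0 * (full.llf - null.llf)  # change in deviance, 1 d.f.
    # A positive trial estimate means targets got harder to find over trials.
    return full.params["trial"], dev_change
```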
The top 25 patterns that were identified for each condition and optimisation setting are illustrated in Figure 2. For validation, we compared the patterns identified by our method with the British military's former Disruptive Pattern Material (DPM), a camouflage used for many years and proven in temperate forest areas 25. We therefore considered it an appropriate control that avoids political sensitivities created by comparisons to any current military patterns. Furthermore, the average colour of the background was khaki, which has been used by numerous militaries (including the British) from the late 19th century, making it also an important control pattern. Statistics were obtained using GLMMs, where the model with the effect of treatment included provided a significantly better fit than one without it (Δdeviance = 65.848, d.f. = 3, p = 3.304e-14). Post-hoc analysis (Tukey HSD) showed clearly (see Figure 4) that the hardest patterns identified by our method were significantly harder to detect than DPM (p = 0.0256) and the average colour (p = 0.0474). Similarly, the easiest patterns according to our method were significantly easier to detect than DPM (p < 0.001) and the average colour (p < 0.001).

Discussion The method presented here, named the Camouflage Machine, was successfully used to identify patterns that were better, in terms of camouflage, than an existing military camouflage pattern and the average background colour, commonly regarded as a good solution for concealment 26. The Camouflage Machine provides an effective and efficient way to search very large parameter spaces in order to establish optimal patterns for camouflage, as well as for conspicuity, in various environments. As illustrated by our simulated dichromat experiments and our use of two very different backgrounds, the method generalises to different colour vision systems and across dissimilar environments. It is important to note that the Camouflage Machine need not identify a single best concealed/visible pattern, but can reveal multiple, similarly effective solutions. It is equipped to deal with the possibility that natural images contain too much variation to expect any method, including evolution, to come up with a unique, best solution. Figure 2 illustrates that there is considerable visual variability between patterns within conditions, but not in terms of predicted difficulty (see Figure 3). In all cases, we found that the standard error of the predictions was less than 17 milliseconds, constituting what we believe to be an indistinguishable perceptual difference in the context of visually complex and non-affective stimuli 27,28. It is also interesting to note that the predicted mean reaction times for the easiest to find patterns in each condition are equivalent. We believe this should be expected, because a sufficiently salient stimulus in a complex scene should exhibit a pop-out effect 29,30,31. Although dichromat targets optimised for concealment were significantly harder to detect than trichromat ones, consistent with our previous findings on uniformly coloured stimuli 32, it should be stressed that our results are for trichromats using the visual information available to a dichromat, not natural dichromats neurophysiologically adapted to, and familiar with, using that level of information. Previous studies have used evolving prey 33,34,35; however, an important benefit of the Camouflage Machine is that far larger parameter spaces can be explored, effectively predicting data for unseen stimuli. Although deep neural networks are capable of modelling a large parameter space, establishing optima in a principled way remains a challenge.
While it is technically possible to exhaustively predict every possible pattern in a given parameter space 32 , it is certainly impractical in a reasonable timescale for the space described in this study. Our solution involves combining genetic algorithms with deep neural networks, effectively training "artificial observers". Artificial observers allow us to navigate the parameter space in a principled way and establish the hardest and easiest colour and pattern combinations within reasonable timescales. For example, the predicted two-colour stimuli (optimised for concealment) were able to outperform an existing military pattern 25 developed specifically for the (temperate forest) environment used in the experiment. We found that our genetic optimiser worked well in producing increasingly harder or easier to find patterns. However, in a single condition, dichromat stimuli optimised for conspicuity in the semi-arid desert, an improvement in pattern detectability across all trials was not found. We believe the explanation for this stems from the narrower range of patterns that provide significant levels of concealment; in other words, the optimiser had to deal with a space where most patterns are highly visible, and so it was already at ceiling performance for the majority of trials. The Camouflage Machine offers a novel and useful tool for scientific and commercial applications. Biologists will be interested in testing various hypotheses about the colouration of animals in specific environments. For example, finding an optimal concealing pattern in an environment and comparing it to the camouflage of animals inhabiting that environment could reveal more about their visual ecology 36,37,38,39,17,40,18 . Our method also applies to the development of bespoke camouflage for human applications, including, for example, hiding visually unappealing infrastructure 41 . Maximising conspicuity will be beneficial in safety applications, such as more salient warning signs and clothing for (motor)cyclists. The Camouflage Machine is also capable of contributing to the development of dual-purpose applications, where both concealment and visibility are simultaneously required. An example of this is hunters' clothing, where the wearer should be concealed from dichromatic game, but highly visible to other hunters 42 . Similarly, the Camouflage Machine can be used to further investigate distance-dependent defensive colouration 43 . Introducing viewing distance as a variable in the models would allow identification of patterns that are conspicuous close up, but become concealed at a distance 11,12,13,14 . While we deliberately limited ourselves to two colours and a simple (spherical) shape, it is clearly possible to include a larger number of colours and more complex shapes. Added to this, measures other than reaction time can be used, for example aesthetic preference 44 . Conclusion The impracticality of using large arrays of patterns has previously been a limiting factor in camouflage research and studies of the adaptive value of colouration more generally 1 . We believe that our proposed method will enable larger-scale studies to be carried out. With the aid of genetic algorithms and deep neural networks, we have also demonstrated a novel approach to psychophysics, carried out in multiple dimensions. We have achieved this using a modest number of optimised samples collected from relatively few participants.
Using the Camouflage Machine, it is possible to identify clusters of global optima efficiently for both concealment and conspicuity. Participants A total of 95 participants (71 female, 24 male) were recruited from members of the University of Bristol. All participants had normal or corrected-to-normal vision. Informed consent was obtained from all participants in accordance with the Declaration of Helsinki. All experiments were approved by the Ethics Committee of the University of Bristol's Faculty of Science. Stimuli The creation of stimuli used the same approach as Fennell and colleagues 32 . Background images were cropped to 1920x1080 pixels prior to further processing. Between the foreground and background, a target layer was constructed from colours and textures (see below). We preprocessed the blue screen images in order to create a mask for all possible locations for the centres of targets. The derived mask allowed rapid location selection and introduction of occlusion in the foreground. A bespoke program, written using the Psychtoolbox-3 extension 45 for Matlab 46 , was used to construct and present the stimuli, and to collect experimental data. During all experiments, stimuli were dynamically constructed from the three layers. Backgrounds were randomly chosen from a pool of 64 images (per geographical location). Using the associated mask, a location for the target was randomly selected. Based on the number of backgrounds and potential target positions, there were a very large number of potential unique stimuli. The target was always a sphere with a radius of 64 pixels. After applying colours and texture (specific to the experiments described below), we added pseudo-realistic shading in order to produce a spherical look. The sphere was chosen as it was straightforward to create and to shade appropriately for the scene. Maintaining the spherical shape throughout the experiments managed the potential confound of a target appearing different from varying angles. Where dichromatic images were used, representations of the stimuli were created using the protan equation 47 , which simulates a trichromatic representation of an image perceived by a protanopic dichromat. Textures We implemented the Gray-Scott model of reaction diffusion 48 in CUDA C 49 . Diffusion rates were fixed at 0.073 and 0.031, respectively, and every image started from the same white noise template with 10,000 iterations. The feed parameter varied between 0 and 0.25, while the kill parameter was set between 0 and 0.07. The resulting space yielded a large number of diverse textures that could be represented by a greyscale image. Many of the realised textures were homogeneous and so only a subset (n = 6,809) of the textures was selected for the study (see Figure 5). Figure 5. Top: an exponential curve with one term, of the form f(x)=a*exp(b*x), was fit to obtain the a and b coefficients; the coefficients were used in a CUDA C program as limits to generate our pattern space, and unwanted patterns created by the CUDA program (e.g. those that are completely black) were removed. Bottom: examples of textures used in the study. One of the characteristics of the Gray-Scott model was that the feed and kill parameters did not provide a smooth space (i.e. adjacent parameter values could produce drastically different patterns); smoothness was necessary for optimisation. We re-parameterised the texture space to make it smoother using the following procedure.
Each pattern was analysed using a Log-Gabor filter bank, providing a 24-dimensional representation of each pattern in terms of spatial frequency and orientation 50 . The resulting values were normalised and binned in order to reduce the number of textures that could be considered perceptually very similar. This reduced the number of textures to 2,196. The 24-dimensional representation was then reduced to 3 dimensions using t-Distributed Stochastic Neighbor Embedding (t-SNE), an effective method for dimensionality reduction popular in machine learning 51 , with a perplexity of 1,000 and a learning rate of 1,000, both derived empirically. The motivation for using t-SNE rather than another popular method such as Principal Component Analysis (PCA) was that, unlike PCA, t-SNE is a probabilistic non-linear method that allowed us to create a texture space where local structure (rather than overall variance) is preserved. Optimisation Parameters for the first generation of stimuli were randomly selected from the parameter spaces for each of the experiments identified below (e.g. three for each colour and three for each pattern). Subsequent generations were produced using tournament-based selection. Tournament-based selection involves running "competitions" between members of a population, chosen at random, where the winner of each competition, the member with the best fitness, is selected for crossover. A larger tournament size reduces the probability that weak individuals will be selected (since there is a higher probability that a stronger individual is also in that tournament), thereby increasing selection pressure. Through the crossover process, offspring received 50% of their genes, selected at random, from each parent (i.e. the best two individuals from the tournaments). Crossover was followed by mutation at a rate of 10%, which assigned random values (mutations) to randomly chosen genes. The genetic algorithm was run for various numbers of generations dependent upon the experiments described below. General procedure Images were projected on to a 1900x1070 mm screen (Euroscreen, Halmstad, Sweden) from 3100 mm using a 1920x1080 pixel HD (contrast ratio 300,000:1) LCD projector (PT-AE7000U; Panasonic Corp., Kadoma, Japan). For Yxy measurements of projected colours, see Table S1 in the Supplementary Materials. Participants were asked to indicate on which side of the screen they saw the target, using the left and right shift keys on a keyboard. Each trial had a 10 s timeout; if this was reached, the experiment automatically advanced. The inter-trial interval was set to 2 s. Failure to respond was recorded as a failure and the experiment moved on to the next stimulus. Reaction times were recorded to the nearest millisecond and errors indicating choice of the wrong side of the screen were logged. Experiments For each experiment (unless stated otherwise), half of the participants saw targets optimised for increasing difficulty, while the other half were presented with targets optimised for increased visibility. Occlusion levels were maintained between 25 and 50% of the target, chosen randomly from a uniform distribution. Experiment 1 had 10 participants (eight females, two males) with targets of a single colour presented on temperate forest backgrounds in trichromatic colour, optimised over 500 trials. Experiment 2 had 10 participants (eight females, two males) featuring monochrome stimuli with evolving textures presented on temperate forest backgrounds, optimised over 500 trials.
Experiment 3 had 10 participants (seven females, three males) who were shown targets with a fixed disruptive texture and two colours against a temperate forest background in trichromatic colour, optimised over 1000 trials. In this experiment, all participants were shown targets optimised to be hard to see. After we confirmed that the optimiser worked, the main experiment (Experiment 4) followed a 2x2 design with two types of backgrounds (temperate forest or desert scrub) and two colour vision conditions (trichromatic or dichromatic). Forty participants (seven males, 33 females) were randomly divided between the four conditions. Each participant completed 1000 trials. Deep neural networks The residual deep neural networks were written in Python 3 (Python Software Foundation) using the Keras framework. Categorical inputs were represented with one-hot encoding, which consists of zeros in all vector locations except for a single 1 in the location used to uniquely identify the category. Input colours, both trichromat and simulated dichromat, were represented as RGB triplets, with simulated dichromat values consisting of R and G channels of the same value. An alternative colour space such as CIELab or HSV could have been used, but as neural networks form their own internal representations of distances 54 , the choice of colour space is irrelevant. Each residual block comprised two dense layers (768 units each), a dropout layer and a summation layer; the network terminated in an output layer consisting of a single variable representing difficulty as reaction time (Figure 6). We trained the networks on difficulty using the built-in 'rmsprop' optimiser from Keras with the 'mean squared error' loss function, a batch size of 128 and 500 epochs. In order to establish the number of residual blocks to use, networks were trained with one, two, four and six residual blocks. When training a network model, a proportion of the dataset is held out for validation. The training loss is the error on the training set of data, in the present case calculated using mean squared error, while the validation loss is the error, calculated in the same way, after running the held-out validation set through the trained network. As the number of epochs increases, it is expected that both the validation and training error will drop. Put simply, if validation losses are compared across different models trained with the same data, the model with the lower loss would be preferred. Here, mean validation losses were calculated for 100 bootstrapped neural networks after 500 training epochs using mean squared error (Figure 7). Statistics to compare the effects of residual block number were calculated using random permutation tests, based on 100,000 resamples. p-values were adjusted for multiple comparisons with False Discovery Rate 55 . We found that neural networks with two residual blocks produced significantly lower error rates compared to networks with one or six residual blocks, in all four experimental conditions (Table 2). While networks with two residual blocks produced significantly lower error rates compared to networks with four residual blocks in temperate forest conditions, the difference was not significant in semi-arid desert conditions. Therefore, applying Occam's razor, we used networks with two residual blocks as the simpler option. Validation experiment The top 25 hardest and easiest to find patterns predicted by our method from the temperate forest trichromat condition were paired with 25 DPM and 25 average-colour patterns (Figure 8) for an experimental run with human participants.
One run contained each pattern four times in a random order (totalling 100 trials), supplemented by four randomly selected patterns from each condition presented at the start as practice trials. We recruited 25 participants (15 female, 10 male) for the validation experiment, where each run was presented to a single participant. In all other aspects the experiment was identical to those described above. Code availability The Matlab code for running experiments and the Python code to train and predict from the deep neural networks are available from the authors upon request.
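Since the code itself is available only on request, the following is a minimal, illustrative Keras sketch of the residual architecture described in the Methods, not the authors' implementation. The two residual blocks of paired 768-unit dense layers with dropout and a summation layer, the single reaction-time output, and the rmsprop/mean-squared-error training follow the text; the input width, dropout rate and activation functions are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, units=768, dropout=0.5):
    # Two dense layers and a dropout layer; the summation layer adds the
    # block's output back onto its input (the residual skip connection).
    h = layers.Dense(units, activation="relu")(x)
    h = layers.Dense(units, activation="relu")(h)
    h = layers.Dropout(dropout)(h)
    return layers.Add()([x, h])

inputs = keras.Input(shape=(10,))                 # colour/texture parameters (assumed width)
x = layers.Dense(768, activation="relu")(inputs)  # project input to the block width
for _ in range(2):                                # two blocks performed best (Table 2)
    x = residual_block(x)
outputs = layers.Dense(1)(x)                      # predicted difficulty as reaction time

model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="mean_squared_error")
# model.fit(X, y, batch_size=128, epochs=500)     # training settings from the text
```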
6,083
2020-01-14T00:00:00.000
[ "Computer Science", "Engineering" ]
Overview of Explainable Artificial Intelligence for Prognostic and Health Management of Industrial Assets Based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses Surveys on explainable artificial intelligence (XAI) are related to biology, clinical trials, fintech management, medicine, neurorobotics, and psychology, among others. Prognostics and health management (PHM) is the discipline that links the studies of failure mechanisms to system lifecycle management. An analytical compilation of PHM-XAI works is needed but still absent. In this paper, we use preferred reporting items for systematic reviews and meta-analyses (PRISMA) to present a state of the art on XAI applied to PHM of industrial assets. This work provides an overview of the trend of XAI in PHM and answers the question of accuracy versus explainability, considering the extent of human involvement, explanation assessment, and uncertainty quantification in this topic. Research articles associated with the subject, from 2015 to 2021, were selected from five databases following the PRISMA methodology, several of them related to sensors. The data were extracted from selected articles and examined, obtaining diverse findings that were synthesized as follows. First, while the discipline is still young, the analysis indicates a growing acceptance of XAI in PHM. Second, XAI offers dual advantages, where it is assimilated as a tool to execute PHM tasks and explain diagnostic and anomaly detection activities, implying a real need for XAI in PHM. Third, the review shows that PHM-XAI papers provide interesting results, suggesting that the PHM performance is unaffected by XAI. Fourth, human role, evaluation metrics, and uncertainty management are areas requiring further attention by the PHM community. Adequate assessment metrics to cater to PHM needs are requested. Finally, most case studies featured in the considered articles are based on real industrial data, and some of them are related to sensors, showing that the available PHM-XAI blends solve real-world challenges, increasing the confidence in the artificial intelligence models' adoption in the industry. General Progress in Artificial Intelligence Artificial intelligence (AI) continues its extensive penetration into emerging markets, driven by untapped opportunities of the 21st century and backed by steady and sizeable investments. In the last few years, AI-based research has shown much concentration in areas such as large-scale machine learning (ML), deep learning (DL), reinforcement learning, robotics, computer vision, natural language processing, and the internet of things [1]. According to the first AI experts report in the "One-hundred-year study on artificial intelligence", AI ability will be heavily embodied in education, healthcare, and home robotics. The Need for Explainable Artificial Intelligence Explainable artificial intelligence (XAI) is a discipline dedicated to making AI methods more transparent, explainable, and understandable to end-users, stakeholders, nonexperts, and non-stakeholders alike, to nurture trust in AI. The growing curiosity about XAI is mirrored by the spike of interest in this search term since 2016 and the rising number of publications throughout the years [38]. The Defense Advanced Research Projects Agency (DARPA) developed the XAI Program in 2017, while the Chinese government announced the Development Plan for New Generation of Artificial Intelligence in the same year, both promoting the dissemination of XAI [40].
The general needs for XAI are as follows: (i) Justification of the model's decision by identifying issues and enhancing AI models. (ii) Compliance with AI regulations and guidelines on usage, bias, ethics, dependability, accountability, safety, and security. (iii) Permission for users to confirm the model's desirable features, promote engagement, obtain fresh insights into the model or data, and augment human intuition. (iv) Allowance for users to better optimize and focus their activities, efforts, and resources. (v) Support for model development when the model is not yet considered reliable. (vi) Encouragement of cooperation between AI experts and external parties. Common XAI Approaches While there are many definitions linked to XAI, this work concentrates only on the most employed notions of interpretability and explainability. On the one hand, interpretability refers to the ability to provide human-understandable justification for the model's behavior. Thus, interpretable AI refers to model structures that are transparent and readily interpretable. On the other hand, explainability describes an external proxy used to describe the behavior of the model. Hence, explainable AI refers to post-hoc approaches utilized for explaining a black-box model. The first definition explicitly distinguishes between black-box and interpretable models. The second takes a broader connotation, in which explainability is a technical ability to describe any AI model in general, not only black boxes. XAI approaches are classified according to their explanation scope [41]. Intrinsic models are interpretable due to their simplicity, such as linear regression and logic analysis of data (LAD), while post-hoc approaches interpret more complex nonlinear models [32,33]. Examples of post-hoc approaches are local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP). An approach can be categorized as (i) AI-model specific or (ii) employable in any AI model (model agnostic) [14,42]. Class activation mapping (CAM), for example, can only be utilized after a convolutional neural network (CNN). Layer-wise relevance propagation (LRP) and gradient-weighted CAM may be employed in any gradient-based model. Finally, the explanation produced by an XAI method can cater either to local data instances or to the whole (global) dataset [41]. For example, SHAP may generate both local and global explanations, while LIME is only suitable for local explanation (a minimal usage sketch is given below). Review Motivation The main objective of this work is to present an overview of XAI applications in PHM of industrial assets by using preferred reporting items for systematic reviews and meta-analyses (PRISMA, available online: www.prisma-statement.org, accessed on 4 October 2021) guidelines [43]. PRISMA is an evidence-based guideline that ensures comprehensiveness, reducing bias and increasing reliability, transparency, and clarity of the review with minimum items [44,45]. PRISMA is a 27-item checklist that needs to be satisfied as well as possible for best practice in systematic review writing. However, in the systematic review presented in the present study, items 12, 13e, 13f, 14, 15, 18-22, and 24 of the PRISMA methodology were omitted as they were not dealt with here; see prisma-statement.org/PRISMAstatement/checklist.aspx (accessed on 19 November 2021) for details on these items.
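As a concrete illustration of the local/global distinction discussed above, the following minimal Python sketch applies SHAP to a generic tree-based regressor. The model, data, and feature dimensionality are illustrative assumptions, not drawn from the reviewed PHM articles.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for, e.g., sensor features predicting a health indicator.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)     # model-specific explainer for trees
shap_values = explainer.shap_values(X)    # (n_samples, n_features) contributions

# Local explanation: per-feature contribution for one data instance.
print("instance 0 contributions:", shap_values[0])

# Global explanation: mean absolute contribution of each feature over the data.
print("global importance:", np.abs(shap_values).mean(axis=0))
```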
The rationales motivating the compilation of this review are the following: (i) Global interest in XAI: According to our survey, the general curiosity toward XAI has surged since 2016 [14]. Figure 1 shows the interest expressed for the term "explainable AI" in Google searches, with 100 being the peak popularity for any term. (ii) Specialized reviews: In the early years, several general surveys on XAI methods were written [32,34]. More recently, as the discipline grows, more specialized works have emerged. Reviews on XAI have been related to drug discovery [31], fintech management [35], healthcare [30,33,36], neurorobotics [39], pathology [28], plant biology [37], and psychology [29]. Thus, it is necessary to produce an analytical compilation of PHM-XAI works, which is still absent. (iii) PHM nature and regulation: PHM is naturally related to high-investment and safety-sensitive industrial domains. Moreover, it is pressing to ensure the use of well-regulated AI in PHM. Hence, it is necessary for XAI to be promoted as much as possible and its know-how disseminated for the benefit of PHM actors. The review goals are achieved by addressing the following points: (i) General trend: This is related to an overview of the XAI approach employed, the repartition of the mentioned methods according to PHM activities, and the type of case study involved. (ii) Accuracy versus explainability power: According to DARPA, the model's accuracy performance is inversely related to its explainability [40]. (iii) XAI role: Whether XAI assists or overloads PHM tasks. (iv) Challenges in PHM-XAI progress: Crosschecks were done with the general challenges raised in [14,32,34,38], associated with: (a) The lack of explanation evaluation metrics. (b) The absence of human involvement for enhancing the explanation effectiveness. (c) The omission of uncertainty management in the studied literature. The remainder of this paper is organized as follows: In Section 2, the methodology is introduced, followed by the results presentation in Section 3. Then, the discussion is elaborated in Section 4. Finally, the concluding remarks are presented in Section 5. Framework A single person performed the search, screening, and data extraction of the articles considered in this study. Thus, no disagreements occurred at any of the steps mentioned. Only peer-reviewed journal articles on PHM-XAI of industrial assets between 2015 and 2021 in the English language were selected.
Databases Five publication databases consisting of ScienceDirect (Elsevier, until 17 February 2021), IEEE Xplore (until 18 February 2021), SpringerLink (until 22 February 2021), Scopus (until 27 February 2021), and the Association for Computing Machinery (ACM) Digital Library (until 28 May 2021) were explored. Advanced search was used, but since the database features differ, a specific strategy was adopted for each. In IEEE Xplore, search was conducted in the "abstract" and "document title" fields only, as they are the most relevant options. The database also authorizes search within the obtained results in the "search within results" field. Wildcards were not used in IEEE Xplore even though they were permitted. Comprehensive searches were performed in ScienceDirect and Scopus: the "title", "abstract", and "author-specified keywords" fields for ScienceDirect; and the "search within article title", "abstract", and "keywords" fields for Scopus. However, unlike Scopus, ScienceDirect does not support wildcard search; therefore, wildcards were employed only in Scopus. In SpringerLink, the "with all the words" field was utilized together with wildcards. In ACM, both the ACM full-text collection and the ACM guide to the literature were examined. The "Search within" option in the "title", "abstract", and "keywords" was executed with wildcards. Duplicates were then screened out using the Zotero software (www.zotero.org, accessed on 4 October 2021). The full search strategy is listed in Appendix A. Steps of Our Bibliographical Review The following screening steps were executed one after another, each step examining first the title, then the abstract, and then the keywords: (S1) Verify whether the article type is research or not. (S2) Exclude non-PHM articles by identifying the absence of commonly employed PHM terms such as prognostic, prognosis, RUL, diagnostic, diagnosis, anomaly detection, failure, fault, or degradation. (S3) Discard non-XAI articles by identifying the absence of commonly used XAI terms, which are explainable, interpretable, and AI. (S4) Eliminate non-PHM-XAI articles by identifying the absence of both PHM and XAI terms as indicated in steps (S2) and (S3) above. (S5) Remove articles related to medical applications or network security. Then, the context of the remaining articles was examined for a final screening, retaining only the desired articles. The data extracted from the articles were gathered in a Microsoft Excel file with each column corresponding to an investigated variable. Directly retained variables were: "author", "publication year", "title", "publisher", and "publication/journal name". Further information extracted from the article context analysis is as follows: (i) PHM activity category: This corresponds to either anomaly detection, prognostic, or diagnostic, with structural damage detection as well as binary failure prediction being considered as diagnostic. (ii) XAI approach employed: This is related to the category of the XAI method. (iii) Recorded performance: This is associated with the reported result. Some papers clearly claim the comparability or the superiority of the proposed method over other tested methods.
In the case where comparison was not conducted, the reported standalone results for accuracy, precision, F1 score, area under the receiver operating characteristic curve (AUC) score, area under the precision-recall curve (PRAUC) score, or the Cohen kappa statistic score were classified, with reference to Table A4 in Appendix A, as "bad", "fair", "good", or "very good". When mixed performance of good and very good was recorded for the same method, it was quantified as only "good". When a method was superior to the rest, it was classified as "very good" unless detailed as only "good". Some results were judged relative to the problem at hand, for example those using the mean square error (MSE), root mean square error (RMSE), or mean absolute error (MAE), as direct comparison is not possible. (iv) XAI role in assisting PHM task: This regards the role of XAI in strengthening PHM ability. (v) Existence of explanation evaluation metrics: This is stated as the presence or not of a metric. (vi) Human role in PHM-XAI works: This is considered as the existence of the mentioned role or not. (vii) Uncertainty management: This is linked to whether uncertainty is managed in any of the stages of the PHM or XAI approaches, which increases the possibility of user adoption due to the additional surety. (viii) Case study type (real or simulated): Real was considered when the data of a case study came from a real mechanical device, whereas simulated was considered when data were generated utilizing any type of computational simulation. Outputs The outputs were presented in the following forms: (i) Table: Selected and excluded articles with the variables sought. (ii) Pie chart: Summary of the PHM activity category, explanation metric, human role, and uncertainty management. (iii) Column graph: Summary of the PHM-XAI yearly trend, XAI approach employed, recorded performance, and XAI role in assisting a PHM task. Framework We selected 3048 papers from the databases according to the applied keywords with their respective numbers (absolute frequency) as shown in Table A3 of Appendix A. Note that 288 articles were screened out as duplicates. Out of the 2760 remaining, 25 papers were screened out as they were editorials or documents related to news. Then, 70 papers were selected according to criteria (S1)-(S5) described in Section 2.3 (steps of our bibliographical review) from the remaining 2735 articles. Lastly, only 35 papers were selected, as the other 35 articles were deemed not relevant to the reviewed topic after context verification. The final selected and excluded studies can be found, respectively, in Tables A1 and A2 of Appendix A. PRISMA Flow Diagram As mentioned, the selected and excluded articles based on the criteria for inclusion are disclosed, respectively, in Tables A1 and A2. The PRISMA flow diagram of the selection and screening processes is displayed in Figure 2. The repartition of the selected articles' PHM domain as well as their publisher are presented in Figures 3 and 4, respectively. The repartition of the excluded articles' PHM domain as well as their publisher are presented in Figures 5 and 6, respectively. As noted from Figure 3, diagnostic research holds the biggest share in PHM-XAI articles. Figure 4 illustrates the IEEE and Elsevier publishers as being the biggest sources of the accepted articles.
Figure 2. PRISMA flow diagram of the search strategy for our review on PHM-XAI ("*" indicates that "n =" in the database field corresponds to the total number of records from all the databases specified; "**" indicates that the Zotero software was used for duplication analysis). Numerous unselected publications, though related to XAI, correspond to process monitoring research, as shown in Figure 5. These works were excluded as they are closely related to a quality context rather than failure of products. Some works are focused on products instead of industrial assets. Furthermore, the anomaly described is typically associated with process disturbance rather than failure or degradation. Studies concerning network security were also omitted. In addition, most of the excluded articles come from the Elsevier and IEEE publishers, as confirmed by Figure 6, further showing that these publishers are the main sources of many XAI-related articles. General Trend As shown in Table A1 of Appendix A and summarized in Figure 7, the accepted articles according to publication year show an upward trend, with a major spike in 2020, indicating a growing interest in XAI from PHM researchers. However, the number of accepted articles is still very small, reflecting the infancy of XAI in PHM compared to other research fields such as cyber, defense, healthcare, and network security. XAI is especially beneficial to the latter domains as it helps in fulfilling their primary functions of protecting lives and assets, in contrast to PHM research, where it is predominantly focused on facilitating financial decision making. In the healthcare field, for example, efforts to evaluate explanation quality are presently an active topic, which is not the case in PHM [46]. The understanding of XAI is also limited in PHM, partly due to understandable distrust in using AI in the first place, compounded with the amount of investment needed to build AI systems that are yet to be proven in real life. In fact, the manufacturing and energy sectors, associated closely with PHM, are amongst the slowest in adopting AI [47]; thus far, AI thrives mainly in PHM research. In brief, more exposure and advocacy of XAI in PHM are needed to nurture trust in AI usage, improving day-to-day operational efficiency and enabling the overall safeguarding of industrial assets and lives. Note that 70% of the included PHM-XAI works come from ScienceDirect and IEEE Xplore, as testified by Figure 4.
Most of the excluded articles in the final stage also come from the mentioned databases, as shown in Figure 6. These observations suggest that these two databases concentrate XAI-related works. It would be commendable for specialized journals from other publishers to promote the use of XAI in PHM through dedicated symposiums and special issues, which are still scarce. XAI Interpretable models, rule- and knowledge-based models, and the attention mechanism are the most employed methods, as illustrated in Figure 8. These methods existed well before XAI became mainstream, so their implementations are well documented and common. Interpretable approaches consist of linear models widely used before the introduction of nonlinear models. Rule- and knowledge-based models possess the traits of expert systems, which became widespread early and led to the popularity of AI [48]. The attention mechanism was developed in the image recognition field to improve classification accuracy [49]. Other techniques such as model-agnostic explainability and LRP are less explored but are anticipated to permeate in the future due to their nature: they could be used with any black-box model, and the performance of the AI models is not altered by these techniques. A model-agnostic method acts as an external method to the model to be explained, while LRP requires only the gradient flow of the network. LAD is another interesting technique due to its potential combination with fault tree analysis, which is widely utilized in complex risk management such as in the aerospace and nuclear industries. The lack of coverage of LAD entails more investigation from researchers on this topic. The diagnostic domain occupies the majority share amongst the accepted works, as presented in Figure 3. Looking at the XAI-assisted PHM column in Table A1 of Appendix A, it can be deduced that XAI boosts diagnostic ability. Drawing a parallel between the information from Figure 3 and Table A1, it may be inferred that XAI is particularly appealing for diagnostics, as it can be applied directly as a diagnostic tool or in addition to other methods. XAI could provide an additional incentive for diagnostics, whose main objective is to discover the features responsible for the failure, as shown in Figure 9.
This interesting point signifies that the diagnostic tasks in these papers are dependent on XAI. Therefore, XAI is not only a supplementary feature in diagnostics but also an indispensable tool. The same phenomenon is observed in anomaly detection, as presented in Figure 9. Knowing the cause of an anomaly could potentially avoid false alarms, preventing resource wastage. Thus, XAI might be employed both to execute PHM tasks and to explain them. Figure 9. Distribution of the XAI assistance in the indicated PHM task. Table A1 reveals that some XAI approaches directly assist the PHM tasks, achieving excellent performance. Furthermore, the recorded PHM performance of both XAI and non-XAI methods (works that depend on XAI for explanation only) is mostly very good for diagnostics and prognostics, as depicted in Figure 10. In brief, no bad results were recorded, as confirmed by Figure 10. Whether the results are contributed by XAI or not, it can safely be concluded that explainability does not affect the tasks' accuracy in the studied works. The outcomes and reported advantages of XAI as a PHM tool are important steps in eradicating the skepticism and mistrust of the industry towards AI usage. These facts might intensify the assimilation of AI in the industry. Figure 10. Distribution of the performance of AI models according to the indicated task. PHM Real industrial data are mostly used in case studies to demonstrate the effectiveness of XAI, as reflected in Figure 11a.
Furthermore, the studies reflect the outreach of XAI in diverse technical sectors such as the aerospace, automotive, energy, manufacturing, production, and structural engineering fields. These positive outlooks prove that the available PHM-XAI combinations are suitable to solve real-world industrial challenges with at least a good performance, boosting the confidence in the AI models' adoption. Human Role in XAI A very small role was played by humans in the examined works, as illustrated in Figure 11b. Human participation is vital for evaluating the generated explanations, as they are intended to be understood by humans. This involvement helps in the assimilation of other human-related sciences into PHM-XAI, such as human reliability analysis (HRA), psychology, or even healthcare, further enriching this new field [50]. Furthermore, human involvement is encouraged for the development of interactive AI, where the expert's opinion strengthens or debates the generated explanation, presenting an additional guarantee of AI performance. Explainability Metrics Note that the usage of explanation evaluation metrics is nearly nonexistent, as presented in Figure 11c. Explanation evaluation methods engineered for PHM usage are practically absent according to our study. These metrics are vital to researchers and developers when evaluating explanation quality. It is recommended that adequate assessment metrics for PHM explanation, considering security and safety risk, maintenance cost, time, and gain, are developed and adopted. Developing such metrics will require the collaboration of all PHM actors to satisfy the needs of each level of the hierarchy. From this angle, XAI experts could be inspired by the work performed in the HRA domain, which studies human-machine interaction from a reliability perspective [50]. An overview of explanation metrics and methods is presented in [51], whereas the effectiveness of explanation from experts to nonexperts is studied in [52], and a metric to assess the quality of medical explanation was proposed in [53]. Uncertainty Management Various types of uncertainty management methods are adopted at different stages in the studied works in the PHM-XAI area, as detailed in Table A1. Nevertheless, as shown in Figure 11d, much improvement is still required in this area. Uncertainty management gives additional surety to users to adopt PHM-XAI methods compared to point estimation models. Furthermore, uncertainty quantification is vital to provide additional security to AI infrastructure against adversarial examples, whether unintentional or motivated by attack. This quantification might minimize the risk of wrong explanations being produced from unseen data due to adversarial examples. Conclusions In this work, a state-of-the-art systematic review on the applications of explainable artificial intelligence linked to prognostics and health management of industrial assets was compiled.
The review followed the guidelines of preferred reporting items for systematic reviews and meta-analyses (PRISMA) for best practice in systematic review reporting. After applying our criteria for inclusion to 3048 papers, we selected and examined 35 peer-reviewed articles, in the English language, from 2015 to 2021, about explainable artificial intelligence related to prognostics and health management, to accomplish the review objectives. Several interesting findings were discovered in our investigation. Firstly, this review found that explainable artificial intelligence is attracting interest in the domain of prognostics and health management, with a spike in published works in 2020, though the field is still in its infancy. Interpretable models, rule- and knowledge-based methods, and the attention mechanism are the most widely used explainable artificial intelligence techniques applied in works of prognostics and health management. Secondly, explainable artificial intelligence is central to prognostics and health management, assimilated as a tool to execute such tasks by most diagnostic and anomaly detection works, while simultaneously being an instrument of explanation. Thirdly, it was discovered that the performance of prognostics and health management is unaltered by explainable artificial intelligence. In fact, the majority of works that related both approaches achieved excellent performance while the rest produced only good results. However, there is much work to be conducted in terms of human participation, explanation metrics, and uncertainty management, which are nearly absent. This overview discovered that mostly real, industrial case studies belonging to diverse technical sectors were used to demonstrate the effectiveness of explainable artificial intelligence, signifying the outreach and readiness of general artificial intelligence and explainable artificial intelligence to solve real and complex industrial challenges. The implications of this study are the following: (i) PHM-XAI progress: Much unexplored opportunity is still available for prognostics and health management researchers to advance the assimilation of explainable artificial intelligence in prognostics and health management. (ii) Interpretable models, rule- and knowledge-based models, and attention mechanism: These are the most widely used techniques, and more research involving other approaches could give the prognostics and health management community additional insight in terms of performance, ease of use, and flexibility of the explainable artificial intelligence method. (iii) XAI as PHM tool and instrument of explanation: Explainable artificial intelligence could be preferred or required within prognostics and health management compared to standalone methods. (iv) PHM performance uninfluenced by XAI: The confidence of prognostics and health management practitioners and end users in the artificial intelligence model's adoption should be boosted. (v) Lack of human role, explanation metrics, and uncertainty management: Efforts need to be concentrated in these areas, amongst others, in the future. Moreover, the development of evaluation metrics that can cater to prognostics and health management needs is urgently recommended.
7,200.2
2021-12-01T00:00:00.000
[ "Computer Science" ]
Measurement of Li-Ion Battery Electrolyte Stability by Electrochemical Calorimetry Recent work describing the use of high precision coulometry combined with isothermal heat flow calorimetry has shown promise in studying electrolyte reactivity in Li-ion batteries. In this paper we describe what we term an "integration/subtraction" technique for determining the electrolyte reactivity as a function of cell voltage in Li-ion full pouch cells. We apply this method to the characterization of a base electrolyte blend of ethylene carbonate and ethyl methyl carbonate (EC/EMC 3/7 w/w) with 1 M LiPF6. We then show how the parasitic thermal power and coulombic efficiency are affected by the addition of the reactive carbonates vinylene carbonate and 1-fluoro-ethylene carbonate to the base electrolyte. We show how this method can discriminate the effectiveness of additives used in Li-ion cells as a function of cell voltage and cycle life. Li-ion cell technology has reached a level of maturity witnessed by the pervasive applications of the energy storage technology in everything from cell phones to electric vehicles. However there will always be a need and desire to achieve higher energy density, longer cycle life and better safety. Inevitably the cell electrolyte chemistry is a key factor in achieving these goals. Presently there is considerable effort applied to higher voltage Li-ion cells utilizing different positive electrodes. There is also considerable effort being applied to the incorporation of silicon and silicon-based materials into the negative electrode to increase cell energy density. In these examples the electrolyte chemistry and its stability are critical concerns. Therefore techniques or tools that enable sensitive measurement and discrimination of different electrolyte chemistries are very useful and important. For example, cell chemistries beyond Li-ion, such as Li-air, Li-S or Mg-based cell chemistries will rely on unique passivating layers that attempt to protect the reactive metal electrode. Sensitive thermal techniques that can measure the heat produced by irreversible electrolyte reactions with highly reducing or oxidizing electrodes will be important in determining the effectiveness of engineered passivation layers. Recently Dahn and co-workers have shown how isothermal heat flow calorimetry can be used to discriminate the effects of electrolyte additives in Li-ion cells. 1 They also demonstrated a strong voltage dependence of the parasitic thermal power; it increases significantly at higher voltages due to electrolyte oxidation. 1 In their method, a mathematical model was developed and fit to experimental thermal data. By simultaneously fitting multiple experimental datasets obtained at different currents, the authors were able to separate the parasitic thermal power from the total cell thermal power. While the method described in the present paper, which we term "integration/subtraction", also discriminates parasitic thermal power from the total thermal power, we simultaneously acquire useful coulombic efficiency data as well as other cell parameters.
In an earlier publication we described a method for extracting the parasitic thermal energy per cycle from the total cell thermal energy per cycle using symmetric coin cells of graphite and Li4Ti5O12. 3 In that work, the cell's energy hysteresis over one full cycle is subtracted from the total thermal energy over a full cycle as measured with the calorimeter. We term this method the integration/subtraction method for convenience. In this paper we extend this method to Li-ion full pouch cells with capacities of approximately 250 mAh. We arrive at an average parasitic thermal power for each of a series of narrow voltage ranges. The effect of vinylene carbonate (VC) and 1-fluoro-ethylene carbonate (FEC) as additives to a base electrolyte on parasitic thermal power is shown. Experimental Machine wound pouch cells were used in this work. They were obtained from LiFUN Technology (Xinma Industry Zone, Golden Dragon Road, Tianyuan District, Zhuzhou City, Hunan Province, PRC, 412000) as sealed dry cells with a nominal capacity of 250 mAh. The negative electrode in these cells was an artificial graphite and the positive electrode a high-voltage LiCoO2. Table I lists the detailed cell information. The cells were balanced to a 4.4 V charge voltage. The cells were first opened in a dry room and then dried at 70 °C under vacuum overnight. The cells were then filled with 0.9 g of electrolyte in a dry room with an operating dew point of −50 °C. The cell filling procedure employed brief, periodic vacuum degassing in order to allow the electrolyte to access all void volume within the cell's electrodes. The weight before and after the electrolyte filling procedure was recorded in order to ensure the weight of electrolyte added to each cell was consistent. The pouch cell was then sealed under vacuum in a MSK-115A vacuum sealing machine (MTI Corp.). The cells were allowed to stand for 24 hours prior to cycling to ensure complete wetting; no charge was applied during standing. The base electrolyte used in this work was 1 M LiPF6 in a 3/7 (w/w) blend of ethylene carbonate (EC) and ethyl methyl carbonate (EMC) obtained from BASF and used as received. VC (Novolyte) and FEC (BASF) were also used as received. All solvents, salts and blends were stored in an argon glove box located within a dry room. Table II lists the cells and electrolytes used as well as the corresponding IDs used in the figures and text. Isothermal Heat Flow Calorimeter The heat flow calorimeter is a TAM III (Thermal Activity Monitor, TA Instruments) in which 12 calorimeters were inserted. The temperature used throughout this work was 37 °C. The TAM III is capable of controlling the bath temperature to within a few micro-degrees centigrade. Reference 3 describes the method and modifications made to the TAM III to allow in-operando calorimetry measurements on Li-ion cells. The thermal stability of the instrument with these modifications was described in an earlier paper. 3 A difference noted in this work compared to the earlier work using coin cells was the observation of "cross talk" between the calorimeters. In coin-cell work, a cell typically produces a few dozen to 100 μW of thermal power. The 250 mAh pouch cells could typically produce 5 to 6 mW. This cross talk affected the overall baseline stability in some calorimeters, but the effect was no more than several μW. A four-wire configuration was used to supply charging and discharging currents.
In contrast to our earlier work using coin cells, the wiring through the lifters to the pouch cell was different. In charging and discharging the pouch cells, 20 mA was typically used, which was nominally a C/11 rate. While the four-wire configuration compensates for the voltage drop across the lead resistance, there is no compensation for the resistive heating that occurs in the current-carrying leads. The resistive heating of our initial 32 AWG current-carrying leads was being registered by the TAM and we therefore replaced them with 14 AWG wire coated with a polyimide resin. In order to ensure the lead wires did not produce heat at the current levels used in this work, the four leads were shorted together and placed into the calorimeter, various currents up to 50 mA were applied, and the thermal power was recorded. At 20 mA the heat flow was negligible. Internal calibration of individual calorimeters is provided in the as-received equipment, but we modified this procedure to better duplicate the response of the cell under measurement. The general calibration procedure was described earlier for coin cells. 3 In this work the procedure was very similar, with the exception of using a calibration cell that was fabricated by embedding a 250 Ω precision resistor in a dry pouch cell containing the flat wound jelly roll. Typically a current of 3 mA was applied to the calibration cell, producing a thermal power of 2.25 mW. The calorimeter gain was then set to this value. The baseline or zero heat configuration was set by inserting a calibration cell and allowing approximately 24 hours for thermal stabilization. The baseline was then set to zero heat flow. In a previous publication we reported an error of ±1 μW for measurements done on coin cells. 3 In the present paper, where the heat flow is much greater and the measurement time much longer, we compared the stability of the baseline thermal power of calibration cells. For example, the calorimeter baseline was measured before and after a measurement of a pouch cell and the variation or drift of the baseline was used as an error in the thermal power. We found that the baseline stability was 5 μW or better between measurements. This is greater than what we found with coin cells, and this appears to be the result of the much longer times between baseline calibrations, which could be as long as one month when measuring a large number of voltage segments on pouch cells, and possibly the change from 32 AWG to 14 AWG current leads. Methods and Cycling Protocol The treatment of data to arrive at a parasitic energy by this method has been described in detail earlier. 3 Here we briefly describe the methods and variations used in the present work. Equation 1 describes the sources of heat flow in a full cell with intercalation electrodes: dQ/dt = T(∂S/∂x)(dx/dt) + dQ_p/dt + dQ_0/dt + Iη, [1] where η is the cell overpotential. When a cell is charged or discharged the total heat flowing into or out of the battery is a result of: 1) entropy changes occurring in the intercalation materials, or reversible heating, which is given as the first term in Eq. 1 multiplied by the rate of change in the intercalant, 2) all sources of polarization (e.g. contact resistance, charge transfer resistance, electrolyte resistance and diffusional impedance), which is given as the last term in Eq. 1, and 3) parasitic reactions occurring within the cell originating from, for example, reduction or oxidation of electrolyte components. This is given by dQ_p/dt in Eq. 1. The term dQ_0/dt represents the calorimeter baseline heat flow.
Our objective is to eliminate the reversible as well as the joule heating to arrive at a value for the parasitic thermal power. In the method used here, which differs from that used by Downie,2 the reversible heating is eliminated by integrating the total thermal power from the cell with respect to time over a full cycle to yield a thermal energy for that cycle. If the reversibility of the cycle is greater than 99%, the reversible heating will cancel to a good approximation. The result is a thermal energy for that cycle composed of joule heating and parasitic reaction heat. All sources of joule heating, as described above, will be reflected by the area between the charge and discharge curves as described earlier.3 By integrating the voltage hysteresis in a plot of voltage versus capacity, the joule heating for that cycle can be determined. Subtracting the thermal energy from polarization sources from the result of integrating the thermal power with respect to time gives the parasitic thermal energy for that cycle. Equation 2 shows the method used:

Q_p = ∫_cycle P dt − I(∫_charge V dt − ∫_discharge V dt) [2]

This yields a parasitic thermal energy which we can convert to an average parasitic thermal power by dividing the parasitic thermal energy, Q_p, by the total cycle time (t_c + t_d). Assuming constant currents, the average parasitic thermal power (P̄_P) obtained with the integration/subtraction can be succinctly stated as

P̄_P = P̄ − I(V̄_c t_c − V̄_d t_d)/(t_c + t_d)

where P̄ is the average calorimeter power across charge and discharge, I is the current, V̄ is the average voltage, t is time and the c and d subscripts correspond to charge and discharge respectively. Assuming perfect coulombic efficiency and charge/discharge currents of identical magnitude (so that t_c = t_d), one gets the approximation in Eq. 3,

P̄_P ≈ P̄ − I(V̄_c − V̄_d)/2 [3]

which can be directly compared to Eq. 1 since η = (V̄_c − V̄_d)/2 when averaged over a full cycle.

Cycling Protocol

Downie and co-workers used narrow voltage range cycling of full cells to explore the voltage dependence of parasitic thermal power.1,2,4 We also apply narrow voltage segments. In a typical experiment the cells are cycled 10 times between the voltage limits of 3.0-3.8 V, 3.5-3.8 V, 3.6-3.9 V, 3.7-3.9 V, 3.8-4.0 V, 3.9-4.1 V and 4.0-4.25 V. The current for all cycles was 20 mA, nominally C/11 for the full voltage range of 3.0-4.25 V. Ten cycles per voltage segment resulted in a nominally stable coulombic efficiency and average parasitic thermal power for a given voltage segment, as will be shown and discussed below. The last cycle was used to construct a plot of parasitic thermal power versus average voltage. After the ten cycles were completed, the cell was left in an open circuit condition for 12 hours before proceeding to the next voltage segment. We note that in the method used here the coulombic efficiency, or inefficiency, is collected simultaneously with the thermal data for each voltage segment.

Results

As noted above, cells were cycled in multiple limited voltage ranges. Figure 1 shows how the limited voltage segments overlay perfectly on the full range voltage curve, indicating excellent reversibility in cell E37-c1. Similarly excellent reversibility was found for all other cells in this study. Figure 2 shows overlays of narrow range cycling and full range cycling for the voltage and thermal measurements obtained with cell E37-2VC-c2. Four different narrow ranges are exemplified and, as in Fig. 1, there is a very good match between the narrow and full range voltage curves.
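As a concrete illustration of this bookkeeping, the integration/subtraction of Eqs. 1-3 can be sketched in a few lines of numpy. This is a minimal sketch only: it assumes aligned, baseline-corrected time series and a constant current, and the function and variable names are illustrative rather than any published implementation.

```python
import numpy as np

def parasitic_power(t_c, V_c, t_d, V_d, t_cal, P_cal, I):
    """Integration/subtraction estimate of average parasitic thermal power.

    t_c, V_c : time [s] and voltage [V] samples during charge
    t_d, V_d : time [s] and voltage [V] samples during discharge
    t_cal, P_cal : time [s] and baseline-corrected calorimeter power [W]
                   over the full cycle
    I : magnitude of the constant charge/discharge current [A]
    """
    # Total thermal energy over the full cycle; for a highly reversible
    # cycle the entropic (reversible) contributions cancel (Eq. 2).
    Q_total = np.trapz(P_cal, t_cal)
    # Electrical hysteresis energy = joule (polarization) heat per cycle.
    Q_joule = I * (np.trapz(V_c, t_c) - np.trapz(V_d, t_d))
    Q_parasitic = Q_total - Q_joule
    # Divide by the total cycle time (t_c + t_d) to get average power.
    cycle_time = (t_c[-1] - t_c[0]) + (t_d[-1] - t_d[0])
    return Q_parasitic / cycle_time
```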
Figure 2 also shows that the same is true for the thermal data, and that the entropic and parasitic reaction heat flows are to a large extent identical for narrow range and full range cycling. The conclusions reached with the narrow range cycling should therefore be broadly applicable to cells undergoing full range cycling. The source of the pouch cells used in this work was the same source used by Dahn and co-workers in a series of recent publications.1,2,4 They noted that the negative electrode has an overhang of 1.5 mm, i.e. is 3.0 mm wider than the positive electrode in these pouch cells. In our cells the overhang was 1.0 mm. This overhang becomes either a reservoir of Li+ or a sink for Li+ depending upon the state of charge and the direction (charge or discharge) in which the cell is being cycled. The result of this overhang is a slow equilibration of the state of charge, as Li+ can diffuse either into or out of the overhang region. The above references suggest that complete equilibration of the state of charge by these mass transport processes could require 1000 hours.5 Ideally this overhang would be minimized, which would reduce the number of cycles required to reach a stable CE or dQ/dt and thus speed up the measurement. While our overhang is less than that of Dahn and co-workers, we still observe its effects. Thus the 10 cycles at each voltage segment represent a compromise between time and the stability of the CE and parasitic thermal power. Ten cycles typically require almost 4 days (∼96 h) of cycling for some voltage segments. Figures 3a and 3b show the parasitic thermal power and CE respectively for 10 cycles of cell E37-c2 cycling between 3.8 V and 4.0 V and between 4.0 V and 4.2 V. This cell was cycled 10 times at room temperature between the limits of 3.0 V and 4.25 V prior to insertion into the TAM. Note that while the CE and parasitic thermal power (dQ/dt) are not completely stable, the changes near the end of the 10 cycles become small. The average thermal power for the voltage segment 3.8 V-4.0 V changes by only 2 μW (2%) from cycles 5 to 10, and the change for the 4.0 V-4.2 V voltage segment was 16 μW, or about 6%. The 10th cycle for each voltage range measured was then used in a plot of parasitic thermal power versus average voltage. Figure 4 shows the average parasitic thermal power and coulombic inefficiency (CIE, calculated as 1 − CE) as a function of average voltage for cells E37-c1 and c2. For each of the narrow voltage segments, the method yields an average parasitic thermal power (dQ_p/dt) without any explicit dependence on current or voltage. As detailed in this manuscript, the average parasitic power has a strong dependence on voltage. It is therefore of interest to determine whether a dependence on current is found experimentally. Figures 5a and 5b show plots of average parasitic thermal power and CIE respectively as a function of voltage at two different currents, 10 mA and 20 mA, for control cell E37-c4. The cell was cycled for 50 cycles (30 mA, 3.0 V-4.25 V) at room temperature (RT), then underwent narrow range cycling in the calorimeter, returned to RT for another 50 cycles, and was then finally placed in the calorimeter for narrow range cycling with currents alternating between 10 mA and 20 mA. Figure 5a shows that the average parasitic thermal powers at a given average voltage are practically identical for both currents.
The absence of a dependence on current is consistent with previous high precision coulometry studies, which showed the CE (or CIE) of cells to be time dependent as opposed to cycle (or current) dependent. Figure 5b shows that the CIE is indeed different across currents, since the time spent in a cycle is inversely proportional to the current. The data in Figure 5 were acquired from a control cell that had undergone a number of narrow cycles at 37 °C and 100 cycles at RT, and was thus considered a cell with relatively stable electrode passivation layers. Performing the same experiment without precycling the cells could have produced misleading results due to the faster decay of parasitics in the early cycles and the time required to perform the experiment. Even after precycling the cells, the decay of parasitics with passivation could lead to differences across currents if the currents were sufficiently small (and therefore the times sufficiently long). Vinylene carbonate (VC) is a well-established electrolyte component in Li-ion cells6,7 and has been shown to affect cycle life and capacity retention. The function of VC in the cell is less clear, with some authors suggesting its activity is centered on the negative electrode and others indicating the benefit is on the SEI formed at the positive electrode.6,7 Figure 6a shows the parasitic thermal power of cells with VC (E37-2VC-c1, 2) and without (E37-c1, 2) cycled in narrow voltage ranges. Figure 6 shows a dramatic difference in the parasitic power and CIE, especially in the higher voltage regions, suggesting that the function of VC is also to provide passivation of the positive electrode. A nearly five-fold decrease in parasitic power is observed at an average cell voltage of 4.2 V. In a previous paper we used CE data to calculate an apparent reaction enthalpy from symmetric cells. When using symmetric cells, where both electrodes are the same and therefore the electrolyte reactions are the same, this approach is feasible and represents an "aggregate reaction enthalpy" of all the irreversible electrolyte reactions. In the present case this is not possible, as it is expected that the parasitic reactions on the positive electrode are much different from those occurring on the negative electrode. Furthermore, materials produced at one electrode may have enough solubility to be transported from one electrode to the other. In the present work we only regard CE, or CIE, as a correlation factor. That is, if the cell experiences an increase in parasitic power and that increase is associated with the loss of lithium, then an increase in CIE is expected. Inspection of Figures 6a and 6b shows this correlation to hold. The data in Figure 6 suggest that a significant difference in capacity fade should exist between the cells with and without VC if the parasitic thermal power measured by this method represents capacity loss from irreversible Li loss to electrolyte reactions. Figure 7 shows the normalized capacity versus cycle number for a cell with (E37-2VC-c3) and without (E37-c3) VC cycled over the full voltage range (3.0 V-4.25 V). A very clear difference in fade exists between the cells, confirming that the parasitic reactions directly result in capacity fade in full cells. Using the same methods we also investigated the differences, if any, between two common additives, VC and FEC.
Two cells with 2% VC (E37-2VC-c1, E37-2VC-c2) and two cells with 2% FEC (E37-2FEC-c1, E37-2FEC-c2) were cycled 10 times at room temperature on a Maccor cycler and then inserted into the calorimeter. Figures 8a and 8b show the parasitic thermal power of each pair of cells and the CIE respectively. The difference in parasitic power for the two electrolytes is very small and nearly within the experimental error at low cell voltages, but at the higher cell voltages the cells with 2% VC show lower parasitic power. Similarly, the CIE at low voltages is within experimental error, and only at the highest voltages does the difference in CIE depart from experimental error. Both results suggest that VC is more effective than FEC at establishing a protective SEI layer at the higher cell voltages. The differences found in parasitic thermal power between the two additives are thus confined to the highest voltages.

Parasitic Thermal Power as a Function of Cycle Number

Control cells E37-c2, c4, and c3 were cycled for 10, 50, and 180 cycles at room temperature (22 °C). They were subsequently placed in the calorimeter at 37 °C, narrow range cycling was performed and parasitic thermal powers were calculated. Figure 9 shows that the parasitic thermal power of these cells remains very similar as a function of average cell voltage. Prolonged cycling at room temperature had little effect on the measured parasitic thermal power. This is likely due to an Arrhenius-type dependence of the reaction rate on temperature, where the impact of the temperature increase was larger than the impact of RT cycling. Nevertheless there is a time-dependent decay of parasitic power, as can be seen by careful comparison of the parasitic power of cell E37-c4 in Figures 9 and 5. The parasitic powers in Figure 9 were measured after 50 cycles at RT, and those in Figure 5 were the second round of calorimeter measurements performed after an additional 50 cycles at RT. The second round measurements are slightly lower. The small and monotonic change in parasitic powers with cycling history shows that the measurement of parasitic thermal power even at low cycle numbers can be predictive of the capacity retention when capacity loss is a function of electrolyte reactivity.

Effect of Additives on Cell Polarization

In addition to the coulombic efficiency and the average parasitic thermal power or parasitic thermal energy, this technique also provides a measure of the cell polarization. As discussed above, the voltage hysteresis, representing all sources of cell polarization, is subtracted from the total thermal energy over an entire cycle. Thus the cell polarization is also determined. This allows us to assess the effect of an electrolyte or electrolyte additive on the total cell polarization. Figures 10a and 10b show the parasitic thermal power and the average voltage hysteresis, respectively, as a function of the average voltage of the narrow cycling. Both 2% VC and 2% FEC have voltage hysteresis identical to that of a control cell and therefore have no effect on cell polarization. VC at a 10 wt% level causes an increase in the cell polarization, while FEC at a 10 wt% level does not. Figure 10a shows that the parasitic powers for cells with 2% and 10% VC are identical within the accuracy of the measurement. Increasing the VC content under these experimental conditions therefore does not reduce the parasitic thermal power. This is consistent with the hypothesis that no benefit is obtained from having more VC than is required to form a passivation layer on the electrode materials.8
Numerical Narrow Cycling

Figure 2 shows that both the voltage curve and the thermal signal are very close to being a subset of the full range cycling. This suggests near-perfect reversibility with state of charge, i.e. that there is no hysteresis of entropic events with capacity. This assumption would allow the application of Eq. 1 to a subset of full range cycling. Indeed, one could perform the integration/subtraction method, allowing the calculation of parasitic power through the cancellation of entropic and impedance contributions, on a subset of the full range data. While this was done experimentally through narrow cycling within voltage ranges, it is also possible to do this numerically within capacity ranges. The average voltage within a given capacity range is given by

V̄ = 1/(c1 − c0) ∫ from c0 to c1 of V dc [4]

where V is voltage, c capacity, and c0 and c1 are the capacities at the beginning and end of the segment respectively. Combining Equations 2 and 4 yields the average parasitic thermal power (P̄_p) for a subset:

P̄_p = [∫ from t0 to t1 of P dt + ∫ from t2 to t3 of P dt − I(∫ from t0 to t1 of V dt − ∫ from t2 to t3 of V dt)] / (t1 − t0 + t3 − t2) [5]

where P is the power measured by the calorimeter, t time, t0 and t1 the times corresponding to c0 and c1 on charge respectively, and t2 and t3 the times corresponding to c1 and c0 on discharge respectively. By utilizing Equations 4 and 5 one can then calculate the dependence of parasitic power on average voltage from a single full range cycle. The only tunable parameter in the calculation is Δc = c1 − c0, the capacity width used for the numerical loop. Of course, the thermal and electrochemical datasets must be carefully aligned in time, allowing one to translate a capacity from the cell data into a time on the thermal data set. Figure 2 shows that the experimental narrow voltage ranges correspond to capacity ranges between 50 mAh and 100 mAh. Figure 11 shows an example of the application of the numerical approach on a range similar to the experimental range shown in Fig. 2b. A complete parasitic power versus average voltage curve may be calculated for a given capacity width, Δc, by taking arbitrarily small steps in capacity. Figure 12 shows a parasitic power curve calculated using electrochemical and thermal data from a full range cycle of cell E37-2VC-c using a variety of capacity widths. One can see that the result is not very sensitive to the capacity width. Wider widths smooth out local variations, but the effect of the edges extends further into the data. Edge effects stem from the transient section of the thermal data at the edges of the full range (near 3.0 V and 4.25 V) and are approximately 20 mAh in width based on inspection of Fig. 2. The large negative values below 3.7 V are seen as artifacts caused by edge effects. It should be noted that Equation 5 is the general case for calculating parasitics: the "integration/subtraction" method is the specific case with the capacity width corresponding to the whole cycle, and the "average" method recently proposed by Glazier et al. is the specific case with the capacity width set to zero.9 The numerically calculated parasitic power generally increases with voltage, in agreement with the experimental results. Surprisingly, negative parasitic power values are obtained near the middle of the full range voltage curve (∼3.8 V). The dip in parasitic power near 3.8 V is consistent with the experimental results; indeed Fig. 7 shows a dip in parasitic power and, more significantly, in CIE near 3.9 V.
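A minimal sketch of this numerical loop, under the same entropic-cancellation assumption and with illustrative names and units (not a published implementation), might read:

```python
import numpy as np

def window_parasitic_power(c_ch, t_ch, V_ch, P_ch,
                           c_d, t_d, V_d, P_d, I, c0, c1):
    """Average parasitic power for one capacity window [c0, c1] (Eq. 5 sketch).

    c_ch, t_ch, V_ch, P_ch : capacity [Ah], time [s], voltage [V] and
        baseline-corrected calorimeter power [W] sampled on charge;
    c_d, t_d, V_d, P_d : the same quantities sampled on discharge;
    I : constant current magnitude [A].
    """
    # Translate the capacity window into the time windows (t0, t1) on
    # charge and (t2, t3) on discharge by masking the aligned samples.
    mc = (c_ch >= c0) & (c_ch <= c1)
    md = (c_d >= c0) & (c_d <= c1)
    heat = np.trapz(P_ch[mc], t_ch[mc]) + np.trapz(P_d[md], t_d[md])
    joule = I * (np.trapz(V_ch[mc], t_ch[mc]) - np.trapz(V_d[md], t_d[md]))
    duration = (t_ch[mc][-1] - t_ch[mc][0]) + (t_d[md][-1] - t_d[md][0])
    # Eq. 4 applied to the charge branch gives the window's average voltage.
    v_avg = np.trapz(V_ch[mc], c_ch[mc]) / (c1 - c0)
    return v_avg, (heat - joule) / duration
```

Stepping c0 across the full cycle in increments smaller than the window width Δc = c1 − c0 then traces out the complete parasitic power versus average voltage curve.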
However, it is unclear whether the negative values obtained for the parasitic power are physical or an artifact of the methodology; further study is needed. Figure 13 shows that in the higher voltage range there is good agreement between the numerically calculated parasitic powers and the experimental results for cells with control electrolyte, 2% VC, and 2% FEC. This methodology therefore yields the dependence of parasitic power on voltage without the need to perform several cycles in narrow voltage ranges as exemplified here, or to perform several cycles in narrow voltage ranges at various very slow currents as done by Downie et al.1 This approach could therefore prove to be a powerful screening method for the stability of electrolytes at high voltages.

Summary

A calorimetric method for characterizing the parasitic electrolyte reactions in graphite/LiCoO2 Li-ion full cells was described. The method yields the cell voltage dependence, coulombic efficiency and cell polarization in addition to the parasitic thermal power or energy. Cells assembled with a blend of 1 M LiPF6 in EC/EMC 3/7 showed an increase in parasitic thermal power with increasing cell voltage, indicating increased electrolyte reactivity at higher voltage. The addition of the additives VC and FEC dramatically reduced the parasitic thermal power, especially at high voltages. The differences shown by this calorimetric method between cells with only the base electrolyte and cells with additives were also manifested in capacity retention, confirming that the parasitic thermal power differences are due to electrolyte reactions involving lithium consumption. A novel method for calculating parasitic power as a function of voltage using the electrochemical and thermal data from a single full range cycle was also presented, allowing accelerated electrolyte screening.
4-particle amplituhedron at 3-loop and its Mondrian diagrammatic implication

This article provides a direct calculation of the 4-particle amplituhedron at 3-loop order, by introducing a set of practical tricks. After delicately rearranging each piece of this calculation, we find a suggestive connection between positivity conditions and Mondrian diagrams, which will be quantitatively defined. Such a pattern can be generalized to all Mondrian diagrams among all those that contribute to the 4-particle integrand of planar N = 4 SYM to all loop orders, as the subsequent work 1712.09994 will show.

Introduction

The amplituhedron proposal for the 4-particle integrand of planar N = 4 SYM to all loop orders [1,2,3,4] is a novel reformulation which only uses positivity conditions for all physical poles to construct the loop integrand. At 2-loop order, as the first nontrivial case, we have just one (mutual) positivity condition

D12 ≡ (x2 − x1)(z1 − z2) + (y2 − y1)(w1 − w2) > 0, (1.1)

where x_i, y_i, z_i, w_i and D_ij comprise all possible physical poles in terms of momentum twistor contractions, and x_i, y_i, z_i, w_i are trivially set to be positive for the i-th loop. The resulting integrand is the double-box topology of two possible orientations, and it is symmetrized over the two sets of loop variables [2]. As the loop order increases, the calculational complexity grows explosively due to the highly nontrivial intertwining of all L(L−1)/2 positivity conditions D_ij > 0. As for the 3-loop case, it has been done with significant simplification brought by double cuts [2]; still, there is considerable complexity that obscures its rather simple mathematical structure, as we will reveal in this article and the subsequent work [5]. As an illuminating appetizer, we reformulate the 2-loop case in the following. As usual, let's preserve z1, z2 for imposing D12 > 0, and triangulate the space spanned by x1, x2, y1, y2, w1, w2. We introduce the ordered subspaces characterized by, for instance, 1/(x1 x21 y1 y21 w1 w21), which is a d log form (omitting the measure factor) of the orderings x1 < x2, y1 < y2 and w1 < w2. In this particular subspace, positivity condition (1.1) unambiguously demands z1 − z2 > y21 w21/x21, where x21 ≡ x2 − x1 and so forth. Here, x21, y21, w21 can be treated as genuinely positive variables which replace the original x2, y2, w2. Then the relevant d log form for z1, z2 is simply

1/(z2 (z1 − z2 − y21 w21/x21)) = x21 z1/(z1 z2 D12), (1.4)

and analogously, for X(12)Y(12)W(21) we have

(x21 z1 + y21 w12)/(z1 z2 D12). (1.5)

A seemingly farfetched observation is that, after we flip W(12) to W(21), the additional term y21 w12 appears in the numerator above because the orderings of y1, y2 and w1, w2 are now opposite, allowing one to orient the double box "vertically", as explained diagrammatically below. In figure 1, we have chosen two perpendicular directions for x and y, while the z and w directions are opposite to those of x and y respectively. Then we assign each loop a number as usual, but now these numbers carry the meaning of orderings of positive variables. Since loop number 2 is below 1, we naturally interpret this as y2 > y1, and similarly w1 > w2.
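The mechanics of (1.4)-(1.5) can be verified with a short symbolic computation; the following minimal check assumes sympy, with variable names mirroring the text:

```python
import sympy as sp

x21, y21, z1, z2, w1, w2 = sp.symbols('x21 y21 z1 z2 w1 w2', positive=True)

# Numerators of (1.4) and (1.5), weighted by the w-space d log forms:
# W(12) means w1 < w2 and W(21) means w2 < w1.
form_W12 = (x21*z1) / (w1*(w2 - w1))                  # numerator x21*z1
form_W21 = (x21*z1 + y21*(w1 - w2)) / (w2*(w1 - w2))  # numerator x21*z1 + y21*w12

# Summing the two w-orderings cancels the spurious pole (w1 - w2)
# and reproduces the right-hand side of (1.6) below.
rhs = (x21*z1 + y21*w1) / (w1*w2)
assert sp.simplify(form_W12 + form_W21 - rhs) == 0
```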
In this way, it is straightforward to conclude that, if we flip w1 > w2 back to w2 > w1, there is no consistent way to place loop numbers 1, 2 vertically, so the double box can only be oriented horizontally! After we sum the numerators above over W(12) and W(21) respectively, namely

(x21 z1)/(w1 (w2 − w1)) + (x21 z1 + y21 w12)/(w2 (w1 − w2)) = (x21 z1 + y21 w1)/(w1 w2), (1.6)

the spurious pole (w1 − w2) cancels and only physical poles remain.

Fundamentals of Positive d log Forms

First, we will extend the fundamentals of positive d log forms in [2], as the minimal techniques necessary for the posterior sections. It is known that, for a generic positive variable ranging from a to b (a < b), its d log form is given by (b − a)/((x − a)(b − x)); for b → ∞ this reduces to 1/(x − a), and for a = 0 it reduces to b/(x(b − x)). Finally, if b = a for the two special cases above, we have

1/(x − a) + a/(x(a − x)) = 1/x,

which will be named the completeness relation. It has a natural interpretation as the sum of projective lengths of two complementary positive intervals. Note that we have treated a as a constant above, while it could also be a positive variable. In that case we only need an additional form 1/a, so the completeness relation now becomes

1/(a(x − a)) + 1/(x(a − x)) = 1/(ax), (2.4)

where the LHS characterizes nothing but the two ordered subspaces in which x > a and x < a respectively. A trivial generalization of (2.4) holds for n x_i's satisfying x1, . . . , xn ≷ a; here, for example, the subspace x1, . . . , xn > a is characterized by the corresponding product of forms. Another, less straightforward generalization of (2.4) applies to x1 + . . . + xn ≷ a, where both parts of the LHS can be proved recursively. If we assume they hold for x1 + . . . + xn−1 > a and x1 + . . . + xn−1 < a respectively, to obtain the form of x1 + . . . + xn > a we must separate it into two parts, such that in the first line xn is positive in the first term, but greater than (a − x1 − . . . − xn−1) in the second. The sum in the second line nicely returns to the form for n x_i's. To obtain the form of x1 + . . . + xn < a, we can simply insert the completeness relation into the form for (n−1) x_i's; note that (a − x1 − . . . − xn−1) is treated as one positive variable above. These two forms, as well as the completeness relation (2.8), are often used in the subsequent derivation. It is also convenient to introduce the co-positive product of forms. For example, for y > x1, . . . , xn, to obtain its form we can divide it into n! parts with respect to the n! ordered subspaces in which xσ1 < . . . < xσn, where {σ1, . . . , σn} is a permutation of {1, . . . , n}. Then we need to simplify the resulting sum by induction, with Xn = {x1, . . . , xn}. Now let's focus on xn's location in each permutation while omitting those of x1, . . . , xn−1; it is then straightforward to regroup the sum in order to reach (2.12) and therefore (2.13), where the symbol ∩ denotes the co-positive product operation. This product denotes the intersected subspace of a number of different subspaces as one form. If we evaluate the residue of I_n at y = ∞, it returns to the completeness relation of n positive variables, as all x_i's are trivially less than infinity. Analogously, for y < x1, . . . , xn we need to simplify the corresponding sum with the aid of (2.17). If we evaluate the residue of J_n at y = 0, it likewise returns to the completeness relation, as all x_i's are trivially greater than zero. In fact, (2.13) and (2.17) can be trivially obtained if we switch to the perspective which considers x1, . . . , xn < y and x1, . . . , xn > y respectively instead. Such an equivalent but much simpler approach can be further generalized, where we have used the expressions in (2.8). A mixed product of these two types can be constructed in the same way.
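The completeness relations written above, as well as the simplest instance of the co-positive product rule quoted in the next paragraph, can be checked mechanically; a minimal sympy sketch:

```python
import sympy as sp

x, a, y, x1, x2 = sp.symbols('x a y x1 x2', positive=True)

# Completeness with a constant a: the projective lengths of the
# complementary intervals x > a and 0 < x < a sum to the full form 1/x.
assert sp.simplify(1/(x - a) + a/(x*(a - x)) - 1/x) == 0

# Completeness relation (2.4) with a promoted to a positive variable,
# which brings along the additional form 1/a.
assert sp.simplify(1/(a*(x - a)) + 1/(x*(a - x)) - 1/(a*x)) == 0

# Co-positive product rule for one common variable, checked at n = 2:
# summing the forms of the ordered subspaces x1 < x2 < y and x2 < x1 < y
# gives I1 * I2 * y, with Ii the form of y > xi alone.
I1, I2 = 1/(x1*(y - x1)), 1/(x2*(y - x2))
lhs = 1/(x1*(x2 - x1)*(y - x2)) + 1/(x2*(x1 - x2)*(y - x1))
assert sp.simplify(lhs - I1*I2*y) == 0
```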
From these formulas of co-positive products, it is easy to observe that for n forms that impose positivity conditions on a number of variables, where these conditions involve only one common variable (denoted by y, for instance), we have I1 ∩ · · · ∩ In = I1 × · · · × In × y^(n−1), which is trivial to prove from the perspective above. When there are two or more common variables, this simplification is no longer valid in general; two such examples are given in (2.23) and (2.24). It is known that a d log form can be interpreted as a projective volume, which establishes its relevance to positive forms. But in our context, the concept of volume appears more often through the cancellation of spurious poles, for amplitudes or integrands that are rational functions. This is in fact the key mechanism that makes it possible to sum different terms from the triangulation of the amplituhedron.

The Trick of Intermediate Variables at 3-loop

Now, we are ready to introduce the trick of intermediate variables to handle the 3 intertwining positivity conditions of the 4-particle amplituhedron at 3-loop. This is not the final answer that we pursue, but it divides a difficult problem into two parts in a pedagogical way, and it is a nice mathematical warmup for the more precise description. These positivity conditions are D_ij ≡ (x_j − x_i)(z_i − z_j) + (y_j − y_i)(w_i − w_j) > 0 for 1 ≤ i < j ≤ 3. Without loss of generality, let's work in the ordered subspace X(123), so that x1 < x2 < x3. Then D12 > 0 unambiguously demands x21(z1 − z2) > −(y2 − y1)(w1 − w2), for instance. Depending on the sign of (y2 − y1)(w1 − w2), this becomes either z2 < z1 + c12 or z1 > z2 + c21, where c12 and c21 are defined as the positive intermediate variables of magnitude |(y2 − y1)(w1 − w2)|/x21. The corresponding forms then live in the ordered subspaces (Y(12)W(21) + Y(21)W(12)) and (Y(12)W(12) + Y(21)W(21)) respectively, and the symbols Z+ and Z− are related by the completeness relation, where the identity I12 denotes that no positivity condition is imposed on z1, z2. Therefore, in subspace X(123), for D12, D13, D23 > 0 we need to figure out the product of the corresponding Z± forms; the products involving y- and w-space are easy, so we mainly focus on the products of the Z±'s. There are 2^3 = 8 such triple co-positive products, T1, . . . , T8, and now we will determine them one by one. As c31 is treated as a positive variable, instead of a rational function of other positive variables as it actually should be, the discussion above leads to the required sum; note that we have divided the c21-space, which leads to (3.10). In general, we find that for c_ij, c_jk, c_ik with respect to T1, T2, T4, T5, T7, T8, it is most convenient to divide the c_ik-space, while for c_ij, c_jk, c_ki with respect to T3, T6, there is no need to divide any of them. Then for T3, since z1 > z2 + c21 > z3 + c32 + c21 already implies z3 < z1 + c13, Z−13 becomes redundant, and we see that indeed there is no need to consider any c_ij. Then for T4, it demands that z3 be less than both (z1 + c13) and (z2 + c23) while z1 and z2 are restricted to the subspace z1 > z2 + c21; the analogous discussion follows. Then for T5, z3 + c32 < z2 < z1 + c12 implies that (z1 + c12) must be greater than (z3 + c32) while z1 and z3 are restricted to the subspace z1 > z3 + c31; the analogous discussion leads to (3.15). Then for T6, similar to T3, there is no need to consider any c_ij, and the sum is immediate. Then for T7, similar to T5, (z1 + c12) must be greater than (z3 + c32) while z1 and z3 are restricted to the subspace z3 < z1 + c13.
Analogously, we have the discussion leading to (3.20). Now we know all eight T_i's. A consistency check via the completeness relation, where we have used (2.22), gives the expected identities; similarly we also have the analogous relations (dropping all 1/c_ij prefactors). These relations, in fact, serve as an equivalent approach to obtain all other T_i's one by one after we know T1 and T3, following a definite sequence. In addition, we have also observed that from T8 all other T_i's can be obtained by flipping c_ij to −c_ji in the denominator, corresponding to flipping each Z−_ij to Z+_ji, as well as setting c_ij to zero in the numerator. Therefore, T8 is named the master form. There is still another equivalent approach to get the master form, which divides the z-space instead of the c-space: defining the appropriate ordered pieces, we find the sum is then as expected. Both ways to get the master form, using the completeness relations and dividing the z-space, can be generalized beyond 3-loop. Once it is known, we can apply the observation above to get all 2^(L(L−1)/2) co-positive products of arbitrary Z±'s. This observation has not been proved, but it turns out to be valid at 4-loop. In appendix A, we use the latter way to get the master form at 4-loop and, after that, we check this observation explicitly via two examples, as a mathematical exercise of curiosity.

A Naive Sum

Next, we continue to sum the former results over all ordered subspaces, and we find that this naive sum, which takes advantage of intermediate variables, "almost" reaches the correct answer, as it can reproduce 96 out of the total 120 monomials in the latter. To figure out the co-positive products involving y- and w-space, we define, for instance, the forms S(ij) and A(ij), in which the orderings of y1, y2 and w1, w2 are the same or opposite respectively. According to (3.6), each Z+_ij is associated with an S(ij), and each Z−_ij with an A(ij). Then we explicitly figure out the products of S's and A's with respect to all T_i's; note in particular that above we have used Y(13) ∩ Y(32) ∩ Y(21) = 0 and so forth. These results are for subspace X(123) only, and we need to consider all other ordered subspaces of x, where switching 2 ↔ 3 for x, y, z, w leads to switching T3 ↔ T5 and T4 ↔ T6, as can be easily verified; the rest of the pieces are similarly given. The sum is to be compared with (Correct answer) × Denominator, where we have defined the product of all physical poles as

Denominator ≡ x1 x2 x3 y1 y2 y3 z1 z2 z3 w1 w2 w3 D12 D13 D23. (4.8)

Since each D_ij contains 8 monomials, the correct answer has (2×8+4)×6 = 120 monomials, and the sum has 96, so their difference has 4×6 = 24 monomials. It is important to notice that terms such as x2 x3 z1 z2 (−y1 w1) are not dual conformally invariant by themselves, but grouped as x2 x3 z1 z2 D13 they are. This tentative answer, simplified by the trick of intermediate variables, captures more than we expect. Even though it oversimplifies the complexity of the c_ij's, which are functions of x, y, w, it still gives most parts of the correct answer. If we manually heal the dual conformal invariance, it is then correct. Remarkably, even if it does not give the full numerator, it wipes off the subspace division of all positive variables, which frees it from spurious poles. After we refine the calculation in order to reach the correct answer, we will return to discuss the diagrammatic interpretation of (4.6).
Refined Co-positive Products

To precisely describe the 4-particle amplituhedron at 3-loop, we need to further refine the co-positive products for each ordered subspace of y and w, based on the former discussions using intermediate variables. These seemingly lengthy results can be nicely rearranged in order to manifest their simple mathematical structure, namely the Mondrian diagrammatic interpretation. From the previous setting we know that, for each ordered subspace of x, there are eight T_i's, namely the co-positive products in terms of intermediate variables. From (4.2), each of T1, T8 corresponds to six ordered subspaces of y and w, while each of T2, T3, T4, T5, T6, T7 corresponds to four, so that in total their number is 6×6 = 36 as expected. If we abandon intermediate variables, in principle we have to figure out 36 co-positive products instead of 8, as elaborated in the following. For T1, the six different ordered subspaces lead to six different sets of c_ij's. First, for Y(123)W(123), the condition c31 > c32 + c21 is now replaced by

(y32 + y21)(w32 + w21)/(x32 + x21) > y32 w32/x32 + y21 w21/x21,

where x32, x21, y32, y21, w32, w21 are positive variables in this subspace (as usual, we first work in X(123)).

The Correct Sum and its Mondrian Diagrammatic Interpretation

Collecting the 36 co-positive products for all ordered subspaces of y and w, we can continue to sum these results, and this time the sum indeed reaches the correct answer. Instead of a brute-force summation, for each piece we delicately separate the contributing and the spurious parts. The former manifest the Mondrian diagrammatic interpretation, with which they nicely sum to (4.6), while the latter sum to zero at the end. For Y(123)W(123), for example, the result takes the form

(prefactors) × (x21 x32 z1 z2 D13 + x21 x32 z1 z2 (y21 w32 + y32 w21)),

where again we will drop the prefactors, which simply encode the information of ordered subspaces as well as physical poles. The first term above denotes the seed diagram, which pictorially is a horizontal ladder, the first diagram given in figure 2. According to the contact rules conceived in the introduction, since boxes 1, 2 have a horizontal contact and so do boxes 2, 3, we can trivially read off the factor x21 x32 z1 z2 D13 from that ladder diagram. In fact, this factor originates from x21 x32 z12 z23 D13 in the ordered subspace Z(321), before we sum over all subspaces of z that admit it. As we have seen, Y(123)W(123) forbids any vertical contact of boxes (or loops), so we only have a horizontal ladder for this subspace, while the rest of the terms are spurious. For later convenience, we can define the analogous quantities for W(321). It is clear that for different orderings of w, although their positive variables are different, the factors corresponding to any contact between boxes are the same. For example, both W(213) and W(231) admit the third diagram in figure 2, so the relevant w factors are w12 and (w13 + w32) respectively, both of which equal (w1 − w2). We also see that Y(123)W(321) admits all six diagrams, since the orderings of y and w are completely opposite. Let's sum the six spurious parts over subspaces of w for Y(123); as usual the prefactors are dropped. For the sum of each seed diagram over all ordered subspaces that admit it, we will present examples of two distinct topologies below.
First, for the first diagram in figure 2, x21 x32 z1 z2 D13 trivially remains the same after we sum it over subspaces of y and w, since the completeness relation gives a sum over all admitting y- and w-orderings equal to 1/(y1 y2 y3 w1 w2 w3). It then becomes x2 x3 z1 z2 D13 after we sum it over subspaces of x that admit it, since

Σ_{admitting X} x21 x32 = Σ_{X(123)} x21 x32 = x2 x3/(x1 x2 x3),

and this is the correct answer as one of those in (4.6). Then, for the third diagram in figure 2, x31 x32 z1 z2 y21 w12 becomes x3² z1 z2 y2 w1, since

Σ_{admitting Y, admitting W} y21 w12 = (1/(y3 w3)) Σ_{Y(12)W(21)} y21 w12 = y2 w1/(y1 y2 y3 w1 w2 w3), (6.11)

as well as

Σ_{admitting X} x31 x32 = Σ_{X(σ(12)3)} x31 x32 = x3²/(x1 x2 x3),

and this is another one in (4.6). The remaining four diagrams of different orientations are similar. We can continue the separation for the remaining five orderings of y, each of which contains six orderings of w. Since we still work in X(123), the general seed diagrams for different orderings of y are given in figure 3, where some boxes are kept blank, as the ordering of x alone can only fix part of the numbers filled in these boxes. Straightforwardly, for Y(132) we have the analogous separation.

Summary: a Mondrian Preamble

By separating the contributing and the spurious parts of each form in all ordered subspaces and assigning the former with corresponding Mondrian factors, which follow simple rules between any two loops labelled by i, j (a horizontal contact contributes x_ji z_ij, a vertical contact contributes y_ji w_ij, and no contact contributes D_ij), we obtain the seed diagrams. If we assume the spurious terms will always sum to zero at the end, there is no need to sum the seed diagrams over all ordered subspaces, since they are already topologically valuable. There is a simple way to find seed diagrams: let's work in simply one ordered subspace X(12)Z(21)Y(12)W(21) at 2-loop, as the first nontrivial example. Then, it is clear that D12 is trivially positive, so there is no positivity condition to be imposed. But as a physical pole D12 must appear in the denominator, which identically turns the form into one with numerator D12 = x21 z12 + y21 w12. As usual, dropping the prefactors which contain all physical poles, we precisely obtain two 2-loop ladders of horizontal and vertical orientations (the vertical one is shown in figure 1). The 3-loop example is more interesting. Similarly, in the ordered subspace X(123)Z(321)Y(123)W(321), we can separate the triple product as

D12 D13 D23 = x21 z12 · x32 z23 · D13 + y21 w12 · y32 w23 · D13 + x31 z13 · x32 z23 · y21 w12 + x21 z12 · x31 z13 · y32 w23 + x21 z12 · y31 w13 · y32 w23 + y21 w12 · y31 w13 · x32 z23,

which precisely corresponds to the six diagrams in figure 2 (including two ladders and four tennis courts). Here, for notational compactness we have defined x31 ≡ x32 + x21, for instance, as x32 and x21 are primitive positive variables in this subspace while x31 is not. In general, Mondrian diagrams of higher loop orders satisfy this neat pattern: the product of all D_ij's can be expanded as a sum over all topologies and orientations in an ordered subspace in which the orderings of x, z are completely opposite, and so are those of y, w. However, there are more subtle issues to be clarified, and we will continue to discuss them more systematically in the subsequent work.
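The separation of D12 D13 D23 into the six Mondrian terms is a polynomial identity in the primitive positive variables and can be verified mechanically; a minimal sympy check:

```python
import sympy as sp

x21, x32, z12, z23, y21, y32, w12, w23 = sp.symbols(
    'x21 x32 z12 z23 y21 y32 w12 w23', positive=True)

# Composite differences in the subspace X(123)Z(321)Y(123)W(321),
# e.g. x31 = x32 + x21 as defined in the text.
x31, z13 = x32 + x21, z12 + z23
y31, w13 = y32 + y21, w12 + w23

D12 = x21*z12 + y21*w12
D23 = x32*z23 + y32*w23
D13 = x31*z13 + y31*w13

# The six seed diagrams of figure 2: two ladders and four tennis courts.
mondrian_sum = (x21*z12*x32*z23*D13 + y21*w12*y32*w23*D13
                + x31*z13*x32*z23*y21*w12 + x21*z12*x31*z13*y32*w23
                + x21*z12*y31*w13*y32*w23 + y21*w12*y31*w13*x32*z23)

assert sp.expand(D12*D13*D23 - mondrian_sum) == 0
```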
Adaptation of functional systems theory for the operative analysis of information flow

The article is dedicated to features of the development of the information economy, which relies on high technology to ensure the continuity of the information flow, and therefore the business continuity of individual economic entities. In particular, the authors suggest modifying and optimizing the process of transmission and processing of data within a single marketing system on the basis of the theory of functional systems. The authors describe the functioning of an infographic model for a marketing information system, which makes it possible, at the first stage, to structure the external input information into two flows and, at the second, to create a database of relative indicators to forecast the activity of the company as a whole. The authors' research was conducted in the marketing sector of a separate organization. Here, marketing research means the systematic gathering, processing and analysis of data on issues relating to the marketing of products and services, competition, consumers, prices and the capabilities of the company, oriented toward refining available data or obtaining new information for managerial decision-making. Changes in the political, legal, economic, scientific and technological, demographic and other conditions of operation of an enterprise are also targets of market research.

Introduction

We hear more and more about the emergence of a so-called "new economy" whose functioning does not adhere to the general logic of classical economic theory. Many intellectual systems created by outstanding researchers and economists of the past are losing their analytical importance and ceasing to be true instruments for developing economic policy measures. The rising information economy has found a new type of resource, information, which, for all the debatability of such a statement, represents a rich factor of production. The term "information economy" first appeared in 1976 in the works of the Stanford Center researcher Mark Porat, who used it to refer to a new branch of the economy concentrated on modern databases and applications. These days the term "new economy" also incorporates high-tech manufacturing that is difficult to classify as belonging only to the IT sector or the information-producing sector. Microelectronics, electronic machine building, instrument making, robotics and the production of telecommunication equipment are all parts of these branches. The main signs separating the "new economy" from the traditional system should probably be considered the increasing rate of growth of information volume (exceeding the growth rate of GDP), the pressure of the financial sphere upon other sectors of the economy, and the transnational character of the globalization of the world economy, determined by the pervasive nature of information and the Internet. One more important aspect of the modern economy is the sharp aggravation of environmental problems. The inability of the environment to recover from the pollution created by an industrial civilization, and the inability of mankind to solve this problem, is a sign that the relative abundance of a new factor of production, information, simply spreads both labor and capital resources thin, taking them away from where they are really needed. The transaction motive becomes defining in the "new economy", and the emergence of new clusters becomes possible without borrowing resources from former reproduction contours.
This occurs through the redistribution and effective selection of relevant information, training, financing of a new quality, and then replication of the achievements [1,2].

Methods

Economic science in the "new economy" is obliged to provide answers to the many questions that concern modern economists and the general public. Information creates new effects that science has not yet explained, but the real problem lies in the fact that these effects are themselves subject to rapid change, so any explanation or theory proposed may within a short period require additional research and substantial modification. The expansion of the scope and possibilities of the information economy casts doubt on a number of previously established scientific dependencies. In particular, this refers to the Phillips curve. Indeed, in recent years the US economy showed an increase in the rate of economic growth, which reached 3.6%, while the unemployment rate fell to 4.6%, although according to standard macroeconomic theory the non-inflationary level of unemployment for the US economy is 5.5%. If unemployment falls below this level, according to economic theory, inflation is expected to be somewhat higher than the observed values. In fact, inflation turned out to be quite low, not exceeding an average level of 2% per year. Thus, it must be noted that economic growth accelerated while inflation and unemployment fell. In addition, it is interesting to note that the high-tech sectors of the American economy saw a decrease in the number of employed individuals; that is, in some sectors unemployment even increased slightly. It is possible to assume that these changes are caused precisely by the spread of the information economy into various spheres of life and industries [3,4,5]. Economic science has not observed similar patterns before, and this phenomenon is rightly associated with a change in the ratio of factors of production and the emergence of a special type of reproduction, the informational type, with its corresponding sector. For the first time in decades, the doctrine of the cyclical nature of economic development has been undermined. The trend is becoming more linear, and the cyclical downturn takes the form of a slowdown; the amplitude of oscillation is thereby certainly reduced. If the Phillips curve does not work, it must automatically be said that the instruments of economic policy, monetary policy in particular, should also be subject to adjustment since, apparently, they have no independent value in the information economy and must be adapted to the independent dynamics of the microeconomic agents of today's global economy. The information economy is a self-contained and self-sufficient force; this force increases productivity in many sectors of the economic system. In the second half of the 1990s the US economy showed growth in agricultural productivity of 2.2% per year on average, whereas over the preceding 25 years it had increased by no more than 1% per year; the average in other industries was 3.4-3.6%. For example, one of the leading experts on the measurement of labor, Robert Gordon, believed that in the period between 1995 and 2000 the productivity gains of the US economy were due to the exceptionally high growth rate of the production of computers, which could not but affect the whole economic system. Today, the Internet appears to be a real rival to traditional distribution channels, retail chains, supermarkets, etc. The "new economy", based on computers, reproduces itself.
On the one hand, it allows productivity to be increased and improves the access of the general public to essential social functions and benefits; on the other hand, it gives rise to effects for which no explanation yet exists and which change the motivation and psychology of individual and corporate behavior. To summarize what has been said: there are many factors working toward the destruction of the Phillips relationship. These include the strengthening of the dollar as a reserve currency in the 1990s, the investment boom that did not lead to increased costs, increased labor market mobility that allowed wage growth to be restrained, changes in methods of accounting and finance, and other factors. All of the above, combined with the new institutions of the emerging information economy acting as a shock absorber, has created a new economic dynamics requiring new descriptive models. As early as 1995, Professors M. Obstfeld and K. Rogoff offered what is often claimed to be the foundation of the new macroeconomics, allegedly free of the deficiencies contained in the Mundell-Fleming model of an open economy. The essence of this new theory is that it considers the economy as a normal equilibrium model that takes into account so-called "market failures" and nominal rigidities. The proposed model is based on the idea of intertemporal utility maximization by the individual agents that make up the macroeconomic system. Thus, macroeconomic outcomes are determined at the micro level. This is, broadly speaking, the macroeconomics that J. Maynard Keynes wanted, and such a theory can be considered a response to the critique of Robert Lucas. The Obstfeld-Rogoff model takes into account the imperfections of financial markets, that is, it contains significant institutional constraints. This fact is an undeniable advantage in the use of these models, but it does not make them suitable for all times, nor does it allow them to take into account information distortions, the effects of the accumulation of information and the interaction of different types of resources (labor, capital, land, information and entrepreneurial skills) [6,7]. It is important to consider that the information economy generates new effects associated with deviations of behavior shaped by the nature of the available information. In other words, there is a growing number of behaviors using misleading or deliberately distorted information, and economic opportunism manifests itself in abuses in the use of relevant information about competitors, markets and technologies. Even the speed of receipt and processing of relevant information becomes a factor of production, and planned disinformation becomes an element of competition. Economic science cannot ignore these problems. Old economic categories, the terminological system and the interpretation of certain concepts are undergoing change. With some degree of conventionality one can speak of two models of business conduct, the American and the Japanese. If we compare these two models, for example, with respect to the purposes of management, market strategy and the strategy of scientific research, which, in our view, are crucial components of modern Japanese and American firms, the following picture emerges. For Japanese companies the management objectives are, first, an increase in market share, second, an increase in the production of new products and, third, the turnover of capital; for US companies capital turnover comes first, followed by an increase in the value of shares, with an increase in market share in third place.
(For European companies capital turnover comes first, with extension of the product range and increased efficiency of production and marketing in second place. This model occupies an intermediate position, and we therefore set it aside.) Thus, the goals of US firms are reduced to short-term financial performance, focused on short-term profit, while the management objectives of Japanese firms do not give priority to financial performance but rather are focused on the product and its position in the market. For example, US companies are leaders in sales of products of the "cash cow" type, i.e. products that have reached maturity in the life cycle and bring sustainable profits. Japanese firms are ahead of the US in products of the "problem" type, as Japanese market psychology is long-term oriented, focused on development rather than quick profit. In Japan, compared to the US, research strategy is centered on conducting basic research in the field of new technologies and research to develop new products. In the United States, research centers on the modernization and improvement of current products and on developing new technologies capable of fast commercialization. That is, in one system the pursuit of short-term profit plays first violin, determining the strategy of behavior in all other spheres of activity; in the other, the primary role belongs to the long term, and the handling of all current problems is verified against long-term goals. This dichotomy was indeed valid throughout the 1980s and, to a large extent, the 1990s, but at the beginning of the XXI century competition for the possession of information has oriented the majority of companies toward long-term outcomes and information and technology leadership. Besides, profit as a residual index, as François Perrot maintained, can never accurately reflect the valuation of entrepreneurial skill, since it is difficult to differentiate the cases in which monopolistic power is realized to extract profit from those in which some genuine capacity is implemented. Moreover, profit may accrue to unfair or less than the best agents. The latter effect is known to representatives of evolutionary economics as hyperselection, when negative qualities, properties and behaviors are selected for instead of positive ones. This effect underlies the unfolding of a chreod-like, path-dependent trajectory of economic development. Advertising is a way for companies to extract additional profit; it is an information tool providing a psychological impact on consumers, which actually reinforces the monopoly power of the company over its market share. Under perfect competition profit is absent, because prices equal marginal cost. Does it follow that under perfect competition entrepreneurial ability is lacking, since it receives zero remuneration, or rather that it receives a low valuation because of the structure, more precisely the form, of the market? Is entrepreneurship itself perhaps possible only under conditions of monopoly? The obvious answer to these questions is negative, but the way competition is organized probably defines the profit opportunities of different subjects in the process of economic competition. Information, on the one hand, virtualizes the concept of profit but, on the other, programs new profit opportunities. Thus, profit is a kind of synthetic concept whose formation depends on many factors.
If this parameter is a composite one, the attention paid to it in the doctrines and theoretical models created in economics seems excessive and overvalued. In Russia in recent years numerous companies have shown zero profit or a loss, that is, negative income, while continuing to conduct normal economic activity, even though the markets are not perfectly competitive. Therefore, either another method of evaluating entrepreneurial ability is needed, or the thesis that profit is a definite valuation of these abilities may be doubted. And if so, then profit taxation is the most distorting tax, affecting the diversified operations of the agent. In this connection, it would evidently be better to reduce the profit tax rather than the value added tax in order to effectively solve the problems of capital, management, equity and dividend policy [8]. Information and information asymmetries determine the state of the market. J. Stiglitz, G. Akerlof and M. Spence were awarded the 2001 Nobel Prize in economics for their analyses of the functioning of markets with information distortions, which lead to the need for government regulation. Supporters of liberalism interpret this as an attack on the freedom of the individual. They argue that a significant amount of information cannot be structured, that only the market is able to deal with it, and that the market thereby recreates the spontaneous order in accordance with which information is deemed relevant. When the economic situation changes, the volume of the information flow increases (Figure 1, curve A). As information is collected and reaches the subjects of management, the need arises for corrective action on the economic structure (curve S). An adequate response to external stimuli increases the efficiency of investment activity (curve X). Once the process stabilizes (activities have been adjusted in accordance with previously received information, and new information has not yet been taken into account), efficiency will first decrease and the volume of information will increase again (curve A begins to rise). Tracking the dynamics of the relation between these parameters makes it possible to identify the onset of a bifurcation state and, therefore, to take timely decisions in order to prevent its negative effects and to form a new, efficient investment policy. Power is determined by the ability to limit the freedom of action and the implementation of the economic plans of another subject of the economy, whether a company or a self-employed person. The state is a voluntary agreement of individuals and groups on a specific limitation of their "absolute" freedom; this freedom is hypothetical, because it is impossible to measure freedom or to rank it in terms of "more or less". If, however, we introduce a scale with the zero level of freedom at one end and a maximum of one at the other, the chain of relationships will be as follows: complete and accessible information reduces the extent of uncertainty and means order, organization, the implementation of management plans and, through this, freedom; if information is scarce and distorted, or simply cannot be obtained, we are dealing with disinformation, maximum entropy and disorganization, and in this case it is difficult to speak of any freedom, as all economic plans collapse. Thus, we want to show that freedom is defined by information and organization, and lack of freedom by disinformation and disorganization.
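As an illustration only, the monitoring of the A, S, X dynamics described above could be prototyped as follows; the window length, the flagging rule and the series names are assumptions of this sketch rather than part of the source model:

```python
import numpy as np

def flag_bifurcation_candidates(A, S, X, window=4):
    """Flag periods where the A/S/X dynamics suggest an approaching bifurcation.

    A : volume of the incoming information flow per period (curve A)
    S : intensity of corrective actions on the economic structure (curve S)
    X : efficiency of investment activity (curve X)
    A candidate is flagged when, over a trailing window, the information
    flow turns upward while investment efficiency declines and corrective
    activity has not yet responded.
    """
    A, S, X = (np.asarray(v, dtype=float) for v in (A, S, X))
    flags = []
    for t in range(window, len(A)):
        dA = A[t] - A[t - window]   # change in information volume
        dS = S[t] - S[t - window]   # change in corrective activity
        dX = X[t] - X[t - window]   # change in investment efficiency
        if dA > 0 and dX < 0 and dS <= 0:
            flags.append(t)
    return flags
```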
This vision has a foundation in the theory of management of complex systems and in modern economic cybernetics; information here serves as the original model of economic power. A social object, a person or a group of people, cannot be called unfree merely because they are subordinate to someone's orders, if that subordination rests on a contractual basis and they are able to influence the conditions of its signing and to monitor compliance with it. Power in this case acts as a way of influencing people in order to solve specific problems of society on the basis of consensus and agreement. In our view, a further analogy should be drawn between the freedom of the individual (or system) and autonomy. Autonomy is the ability of a system to regulate the functions it performs. A subject is free to the extent that it is autonomous, that is, can fulfil the law (the agreements) on its own while remaining within the agreed framework. If it were not autonomous (free), the upper level of management could not cope with the informational load. Here freedom serves as a condition for the self-organization and development of the system, and the whole is free to the degree that its parts are autonomous.
Ultimately, the connection between technological and institutional structures, as well as the behavior of economic agents, determines the features of the transformation of the Russian and Western economies. In our view, however, the key to institutional breakthroughs (which are hardly possible without constitutional order) and to technological advances (which, in turn, are hardly possible without mechanisms for the concentration and diffusion of innovation, incentives for investment in science and technology, and a suitable institutional environment) is information, that is, the accumulated thesaurus (knowledge plus experience) that allows economic agents to act one way or another. Particularly important is the efficient use of accumulated achievements. The Russian Patent Office, for instance, has accumulated intellectual property worth a few trillion dollars, but it is not being used; it is dead weight, since there are no conditions for the reproduction and use of intellectual capital. Russia has the lowest level of intangible assets in industry in Europe. [10]
Forming a full thesaurus that determines the "right" pattern of behavior depends on the relevant infrastructure and on the current technological and institutional structures. What does an economic entity select and remember, given that selection and memorization involve costs? Naturally, whatever is useful from the entity's point of view in the present and the future. The knowledge and experience that the subject considers necessary will be selected; what is not needed is discarded and lost. Consequently, there is always a risk of losing valuable information, a loss that manifests itself much later in the functioning of economic entities. The evolution of economic entities and their adaptation is thus a process of raising the informational level of their organization (more precisely, of changes at this level).
If credit is tied to a gross-profit criterion and to the uniformity of a company's monthly turnover, and requires double collateral (in the best property), then innovative, high-tech firms are automatically unable to obtain loans, which means that innovations remain only on paper: they are not implemented, and a new business idea never turns into an innovation.
Such rules destroy the economy by directly hindering the emergence of innovators. More precisely, innovators may still appear, but in areas where these principles do not bind, namely in trade, the oil and gas business, metallurgy and the petrochemical industry, and not in information technology, the electronics industry, instrumentation, laser technology, quantum electronics, etc. There are thus institutional barriers to the development of entire areas of science and industry that carry the country's competitive advantages.
The policy that destroys high-tech industrial sectors reduces to the adoption of legislation allowing the unimpeded inflow of imported equipment, ostensibly to modernize Russian industry with its highly depreciated fixed assets. However, under conditions of the high corruption that permeates all levels of economic management, such a decision would, first, deprive the relevant industries (for example, electronic engineering) of orders for means of production, and, second, bring into the country used equipment at high prices, with the resulting price delta appropriated as personal income by officials and some directors. The government has no measures of a preventive nature aimed at forestalling such an outcome. Such approaches deprive the country of competitive advantages. In addition, flexible mechanisms of taxation for knowledge-intensive sectors, which would create incentives for the advanced development of the information sector, the electronics industry, innovative engineering and so on, are not used at all. The idea would be to make undesirable transactions unprofitable through higher marginal tax rates on the related activities, through the procedures for registering companies of a given profile, through control measures, and through rules of competition governing entry to and exit from specific markets.
As was shown above, the central component of economic organization today is work with information and the use of information systems in the management process. Insofar as the institutional theory of organizations analyzes the relationships between participants through the structure of incentives, motivations, methods of decision-making, the hierarchy of relationships, power, choice and control, information defines the entire set of ongoing interactions. Information is not difficult to measure: there are special methods for evaluating information density, the amount (capacity) of information, and the capacity for its storage and generation. Ongoing transactions are an exchange of information, and perception affects the nature of a transaction, whether conflictual or loyal; from the attitudes of some participants toward others generated in this way, rules of behavior develop, motives vary, and ultimately the system of values is transformed. Institutional theory, having taken the theory of information into its hands and combined it with methods for analyzing the dynamics of transaction costs, offers significant opportunities for formulating and conducting further research on organizations, including with the use of mathematical apparatus (and not only game theory). A company is characterized by multiple information interactions. There is therefore a problem of aggregating these interactions into a single framework for managing the organization.
This problem is known as the problem of integrating the diverse information environments of a company, where an information environment is understood as the totality of software and hardware for processing information, inscribed in a kind of organizational and management circuit designed to develop and put into practice concrete decisions in a particular area of the company's life. The level of a firm's operation, its transaction costs, its organizational efficiency and, consequently, its market prospects depend on its expenditures on the thesaurus. Needless to say, informational advantages are far more important than the availability of raw materials, cheap credit, good business partners or high-quality products: without information, the above conditions cannot be sustained for long; they may be a mere coincidence. Ordered information is a resource that allows the named advantages to be exploited permanently or for a long period; information, as the content of these conditions, converts them from temporary into permanent ones.
To integrate heterogeneous IT environments, infographic modeling is used, that is, "data processing systems which are based on the principles of decentralized processing of information directly at the point of origin and the transfer of results through communication channels." [11] The infographic model is built on the assumptions that there is a clear predisposition toward a strictly defined number of control links, that information fluctuates along the contours between levels, and that data analysis should take place in real time. Owing to the principle of distributed information processing, information messages can be sent to any structural unit of the organization on an unlimited scale. The optimization criterion of the model is the total duration of the processing and transmission of transactions along their routes of movement; this time, of course, should be minimized, as the sketch below illustrates.
The organization as a managed system is susceptible to three groups of changes that are important from the standpoint of integration based on the information environments of infographic models: 1) changes of the organization as a whole; 2) changes in the structure of the administrative apparatus; 3) changes in capacities and in the quality of documents. Measures to improve the control systems of the organization are therefore not enough. It is necessary to optimize the management structure and workflow, and to apply filters and semantic barriers to the information flows entering the organization as a whole from its environment. This is possible only on the basis of modeling the information processes in the organization and designing appropriate control systems that regulate the information flows between the object and the subject of management and take the diversity of both into account.
Results
The author's research was conducted in the marketing sector of a separate organization. By marketing research we mean the systematic gathering, processing and analysis of data on issues relating to the marketing of products and services, competition, consumers, prices and the capabilities of the company, oriented toward refining available data or obtaining new information for making managerial decisions. Changes in the political, legal, economic, scientific-technological, demographic and other conditions of an enterprise's operation are also targets of market research.
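To make the optimization criterion of the infographic model concrete, the following minimal sketch compares hypothetical transaction routes by their total processing and transmission time; the route names and all timings are invented for illustration and are not taken from the study.

```python
# Sketch of the infographic-model criterion: the total duration of
# processing and transmission of a transaction along its route should
# be minimized. Routes and timings are hypothetical.

routes = {
    # route: list of (processing_time, transmission_time) per node, in minutes
    "unit -> analytics -> management": [(12, 3), (25, 5), (10, 0)],
    "unit -> management":              [(12, 3), (30, 0)],
    "unit -> filter -> management":    [(12, 3), (5, 2), (15, 0)],
}

def total_duration(stages):
    """Total processing + transmission time along one route."""
    return sum(p + t for p, t in stages)

best = min(routes, key=lambda r: total_duration(routes[r]))
for r, stages in routes.items():
    print(f"{r}: {total_duration(stages)} min")
print("preferred route:", best)
```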
A manager needs to make decisions in a context of extreme uncertainty: dynamic exchange rates, changes in tax law, fluctuating inflation, new technology, competition. It is necessary to consider multiple options when asking "What would happen if…?" Labor-intensive analytical work lies at the heart of the effective activity of a company's management staff: performing multivariant calculations, screening out obviously dead-end decisions, assessing possible risks, and analyzing and statistically treating survey findings. Such high-performance analytical work is impossible without a systems approach, which makes it possible to prevent undesired modes of operation in the work of the investment-building complex. At the same time, the systems approach also has some shortcomings. In particular, for a long time no clear methodological principles were developed for moving from systems methodology in general to the methodology of a specific activity, in this case the operation of the investment-building complex (hereafter referred to as IBC).
The theory of functional systems of academician P. K. Anokhin expresses most clearly the materialist philosophy of the anticipatory reflection of reality. The target function is the system-forming factor (the provision of operational and efficient activity of the construction company), and the marketing information system is associated with the complex of actions for gathering, processing, analyzing, estimating and disseminating relevant, accurate and timely data for the information support of management decisions, together with the human and material resources required for this process. [12,13,14]
Currently there is a large variety of conceptual models of the Marketing Information System (hereafter referred to as MIS). Nevertheless, many authors agree that an MIS should include the company's internal reporting system, or internal information. The main functions of this subsystem are the gathering and analysis of relevant information about the work of individual units of the company through the calculation of absolute (input) and relative (output) indicators. The indicators usually employed are not sufficient for the analysis and prediction of the organization as a whole, so there is a pressing need to replenish the data arrays of the individual units of the company with new elements.
In this article the authors propose an infographic model of MIS functioning with external incoming information (which we will call a quantum), created on the basis of the theory of functional systems, which is used extensively in the simulation of economic functions. The essence of this model (Figure 2) is that any quantum entering and circulating inside it cannot exist in isolation from the others; it is always affected by certain environmental factors. P. K. Anokhin called the impact of external factors situational afferentation (afferent synthesis is the impact on the organization of the totality of external factors constituting a specific situation, against which the further adaptive activity of the company unfolds). Some impacts on the system (the MIS) are accidental, but others (usually unexpected ones) provoke feedback.
This feedback has the nature of an orienting reaction: an external information flow arises which, through afferent analysis, is turned into a database of absolute indicators. These can lead managers to several alternative solutions; the choice of the most effective among them depends on bringing all the available data to a single measure and on forming self-contained internal systems for the transmission and processing of information that can predict the states of a construction company over a given time interval.
Legend for Figure 2: B_info - external information, represented as the matrix EK; MIS - marketing information system; FS - functional system (the MIS); RFS - the result of the functional system; MD_i - the results of internal research; MF_j - the results of external research.
Discussion
The authors propose to view the complex process of afferent synthesis as a synergistic analysis of the incoming (collected) information, which must be divided into two information flows representing the results of marketing research on the input information (B_info). The first flow includes information collected through external marketing research (Fig. 2), which is converted into relative figures in the process of collection and processing (MF_j). The second flow includes information obtained through internal marketing research (Fig. 2), which is likewise converted into relative figures (MD_i). Each flow creates a system of relative performance indicators of internal and external reporting, shown in Figure 2 as the matrices Sd and Sf, and the data they contain is used further to predict the future performance of the construction company or of the IBC in general, which leads to informed management decisions and a long-term development plan for the enterprise that takes existing target figures into account. A minimal numerical sketch of this conversion is given at the end of this section.
Conclusion
Overall, the theory of functional systems allows one to build a logical flow of information coming from different sources, which becomes a genuinely valuable tool for appraising the current status of the company in the market and predicting its operations in future periods, since this flow has a flexible structure and can be updated fairly quickly with newly required data. In addition, we can confidently state that the informational approach to the theory of the firm transforms the role of the administrative apparatus, the motives of its behavior and its management styles, allows many decisions and procedures to run in an automatic mode, and at the same time increases the degree of autonomy of the organization while blurring its boundaries, since equal access to the same informational capabilities steadily leads to an averaging out of the dominant roles of certain individuals, groups and organizations as a whole.
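As the minimal numerical sketch promised in the Discussion, the code below turns hypothetical absolute indicators from internal (MD_i) and external (MF_j) research into matrices of relative indicators in the spirit of Sd and Sf and uses them for a naive next-period forecast. All figures, and the choice of period-over-period growth rates as the "relative" measure, are assumptions for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical absolute indicators; rows = reporting periods, columns = indicators.
md = np.array([[120., 80., 40.],    # internal research MD_i: e.g. output, costs, staff
               [150., 90., 42.],
               [165., 95., 44.]])
mf = np.array([[0.9e6, 30.],        # external research MF_j: e.g. market volume, competitors
               [1.0e6, 32.],
               [1.1e6, 33.]])

def to_relative(absolute):
    """Convert absolute indicators into relative ones (period-over-period
    growth rates), bringing heterogeneous data to one common meter."""
    return absolute[1:] / absolute[:-1] - 1.0

sd = to_relative(md)   # matrix of internal relative indicators ("Sd")
sf = to_relative(mf)   # matrix of external relative indicators ("Sf")

# A naive forecast for the next period: extrapolate each internal indicator
# with its mean relative change.
forecast = md[-1] * (1.0 + sd.mean(axis=0))
print("Sd:\n", sd, "\nSf:\n", sf, "\nnext-period forecast:", forecast)
```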
Magnetic material in mean-field dynamos driven by small scale helical flows
We perform kinematic simulations of dynamo action driven by a helical small scale flow of a conducting fluid in order to deduce mean-field properties of the combined induction action of small scale eddies. We examine two different flow patterns in the style of the G.O. Roberts flow, but with a mean vertical component and with internal fixtures that are modelled by regions with vanishing flow. These fixtures represent either rods that lie in the center of individual eddies, or internal dividing walls that provide a separation of the eddies from each other. The fixtures can be made of magnetic material with a relative permeability larger than one, which can alter the dynamo behavior. The investigations are motivated by the widely unknown induction effects of the forced helical flow that is used in the core of liquid sodium cooled fast reactors, and by the key role of soft iron impellers in the Von-Kármán-Sodium (VKS) dynamo. For both examined flow configurations the consideration of magnetic material within the fluid flow causes a reduction of the critical magnetic Reynolds number by up to 25%. The development of the growth-rate in the limit of the largest achievable permeabilities suggests no further significant reduction for even larger values of the permeability. In order to study the dynamo behavior of systems that consist of tens of thousands of helical cells we resort to the mean-field dynamo theory (Krause & Rädler 1980), in which the action of the small scale flow is parameterized in terms of an α- and β-effect. We compute the relevant elements of the α- and the β-tensor using the so-called testfield method. We find a reasonable agreement between the fully resolved models and the corresponding mean-field models for wall or rod materials in the considered range 1 ≤ µ_r ≤ 20.
Introduction
Magnetic fields produced by the flow of a conductive liquid or plasma can be found in almost all cosmic objects. In most cases, this does not apply to liquid metal flows in the laboratory or in industrial applications. The characteristic properties of these flows - namely velocity amplitude, geometric dimension and electrical conductivity - are usually not in the range that allows the occurrence of magnetic self-excitation, so that an experimental confirmation of the fluid flow driven dynamo effect requires an enormous effort. The aforementioned quantities can be combined into a single, dimensionless parameter, the magnetic Reynolds number, which is defined as Rm = LV/η. Here L is a typical length scale, V is a typical velocity amplitude, and η is the magnetic diffusivity, which is the inverse of the product of vacuum permeability and electrical conductivity, η = (µ_0 σ)^(-1). In cosmic objects, Rm is typically huge, so that one essential precondition for the occurrence of dynamo action is fulfilled. However, the flow amplitude in terms of Rm is not the only criterion that describes the ability of a flow field to provide for dynamo action, and magnetic self-excitation is also possible at much smaller Rm if the fluid flow has a suitable structure. Appropriate flows have been utilized, for example, in the three successful fluid flow driven dynamo experiments, the Riga dynamo (Gailitis et al. 2000), the Karlsruhe dynamo (Stieglitz & Müller 2001), and the Von-Kármán-Sodium (VKS) dynamo (Monchaux et al. 2007).
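As a quick numerical illustration of the definitions just given, the sketch below evaluates η = (µ_0 σ)^(-1) and Rm = LV/η in Python. The conductivity and flow values are typical order-of-magnitude numbers for liquid sodium and are assumptions, not parameters taken from this paper.

```python
# Magnetic Reynolds number Rm = L*V/eta with eta = 1/(mu_0*sigma).
from math import pi

mu_0  = 4e-7 * pi             # vacuum permeability [H/m]
sigma = 1.0e7                 # conductivity of liquid sodium [S/m], typical order
eta   = 1.0 / (mu_0 * sigma)  # magnetic diffusivity [m^2/s], ~0.08 for sodium

L = 1.0                       # typical length scale [m] (illustrative)
V = 10.0                      # typical velocity amplitude [m/s] (illustrative)

Rm = L * V / eta
print(f"eta = {eta:.3f} m^2/s, Rm = {Rm:.1f}")
```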
Both the Riga and the Karlsruhe dynamo were based on a screw-like flow pattern, utilizing the fact that helicity is conducive to the occurrence of dynamo action (Stefani et al. 1999). The role of helicity is less obvious for the VKS dynamo, with a flow of liquid sodium being driven by two counter-rotating impellers. It has long been known that the mean flow generated by this forcing is suited to drive a dynamo at comparatively low Rm (Dudley & James 1989). However, in the experimental implementation at the VKS dynamo, the motor power available to drive the flow is not sufficient to overcome the threshold for the equatorial dipole mode with an azimuthal wavenumber m = 1. Surprisingly, dynamo action of the axisymmetric dipole mode has nevertheless been found at a rather low magnetic Reynolds number Rm ≈ 32, but only if the entire flow driving system, consisting of a disk and eight bent blades (figure 1), is made of soft iron with a relative permeability of the order of µ_r ≈ 60 (Verhille et al. 2010, Miralles et al. 2013). A possible explanation for this observation requires the combined effects of the magnetic properties of the soft iron disks (Giesecke et al. 2012) and helical radial outflows assumed in the vicinity of the impellers between adjacent blades (Pétrélis et al. 2007). These non-axisymmetric distortions of the mean flow can be parameterized by an α-effect (figure 1), but so far existing mean-field models of the VKS dynamo are only of limited significance due to a lack of knowledge about the α-effect and its interaction with the magnetic material of the impeller systems (Giesecke et al. 2010a, Giesecke et al. 2010b).
Besides the relevance for understanding the fundamental physics of geo- and astrophysical magnetic fields, a complementary argument for the development and construction of dynamo experiments originated from considerations on the safe operation of sodium cooled fast reactors (Rädler 2007). Dynamo action in the cooling system of a sodium fast reactor would likely be dangerous, because the self-induced magnetic field backreacts on the flow according to Lenz's law. This backreaction might cause an inhomogeneous flow braking or a pressure drop in the pipe system, so that the efficient cooling of the reactor core would be hampered, with unknown consequences for the safety of the reactor. The occurrence of dynamo action in a sodium fast reactor cannot be excluded a priori, because the flow in the core has a sufficiently large flow rate and the appropriate geometry. In the very core of the reactors the fluid flow is governed by screw-like shaped wires that are wrapped around individual nuclear fuel rods, thus forcing the flow to follow a helical path around each rod (figure 2a). These fuel rods are bundled into so-called assemblies which may consist of up to a few hundred fuel rods (figure 2b), and the whole reactor core is composed of a few hundred of these assemblies (figure 2c) *.
Figure 2. Idealized composition of the core of a sodium fast reactor; from left to right: (a) nuclear fuel rod surrounded by a helically shaped spacer forcing the flow on a helical path; (b) assembly of bundled fuel rods; (c) array of assemblies, forming the core of a liquid metal cooled fast reactor. Note that the figure shows an idealized system; in real systems there are additional elements with breeding material and control rods.
In operation, this setup is flushed with liquid sodium, thereby forming a helical flow field that is reminiscent of the flow used in the Karlsruhe dynamo (except for the mean vertical component). Actually, early estimations by Bevir (1973) and Pierson (1975), as well as more recent experimental and numerical studies (Plunian et al. 1995, Plunian et al. 1999, Alemany et al. 2000), show no conclusive evidence for the occurrence of dynamo action in the core of a fast reactor. On the other hand, it has been argued by Soto (1999) that the parameter regime reached by the French fast breeder reactor Superphenix is well within the range that allows for dynamo action if some magnetic material is introduced into the container (see p. 104, Fig III.32 in Soto 1999). So far, the problem of magnetic material in the core of a sodium fast reactor is merely academic, because state of the art reactors mainly utilize austenitic steels inside the core. However, in recent years the application of Oxide Dispersion Strengthened (ODS) ferritic/martensitic alloys with a relative permeability µ_r ≫ 1 has increasingly been discussed, because these alloys have a lower sensitivity to nuclear radiation (Dubuisson et al. 2012).
The dramatic influence of magnetic properties on the induction process, as observed at the VKS dynamo, motivated the present study, in which we examine complex interactions of helical flow fields with magnetic internals. Since the flow conditions in a sodium fast reactor are far too complex to be modeled in direct numerical simulations, we resort to the mean-field dynamo theory, which allows the development of models that are numerically much easier to handle. In order to consider the specific effects of a spatially varying permeability distribution we extend the original mean-field concept to the case of non-uniform material properties. The extension is straightforward and makes it possible to take into account complex periodic patterns with magnetic properties in terms of standard mean-field coefficients like the α- and β-effect. For the estimation of the mean-field coefficients we perform kinematic simulations of electromagnetic induction generated by idealized helical flow fields that are reminiscent of the conditions in sodium fast reactors. We consider two paradigmatic configurations with either a helical flow subdivided by internal walls, or a flow following a helical path around solid rods, respectively. The first model follows the heuristic approach of Pierson (1975), in which the screw-like vortex represents the mean flow within an assembly of nuclear fuel rods. In a very broad sense, this model can also serve as an approach for the flow field between the blades in the VKS dynamo. The second model goes back to the work of Rädler et al. (2002a, 2002b) on the kinematic theory of the Karlsruhe dynamo. In the present study, in which we assume a vertical mean flow, this type of flow field is suited to the conditions within an assembly of fuel rods in the core of a sodium fast reactor. We start with the analysis of the induction action of the fully resolved velocity field, from which we determine the mean-field coefficients using the testfield method (Schrinner et al. 2005, Schrinner et al. 2007). In a second step we use the α- and β-coefficients as input for mean-field dynamo simulations in order to show that mean-field models are capable of reproducing the growth-rate and the principal field structure of the fully resolved model while requiring much less computational effort.
For flow systems comprising a total of some tens of thousands of individual helical cells (figure 2), the use of a well-proven mean-field method is considered the only viable way to study dynamo problems. The present paper is mainly intended to establish and validate the necessary methodology. The possible application to specific reactor cores will need much more information on geometric details and material properties, and must therefore be left for future work.
2. Mean-field dynamo theory and the testfield method
Outline of mean-field theory
In the following, the magnetic flux density is denoted by B and the velocity field by U. The magnetic diffusivity is defined by η = (µσ)^(-1), with the electrical conductivity σ and the permeability µ, which are assumed, for the moment, to be constant. The temporal development of the magnetic flux density in the presence of an electrically conductive liquid that moves according to the velocity field U is determined by the induction equation:
∂B/∂t = ∇ × (U × B) + η∇²B.    (1)
Additionally, B must obey the divergence-free condition, ∇ · B = 0. In case of a prescribed (stationary) velocity field, equation (1) is a linear problem which, in principle, can be solved with the Ansatz
B(r, t) = B̂(r) e^(λt).    (2)
In general λ is a complex quantity, λ = κ + iω, where κ denotes the growth-rate and ω denotes an oscillation or drift frequency. A dynamo solution is obtained if the magnetic field amplitude |B| grows exponentially ∝ e^(κt) with a growth-rate κ > 0. Even though the linear approach is a severe simplification that neglects the backreaction of the field on the flow, equation (1) can be solved analytically only for very few cases. In particular for complicated velocity fields with small scale structures, equation (1) must be solved numerically. A possibility to draw further conclusions on the ability of a velocity field to drive a dynamo is provided by the mean-field dynamo theory developed by Krause & Rädler (1980). The mean-field dynamo theory essentially deals with the behavior of the large scale field and treats the induction effects of a small scale flow in terms of the so-called α-effect. The basic principle of the mean-field approach is a splitting of magnetic field and velocity field, assuming that the properties of the whole system can be described essentially by two scales, a mean, large scale part (B̄ and Ū) and a small scale fluctuation (b and u):
B = B̄ + b,    (3)
U = Ū + u.    (4)
Inserting (3) and (4) into (1) yields an induction equation for the mean field B̄:
∂B̄/∂t = ∇ × (Ū × B̄ + E) + η∇²B̄,    (5)
while the induction equation for the corresponding small scale field b reads:
∂b/∂t = ∇ × (Ū × b + u × B̄ + u × b − ⟨u × b⟩) + η∇²b.    (6)
Furthermore, the mean field as well as the small scale field must obey ∇ · B̄ = 0 and ∇ · b = 0. The mean-field induction equation (5) contains an additional source term, E = ⟨u × b⟩ (with ⟨·⟩ denoting the averaging operation), called the mean electromotive force (EMF). In the kinematic approximation, E is linear and homogeneous in B̄ and, under the assumption that the variations of B̄ around a given point are small, E can be represented by the first terms of a Taylor expansion:
E_i = α_ij B̄_j + β_ijk ∂B̄_j/∂x_k.    (7)
Here α_ij and β_ijk are tensors of second and third rank, respectively. The diagonal components α_ii give rise to an electromotive force parallel to the mean magnetic field and therefore may be responsible for dynamo action. For isotropic turbulence, the contribution proportional to the mean-field gradients simplifies to βε_ijk ∂B̄_j/∂x_k (with the Levi-Civita tensor ε_ijk), so that this term behaves similarly to a diffusive contribution.
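Before turning to the anisotropic form of E used below, here is a minimal sketch of how the growth-rate κ of the ansatz (2) can be extracted from simulation output: a log-linear least-squares fit to the field amplitude. The time series here is synthetic and stands in for actual simulation data.

```python
import numpy as np

# Estimate the growth-rate kappa assuming |B| ~ exp(kappa*t) in the
# kinematic regime (equation (2)).
t = np.linspace(0.0, 10.0, 200)
kappa_true = 0.12
B_amp = 1e-6 * np.exp(kappa_true * t) * (1.0 + 0.01 * np.sin(8.0 * t))

# Log-linear least-squares fit; the slope is the growth-rate.
kappa_fit = np.polyfit(t, np.log(B_amp), 1)[0]
print(f"fitted growth-rate kappa = {kappa_fit:.4f}")  # dynamo solution if > 0
```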
However, in our setup we have a strong anisotropy between vertical and horizontal coordinates, so that we refer to another expression for the electromotive force, equation (8), which is based on elementary symmetry properties of flow and field (Krause & Rädler 1980). Here, a mean flow is assumed along the vertical direction, which is labeled by ẑ in a Cartesian system. The subscript ∥ denotes quantities that are parallel to this vertical direction, whereas the subscript ⊥ denotes quantities that are oriented in the horizontal plane (xy-plane). In equation (8), α_⊥ and α_∥ give rise to a current parallel to the mean magnetic field and, hence, can be responsible for dynamo action. These coefficients correspond to the diagonal elements of the α-tensor, and anisotropic effects arising from properties of the small scale velocity field result in different contributions from the horizontal part α_⊥ (which generates a current in the xy-plane) and the vertical part α_∥ (which generates a current along the z-axis). In the same way, β_⊥ and β_∥ can be interpreted as anisotropic contributions to the magnetic diffusivity. The coefficient γ is related to the antisymmetric part of the α-tensor and describes an additional advection of the mean field in the direction of the mean flow. The remaining coefficients β̃ and δ_i are related to the gradient tensor of the magnetic field and have no simple analogy. A more detailed derivation of equation (8) and a discussion of the mean-field coefficients α_⊥, α_∥, β_⊥, β_∥, β̃ and δ_i are given in the textbook of Krause & Rädler (1980). In the following, we only consider flow fields that do not depend on z and that are periodic in the xy-plane. All mean quantities are defined as horizontal averages, i.e., they do not depend on x or y. Consequently, most of the coefficients and all terms proportional to mean-field gradients in x and y vanish, so that (8) can be significantly simplified to the reduced form (9). Note that, due to the constant velocity along z and the vanishing horizontal derivatives of the mean field, all contributions labeled with ∥ can be dropped and only two terms ∝ ∂B̄/∂z survive. Furthermore, the effects corresponding to β_⊥ and β̃ cannot be distinguished any more and are subsumed into one common coefficient β = β_⊥ + β̃ (Rädler & Brandenburg 2003). The tensor coefficients appearing in (7) can be related to the more descriptive notation used in (9), giving the relations (10). These relations reflect the horizontal isotropy in our models and allow a simplification of the problem, since only four coefficients must be determined in order to establish a consistent mean-field model.
Testfield method
The testfield method developed in Schrinner et al. (2005) provides a powerful tool to compute the coefficients α_ij and β_ijk from different realizations of the electromotive force that are obtained from externally applied, linearly independent mean fields. Here, we restrict ourselves to the kinematic case with a stationary velocity field, although the method can also be applied to fully non-linear magnetohydrodynamic systems where U is computed by solving the Navier-Stokes equation. The fluctuating velocity field is computed from the full velocity field U by
u = U − Ū,    (11)
with Ū being the horizontal average of U. The small scale magnetic field b is computed numerically by solving equation (6) with B̄ defined as an external steady field, the so-called testfield.
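The following compact sketch illustrates the averaging operations of this and the next paragraph: the split u = U − Ū of equation (11), the horizontally averaged EMF correlation, and the projection onto the cos(z)/sin(z) testfield structure that yields α_yy and β_yyz. The fields are random placeholders standing in for actual simulation data, so the extracted numbers are meaningless; only the pipeline is shown.

```python
import numpy as np

nx = ny = nz = 32
z = np.linspace(0.0, 2.0 * np.pi, nz, endpoint=False)
rng = np.random.default_rng(1)
U = rng.normal(size=(3, nx, ny, nz))   # full velocity field (placeholder)
b = rng.normal(size=(3, nx, ny, nz))   # small scale field from eq. (6) (placeholder)

U_mean = U.mean(axis=(1, 2), keepdims=True)   # horizontal average of U
u = U - U_mean                                # fluctuating velocity, eq. (11)

emf = np.cross(u, b, axis=0).mean(axis=(1, 2))  # E(z) = <u x b>_h, horizontal average

# With the single testfield B1 = cos(z) y_hat, equation (7) gives
# E_y(z) = alpha_yy cos(z) - beta_yyz sin(z); Fourier projection yields both.
Ey = emf[1]
alpha_yy = 2.0 * np.mean(Ey * np.cos(z))
beta_yyz = -2.0 * np.mean(Ey * np.sin(z))
print(alpha_yy, beta_yyz)
```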
Then the electromotive force is computed directly by correlating the small scale flow with the small scale field and subsequently performing a horizontal averaging:
E = ⟨u × b⟩.    (12)
The combination of different realizations of E obtained from different, linearly independent testfields with (7) yields a linear system of equations whose solution gives the desired mean-field coefficients. In principle, only minor preconditions for the testfields must be considered. In order to calculate mean-field coefficients that are consistent with the structure of the large scale field obtained from the fully resolved model, it is necessary to consider the scale dependence of the mean-field coefficients (Brandenburg et al. 2008). Around the onset of dynamo action the vertical dependence of the large scale field in our systems is ∝ cos(z) (or ∝ sin(z)), which is exactly the vertical structure that we impose on the testfields. Because of the horizontal isotropy of our system, we define two testfields oriented in the horizontal plane parallel to ŷ:
B̄¹ = cos(z) ŷ,    B̄² = sin(z) ŷ.    (13, 14)
With this definition we obtain four equations with four unknown mean-field coefficients α_yy, α_xy, β_yyz, β_xyz, which read:
α_yy = E¹_y cos(z) + E²_y sin(z),    β_yyz = E²_y cos(z) − E¹_y sin(z),    (15)
α_xy = E¹_x cos(z) + E²_x sin(z),    β_xyz = E²_x cos(z) − E¹_x sin(z).    (16)
Here E^(1,2)_(x,y) denote the horizontal components of the electromotive force obtained with the testfields B̄¹ and B̄². The remaining coefficients α_xx, α_yx, β_yxz and β_xxz can, in principle, be calculated with similar equations involving E obtained from B̄³ = cos(z) x̂ and B̄⁴ = sin(z) x̂, which requires the numerical solution of two further partial differential equations for the corresponding b. For test purposes we have additionally performed these calculations and verified that the isotropy conditions given by (10) are met in the simulations.
Velocity field
In the present study we examine two different flow models. In model A we assume a flow consisting of various helical eddies that are separated by walls (left panel in figure 3). This flow definition resembles the Roberts flow (Roberts 1970, Roberts 1972), but comprises a separating region between each cell, quite similar to the model examined by Sarkar & Tilgner (2005). In contrast to the Roberts flow, the flow in our model has the same orientation (left-handed) in every cell. However, in combination with a uniform vertical flow, each cell provides the same helicity, as is also the case for the Roberts flow. We further allow for a variation of the relative permeability, assuming that magnetic material is used to guide the flow along the vertical direction. Following the idea of Pierson (1975), the helical flow within one cell represents the mean flow within one assembly of nuclear fuel rods, ignoring the even smaller scale flow around individual rods. The flow with amplitude u_0 in one individual cell of size D is given by expression (17), where x_0 and y_0 represent the coordinates of the cell center in the horizontal plane. The total velocity field is a superposition of N cells (each using (17)) which additionally takes the wall regions into account by setting v_x = v_y = v_z = 0 there. The thickness of the walls is defined as d = 2/(6√N), with N the number of helical cells. This definition of the wall thickness ensures that the ratio of cell size to wall thickness is constant when increasing the number of cells. The specific value is chosen so that the number of grid points representing a wall is sufficient to numerically resolve the effects of the permeability transition at the fluid-wall interface.
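Since expression (17) is not reproduced in the text above, the following sketch assembles a model-A-like velocity field under the assumption of a Roberts-flow-like profile within each cell. The wall thickness d = 2/(6√N) and the vanishing flow inside the walls follow the definition just given, while the in-cell profile, the horizontal extent and the amplitude are assumptions.

```python
import numpy as np

# Model-A-like field: N helical cells of size D on a periodic grid,
# separated by walls of thickness d = 2/(6*sqrt(N)) where the flow vanishes.
N, n = 16, 128                       # number of cells, grid points per side
Lx = 2.0                             # horizontal extent (assumed)
D = Lx / np.sqrt(N)                  # size of a single cell
d = 2.0 / (6.0 * np.sqrt(N))         # wall thickness as defined in the text
u0 = 1.0                             # flow amplitude (assumed)

x = np.linspace(0.0, Lx, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
xi, et = X % D, Y % D                # coordinates within each cell

# Identical eddy in every cell (same orientation), plus a helical vertical
# part and a uniform mean vertical flow -- an assumed profile, not eq. (17).
vx =  u0 * np.sin(np.pi * xi / D) * np.cos(np.pi * et / D)
vy = -u0 * np.cos(np.pi * xi / D) * np.sin(np.pi * et / D)
vz =  u0 * np.sin(np.pi * xi / D) * np.sin(np.pi * et / D) + u0

# Zero the flow inside the walls, i.e. within d/2 of any cell boundary.
wall = (np.minimum(xi, D - xi) < d / 2.0) | (np.minimum(et, D - et) < d / 2.0)
for v in (vx, vy, vz):
    v[wall] = 0.0
print(f"D = {D:.3f}, d = {d:.3f}, wall fraction = {wall.mean():.2f}")
```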
We used four different realizations with N = 4, 16, 64 and 256 cells arranged in a square pattern; however, most simulations have been performed using the setup shown in the left panel of figure 3, where 16 cells in a horizontal plane are displayed. The second approach (model B, see right panel in figure 3) uses a more detailed picture of the flow conditions within a single assembly. The model is based on the so-called spin generator flow that has been utilized for the simulation of the Karlsruhe dynamo (Rädler et al. 2002a, Rädler et al. 2002b, Rädler & Brandenburg 2003). A detailed numerical model of the flow in a hexagonal assembly consisting of seven fuel rods, including a wire wrap surrounding each rod, can be found in Gajapathy et al. (2007), where the Navier-Stokes equation is solved numerically and turbulence effects are included in terms of a standard k-ε model. Here, we use a simplified flow field roughly in accordance with the model of Rädler et al. (2002a, 2002b) by assuming a circular flow around a central rod superimposed with a constant vertical flow. The flow around a single rod is defined by expression (18), where (x_0, y_0) is the center of a rod, R is the radius of the rod and D is the distance between two adjacent rods (see right panel of figure 3). We have performed simulations with 9 and 25 rods regularly distributed in the horizontal plane. Note that all helical flow cells in both models are left-handed, so that the helicity provided by each cell has the same sign. Furthermore, the global dimensions of the computational domain remain the same, independent of the number of cells, so that an increasing number of cells or rods goes along with a smaller scale of the fluctuating flow component and, thus, an increased separation between large scale and small scale flow. Horizontal isotropy is preserved by applying a quadratic configuration (identical linear extensions and identical resolution) and periodic boundaries. The vertical extent of the computational domain is z ∈ [0; 2π], with periodic boundaries as well. The Cartesian geometry is different from the hexagonal pattern of realistic assemblies. However, we believe that for the development of the methodology the numerically much easier to handle Cartesian geometry is more advantageous, without exhibiting excessive deviations from the realistic case. In order to characterize the amplitude of the flow we define a local magnetic Reynolds number that is based on the flow amplitude u_0, the "normal" magnetic diffusivity η = (µ_0 σ)^(-1) and the size D of a single eddy (model A) or the distance between two adjacent rods (model B):
Rm_loc = u_0 D / η.    (19)
Permeability distribution
The standard mean-field approach developed in Krause & Rädler (1980) is not intended to consider a spatially varying ("fluctuating") permeability, which can easily be seen by taking the case Rm = 0. Then the EMF must vanish, E = 0 (since u = 0), and thus all mean-field coefficients vanish independently of the actual distribution of µ_r. Our modification starts with the induction equation with a non-uniform permeability distribution µ_r = µ_r(r), which reads
∂B/∂t = ∇ × (U × B) − ∇ × [η ∇ × (B/µ_r)].    (20)
Using standard vector relations and ∇ · B = 0 we rewrite (20) in the form
∂B/∂t = ∇ × (U × B + u_µ × B) − ∇ × (η̃ ∇ × B),    (21)
with η̃ = η̃(r) = η/µ_r(r). The modified induction equation (21) exhibits an additional, not necessarily divergence-free, velocity-like term, sometimes called paramagnetic pumping (Dobler et al. 2003):
u_µ = η̃ ∇ ln µ_r.    (22)
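A minimal one-dimensional sketch of the pumping term (22): starting from a smoothed permeability step between fluid (µ_r = 1) and wall material, the pumping velocity u_µ = η̃ ∇ln µ_r is evaluated by finite differences, mirroring the finite-difference procedure described below. The tanh-shaped transition and all values are illustrative (the text itself uses a sinusoidal smoothing).

```python
import numpy as np

eta = 1.0                       # "normal" magnetic diffusivity (set to unity)
n = 256
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Smoothed permeability step: mu_r = 1 in the fluid, mu_r = 10 in the wall,
# with a transition of fixed width independent of the grid resolution.
w = 0.02                        # transition length scale
mu_r = 1.0 + 9.0 * 0.5 * (1.0 + np.tanh((x - 0.5) / w))

eta_tilde = eta / mu_r                             # eta_tilde(r) = eta / mu_r(r)
u_mu = eta_tilde * np.gradient(np.log(mu_r), dx)   # pumping velocity, eq. (22)
print(f"max |u_mu| = {np.abs(u_mu).max():.3f}")
```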
We define a modified velocity field U′ = U + u_µ, which is now the velocity field that has to be split up into mean part and fluctuating part when applied in the testfield method. Note that the introduction of the pumping velocity u_µ provides a non-vanishing fluctuating velocity contribution even in case of a vanishing fluid flow (i.e. when u_0 = 0). In our model, we first define a permeability distribution, from which we compute the corresponding pumping velocity u_µ using a simple finite difference discretisation. In the fluid regions (where u_z ≠ 0), the permeability distribution takes the value µ_r = 1, and µ_r is set to a fixed value larger than one in the remaining regions (where u_z = 0, indicated by the grey shaded areas in figure 3). In order to avoid the discontinuity at the fluid-solid body transition, which would lead to an amplitude of the pumping velocity that depends on the grid resolution, we smoothed the discontinuity at the fluid-solid body interface by assuming a sinusoidal distribution with a fixed length scale that is independent of the grid resolution.
Results
In this section, we will apply the testfield method to the two geometric models A and B, first without and then with consideration of magnetic materials. In each case we will validate the correspondence of the dynamo action of the fully resolved and the derived mean-field models.
Homogeneous case (µ_r = 1)
The typical structure of the magnetic field just above the dynamo threshold is shown in figure 4. The field geometry is remarkably similar for both models and essentially describes a large scale helical pattern dominated by the horizontal components. The small scale field is visible in terms of little undulations on top of the large scale structure.
α- and β-effect
We start with a uniform permeability distribution with µ_r = 1 for both the fluid and the solid internals. The resulting α-effect is qualitatively in accordance with the results of Rädler et al. (2002a, 2002b) for α in the case of an ideal Roberts flow. Similarly, we write for the α-coefficient
α_⊥ = K (u_0² D/η) Φ(Rm_loc),    (23)
with a constant K (K ≈ 0.026 for model A and K ≈ 0.066 for model B) and a non-analytic function Φ that only depends on Rm_loc. Figure 5 shows the behavior of Φ versus the flow amplitude Rm_loc for three different realizations of model A (with 4, 16 and 64 helical cells, left panel) and for two realizations of model B (9 and 25 rods, right panel). Note that the normalization factor K is universal for each model and does not depend on the cell size D. Qualitatively, the behavior of the function Φ is similar for both flow models. Φ approaches its maximum value for Rm_loc → 0 and decreases monotonically with increasing Rm_loc, so that, presumably, Φ will asymptotically approach zero for very large flow amplitudes. Here, we are limited to Rm_loc ≲ 20 for model A and to Rm_loc ≲ 14 for model B, because above these values the occurrence of small scale dynamo action with an exponentially growing small scale field prevents a reliable estimation of the mean-field coefficients. The onset of small scale dynamo action occurs at smaller Rm_loc for a larger number of cells, so that the models with the largest N determine the largest achievable Rm_loc. Nevertheless, both models are already highly overcritical at this Rm_loc, so we are still able to discuss the behavior around the onset of dynamo action (which is of main interest in the present context). Regarding the coefficient β, we find significant differences between the two models (figure 6).
To our knowledge, no analytic expressions for β beyond the second order correlation approximation (SOCA), in which u × b − ⟨u × b⟩ = 0 is assumed, are available in the literature (see, e.g., Tilgner (2001) for an expression for β using SOCA). The restricted validity of SOCA is shown in Brandenburg et al. (2008), where mean-field coefficients obtained from the testfield method with and without SOCA are compared. Since in our model the preconditions for SOCA are not met, we refrain from a similar analysis. Surprisingly, we do not observe any dependency on the cell size for model A and only a weak dependence in case of model B. Thus, the β-effect mainly depends on Rm_loc and is independent of the characteristic wave number of the small scale flow (at least within the rather restricted range of flow scales that has been examined for this study). The most striking property of the β-coefficient in model A is the transition to negative values around Rm_loc ≈ 8 *. In general, the β-effect is associated with an enhancement of the magnetic diffusivity due to the small scale motion and hence should be positive. The occurrence of a negative β-effect can be explained by the presence of two contributions related to field gradients in the z-direction that cannot be separated from each other in our configuration: the anisotropic part of the magnetic diffusivity β_⊥ (which is assumed to be positive) and the term related to the symmetric part of the field gradient tensor described by β̃ in equation (9). For the second contribution no restrictions on the sign are known, so that the sum of both terms can become negative. A negative β may be helpful for dynamo action, but in our models the sum of β and the "normal" diffusivity η (which is set to unity in all runs) always remains positive (see the inset in the left panel of figure 6), so that the consideration of β results "only" in a reduction of the overall diffusivity η_tot = η + β (if we neglect that β is related to an anisotropic contribution).
* A similar behavior has already been observed in certain parameter regimes for the Roberts flow examined in Brandenburg et al. (2008).
The relative amplitude of β is much larger in model B, with β exceeding η by up to a factor of 10, and we do not find a negative β-effect within the achievable parameter regime. However, a local maximum of β exists around Rm_loc ≈ 13, and it cannot be ruled out that the further development of β follows a similar path as in model A but at larger values of Rm_loc and β. For small Rm_loc the behavior of β is roughly proportional to Rm_loc² (see the black dashed curve in the right panel of figure 6), which is in accordance with measurements of the β-effect in the Perm experiment (Frick et al. 2010, Noskov et al. 2012).
Comparison between fully resolved models and mean-field models
In the following, the α- and β-coefficients presented in figures 5 and 6 will be used as input for mean-field dynamo simulations. The corresponding equation includes the mean flow Ū obtained from horizontal averaging of expressions (17) or (18), the EMF given by equation (9), and a diffusive term ∝ ∇ × B̄ that involves an effective (mean) diffusivity η̄. The mean diffusivity η̄ is computed by dividing the "normal" (uniform) diffusivity η by the horizontal average of µ_r(r):
η̄ = η [ (1/L²) ∫∫ µ_r(x, y) dx dy ]^(-1),    (24)
with L the horizontal width of the computational domain. The resulting mean-field induction equation (25) additionally includes the terms related to γ and δ_2, which mostly have no influence on the growth-rates.
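To see how α and β enter the mean-field growth-rate, the following sketch evaluates the simplified dispersion relation κ(k) = |α|k − (η + β)k² for force-free (Beltrami) modes with vertical wavenumber k, which follows from equation (5) with E = αB̄ − β∇×B̄ in the isotropic case. It neglects the γ and δ_2 terms of equation (9) as well as the mean flow, so it is only a rough stand-in for the full mean-field model above, with illustrative coefficient values.

```python
import numpy as np

# Simplified alpha^2-dynamo growth-rate for Beltrami modes with
# nabla x B = k B: kappa(k) = |alpha|*k - (eta + beta)*k^2.
def growth_rate(alpha, beta, eta, k):
    return np.abs(alpha) * k - (eta + beta) * k**2

eta, beta = 1.0, -0.3      # a negative beta reduces the total diffusivity
k = 1.0                    # leading vertical wavenumber in the simulations
for alpha in (0.5, 0.8, 1.2):
    print(f"alpha={alpha}: kappa={growth_rate(alpha, beta, eta, k):+.2f}")
# Onset of dynamo action (kappa = 0) requires alpha_crit = (eta + beta)*k.
```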
For γ, neglecting its influence is justified for all runs, whereas for δ_2 we assume a beneficial impact on dynamo action in model B in case of large µ_r and large Rm_loc (see below). Figure 7 shows the growth-rates obtained from the fully resolved models (FRM) that have been used to compute the mean-field coefficients in the previous section (solid curves) in comparison with the growth-rates obtained from the mean-field models (MFM) at various Rm_loc (dashed curves and stars). We obtain quite a good agreement between FRM and MFM if the system is not strongly overcritical.
(Figure 7 caption: growth-rates for the realizations with 16 cells and 25 rods, respectively. The solid curves denote the growth-rates from the fully resolved models (FRM) and the dashed curves denote the growth-rates for mean-field models (MFM) that have been computed using the mean-field coefficients obtained for the specific Rm_loc marked with the stars.)
The agreement becomes better for an increasing number of helical eddies, i.e., for an increasing scale separation, which provides a better fulfilment of the prerequisites for applying the mean-field theory. The rather large deviations in the strongly overcritical regime can be explained by a transition of the vertical wavenumber of the leading eigenmode from n = 1 to n = 2. The higher wavenumber is not captured by the particular vertical wavenumber of the applied testfields, which are ∝ cos(z) and ∝ sin(z). In principle this issue could be addressed by computing mean-field coefficients for testfields ∝ cos(nz) and ∝ sin(nz) with n = 2, 3, 4, ... and including these contributions in the mean-field models (see, e.g., Brandenburg et al. 2008). The growth-rates presented in figure 7 show that a reduction of the scale of a single helical cell (or an increase of the number of helical cells) improves the dynamo properties of the system. The increase in growth-rate with decreasing D follows a typical scaling law, which becomes apparent from the left panel of figure 8, where the growth-rates are plotted against Rm_loc divided by √D. The scaling is almost perfect for small magnetic Reynolds numbers, and convergence arises when changing the flow pattern from 64 to 256 cells (compare the green and orange curves in the left panel of figure 8). A similar scaling is obtained for the critical magnetic Reynolds number Rm_loc^crit that is required for the onset of dynamo action. The behavior of Rm_loc^crit for decreasing cell size D can be derived by assuming that the onset of dynamo action is governed by some global magnetic Reynolds number. This quantity may be defined on the basis of an effective length scale that is given by the linear number of cells (which in our quadratic configuration is equal to √N) multiplied by the typical scale of a single cell, D. Then the onset of mean-field dynamo action is determined by Rm_glob^crit ∼ √N D α_crit/η. For a large number of cells (corresponding to a small D) we have Rm_loc ≪ 1, so that, using equation (23) with Φ → 1, we can write √N D α_crit/η = √N K (Rm_loc^crit)². Given that we used √N D = const, this immediately yields Rm_loc^crit ∼ √D, which is indeed confirmed by our results (right panel in figure 8). A more detailed analysis of the critical magnetic Reynolds number for a mean-field model of the Roberts flow, including the dependence on the vertical extension, can be found in Tilgner (2007).
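The scaling argument just given can be checked numerically: the short sketch below holds √N·D fixed, imposes a constant global onset value √N K (Rm_loc^crit)² = Rm_glob^crit, and verifies that the resulting Rm_loc^crit/√D stays constant across realizations. K is the model A value quoted above, while the global critical value is an assumed placeholder.

```python
import numpy as np

# Scaling check: Rm_crit ~ sqrt(D) when sqrt(N)*D = const and Phi -> 1.
K = 0.026                    # model A constant quoted in the text
Rm_glob_crit = 2.0           # assumed critical global Reynolds number
side = 2.0                   # sqrt(N) * D, the fixed horizontal extent

for N in (4, 16, 64, 256):
    D = side / np.sqrt(N)
    Rm_crit = np.sqrt(Rm_glob_crit / (np.sqrt(N) * K))
    print(f"N={N:4d}: D={D:.3f}, Rm_crit={Rm_crit:.2f}, "
          f"Rm_crit/sqrt(D)={Rm_crit / np.sqrt(D):.2f}")   # constant ratio
```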
Walls and rods with µ_r > 1
In the following, we only examine systems with 16 eddies (model A) and 9 rods (model B), because the consideration of a permeability distribution with µ_r > 1 extends the necessary simulation time due to the decreased effective diffusivity, so that we are limited to smaller systems with lower grid resolution. Not surprisingly, the results become more complex when µ_r > 1. Figures 9a and 9b show the behavior of α_⊥ versus µ_r for different values of Rm_loc. Here we refrain from any scaling for α_⊥ in order to carve out the direct influence of Rm_loc and/or µ_r on α_⊥. For a fixed µ_r, we always find that α_⊥ grows with increasing Rm_loc. However, we find significant differences between the two flow models regarding the dependence on the permeability. For model A we observe a significant suppression of α_⊥ for small permeabilities (say µ_r < 10), followed by a slow recovery for further increasing µ_r. In contrast, for model B we see a moderate increase of α_⊥ for small µ_r, followed by a saturation regime for µ_r ≳ 5 in which α_⊥ becomes largely independent of µ_r.
(Figure 9 caption: mean-field coefficients and growth-rates versus µ_r for various Rm_loc (left: model A, right: model B). Top row: α_⊥ versus µ_r; second row: β versus µ_r; third row: δ_2 versus µ_r; fourth row: growth-rate versus µ_r and comparison between FRM (solid curves) and MFM (dashed curves and stars). The dotted orange curve in panel (h) shows the MFM growth-rates without the δ_2 term.)
For µ_r only slightly above 1 in model A, we find a sharp maximum of α_⊥ around µ_r ≈ 1.5. This maximum is retrieved again in the corresponding growth-rates (as the β-effect has no equivalent peak or drop), but the influence on the critical magnetic Reynolds numbers remains small (see below). A significant difference between the two models is also found in the behavior of the β-effect (figures 9c and 9d). For model A, we see an abrupt transition to negative values between µ_r = 1 and µ_r ≈ 5, and β remains nearly constant (β ≈ −0.6) for µ_r ≳ 5. In model B the behavior of β is surprisingly simple and monotonic: β just increases linearly with increasing µ_r, with the slope increasing according to Rm_loc. In particular, we do not find any indications for a transition to negative values of β with this flow configuration. In figures 9e and 9f we additionally present the behavior of the coefficient δ_2. This coefficient does not play any role for model A, but δ_2 becomes large in model B for large Rm_loc and µ_r. In this parameter regime, we see an α-effect which is independent of µ_r whereas β is linearly increasing. This would be inconsistent with the growth-rates, which are also nearly independent of µ_r, so an additional term is required that compensates for the losses from the β-effect. The only possibility in our models stems from the effects described by δ_2. Indeed, this is confirmed by comparative mean-field models without the term ∝ δ_2, in which we find a decreasing growth-rate in the limit of large Rm_loc and large µ_r (see the dotted orange curve in figure 9h). Regarding the behavior of the growth-rates obtained with all relevant mean-field coefficients, we find in general a good agreement between FRM and MFM (solid and dashed curves in figures 9g and 9h). However, we see some increasing deviations at larger Rm_loc when µ_r ≳ 10. In that parameter regime the growth-rates obtained from the MFM are systematically smaller than the growth-rates obtained from the FRM.
The behavior of the growth-rates is not monotonic for model A, whereas for model B we find an enhancement of the induction action at low µ_r, while the growth-rates become independent of µ_r for µ_r ≳ 10. Considering the whole range of achievable µ_r in model A, we find a reduction of the critical magnetic Reynolds number from Rm_loc^crit ≈ 4.2 (at µ_r = 1) to Rm_loc^crit ≈ 3.2 (at µ_r = 20). However, in between, dynamo action is significantly suppressed by the presence of ferromagnetic walls (left hand side in figure 10), and Rm_loc^crit can even reach values up to ∼ 30 around µ_r ≈ 5.5. For model B we see a monotonic decrease from Rm_loc^crit ≈ 2 at µ_r = 1 to Rm_loc^crit ≈ 1.5 at µ_r = 20. Regarding the asymptotic behavior for large µ_r in figure 10, it seems unlikely that a further increase of µ_r will significantly reduce the critical magnetic Reynolds number of either flow model.
Conclusions
We have performed numerical simulations of the kinematic induction equation for two different helical flow types including internal walls or rods that may have magnetic properties. In the limit of large permeability, we found a moderate impact of µ_r on dynamo action in terms of a reduction of Rm_loc^crit of roughly 25% compared to the non-magnetic case. This relative reduction of the critical magnetic Reynolds number is nearly the same for both models. In view of the asymptotic behavior of Rm_loc^crit for large µ_r, we do not expect much smaller values for further increasing µ_r. In model A, at the fluid-wall interface (where the field is maximum) the magnetic field is predominantly parallel to the cell walls, so that the permeability is not very important. The situation is less clear for model B, for which one could guess that there is little field in the rods because of flux expulsion from the helical flow, so that the properties of the rods have little effect. Other possibilities for an explanation of the magnetic field behavior rely on the particular topology of the permeability distribution, which in our model B consists of disconnected columns. This might hamper the formation of a large scale field; however, the behavior is not uniform over the whole parameter range, so that more detailed investigations are required to find a convincing explanation for model B. The influence of the magnetic permeability on the critical magnetic Reynolds number is smaller than what could have been guessed from the results of the VKS dynamo experiment. This can be explained by the dominant dynamo mode which, in the present study, can be characterized by the vertical wavenumber. Here, the leading mode has the wavenumber k_z = 1, so that our results should be compared with the behavior of the simplest non-axisymmetric eigenmode in the VKS configuration (the m = 1 mode, which is ∝ cos ϕ). Indeed, Giesecke et al. (2012) found a reduction of 29%, from Rm_crit = 76 to Rm_crit = 54, for µ_r → ∞, which is rather close to the reduction we obtained in our present calculations. However, the two models (VKS and the helical flow models in the present study) are quite different, so that this accordance might be coincidental. Regarding a dynamo mode with k_z = 0 (which corresponds to a field that is uniform in the vertical direction), we do not see such a strong impact on its growth-rate as found for the axisymmetric dynamo mode in the VKS model. Despite the similar reduction of Rm_loc^crit for both models in the limit of large µ_r, we find an entirely different behavior of the corresponding mean-field coefficients.
For model A, the presence of magnetic walls surrounding a single helical flow cell results in a suppression of the α-effect and a transition to a negative β-effect (which remains smaller in magnitude than the "normal" diffusivity). In contrast, in model B, where the helical flow surrounds a magnetic rod, we see a slight enhancement of α and a linear growth of β with increasing µ_r. The development of α and β alone is not sufficient to explain the constant behavior of the growth-rates at large µ_r and Rm_loc, where for increasing µ_r we find a constant growth-rate and a constant α-effect but a linearly growing (positive) β-effect. Hence an additional dynamo-supporting effect must be present to compensate for the increasing losses due to the β-effect. The only possibility within our study is the δ2 effect, which indeed becomes an important contribution in that parameter regime.

Comparing the growth-rates obtained from fully resolved models with those of the corresponding mean-field models, we found good agreement between both approaches for non-magnetic material (µ_r = 1) and for materials with µ_r ≲ 20. The main reasons for the discrepancies at larger µ_r are the difficulty of estimating reliable values for the mean-field coefficients and the occurrence of eigenmodes with larger vertical wavenumber that are not included in our mean-field approach.

Our results can be transferred to large-scale systems in which the flow consists of tens of thousands of helical flow cells that cannot be resolved in a direct numerical simulation. The simplest approach does not need any information on mean-field coefficients and directly uses the scaling found for the critical magnetic Reynolds number, Rm_loc^crit ∝ √D in the limit of small D (which goes along with a large number of helical cells). However, in order to model realistic systems it is necessary to consider non-periodic (insulating) boundary conditions and the flow outside of the core, which essentially describes a large recirculation cell.* Such global models can hardly be treated in direct numerical simulations of the full set of magnetohydrodynamic equations, so it makes sense to model the magnetic induction due to the small-scale helical flow through the corresponding mean-field effects, which in a global model prevail only in a limited region. The main contributions in such mean-field models originate from the α-effect and the β-effect. For non-magnetic internals we have confirmed that the α-effect can be expressed in terms of a "universal" function Φ that allows conclusions about α for larger systems when flow scale and flow amplitude are known. In combination with the β-effect, which is roughly independent of the flow scale and behaves ∝ Rm_loc² for small magnetic Reynolds numbers, this allows the modelling of systems that may consist of tens of thousands of individual helical cells embedded in some large flow structure.

Of course, for any specific sodium fast reactor a reliable estimate of the dynamo effect would require further detailed knowledge, such as the size of the core, the number of fuel rods contained therein, and the total flow rate. In addition, the arrangement of the fuel rods, and thus the flow field, is not as simple as assumed in our idealized model. For example, the fuel rods are packed much more densely within an assembly with a hexagonal shape.

* The consideration of the recirculating flow has been quite important, for example, for the modelling of the Riga dynamo, where the reverse flow ensures that the dynamo instability sets in as an absolute instability.
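A minimal sketch of the "simplest approach" mentioned above: extrapolating the critical local magnetic Reynolds number via the small-D scaling Rm_loc^crit ∝ √D. The definition of D as the ratio of cell size to system size and all reference numbers are assumptions for illustration, not values from this study.

```python
import math

def rm_crit_scaled(rm_crit_ref, d_ref, d_target):
    """Extrapolate Rm_crit,loc using Rm_crit,loc ∝ sqrt(D) at small D."""
    return rm_crit_ref * math.sqrt(d_target / d_ref)

# Placeholder reference case: D = 1/4 with Rm_crit,loc = 2.0.
rm_ref, d_ref = 2.0, 0.25
for cells_per_side in (4, 30, 100):        # hypothetical system sizes
    d = 1.0 / cells_per_side               # D shrinks as cells multiply
    print(cells_per_side, round(rm_crit_scaled(rm_ref, d_ref, d), 3))
```

The point of the scaling is that the per-cell dynamo threshold drops as the cells become small relative to the system, which is what makes dynamo action conceivable in a core containing tens of thousands of helical cells at moderate per-cell flow rates.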
A more detailed model in such a geometry would require a combination of our models A and B, in order to capture the small-scale helical flow within an assembly as well as the walls that separate individual assemblies. Furthermore, it should be noted that the pitch angle, which describes the ratio of vertical to horizontal flow, may have an impact on dynamo action. In the present study this parameter is fixed to unity, assuming equipartition between horizontal and vertical flow, whereas realistic fast reactors are characterized by a dominant vertical flow. Nevertheless, we believe that these details will only result in minor modifications to our findings and are therefore of secondary importance.
A Comparative Study on Ontology Development Methodologies towards Building Semantic Conflicts Detection Ontology for Heterogeneous Web Services

Semantic conflicts detection is considered one of the essential steps that should be carried out effectively in order to pave the way towards establishing semantic interoperability between heterogeneous Web services. To achieve that, ontology forms the backbone of the detection process, and it must be implemented to a high standard of quality. However, choosing a methodology to build this ontology is difficult, since a considerable number of methodologies have emerged to guide the ontology development process. Therefore, this study aims at reviewing and comparing the most accepted and used methodologies, with the aim of choosing an effective methodology, from an ontology engineering perspective, to be recommended and used for building a semantic conflicts detection ontology. Furthermore, it aims to provide useful insights for both theoretical and practical purposes, to help ontology developers choose the right methodology and to advance the state of the art. The comparison was performed against the common ontology development life cycle. The result of this survey and comparison recommends a methodology called METHONTOLOGY for implementing the semantic conflicts detection ontology.

INTRODUCTION

In recent years, numerous technologies have emerged in the era of information technology. One of the most current and significant of these is Web services (Yu et al., 2008), which are rapidly growing (Han and Guo, 2005; Yee, 2007) and widely adopted (Gustavo et al., 2004). Web services make the realization of Service Oriented Architecture (SOA) applications possible (Azmeh et al., 2010; Siew Poh et al., 2006). Accordingly, a huge number of Web services have been provided by different providers for different application domains. In fact, the greatest challenge and obstacle that Web services face is the existence of semantic conflicts between their messages, which prevents establishing semantic interoperability between the messages of Web services. Therefore, as a first and critical step to tackle this problem and to achieve semantic interoperability between the messages of Web services, those conflicts should be detected. To the best of our knowledge, ontology is the backbone of semantic conflicts detection, as it provides the semantics of Web services' messages.
In fact, one of the primary reasons that motivated the development of ontologies was establishing semantic interoperability between various knowledge bases (Hepp, 2008). Thus, ontology has been used to support semantic interoperability between heterogeneous data sources (Zhu and Madnick, 2006) and to facilitate interoperability between different systems (Xexéo et al., 2005). In practice, interoperability includes two aspects: syntactic interoperability and semantic interoperability (Commission, 2009). The former is about ensuring the exact formats and schemas of the data being exchanged, while the latter is about ensuring the meaning of the data being exchanged and ensuring that both communicating parties share the same understanding of the data. This can be accomplished through ontology, as it enhances understanding and interoperability (Ensan and Du, 2013). In the Web service context, semantic interoperability is concerned with ensuring that the sender and receiver Web services communicate meaningfully, which means both Web services share the same interpretation and understanding of the messages being exchanged.

To detect semantic conflicts between heterogeneous Web services, we decided to build an ontology called the Semantic Conflicts Detection Ontology (SCDO), because ontology provides the necessary information to support semantic interoperability (Cheng et al., 2009) and has the ability to solve the semantic interoperability problem (Hajmoosaei and Abdul-Kareem, 2007). To build this ontology while ensuring its quality, the right methodology should be used. Answering a question such as "what is the most suitable methodology that can be used for building SCDO?" is very difficult. Although some work has surveyed some of the existing methodologies, a precise answer to this question is still lacking. Moreover, a close look at the literature shows that there is a lack of comparative studies that compare the current methodologies from an ontology engineering perspective. Therefore, this study attempts to tackle those issues and to provide constructive information to advance this field of research.

ONTOLOGY DEFINITION

Since the early 1990s, ontologies have become an active topic investigated by the research communities in the area of Artificial Intelligence (Fensel et al., 2001; Studer et al., 1998). The term "ontology" was adapted from philosophy (Gruber, 1993) and has been defined and used in different disciplines. One of the earliest ontology definitions was given by Gruber (1993), who defined an ontology as "an explicit specification of a conceptualization". This definition is one of the most widely accepted by the ontology community and also the most quoted (Corcho et al., 2003). A few years later, this definition was modified by Borst (1997), in which an ontology can be thought of as "a formal specification of a shared conceptualization". "Formal specification" refers to the form of the ontology: being a formal specification means the ontology specification is machine readable (Corcho et al., 2003; Studer et al., 1998). "Shared conceptualization" refers to the fact that the knowledge captured by the ontology should be sharable; therefore, it should be accepted by a group and should not be restricted to some individuals (Corcho et al., 2003).
Besides, ontology is thought of as a tool that helps to clarify the semantics of information (Li and Ling, 2004), in which the concepts of information and the relationships between these concepts are formally represented (Alamgir and Mohayidin, 2009). Ontology has a major impact on the information exchange process (Terzi et al., 2003), because it provides the fundamental technology necessary for supporting semantic interoperability (Cheng et al., 2009). In fact, we argue that ontology is the backbone of semantic conflicts detection, which will certainly improve the achievement of semantic interoperability between Web services' messages. This is due to the fact that ontology has shown its ability to interweave human and machine understanding (Della Valle et al., 2005; Fensel, 2003). Furthermore, the necessary information about the semantic interpretation of terms, the representation of terms, and message structures can be derived from the ontology.

In this study, we define ontology as: an engineering artifact describing an explicit and formal representation of a shared conceptualization of a phenomenon.

Ontology and semantic conflicts detection: Ontology has been used as a tool for providing a common, formal understanding of a given domain in order to facilitate overcoming semantic heterogeneities (Yahia et al., 2009), through providing the semantics of information (Li and Ling, 2004). Furthermore, as mentioned previously, ontology has the ability to interweave human and machine understanding (Della Valle et al., 2005; Fensel, 2003) by representing human interpretations of the domain knowledge (Özacar et al., 2011). This ability paves the way towards the (full or partial) automation of the semantic conflicts detection and/or resolution process. Therefore, ontology has been the key player in most approaches that aim at detecting and/or solving semantic conflicts, such as (Hajmoosaei and Abdul-Kareem, 2007; Liu et al., 2007; Sudha and Jinsoo, 2004).

From another perspective, ontology can be implemented to explicitly represent semantic conflicts (Liu et al., 2007), which provides an ideal mechanism to detect semantic conflicts effectively. Nevertheless, forcing Web services' providers to adopt the same representations and interpretations of the same terms, as well as the same message structures, is not practicable. Thus, ontology can provide the common understanding of heterogeneous Web services' messages. Providing this common understanding allows the exchange of messages between communicating Web services to take place in a seamless manner.
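To make the role of the shared ontology concrete, here is a minimal, purely illustrative sketch; it is not the SCDO design (which is left to future work), and all term and concept names are invented. Two Web services map their message terms onto shared ontology concepts, and conflicts are flagged when the same concept is carried by different terms (a naming conflict) or the same term maps onto different concepts (a meaning conflict).

```python
# Hypothetical term-to-concept mappings declared against a shared ontology.
service_a = {"custName": "Customer.name", "amt": "Order.total"}
service_b = {"clientName": "Customer.name", "amt": "Order.taxedTotal"}

def detect_conflicts(a, b):
    conflicts = []
    # Same concept carried by different terms -> naming (synonym) conflict.
    for term_a, concept_a in a.items():
        for term_b, concept_b in b.items():
            if concept_a == concept_b and term_a != term_b:
                conflicts.append(("naming", term_a, term_b, concept_a))
    # Same term mapped to different concepts -> meaning (homonym) conflict.
    for term in a.keys() & b.keys():
        if a[term] != b[term]:
            conflicts.append(("meaning", term, a[term], b[term]))
    return conflicts

for conflict in detect_conflicts(service_a, service_b):
    print(conflict)
# ('naming', 'custName', 'clientName', 'Customer.name')
# ('meaning', 'amt', 'Order.total', 'Order.taxedTotal')
```

Neither conflict is detectable from the message schemas alone; it is the shared concept layer that makes both visible, which is exactly why the ontology is described here as the backbone of the detection process.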
One year later, Fernández-López (1999) surveyed and analyzed the best-known methodologies that had been proposed at the time of writing. The analysis of the reviewed works was performed against IEEE Standard 1074-1995 (IEEE Standard for Developing Software Life Cycle Processes) (IEEE, 1996). The aim of the study in Fernández-López (1999) was to help people choose the relevant methodology for their work and to reveal the maturity of the state of the art.

From another perspective, a comprehensive survey was conducted by Corcho et al. (2003). They analyzed and compared the reviewed methodologies based on three different criteria: project management processes, ontology development oriented processes, and integral processes.

Similarly, Pinto and Martins (2004) conducted a comprehensive study with the aim of revealing how ontologies can be built. They presented only the representative methodologies used for building ontologies from scratch, and compared these methodologies based on the common ontology life cycle activities. In fact, ontology is considered the backbone of the semantic Web (Hu et al., 2005; Mustafa Taye, 2010), which motivated the survey study done in Cristani and Cuel (2005). The authors attempted to capture the common elements involved in each methodology by defining the phases' names, the input to every phase, the phase description, and the output of every phase.

As can be seen from this brief review, the main goal of most of the studies mentioned in this section is to gather most of the reported methodologies in one place, to help and guide people in selecting one of them. However, the work presented in this study is relatively different, since it aims at finding the right methodology that can be used to implement SCDO. Moreover, it provides a useful comparison between the most-known methodologies based on the common ontology life cycle, to help and guide other developers in choosing the right methodology for their ontologies from an ontology engineering perspective, and to advance the state of the art.

ONTOLOGY DEVELOPMENT PROCESS

Research on the ontology development process is active and has resulted in a noticeable number of methodologies. An ontology methodology describes the necessary activities that should be carried out, how to carry out every activity, the order of these activities, and the techniques required to implement and maintain the ontology. In this section, we did not review all the methodologies reported in the literature; rather, we reviewed only the methodologies most accepted and used by researchers. Furthermore, most of the existing methodologies were proposed based on the methodologies reviewed in this study.
Ontology development methodologies: Due to the fact that ontology forms the knowledge representation of any system for its particular domain (Chandrasekaran et al., 1999), a noticeable number of ontologies have been developed and used for different purposes in different disciplines (Brusa et al., 2008). So far, a considerable number of efforts have been made to propose methodologies for ontology development, which is the natural result of the fact that building ontologies is an ongoing research topic (Chandrasekaran et al., 1999). Despite the enormous number of methodologies proposed and reported in the literature, only the most known are considered here.

Uschold and King (1995) proposed a methodology for building ontologies based on four primitive activities: identify the purpose, build the ontology, evaluation, and documentation. The building activity includes three sub-activities: ontology capture, ontology coding, and integrating existing ontologies. The authors did not, however, explain how to carry out the evaluation activity (Lopez et al., 1999).

Similarly, Gruninger and Fox (1995) proposed another methodology called TOVE (TOronto Virtual Enterprise). TOVE proposes six activities, starting from a motivating scenario and proceeding through informal competency questions, terminology, formal competency questions, and axioms, and ending with completeness theorems. In TOVE, building the ontology is based on competency questions (De Nicola et al., 2009). However, some activities, such as knowledge acquisition, documentation, and maintenance, are not explicitly stated in TOVE (Pinto and Martins, 2004).

Another methodology, called METHONTOLOGY, was proposed by Fernández-López et al. (1997) and is considered to be a complete methodology for building ontologies (De Nicola et al., 2009). In METHONTOLOGY, building an ontology from scratch is composed of seven activities: specification, knowledge acquisition, conceptualization, integration, implementation, evaluation, and documentation. This methodology provides clear guidance (Pinto and Martins, 2004), in which the process of carrying out every activity is clearly defined. Furthermore, the ontology life cycle proposed in METHONTOLOGY provides an accurate description of every activity (Brusa et al., 2008). The definition of the ontology development process is based on IEEE Standard 1074-1995 (López et al., 2000).

A methodology for building ontologies in public administration from scratch was proposed by Brusa et al. (2008). The development of this methodology is based on Fernández-López et al. (1997) and Gruninger and Fox (1995). It is composed of three main sub-processes: specification, concretization, and implementation. The order of these sub-processes is very important, in that the output of each sub-process is used as the input for the next one. This methodology considers graphical representation (in the concretization sub-process), which is not explicitly considered in the aforementioned methodologies.

Likewise, De Nicola et al. (2009) proposed another methodology for building ontologies, drawn from the Unified Process (UP). This methodology, called UPON (UP for ONtology), consists of five workflows, each composed of several steps: requirements, analysis, design, implementation, and test. It is worth pointing out that UPON provides a clear and accurate description of each workflow involved in the process of building an ontology based on UP.
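Before turning to the formal comparison below, the coverage statements made explicitly in the review above can be collected into a small machine-readable matrix. This is only a partial reconstruction from the prose (the paper's Table 1 itself is not reproduced here); the mapping of TOVE's steps onto life cycle activities is loose, and None marks activities the text simply does not mention.

```python
ACTIVITIES = ["specification", "conceptualization", "formalization",
              "implementation", "maintenance", "knowledge_acquisition",
              "evaluation", "documentation"]

# True = explicitly covered, False = explicitly noted as missing/weak,
# None = not addressed in the text above.
coverage = {name: dict.fromkeys(ACTIVITIES) for name in
            ("Uschold & King (1995)", "TOVE (1995)", "METHONTOLOGY (1997)")}
coverage["Uschold & King (1995)"].update(
    specification=True, conceptualization=True, implementation=True,
    documentation=True, evaluation=False)  # evaluation named, no guidance
coverage["TOVE (1995)"].update(
    specification=True, formalization=True,  # loosely mapped from its steps
    knowledge_acquisition=False, documentation=False, maintenance=False)
coverage["METHONTOLOGY (1997)"].update(
    specification=True, knowledge_acquisition=True, conceptualization=True,
    implementation=True, evaluation=True, documentation=True)

for method, acts in coverage.items():
    gaps = [a for a, v in acts.items() if v is False]
    print(f"{method}: noted gaps -> {gaps or 'none noted'}")
```

Even this partial matrix anticipates the paper's conclusion: of the early methodologies, only METHONTOLOGY covers evaluation and documentation with explicit guidance.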
Comparison and analysis: This section compares the methodologies analyzed above. The criteria of the comparison are based on the activities involved in the ontology development life cycle. The common ontology development life cycle from an ontology engineering perspective revolves around the following activities (Pinto and Martins, 2004):

• Specification: the process of identifying the purpose for which the ontology will be used and its scope, by knowing the uses and users of the ontology. Identifying the ontology scope is a very important activity, since it ensures that only the necessary data are analyzed (Brusa et al., 2008).
• Conceptualization: the process of structuring the domain knowledge acquired (from the specification activity) into a conceptual model that provides the necessary description of the problem and its solution (Fernández-López et al., 1997). The components of the conceptual model are the concepts and their relationships (Pinto and Martins, 2004).
• Formalization: in this activity, the conceptual model is transformed into a semi-formal or formal model.
• Implementation: the outcome of this activity is the codified ontology. Thus, this activity requires the use of a representation language to formally implement the formal model and produce the intended ontology.
• Maintenance: the process of updating and/or correcting the implemented ontology.
• Knowledge acquisition: the process of acquiring the ontology knowledge. This knowledge can be elicited from different sources such as books, scenarios, competency questions, experts, other ontologies, etc.
• Evaluation: judging the quality of the ontology technically (Pinto and Martins, 2004).
• Documentation: writing the necessary documentation to facilitate the use, reuse, and maintenance of the ontology, as well as to enhance its clarity.

DISCUSSION

Establishing semantic interoperability between heterogeneous Web services is a critical issue, for which semantic conflicts should be detected first. Hence, the main player in detecting semantic conflicts between heterogeneous Web services is the ontology, which provides the necessary semantics of Web services' messages for the detection process. To ensure that the ontology is implemented in a sound manner, an effective methodology should be used. Choosing an effective methodology that has been used and has shown its effectiveness for building ontologies is not an easy task.
According to the comparison between the reviewed methodologies presented in Table 1, METHONTOLOGY seems to be the most suitable methodology for implementing SCDO. Even though UPON is nearly the most recent and is derived from the Unified Process (UP), METHONTOLOGY has proven itself as a typical methodology that has been used for implementing ontologies from different domains. It is worth pointing out that any of the reviewed methodologies could be used to implement SCDO, but without ensuring that the ontology will meet the quality design criteria. For example, TOVE could be used to implement SCDO; however, from the SCDO point of view, this methodology has limitations that would diminish the quality of SCDO. Among these limitations, TOVE has no evaluation activity, and evaluation is very important to ensure the quality of SCDO. Generally, the ontology life cycle activities discussed previously in this study are the minimum essential activities for ensuring the quality of SCDO. Thus, any methodology that fulfills these activities can be recommended for SCDO implementation.

CONCLUSION

One of the critical issues to address when implementing any ontology is the problem of choosing a mature and suitable methodology, since adopting a mature methodology will certainly enhance the quality of the implemented ontology. Therefore, this study surveyed and compared the most known methodologies based on the common ontology development life cycle. This study also aimed to identify the right and mature methodology to be used for building SCDO, and to provide ontology developers with useful insights to facilitate the process of choosing a suitable methodology for their ontologies. In addition, this study analyzed and compared the reviewed methodologies from an ontology engineering perspective. Based on the result of this comparative study, METHONTOLOGY was chosen and recommended for implementing SCDO. Thus, as a future study, we intend to develop SCDO based on METHONTOLOGY.

Table 1: A comparison of the different reviewed methodologies against the ontology development life cycle
The lower bound violation of shear viscosity to entropy ratio due to logarithmic correction in STU model

In this paper, we analyze the effects of thermal fluctuations on a STU black hole. We observe that these thermal fluctuations can affect the stability of a STU black hole, and we analyze their effects on its thermodynamics. Furthermore, in the Jacobson formalism such a modification produces a deformation of the geometry of the STU black hole. Hence, we use the AdS/CFT correspondence to analyze the effect of such a deformation on the dual quark–gluon plasma; in particular, we explicitly analyze the effect of thermal fluctuations on the shear viscosity to entropy ratio of the quark–gluon plasma.

Introduction

The AdS/CFT correspondence relates a supergravity solution in AdS space to the conformal field theory (CFT) on its boundary [1,2]. As the AdS/CFT correspondence relates the AdS geometry to the ground state of a conformal field theory, a deformation of the AdS solution in the bulk will also modify the CFT dual to that AdS solution. In fact, such a deformation results in the excitation of the ground state of the dual CFT. So, a black hole in AdS space corresponds to heating up the system, and this in turn corresponds to exciting the ground state of the CFT. In this paper, we analyze an interesting non-extremal black hole solution, motivated by results obtained using string theory, called the STU black hole solution [3,4]. The STU black hole solution can be considered the holographic dual of the quark–gluon plasma (QGP), and it is possible to study QGP using the STU/CFT correspondence [5]. The QGP is a phase of quantum chromodynamics (QCD) which exists at extremely high temperature and density. There are many important quantities in QGP, such as the shear viscosity, drag force, and jet-quenching, and they can be calculated holographically from a STU black hole [6–8]. So, in this paper, we will first analyze a deformation of the STU black hole geometry. Then we will analyze the resulting modification of the QGP using the STU/CFT correspondence. Specifically, we correct the shear viscosity to entropy ratio. It is conjectured that this ratio has a universal value of 1/(4π) in natural units; in fact this is a lower bound, η/s ≥ 1/(4π).

In order to analyze the deformation of the STU black hole geometry by thermal fluctuations, we need first to understand the relation between the geometry of a black hole and its thermodynamics. The thermodynamics of STU black holes was originally studied in Refs. [16,17]. The area-entropy relation establishes a relation between the geometry of space-time and the thermodynamics of a black hole [18,19]: according to it, the entropy of a black hole scales with the area of its horizon [20–22]. It may be noted that this observation has led to the development of the holographic principle [23,24], and the AdS/CFT correspondence (which has motivated the STU/CFT or STU/QCD correspondence [5–8]) is based on the holographic principle. This is because the holographic principle relates the degrees of freedom in any region of space to the degrees of freedom on the boundary surrounding that region of space.
This relation between the geometry of space-time and thermodynamics is even more evident in the Jacobson formalism, where the Einstein equation is viewed as a thermodynamic relation [25,26]. In fact, the Einstein equation is derived in the Jacobson formalism by requiring the Clausius relation to hold for all local Rindler causal horizons through each space-time point. As the Jacobson formalism establishes a clear relation between the geometry of space-time and thermodynamics, quantum fluctuations in the geometry of space-time will produce thermal fluctuations in the thermodynamics of black holes. Thus, in the Jacobson formalism we expect the thermodynamics of all black holes to be corrected by thermal fluctuations. It has been demonstrated that the area-entropy relation gets modified by thermal fluctuations [27,28]. These corrections to the area-entropy relation have been studied both by analyzing the fluctuations in the energy of the system and by relating the system to a conformal field theory.

Quantum fluctuations become important when the geometry is probed at very small scales, and thermal fluctuations become important when the temperature of the black hole is very large, which corresponds to a very small black hole. So, when a black hole shrinks due to Hawking radiation, the effects of thermal fluctuations cannot be neglected. Thus, when the size of the black hole becomes of the order of the Planck scale, its temperature becomes very large, and the contribution from thermal fluctuations becomes very important.

The effects of thermal fluctuations on a black hole in anti-de Sitter space-time have been studied, and the corrections to the thermodynamics of such a black hole have been obtained [29]; the corrected thermodynamics was then used to analyze the phase transition in that system. The corrections to the thermodynamics of a black Saturn have also been studied, and it was observed that the entropy of both the black hole and the black ring gets corrected by thermal fluctuations [30]. A black Saturn is thermodynamically stable because of the rotation of the black ring. However, it is possible for a charged dilatonic black Saturn to remain stable because of background fields, and the thermodynamics of a charged dilatonic black Saturn has been discussed [31]. The corrections to the thermodynamics of such a charged dilatonic black Saturn have also been analyzed using the relation between this system and a conformal field theory [32]. The corrections to the thermodynamics of a modified Hayward black hole have also been discussed, and it has been demonstrated that the modified Hayward black hole is stable even after thermal fluctuations are taken into account, as long as the event horizon is larger than a certain critical value [33]. It has been demonstrated that for all these systems the correction due to thermal fluctuations is logarithmic. It may be noted that such correction terms have also been obtained from non-perturbative quantum general relativity [34], the Cardy formula [35], the exact partition function for a BTZ black hole [36], and matter fields in black hole backgrounds [37–39]. In fact, even the corrections obtained from string theory are logarithmic [40–47].
All the above studies indicate that the logarithmically corrected thermodynamics of black objects is an important field of study in theoretical physics. So, in this paper, we analyze the effect of such logarithmic corrections for a STU black hole. The logarithmic corrections to the thermodynamics of the STU black hole deform its geometry in the Jacobson formalism, and this directly affects the properties of the QGP via the AdS/CFT correspondence. So, in this paper, we will analyze the effect of such a deformation on the shear viscosity to entropy ratio of the QGP.

This paper is organized as follows. In the next section we review some important thermodynamic properties of the STU black hole. In Sect. 3 we introduce the logarithmic correction, and in Sect. 4 we study its effect on the shear viscosity to entropy ratio. Finally, in Sect. 5 we give our conclusion.

STU black hole

In this section we recall the STU model in five dimensions, including electric charge, and write down the thermodynamic properties that are useful in the context of the AdS/CFT correspondence. The metric for the 5D STU model with three electric charges can be written in the standard form of Refs. [3,4]. Here, R is the radius of the AdS space and is related to the coupling constant as R = 1/g; the coupling constant is also related to the cosmological constant as Λ = −6g². Furthermore, r is the radial coordinate of the black hole, and q_i (i = 1, 2, 3) are the three electric charges of the black hole, corresponding to the three scalar fields. The non-extremality parameter is denoted by μ. The closed universe has k = 1, the flat universe has k = 0, and the open universe has k = −1, with the spatial sections taking the corresponding closed, flat, or open form.

We will only consider the case where the black hole carries a single charge (q₁ = q, q₂ = q₃ = 0), with H₁ = H = 1 + q/r². So, the only free parameter of the model will be q. The temperature and the entropy of this black hole can then be written as in [50], where G is Newton's constant, related to the AdS curvature as G = πR³/(2N²); here N is the number of colors. We should note that a coefficient k² appears in the denominator of Eq. (5) for the cases of open and closed universes; hence Eq. (5) is valid in its present form for all cases k = 0, ±1. It may be noted that r_h is given by the root of f_k = 0.

The black hole horizon is a decreasing function of the black hole charge, as illustrated by Fig. 1 for different k. The size of the black hole will be small for large electric charge; however, for a large value of the black hole charge there is no way to distinguish between the event horizons of a black hole in open, closed, and flat universes. Using Eqs. (4), (5), and (6), the temperature and entropy of the black hole can be expressed in terms of the black hole charge. In Fig. 2a, we demonstrate that the temperature is a decreasing function of charge for small values of q. There is a critical value q_c at which the black hole has a minimum temperature; for q > q_c the black hole temperature increases with q. In Fig. 2b, we observe that increasing the charge of the black hole decreases its entropy. This is also expected from Fig. 1, because the entropy of the black hole is proportional to the radius of the event horizon r_h. The specific heat can be written as C = T(∂S/∂T). In Fig. 3, we plot the specific heat and observe that, for large values of the black hole charge, there is an instability.
Thus, by increasing the charge of the black hole, the black hole becomes smaller and warmer and enters an unstable phase. However, we do not expect such instabilities, and we can remove them: thermal fluctuations are important precisely when the size of the black hole is small. In the next section, we will analyze the effects of the logarithmic correction, due to thermal fluctuations, on the specific heat, and we will observe that such corrections can remove these instabilities.

We can use the expression (8) for the shear viscosity η given in [50] to investigate the famous ratio η/S, where S is the corrected entropy that will be defined in the next section. The conjectured universal relation η/s = 1/(4π) holds for the uncorrected STU model; using Eqs. (5) and (8), we can verify this universal behavior. However, there are examples [48,49] where this ratio deviates below 1/(4π). It should be noted that the calculation of η/s in the STU background was first performed in Refs. [50,51].

Thermal fluctuations

It is possible to analyze the effects of thermal fluctuations on the thermodynamics of black objects [27]; the entropy of any black object gets corrected by a logarithmic term due to these fluctuations. Thus, if we take β_κ⁻¹ = T as a temperature close to equilibrium, and β_0⁻¹ = T_0 as the equilibrium temperature, then the corrected entropy can be written as in [45]. It is also possible to express the second derivative of the entropy in terms of the fluctuations of the energy near equilibrium; the corrected entropy can then be written as in [27,30,54], where s is the original entropy and c is the original specific heat of the system. Almost all approaches to quantum gravity generate such a logarithmic correction; however, the coefficient of the correction term depends on the exact model of quantum gravity used. Thus, this coefficient can be used as a parameter to test different models of quantum gravity, since different approaches would generate different values for the coefficient of the logarithmic term. So, in this paper, we keep the analysis general and introduce a general parameter α as the coefficient of the logarithmic correction term. For α = 1, the usual thermal fluctuations are taken into account, corresponding to a very small black object; for α = 0, thermal fluctuations are ignored, corresponding to a large black object. The dots denote higher-order corrections, which may be considered in future work.

It is also possible to relate the microscopic degrees of freedom of a black hole to a conformal field theory [32]. Thus, using the modular invariance of the partition function of the conformal field theory, the corrected entropy can be written as in [27,32]. It may be noted that, for the charged STU black hole, there is an important difference between the results obtained from Eqs. (11) and (12); however, both are the same at q = 0. So, there is a difference between the corrections generated from a conformal field theory and the corrections generated from the fluctuations in the energy of the system. We thus observe that the effect of thermal fluctuations for STU black holes differs from that for most other black hole solutions, since the corrections from both approaches generate the same effects for all the other black holes that have been analyzed using this formalism [27,29,30,32,33,45].
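Since the explicit equations did not survive extraction, the sketch below uses the generic leading-order form common in this literature, S = S₀ − (α/2) ln(S₀T²), as a stand-in for Eq. (11); this is an assumption about the exact expression, not a quotation of it. The background T(r_h) and S₀(r_h) are a toy Schwarzschild-AdS-like model, invented only to show how a large enough α can flip the sign of the corrected specific heat C = T ∂S/∂T.

```python
import math

def T(rh, L=1.0):                 # toy Schwarzschild-AdS-like temperature
    return (1.0 / rh + 3.0 * rh / L**2) / (4.0 * math.pi)

def S0(rh):                       # toy uncorrected entropy ~ horizon area
    return math.pi * rh**2

def S_corr(rh, alpha):            # assumed generic log-corrected entropy
    return S0(rh) - 0.5 * alpha * math.log(S0(rh) * T(rh)**2)

def C(rh, alpha, h=1e-6):         # specific heat C = T dS/dT, numerically
    dS = S_corr(rh + h, alpha) - S_corr(rh - h, alpha)
    dT = T(rh + h) - T(rh - h)
    return T(rh) * dS / dT

for alpha in (0.0, 1.0, 1.5):     # small-black-hole branch, rh = 0.2
    print(f"alpha={alpha}: C = {C(0.2, alpha):+.4f}")
```

In this toy background the sign of C flips near α ≈ 1.2 at the chosen horizon radius; for the actual one-charged STU expressions the paper finds stability already at α = 1 once q exceeds q_c, but the qualitative mechanism — a sufficiently large logarithmic coefficient turning the corrected specific heat positive — is the same.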
Now, using the logarithmically corrected entropy (11), we can obtain the corrected specific heat as in Eq. (13). In Fig. 4, we can observe the behavior of the corrected specific heat for the one-charged STU black hole in terms of α for the flat universe. For a reason we explain later, we only consider the case k = 0; however, the situation is the same for the open and closed universes. As we saw already, an instability occurs for large black hole charge. We show that there is a critical charge q_c such that for q > q_c the black hole is unstable, while in the presence of a logarithmic correction (α ≠ 0) with an appropriate choice of α above a critical α_c the black hole is stable. For the selected values of the parameters (R = G = μ = 1) we find q_c ≈ 1.16 and α_c ≈ 0.39. The solid line of Fig. 4 shows the black hole specific heat for large electric charge; it is clear that the black hole is unstable for α = 0, as was already clear from Fig. 3. We can see that for an appropriate choice such as α = 1 we have a totally stable black hole. Therefore, we find that the logarithmic correction helps the black hole gain stability at high temperature.

Shear viscosity to entropy ratio

In this section, we study the effect of the logarithmic correction on the shear viscosity to entropy ratio. We consider three different cases corresponding to how the thermal fluctuation effects may act. First of all, we can make a simple assumption: that the thermal fluctuations do not affect the shear viscosity, so that we can obtain the shear viscosity to entropy ratio using the corrected entropy. We use the corrected entropy given by Eqs. (8) and (11) to obtain the corrected shear viscosity to entropy ratio. In the case α = 0, we have η/s = 1/(4π); in the presence of the logarithmic correction, we obtain the corrected ratio η/S of Eq. (14).

From Fig. 5, we can observe the effect of α on the shear viscosity to entropy ratio. We see that the lower bound (1/(4π) ≈ 0.08) is decreased due to the logarithmic correction. It may be noted that, via the AdS/CFT correspondence, these corrections in the bulk correspond to 1/N² corrections in the dual boundary theory. It is clear that α = 0 yields the conjectured universal lower bound (η/s = 1/(4π) ≈ 0.08), while α = 1 yields η/s ≈ 0.01–0.03, which means the universal lower bound is violated. It should be noted that the shear viscosity is typically defined in flat space (k = 0); hence in this section we only consider the case k = 0. As mentioned, the logarithmic correction corresponds to a 1/N² correction; hence the combination (G/R³)α should be a small number proportional to 1/N², and only the small-α region of Fig. 5 is in any way reliable. Hence, we can rewrite Eq. (14) as Eq. (15), in terms of a small positive constant γ. It is clear that the shear viscosity to entropy ratio is a decreasing function of γ; hence the lower bound is violated.

It is possible to obtain another result by using a better and more physical approximation, in which the thermal fluctuations also correct the shear viscosity. In fact, as the corrections to the entropy are 1/N² corrections in the bulk, we expect such corrections to also correct the shear viscosity. It is possible to suggest a corrected value of the shear viscosity, and this corrected value can correct the shear viscosity to entropy ratio.
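Under this first assumption (η untouched, only the entropy corrected), the behavior of the ratio follows directly from the corrected entropy. The sketch again substitutes the assumed generic form S = S₀ − (α/2) ln(S₀T²) for the unrecovered Eq. (14), with invented background values, purely to show the direction of the effect.

```python
import math

def eta_over_S(S0, T, alpha):
    """eta/S with eta = S0/(4*pi) left uncorrected (so eta/S0 = 1/(4*pi))
    and only the entropy receiving the assumed log correction."""
    eta = S0 / (4.0 * math.pi)
    S = S0 - 0.5 * alpha * math.log(S0 * T**2)
    return eta / S

S0, T = 50.0, 0.05                 # made-up background values (S0*T**2 < 1)
for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha}: eta/S = {eta_over_S(S0, T, alpha):.5f}")
print("bound 1/(4*pi) =", round(1.0 / (4.0 * math.pi), 5))
```

Whether the corrected ratio falls below or rises above 1/(4π) evidently depends on the sign of the logarithm, i.e. on the background values; for the STU background, Fig. 5 of the paper finds a decrease, down to η/s ≈ 0.01–0.03 at α = 1.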
In that case, the shear viscosity to entropy ratio takes the corrected form of Eq. (16). This ratio may be an increasing or a decreasing function of α, and may even be constant for a suitable choice of O(α); hence the lower bound for this ratio may hold. However, for other values of O(α) the lower bound may be violated, and the shear viscosity to entropy ratio can even reach zero. The best way to calculate O(α) is the Kubo formula, which relates the shear viscosity to the correlation function of the stress-energy tensor at zero spatial momentum by using the retarded Green function [51].

It is also possible to obtain an expression for the corrected shear viscosity such that the ratio of the corrected viscosity to the corrected entropy still does not violate the conjectured universal minimum bound; this can be used to understand the behavior of the shear viscosity in this limiting case. Thus, we can assume that the universal value η/S = 1/(4π) holds for the corrected case, and from this we obtain the corrected shear viscosity. It may be noted that, as in all gauge theories with an Einstein gravity dual, the lower bound then holds for all values of α. However, studies of the perfect quark–gluon liquid [52] have found an enhanced viscosity to entropy ratio of 5/(8π). It has also been found that higher-curvature corrections in the dual gravitational theory modify this ratio [14]; hence the higher-derivative-corrected STU black hole [53] is an interesting case to investigate under logarithmic corrections. It has also been argued that for certain corrected theories the lower bound is violated; just as in the logarithmically corrected case, we have shown that the lower bound may be violated due to thermal fluctuations.

Conclusion

In this paper, we have analyzed a special case of the STU black hole in five dimensions with an electric charge. We have used the AdS/CFT correspondence to investigate the effect of thermal fluctuations on the properties of the QGP, especially the shear viscosity to entropy ratio. First of all, we used the logarithmically corrected entropy and studied the thermodynamics of the one-charged STU black hole. We found that the logarithmic corrections affect the black hole's stability; for instance, an instability of the black hole at high temperature changes to a stable phase in the presence of a logarithmic correction. We demonstrated that the viscosity to entropy ratio of the QGP dual to the STU background is reduced by thermal fluctuations for an appropriate choice of the correction parameter. Our study was based on three different assumptions. First, we assumed that the shear viscosity is not changed by the logarithmic correction and used only the logarithmic correction of the entropy, finding that a positive value of α yields a violation of the lower bound. In the second assumption, we wrote the general form of the corrected shear viscosity and argued that an appropriate value of α gives a violation of the lower bound. Finally, we assumed the universal value and calculated the corrected shear viscosity.

The corrections to the thermodynamics can be obtained by analyzing the fluctuations in the energy of the system; they can also be analyzed using the relation of the black hole microstates to a conformal field theory. It has been observed that the effects of thermal fluctuations from both of these approaches are the same for all black hole solutions previously analyzed using this formalism [27,29,30,32,33,45].
However, in this paper, it was observed that for a charged STU black hole the effects of thermal fluctuations obtained from energy fluctuations differ from those obtained from conformal field theory, although the two coincide when the charge vanishes. It might be interesting to investigate the reason for this further. It may be noted that a U-duality invariant expression for the area-entropy relation has been obtained for stationary, asymptotically flat, non-extremal STU black holes [55]. It was demonstrated that this expression can be written in terms of the asymptotic charges of such black holes; this involves the scalar charges of the black hole, which can be solved for in terms of the dyonic charges and the mass of the black hole. It might also be possible to express the corrections to the area-entropy relation of such stationary, asymptotically flat, non-extremal STU black holes using asymptotic charges. It would thus be interesting to analyze the thermal fluctuations of such a STU black hole and discuss these corrections to the entropy using asymptotic charges. Testing quantum gravity using black objects [56] like the STU black hole is also an interesting research field.

The extended thermodynamics of a STU black hole has also been studied [57]. This was done by viewing the cosmological constant as a thermodynamic variable of the STU black hole, using a fixed-charge ensemble. The phase structure associated with this black hole was conjectured to be dual to an RG flow on the space of field theories. It was also observed that the phase structure of this system resembles a Van der Waals gas for certain charge configurations; thus, for this system, a family of first-order phase transitions exists, and at a critical temperature these first-order phase transitions end in a second-order phase transition. The holographic entanglement entropy for such charge configurations was also obtained, and it was observed that this entanglement entropy also predicts a transition at the critical temperature. So, the entanglement entropy can be used to analyze the system in an extended phase structure. In this analysis, holographic heat engines dual to STU black holes were also studied. It would be interesting to analyze the effect of thermal fluctuations on such a system: we could then analyze the effects of thermal fluctuations on both the Van der Waals-like behavior and the holographic entanglement entropy of black holes, as well as on the holographic heat engines dual to STU black holes.
Structural design of concrete to EC2 and GB50010-2010: a comparison

This paper mainly compares the differences and similarities in design using the Chinese code GB50010-2010 (modified in 2015) and Eurocode 2. The paper focuses on the comparison of the two design codes in relation to the ultimate limit state (ULS) and serviceability limit state (SLS), as well as durability requirements. The material specifications of both codes are also discussed, in relation to stress-strain curves, strength grades, etc.

Introduction

The Chinese government had invested over $50 billion in Belt and Road projects by March 2017 [1], and 20% of this investment went to engineering sectors [2]. Many countries adopt the Eurocodes as their design standards for international projects [3], in addition to European countries and the UK, which use them for their own national projects. Some Chinese companies working on international projects also use the Eurocodes as a reference. However, due to regional differences, designs to the Eurocodes and to the Chinese codes follow similar but different guidelines. In addition, the Eurocode is considered a relatively advanced design standard and is widely adopted (in Europe and in some countries in Africa and Asia). Hence, the comparison between the two design standards is of interest to practising engineers working on international projects. This paper provides a review of concrete design to the relevant codes in both China and Europe, with a focus on the comparison of the two standards in the concrete structural design parts (EN1992-1-1 & EN1992-2 for the Eurocode and GB50010-2010 for the Chinese standard) [4]. An e-learning package has been developed based on the research outcome; the package summarizes the main differences between the two standards and the background logic.

Background Information of GB50010-2010

The GB50010-2010 is generally designed based on the requirements set out in the "Unified standard for reliability design of engineering structures" (GB 50153) and the "Unified standard for reliability design of building structures" (GB 50068) [4]. The code GB50010-2010 is designed towards two objectives. The first objective is to enforce the economic policies and national techniques for the design of concrete structures in China; the second is to ensure that structures are reasonably designed (satisfying safety requirements first and designing economically second), and hence satisfy the requirements of sustainable development [4–6]. In general, the code can be used to design normal reinforced concrete, plain concrete, and prestressed concrete structures used in civil and industrial buildings [4]; however, special concrete structures and lightweight aggregate concrete structures need extra consideration [4]. The GB50010-2010 includes eleven chapters and ten appendixes, beginning with 1 General Provisions and 2 Terms and Symbols [4]. The detailed design requirements are concentrated in the third to eleventh chapters, and these parts will be quoted in the comparison against the Eurocode.

Background Information of Eurocodes

The Eurocodes consist of ten European standards specifying the structural design rules within the European Union (EU). These standards were developed by the European Committee for Standardisation at the request of the European Commission [7].
Generally, the Eurocodes are designed to pursue several objectives, serving among other things as a means to prove compliance with the requirements for mechanical strength and stability, and safety in case of fire, established by European Union law [8]. By March 2010 the Eurocodes were mandatory for the specification of European public works and were intended to become the de facto standard for the private sector. The Eurocodes are a set of codes, and during the design procedure each code usually needs to be cross-referenced. Table 1 provides the IDs of the different Eurocodes. Within the Eurocodes, EC0, EC1, EC7, and EC8 are considered general codes, which guide all structural design, while the remaining codes are used for structures with specific construction materials [7,9–11].

Comparison of Compressive Characteristic Strength

The Eurocode uses a two-index form to denote concrete strength, such as C40/50, where the first number is the characteristic strength of a concrete cylinder specimen of size 150 mm × 300 mm, and the second is the characteristic strength of a cube specimen of size 150 mm × 150 mm × 150 mm. The formulas within EC2 adopt the cylinder compressive strength for design; the cylinder compressive strength is related to the equivalent cube compressive strength by approximately f_ck,cylinder ≈ 0.8 f_ck,cube. The characteristic strength is defined as the value with a 95% guarantee (5% fractile), which is the same in both codes. In GB50010-2010 the concrete strength is expressed by a single value, such as C50, which represents the nominal compressive strength of 50 MPa of the standard cube specimen of size 150 mm × 150 mm × 150 mm. GB50010-2010 also defines the axial compressive strength of the standard prism specimen of size 150 mm × 150 mm × 300 mm as f_ck [12]. Furthermore, it is worthwhile to note that the definitions of f_ck in the two codes differ, but if the specimens have the same shape, similar compressive strengths would be expected.

Comparison of Compressive Design Strength

The design values of the concrete compressive strength and tensile strength in EC2 are obtained from the corresponding characteristic values as follows:

f_cd = α_cc f_ck / γ_c    (1)

where γ_c is the partial factor for concrete strength, which is 1.5 for persistent and transient design situations and 1.2 for accidental situations; and α_cc is a reduction coefficient accounting for long-term effects and unfavourable effects on the concrete, which should be taken as 0.85 for compression and bending and 1.0 in other conditions. The Chinese standard adopts a similar method to obtain the concrete design strength; however, its value of γ_c is slightly smaller than in the Eurocode. For structural design according to GB50010-2010 this factor equals 1.4, and for bridge and road design according to JTG D62-2004 it equals 1.45 [4,13]. Taking α_cc = 1 and γ_c = 1.5, the design compressive values under the two standards are shown in Table 2. According to the table, the Chinese code gives smaller design values for the same specimen; hence GB 50010-2010 is more conservative when α_cc = 1 and γ_c = 1.5 are adopted in structures designed to EC2. However, if α_cc = 0.85 is adopted, the design compressive values of the two standards are close.

Rebar

The mechanical properties of rebar are also very important in reinforced concrete structures.
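A minimal sketch of the design-strength comparison described above, using the partial factors quoted in the text (γ_c = 1.5 with α_cc ∈ {1.0, 0.85} for EC2; γ_c = 1.4 for GB 50010-2010). Since Table 2 itself is not reproduced, the specimen conversions are assumptions: the common cylinder-to-cube approximation f_ck ≈ 0.8 f_cu for EC2, and a prism-to-cube factor of about 0.67 (the GB 50010 conversion for grades up to C50).

```python
def fcd_ec2(fcu, alpha_cc=1.0, gamma_c=1.5):
    """EC2 Eq. (1): f_cd = alpha_cc * f_ck / gamma_c,
    with f_ck (cylinder) ~ 0.8 * f_cu (150 mm cube) assumed."""
    return alpha_cc * (0.8 * fcu) / gamma_c

def fc_gb(fcu, gamma_c=1.4):
    """GB 50010-2010 analogue, assuming the prism-to-cube conversion
    f_ck ~ 0.67 * f_cu,k used for grades up to C50."""
    return (0.67 * fcu) / gamma_c

fcu = 50.0   # cube strength of a C50-class concrete, MPa
print("EC2 (alpha_cc=1.00):", round(fcd_ec2(fcu), 1))        # 26.7 MPa
print("EC2 (alpha_cc=0.85):", round(fcd_ec2(fcu, 0.85), 1))  # 22.7 MPa
print("GB 50010-2010      :", round(fc_gb(fcu), 1))          # 23.9 MPa
```

This reproduces the pattern stated above: the Chinese code gives the smaller design value against EC2 with α_cc = 1, while the two codes land close together once α_cc = 0.85 is adopted.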
Rebar not only provides tensile and compressive strength to the structure but also enables the structure to satisfy special deformation requirements. The dimensions and types of reinforcement defined in the Chinese codes and the Eurocodes are similar, but the strengths are different. The steel bars defined in the Chinese codes cover low, medium, and high strength levels, while the Eurocode has no low strength level and pays more attention to medium- and high-level reinforcement. In addition, for the same reinforcement level, the values of the elastic modulus and the characteristic yield strength of steel in the Chinese code and the Eurocode are the same, while the partial factor for reinforcement in the European code is 4.17% less than that in the Chinese code [8].

Comparison of Rebar Strength

The rebar classification is specified in GB 50010-2010 for China, and detailed specifications can be found in EN10080-2005 for the Eurocode. The steel varieties and specifications are relatively close for the two codes [14]. One of the most important characteristic parameters of a steel bar is its characteristic yield strength f_yk. EC2 defines the characteristic yield stress f_yk as the characteristic yield load divided by the nominal cross-sectional area of the rebar. In addition, for steel bars without an obvious yield stress, the stress corresponding to 0.2% residual deformation can be used as the yield stress, which is similar to the Chinese code. Figure 1 shows typical stress-strain curves of steel bars, and the nominal steel strengths are presented in Table 3. According to Table 3, GB 50010-2010 includes low, medium, and high strength rebar, while the Eurocode has no low strength rebar; the minimum strength specified in the Eurocode is 400 MPa. In addition, the design values of yield strength in tension and compression for each rebar grade are the same in each code, except for the highest grade in GB 50010-2010 (see Table 3).

Comparison of Durability and Limit State in EC2 and GB50010-2010

Both the Chinese code and the Eurocode adopt the limit state philosophy, with consideration of safety factors on loading and resistance [8]. The partial loading factors of the Eurocode are higher than those adopted in GB50010-2010 [8]. For the same concrete strength grade, the characteristic strength, elastic modulus, and design strength in the Eurocode are higher than in the Chinese standard [8]. This section presents the durability and limit state design of the two codes.

Durability and Concrete Cover

The regulation of structural reinforcement and expansion joints in the Chinese code contains more detail. Comparing the load-induced crack control methods for reinforced concrete members in the two codes, the Chinese code adopts crack-width checking to control cracks, while the Eurocode pays more attention to crack control overall, including calculating the minimum steel area and restricting the stresses imposed on the concrete and the steel bars. The crack width calculation methods of the two codes are similar [8].

Environment Code

Both codes require the design of concrete structures to consider the environmental conditions in order to keep the structure durable and economical. Based on EC2 and EN206-1, the environmental conditions are classified by exposure class; detailed charts are given in EN206-1 and EC2. The main environments are divided into six categories: 1. no corrosion or erosion risk; 2. corrosion induced by carbonation; 3. chloride attack other than from seawater;
chloride-induced corrosion from seawater; (5) freeze-thaw attack; (6) chemical attack. The concrete exposure environments are divided into seven categories in GB 50010-2010, but without a specific description of each class. Comparing the environmental categories of the two codes, the classification of exposure environments for concrete structures in EC2 is more detailed than in the Chinese code.

Concrete Cover

The concrete cover requirement in the Eurocode is based on the structural exposure grade and the environmental category, while in the Chinese code it is based on the environmental category and the structural form. EC2 defines the thickness of the concrete cover as the distance from the concrete surface to the nearest steel bar (including the associated links, distribution rebars, etc.). The nominal cover c_nom is defined as the sum of the minimum cover thickness and an allowance for deviation:

c_nom = c_min + delta_c_dev  (2)

The exposure classes related to environmental conditions can be found in Table 4.1 of EN1992-1-1:2004 [15]. The structural classification used to determine c_min prescribed by EC2 is contained in EN10080 [14]. In addition, the recommended structural class is S4 for structures with a design life of 50 years, specific adjustments can be found in Table 4.3N of EC2, and the minimum structural class recommended by EC2 is S1. For EC2, the minimum cover c_min should meet the requirements of both bond and durability, which can be obtained from Table 4.4N (for reinforcing steel) and Table 4.5N of EC2 (for prestressing steel). For GB 50010-2010, the cover thickness of concrete structures with a design life of 50 years should satisfy the minimum requirements provided in Table 5, and for concrete structures with a design life of 100 years the cover should be not less than 1.4 times the values in Table 5. Furthermore, if the strength grade of the concrete is not more than C25, the cover thickness in the table should be increased by 5 mm; a concrete cushion (blinding concrete) should be placed under reinforced concrete foundations, and the concrete cover there should be measured from the top of the cushion layer and should not be less than 40 mm [4].

Limit State Design

Limit State Design (LSD) is a design method widely used in civil engineering. The partial factor of the Eurocode is 3.33 % larger than that of the Chinese code [8]. Regarding the non-load crack control clauses of the Chinese code and the Eurocode, the Chinese standard places the reinforcement on both sides of the beam and inside the stirrup, while the Eurocode places it on both sides and the lower part of the beam and outside the stirrup. Both EC2 and GB50010-2010 adopt the limit state philosophy for design, and both divide the limit states into the ultimate limit state and the serviceability limit state. This section compares the LSD of the two standards.

Comparison of Ultimate Limit State (ULS)

The ULS concerns the safety of people and/or the safety of the structure. In some circumstances (agreed for a particular project with the client and the relevant authority), limit states that concern the protection of the contents should also be classified as ultimate limit states, as documented in EC0 [7].
Specifically, the Eurocode considers the following states as ULS: (1) states prior to structural collapse, which, for simplicity, are considered in place of the collapse itself; (2) loss of equilibrium of the structure or any part of it, considered as a rigid body [7]; (3) failure by excessive deformation, transformation of the structure or any part of it into a mechanism, rupture, or loss of stability of the structure or any part of it, including supports and foundations [7]; (4) failure caused by fatigue or other time-dependent effects. The limit states can be divided into EQU, STR, GEO and FAT; a detailed explanation of the four limit states can be found in the provided references [7,16,17]. Errors and inaccuracies may arise from a number of causes: firstly, design assumptions and inaccuracy of calculations; secondly, possible unusual increases in the magnitude of the actions; thirdly, unforeseen stress redistributions; and fourthly, constructional inaccuracies. These cannot be ignored and are taken into account by applying partial factors of safety during design [18]. Generally, permanent and variable actions occur in different combinations, all of which must be taken into account in determining the most critical design situation for a structure. For example, the self-weight of the structure may be considered in combination with the weight of furnishings and people, with or without the effect of wind acting on the building (which may also act in more than one direction) [15]. In the Eurocode the term "actions" is used in place of loadings/forces, while the Chinese code retains loadings/forces. The variable actions in the Eurocode, as well as in the Chinese code, are divided into a leading variable action and accompanying variable actions such as wind loading; the old British standard BS8110 had no such division [19]. Yan Yalin [20] considers that the common load combination under STR based on the Eurocode can be expressed as:

1.35 * G_k + 1.5 * Q_k  (3)

Xue Yingliang [21] considers that the common combination under STR based on the Eurocode should be the maximum of the two expressions below:

max{ 1.35 * G_k + 1.5 * psi_0 * Q_k ; 0.85 * 1.35 * G_k + 1.5 * Q_k }  (4)

According to EC0 Annex A, the selection between (3) and (4) should be based on the specific National Annex; according to the British Annex, both equations are applicable. Normally the result of (4) is smaller than that of (3). The detailed reduction factors can be found in [22]. The ULS load in the Chinese code is obtained as:

1.2 * G_k + 1.4 * Q_k  (5)

The minimum and recommended values of variable actions/live loads in EC2 and GB50010-2010 for common civil buildings are provided in Table 6. As can be seen from the table, some of the recommended live load values stipulated in EC2 are larger than those of GB50010-2010, while some are smaller. However, it is hard to say which code is safer based on the tabulated values, because the basic national conditions of the two countries differ. For instance, the recommended live load for classrooms in China is 2.5 kN/m2, higher than the value provided by EC2, due to the higher density of students in China than in most European countries. In the above combinations, G_k,j represents the characteristic value of a permanent action, P the relevant representative value of a prestressing action, Q_k,i the characteristic value of a variable (live) action, psi_0,i the combination factor for a variable action, psi_1,i the frequent-value factor of a variable action, and psi_2,i the quasi-permanent factor of a variable action.
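The combinations in Eqs. (3)-(5) are easy to compare numerically; in the sketch below the load values and the combination factor psi_0 = 0.7 are illustrative assumptions, not values from the paper.

```python
# ULS load combinations of Eqs. (3)-(5); Gk and Qk are the characteristic
# permanent and leading variable loads.
def uls_ec_eq3(Gk, Qk):
    """Eurocode STR combination, Eq. (3): 1.35*Gk + 1.5*Qk."""
    return 1.35 * Gk + 1.5 * Qk

def uls_ec_eq4(Gk, Qk, psi0=0.7, xi=0.85):
    """Eurocode STR combination, Eq. (4): maximum of the two expressions."""
    return max(1.35 * Gk + 1.5 * psi0 * Qk,
               xi * 1.35 * Gk + 1.5 * Qk)

def uls_gb_eq5(Gk, Qk):
    """GB50010-2010 combination, Eq. (5): 1.2*Gk + 1.4*Qk."""
    return 1.2 * Gk + 1.4 * Qk

Gk, Qk = 10.0, 5.0  # kN/m, illustrative
print(uls_ec_eq3(Gk, Qk), uls_ec_eq4(Gk, Qk), uls_gb_eq5(Gk, Qk))
# -> 21.0 18.975 19.0 : Eq. (4) indeed gives a smaller value than Eq. (3)
```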
The relevant combination factors are listed in Table 7; the Chinese code and the Eurocode have the same form of load combination. Some common values of the factors introduced in Table 7 are provided in Table 8 [23]. According to Table 8, most values of the three factors are the same in the Chinese code and the Eurocode, and the minor differences between the two codes can easily be observed from the table. The maximum difference occurs for a quasi-permanent factor (0.5 for EN1991:2002 and 0.3 for GB50010-2010), i.e. 0.5 - 0.3 = 0.2; the other differences are 0.1. The two codes are thus quite close in this respect. However, it is hard to say which is safer based on the tabulated values, because the basic national conditions of the two countries differ. For permanent loads, the main components stipulated by the two standards are the self-weight of the structure and of the finishes. For live loads, the recommended values in the two standards are similar; for a specific project the two codes might give different values, but the differences should be small.

Comparison of Serviceability Limit State (SLS)

According to EC2, the serviceability limit state check should consider the following factors: (1) deformations that affect the appearance, user comfort or structural function (including mechanical or service functions), or that damage surface finishes or non-structural components; (2) vibrations that make pedestrians uncomfortable or restrict the effectiveness of structural functions; (3) damage that adversely affects the durability, appearance or structural function. Based on these factors, EC2 stipulates that a concrete structure should satisfy

E_d <= C_d

where C_d is the limiting design value of the relevant serviceability criterion and E_d is the design value of the action effect for that criterion, determined from the relevant load combination. Regarding the SLS, GB 50010-2010 gives the following guidance: (1) deformation checks should be carried out for members whose deformations need to be controlled; (2) the tensile stress of the concrete should be checked where cracks are not allowed; (3) for members in which cracks are allowed, the crack width under stress should be checked; (4) for floor structures with comfort requirements, the vertical natural frequency should be checked. Based on these factors, GB 50010-2010 stipulates that a concrete structure should satisfy

S <= C

where S is the design value of the action effect, corresponding to E_d in EC2, and C is the limiting design value, corresponding to C_d in EC2. The detailed deflection control requirement of EC2 is provided in Section 7.4 of EN1992-1-1 and takes the form

l/d = K * F1 * F2 * F3 * (l/d)_basic  (6)

where K depends on the span type and equals 1.0, 1.3, 1.5 and 0.4 for a simply supported span, end span, interior span and cantilever, respectively. The calculation of F1, F2 and F3 can be found in [24]. The allowable deflection values for flexural members in GB50010-2010 are presented in Table 9. For deflection control, GB50010-2010 uses a table while EC2 uses equations. Considering that tables are normally based on conservative simplifications, the equations given in EC2 can provide more precise and more flexible results.
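A sketch of the deflection check built around Eq. (6) follows; the multiplicative composition with a basic span-to-depth ratio, and the basic value used, are assumptions here, the K values are those listed above, and F1-F3 must be computed per [24].

```python
# Span-to-depth deflection check in the spirit of Eq. (6):
# allowable l/d = K * F1 * F2 * F3 * basic ratio (composition assumed).
K_FACTORS = {"simply_supported": 1.0, "end_span": 1.3,
             "interior_span": 1.5, "cantilever": 0.4}

def allowable_span_depth(span_type, basic_ratio, F1=1.0, F2=1.0, F3=1.0):
    return K_FACTORS[span_type] * F1 * F2 * F3 * basic_ratio

# A member passes when its actual l/d does not exceed the allowable value.
span, d = 6000.0, 450.0                      # mm, illustrative
limit = allowable_span_depth("simply_supported", basic_ratio=20.0)
print(span / d <= limit, span / d, limit)    # -> True 13.33 20.0
```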
The crack control levels and the limiting values of the maximum crack width stipulated in EC2 and GB50010-2010 are presented in Table 10 and Table 11, respectively. Note 1: For the X0 and XC1 exposure classes, the crack width has no influence on durability and this limit is set to give a generally acceptable appearance; in the absence of appearance conditions the limit may be relaxed. Note 2: For the other exposure classes, decompression should additionally be checked under the quasi-permanent combination of loads. (The tabulated limits range from 0.1 to 0.3 (0.4) mm depending on the environmental category I-III.) From the two tables, the maximum allowable crack width in both codes is 0.4 mm, and the requirement varies with the exposure conditions. Generally, the crack width controls of the two codes are close.

Case Study

In this section a rectangular beam flexural design example is presented according to both codes to illustrate the main differences discussed above.

Problem Statement

A rectangular simply supported beam with a span of 6 m is required in a project; the exposure condition is an open-air environment in a non-severe-cold or non-cold area. The cross-section of the beam is b x h = 250 mm x 500 mm. The materials are: concrete with a characteristic compressive strength of 30 MPa, and reinforcement with a characteristic yield strength f_yk = 400 MPa. The design moment is 300 kN·m and the quasi-permanent moment is 150 kN·m. Calculate the longitudinal reinforcement area and check the deflection. Since the cross-section is rectangular and the span is less than 7 m, the reinforcement area A_s is conservatively estimated at this stage.

Conclusion

In this paper the main issues related to concrete structural design are discussed and the similarities and differences between EC2 and GB50010-2010 are studied. For normal concrete structural design, the partial load factors in the two codes are different: for the ULS, the partial load factors in EC2 are slightly larger than in the Chinese code. For the same concrete strength, the Chinese code adopts a lower design value, so the final structural reliability of the two codes should be relatively close. For the SLS, GB50010-2010 gives a reference table that provides direct deflection requirements for different concrete specifications and structures, whereas EC2 indirectly controls deflection by checking whether the span-to-depth ratio satisfies the provided equations. For SLS crack control, EC2 and GB50010-2010 are relatively close. In the given case study, the compression reinforcement area obtained with GB50010-2010 is smaller than that of EC2, the main (tension) reinforcement area is larger than the EC2 result, and both reinforcement areas pass the deflection check of the two codes.
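To make the EC2 side of the case study concrete, here is a sketch of the common rectangular-section flexural design procedure (normalized moment K, lever arm z, compression steel when the singly-reinforced limit is exceeded); the effective depths d and d' are assumed values, not taken from the paper, so the printed areas are only illustrative and are not the paper's results.

```python
# Hypothetical sketch of an EC2 rectangular-section flexural design;
# d and d2 (effective depths of tension and compression steel) are assumed.
import math

def ec2_flexure(M_kNm, b, d, fck, fyk, d2=50.0, K_lim=0.167):
    """Return (As, As2) in mm^2 (tension, compression steel)."""
    M = M_kNm * 1e6                      # design moment in N*mm
    fyd = fyk / 1.15                     # design yield strength of steel
    K = M / (b * d**2 * fck)             # normalized moment
    if K <= K_lim:                       # singly reinforced section suffices
        z = min(d * (0.5 + math.sqrt(0.25 - K / 1.134)), 0.95 * d)
        return M / (fyd * z), 0.0
    # Doubly reinforced: concrete carries M_lim, the rest goes to compression steel
    z = d * (0.5 + math.sqrt(0.25 - K_lim / 1.134))
    M_lim = K_lim * b * d**2 * fck
    As2 = (M - M_lim) / (fyd * (d - d2)) # compression steel (assumed to yield)
    As = M_lim / (fyd * z) + As2         # tension steel
    return As, As2

# Illustrative numbers from the case study (d = 450 mm is an assumption):
As, As2 = ec2_flexure(M_kNm=300, b=250, d=450, fck=30, fyk=400)
print(f"As = {As:.0f} mm^2, As' = {As2:.0f} mm^2")
```

With these assumed depths the procedure calls for compression reinforcement, consistent with the conclusion above that both codes lead to a doubly reinforced section for this beam.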
5,220.6
2018-01-01T00:00:00.000
[ "Materials Science" ]
One-loop Corrections to the Higgs Boson Invisible Decay in a Complex Singlet Extension of the SM The search for dark matter (DM) at colliders is founded on the idea of looking for something invisible. There are searches based on production and decay processes where DM may reveal itself as missing energy. If nothing is found, our best tool to constrain the parameter space of many extensions of the Standard Model (SM) with a DM candidate is the Higgs boson. As the measurements of the Higgs couplings become increasingly precise, higher-order corrections will start to play a major role. The tree-level contribution to the invisible decay width provides information about the portal coupling. Higher-order corrections also give us access to other parameters from the dark sector of the Higgs potential that are not present in the tree-level amplitude. In this work we focus on the complex singlet extension of the SM in the phase with a DM candidate. We calculate the one-loop electroweak corrections to the decay of the Higgs boson into two DM particles. We find that the corrections are stable and of the order of a few percent. The present measurement of the Higgs invisible branching ratio, BR$(H \to$ invisible $)<0.11$, already constrains the parameter space of the model at leading order. We expect that by the end of the LHC the experimental measurement will require the inclusion of the electroweak corrections to the decay in order to match the experimental accuracy. Furthermore, the only competing process, which is direct detection, is shown to have a cross section below the neutrino floor. Introduction The search for dark matter (DM) has replaced the search for the Higgs boson as the main goal of particle physicists. In fact, since the Higgs has been discovered at the Large Hadron Collider (LHC) by the ATLAS [1] and CMS [2] collaborations, and the Higgs couplings have been measured with great precision, the attention has turned to the outstanding problems of the Standard Model (SM). The search for DM is certainly at the top of the list, especially because at this point we cannot even be sure that it comes in the form of an elementary particle. Therefore, even if collider physics is not the place to prove that a DM candidate exists, it can help us by hinting at particular directions, even if only by excluding the parameter space of particular models. The Higgs invisible decay measurements are probably among the best probes of the dark sector of such models. The branching ratio of the Higgs to invisible is now bounded below 11 % by ATLAS [3]. This number will improve both in the next LHC run and in the high-luminosity stage, and this increasing precision will take us further inside the dark sector of the models. In this work we discuss the Higgs invisible decay in the Complex Singlet extension of the SM (CxSM), which amounts to the addition of a complex scalar singlet to the known SM fields while keeping the SM gauge symmetries. While the tree-level decay of the Higgs into DM involves only the portal coupling, the one-loop corrections to the decay give us access to the quartic coupling of the singlet field. Therefore, the one-loop result gives us a more complete understanding of the Higgs potential. There is a competing/complementary measurement, which is the one given by the direct detection process. The DM-nucleon cross section is only relevant at one loop due to a cancellation that renders the tree-level cross section proportional to the DM velocity and therefore negligible [4,5].
The one-loop corrections to the direct detection process were calculated in [6,7] and compared to the latest experimental results from XENON [8]. We will discuss the interplay between direct detection and the branching ratio of the invisible Higgs decay, including the electroweak corrections in both processes. Our analysis is performed taking into account the most relevant theoretical and experimental constraints on the model, from colliders as well as from DM observations. We then calculate the next-to-leading order (NLO) electroweak corrections to the invisible decay width of the SM-like Higgs boson using several renormalization schemes. Once the allowed parameter space is found, the NLO result is compared with the leading order (LO) one. The final goal is to understand whether the NLO Higgs branching ratio into two DM particles can be larger than the experimentally measured value in some regions of the parameter space. Moreover, as new data become available both at the next LHC run and at the high-luminosity stage, the Higgs coupling measurements will become more precise and the theoretical calculations need to match this precision. The outline of the paper is as follows. In section 2 we introduce the CxSM together with our notation. Section 3 is dedicated to the description of the different renormalization schemes used in this work. Section 4 discusses the experimental and theoretical constraints on the model. In section 5 the results are presented and discussed. Our conclusions are collected in section 6. Finally, there are two appendices: in the first the results for the scalar pinched self-energies are presented, and in the second we discuss the minima of the CxSM potential. In this section we introduce the version of the CxSM used in this work. The model is a simple extension of the SM by the addition of a complex singlet field with zero isospin and zero hypercharge. As a singlet under the SM gauge group, the scalar field appears only in the Higgs potential. The SM Higgs couplings are, however, modified by the rotation angle of the matrix that relates the scalar gauge eigenstates to their mass eigenstates. The doublet field Φ and the singlet field S are defined as

Φ = ( G⁺ , (v + H + i G⁰)/√2 )ᵀ ,   S = ( v_S + S + i (v_A + A) ) / √2 ,

where H, S and A are real scalar fields and G⁺ and G⁰ are the Goldstone bosons of the W± and Z bosons. The v, v_A and v_S are the vacuum expectation values (VEVs) of the corresponding fields and can all be, in general, non-zero, in which case mixing between all three scalar fields arises. We will, however, focus on a model where a DM candidate is generated by forcing the potential to be invariant under a symmetry that is unbroken by the vacuum. We choose to impose invariance of the potential under two separate Z₂ symmetries acting on S and A, that is, S → −S and A → −A. The resulting renormalizable potential is

V = (m²/2) Φ†Φ + (λ/4) (Φ†Φ)² + (δ₂/2) Φ†Φ |S|² + (b₂/2) |S|² + (d₂/4) |S|⁴ + (b₁/4) (S² + h.c.) ,

where all constants are real. By choosing v_A = 0, the A → −A symmetry remains unbroken and A is stable, becoming the DM candidate of the model. The other Z₂ symmetry is broken since v_S ≠ 0, which leads to mixing between S and H. The mass eigenstates of the CP-even fields h_i (i = 1, 2) relate to the gauge eigenstates H and S through (h₁, h₂)ᵀ = R_α (H, S)ᵀ, where the rotation matrix is given by

R_α = ( cos α , sin α ; −sin α , cos α ) .

The mass matrix in the gauge basis (H, S) follows from the second derivatives of the potential, with the tadpole parameters T₁ and T₂ defined via the minimisation conditions; at tree level, the minimum conditions are T_i = 0 (i = 1, 2).
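Since the explicit entries of the (H, S) mass matrix are not reproduced above, the following generic numerical sketch shows how the masses m_h1, m_h2 and the angle α of R_α can be extracted from a symmetric 2x2 matrix; the entries used are illustrative placeholders, not values derived from the CxSM potential parameters, and the sign convention of α may differ from the paper's.

```python
# Generic diagonalization of a symmetric 2x2 scalar mass matrix (GeV^2 entries).
import numpy as np

def diagonalize(M11, M12, M22):
    M = np.array([[M11, M12], [M12, M22]])
    eigvals, eigvecs = np.linalg.eigh(M)              # ascending eigenvalues
    m_h1, m_h2 = np.sqrt(eigvals)                     # masses from squared masses
    alpha = np.arctan2(eigvecs[1, 0], eigvecs[0, 0])  # mixing angle (up to sign)
    return m_h1, m_h2, alpha

# Illustrative entries only:
print(diagonalize(125.0**2, 40.0**2, 300.0**2))
```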
The mass of the DM candidate A is given by m_A² = −b₁, which follows from the minimum conditions, while the masses of the remaining eigenstates are obtained by diagonalising the (H, S) mass matrix with the rotation R_α. Therefore, the scalar spectrum of the CxSM consists of two Higgs bosons, h₁ and h₂, one of which is the SM-like Higgs with a mass of 125 GeV, and one DM scalar, which we call A. Since the mixing between the two scalars is introduced only via the rotation angle, the couplings of the two Higgs bosons to the remaining SM particles are modified by the same factor k_i, defined through g_{h_i SM SM} = k_i g^{SM}_{H SM SM}, where g^{SM}_{H SM SM} denotes the SM coupling between the SM Higgs and the corresponding SM particles. With these definitions the parameters of the potential can be written as functions of our choice of input parameters, given by {α, v, v_S, m_{h₁}, m_{h₂}, m_A}. Renormalization Our goal is to calculate the decay width of the Higgs bosons into a pair of DM particles, h_i → AA, at NLO. Since A couples only to the two Higgs bosons h_i, we just need to renormalize the scalar sector. With the trilinear h_i couplings to the DM particles given in Eq. (12), and according to our choice of input parameters, we need to renormalize the masses of the two scalars h_i, the mass of the DM particle m_A, the singlet VEV v_S and the mixing angle α. Besides these parameters we also need to renormalize the h_i and A fields and the tadpoles in order to work with finite Green functions. We start by formally defining the relation between the bare and the renormalized quantities as β₀ = β + δβ, where δβ is the counterterm of the physical quantity β and β₀ is the bare quantity. All bare fields φ₀ are related to their renormalized versions via φ₀ = √(Z_φ) φ, where Z_φ is the field strength renormalization constant. On-Shell Renormalization of the Scalar Sector We start by calculating the mass and field counterterms in the scalar sector using the on-shell scheme. The renormalization constants for the DM particle are defined in the standard way, where Z_A is the field strength renormalization constant, D²_{A,0} = m²_{A,0}, and δD_A is the mass counterterm for A. The two scalars h₁ and h₂ mix again at one-loop order, and therefore both the field renormalization constants and the mass counterterms are defined by matrices, with D²_{hh,0} = diag(m²_{h₁,0}, m²_{h₂,0}) and the matrices δZ_{hh} and δD²_{hh} defined analogously. The on-shell renormalization conditions lead to expressions for the counterterms of the scalar fields h_i in terms of their self-energies Σ_{h_i h_j}; similar expressions hold for the DM field A. The diagonal terms of δD²_{hh} and δD²_A are related to the mass counterterms and to the corresponding tadpoles. The off-diagonal terms are related to the tadpoles, to be discussed in the next section. Tadpole Renormalization Tadpole renormalization is essentially the way we choose the VEVs at one-loop order so that the minimum conditions hold. Another way to express this is to state that the terms proportional to the scalar fields at one-loop order have to vanish. The VEV chosen to fulfil this condition [9,10] is the true VEV of the theory. We will follow the scheme proposed by Fleischer and Jegerlehner [9] for the SM, with the goal of rendering all counterterms related to physical quantities gauge independent. The scheme has been applied to various extensions of the SM (see e.g. [11,12]). For the CxSM a brief description follows.
We start by defining the true VEVs by performing shifts of the VEVs, which lead to corresponding shifts in the tadpole parameters at NLO. The minimum equations lead to relations between the shifts in the VEVs and the tadpole counterterms, with the relation between the tadpole counterterms δT_{1,2} in the gauge basis and those in the mass basis, δT_{h_{1,2}}, following from the rotation R_α. The shift introduced in the VEVs can be applied to the mass matrix of Eq. (5), producing additional terms. The last term in Eq. (24) vanishes, because after the shift the tadpole conditions can be applied again. The mass matrix can now be rotated into the mass basis and all counterterm shifts can be applied. Using Eqs. (22) and (23), as well as the relations of Eq. (11) between the potential parameters and the input parameters, we can express the shifts ΔD²_{h_i h_j} (i, j = 1, 2) in terms of the tadpole counterterms and the trilinear Higgs couplings. In terms of Feynman diagrams this can be seen as the contribution of the tadpole diagram (times a factor i, at vanishing momentum transfer) to the propagators of h₁ and h₂, which was not included previously in the definition of the self-energies. We therefore define tadpole self-energies, in terms of which the renormalized self-energies take their final form. This shift of contributions from the mass counterterm matrix into the self-energies corresponds to the inclusion of the tadpole diagrams in the self-energies. With this change in the renormalized self-energies the corresponding results for the counterterms hold, and, following a similar reasoning, the counterterms of the field A can be expressed analogously. Renormalization of the Mixing Angle α There are two parameters left to be renormalized. We start with the rotation angle α. Previous works [11,13] lead us to the conclusion that a scheme that is simultaneously stable (in the sense that the NLO corrections do not become unreasonably large) and gauge independent can be built by combining the scheme proposed in Refs. [14,15] with the gauge dependence handled by the pinch technique [16,17]. The scheme proposed in [14,15] introduces a shift in α, the angle of the rotation matrix R_α, and, by relating it to the field renormalization constants, leads to the counterterm

δα = ( δZ_{h₁h₂} − δZ_{h₂h₁} ) / 4 .

The result is model independent; it only assumes that exactly two fields mix. This relation can be expressed in terms of self-energies as

δα = [ Σ_{h₁h₂}(m²_{h₁}) + Σ_{h₁h₂}(m²_{h₂}) ] / [ 2 (m²_{h₁} − m²_{h₂}) ] .

This counterterm turns out to be gauge dependent. This in itself would not be a problem if the complete amplitude for the process were gauge independent, which is not the case. There is, however, a procedure to isolate this gauge dependence in a systematic and consistent way, known as the pinch technique [16-19]. After successfully applying the pinch technique, the pinched self-energies can be defined by adding the additional contributions from the pinch technique to the self-energies. The loop integral B₀ and the factor O_{ij}, as well as Σ^{add}_{h_i h_j}(p²), are defined in App. A. Note that the expression with ξ = 1 does not mean that a specific gauge has been chosen: the additional terms together with the tadpole self-energies yield a gauge-independent result, which can simply be written in that form. We can now define a gauge-independent counterterm for α, for which two different scales will be chosen: • Setting the external momenta to the respective OS masses, p² = m²_{h_i}, called the OS pinched scheme.
• Setting the external momenta to the mean of the squared masses, p² = p*² ≡ (m²_{h₁} + m²_{h₂})/2, called the p* pinched scheme. In the p* pinched scheme the additional gauge-independent terms from the pinch technique vanish, so that the expression for the mixing angle counterterm becomes more compact. The counterterm for α can then be written accordingly in the p* and in the OS pinched scheme. With these definitions, δα is gauge independent by construction and the problem with the gauge dependence is solved. Renormalization of v_S The last parameter to be renormalized is the VEV v_S of the scalar singlet. We will be using a process-dependent scheme and also a variant thereof in which the conditions are imposed at the level of the amplitude and not at the level of the physical process, called the zero external momentum (ZEM) scheme [13]. The latter, although less stable, allows us to cover the entire parameter space because it is not constrained by kinematic restrictions. Process-dependent Scheme The process to be used needs a coupling constant proportional to v_S, and if we want to use a decay, the only possibilities in the CxSM are h₁ → AA and h₂ → AA. Therefore, one of the processes will be used to extract the singlet VEV renormalization constant and, because we want to use the measurement of the SM-like Higgs invisible width, the second Higgs will be used for that purpose. Note, however, that either of the two Higgs bosons can be the SM-like one, while the other can be either lighter or heavier than 125 GeV. Hence, there are two scenarios to be analysed and we have to find δv_S for both. In the process-dependent scheme the counterterm is calculated by imposing

Γ^{NLO}_{h_i → AA} = Γ^{LO}_{h_i → AA} ,

that is, the LO and NLO decay widths are set equal. This in turn leads to a condition relating A^{LO}_{h_i→AA}, the amplitude of the process h_i → AA at LO, to A^{NLO}_{h_i→AA}, the amplitude at NLO. Because the LO amplitude is just a coupling constant, the expression simplifies further. The NLO contribution A^{NLO}_{h_i→AA} can be written in terms of the vertex corrections A^{VC}_{h_i→AA} and the vertex counterterm, where i, j ∈ {1, 2} with i ≠ j. With the trilinear h_i couplings to the DM particles λ_{h_i AA} given in Eq. (12), the expression for the counterterm δv_S follows for the two processes. These counterterms are gauge independent and lead to UV-finite results. The renormalization scheme also leads to stable results. Therefore, the only drawback is the kinematic restriction, which confines us to a restricted region of the parameter space. We discuss a solution to avoid this restriction in the next section. ZEM Scheme The ZEM scheme was introduced in [13] to avoid kinematic restrictions on the parameter space, and we now apply it to the CxSM. It is a simple variant of the process-dependent scheme in which the squares of all external momenta are set to zero at the level of the amplitude, thereby eliminating the kinematic constraint. Choosing the same physical processes, the condition now reads as before but with p² = 0, meaning that all squared external momenta are set to zero. There is another difference relative to the process-dependent scheme: the NLO leg corrections are not canceled by the corresponding counterterms, because the leg counterterms are defined in the OS scheme. Therefore Eq. (46) takes a modified form. Again, this equation can be solved for the two processes h₁ → AA and h₂ → AA to obtain the counterterms. We now just have to check whether the final result is finite and gauge independent. The question of gauge dependence in the alternative tadpole scheme is always related to wave function renormalization constants.
A thorough analysis leads to the conclusion that, although finite, the result is gauge dependent due to the wave function renormalization term for the corresponding process h_i → AA. The problem is solved by simply replacing the self-energies in the wave function renormalization constants in Eq. (48) by their pinched versions. This way δv_S becomes gauge independent. This change in the δZ_{h_i h_j}, however, is only applied to terms appearing in Eq. (48), where the ZEM counterterm of v_S is defined, and nowhere else. Otherwise, a gauge dependence in the overall amplitude of the renormalized process could be reintroduced. The resulting counterterms for v_S in this modified ZEM scheme follow accordingly. The renormalization is now complete, and before moving to the presentation of the NLO results we discuss the constraints imposed on the model. Constraints on the Model The constraints imposed to find the allowed parameter space are implemented in ScannerS [20-22]. In this section we briefly review the most relevant theoretical and experimental constraints considered. Theoretical Constraints • Boundedness from Below: The conditions for a stable minimum are easily obtained by writing Φ†Φ ≡ x and |S|² ≡ y and considering the quartic terms of the potential. Forcing the potential to be bounded from below in all directions leads, at tree level, to the conditions λ > 0, d₂ > 0 and, for negative δ₂, δ₂² < λ d₂. • Perturbative Unitarity Constraints: Following [23], we force the eigenvalues of the scattering matrix M_{2→2} of all possible two-to-two scalar scattering interactions to obey the perturbative unitarity bound, leading to conditions on the quartic couplings. • Stability of the Vacuum: In the CxSM the most general vacuum structure is obtained, because of SU(2) invariance, by allowing expectation values for the fields H, S and A; the value of the tree-level potential can then be evaluated at each vacuum configuration. We have chosen to work in the configuration where the potential is V(v, v_S, 0), in order to have one DM candidate. In App. B we show that choosing the vacuum configuration with non-zero v and v_S (and v_A = 0) to be a minimum automatically implies that this configuration is the absolute minimum at tree level. Experimental Constraints Before moving to the experimental constraints we note that the ρ parameter, ρ = m_W² / (c_w² m_Z²), where m_{W,Z} are the masses of the massive W and Z bosons and c_w denotes the cosine of the Weinberg angle, is equal to 1 at tree level, like in the SM. Also, no tree-level flavour-changing neutral currents are introduced, because the gauge singlet does not couple to fermions or to gauge bosons in the gauge basis. We now briefly review the experimental constraints implemented in ScannerS and used for the generation of parameter points. • S, T, U precision parameters: The additional scalar fields in the CxSM contribute to the gauge boson self-energies, and this implies deviations from the SM predictions. These deviations relative to the SM have to be within experimental bounds, i.e. ScannerS compares the model predictions with the electroweak precision results from experiment. The program then applies a consistency check on the S, T, U parameters [24] at 95 % confidence level to verify that the constraints are fulfilled. • Compatibility with the LHC Higgs data and exclusion bounds: There are two important constraints coming from colliders. The most relevant one comes from the LHC measurements of the discovered Higgs boson. The searches for additional scalars also play a role in restricting the parameter space of the model.
ScannerS enforces these bounds through interfaces with HiggsSignals [25,26] and HiggsBounds [27,28]. Agreement of the signal rates of the SM-like Higgs boson of the CxSM with the observations at the 2σ level is checked by HiggsSignals-2.6.1. Through HiggsBounds-5.9.0 the exclusion bounds from searches for extra scalars are taken into account. • DM relic density: The CxSM has a scalar DM candidate, and therefore the predicted DM relic density of this model should not exceed the measured value. Smaller values are not excluded, since they allow for additional contributions coming from other sources. ScannerS is interfaced with the program package MicrOMEGAs [29] to include this constraint from the relic density. • DM direct detection: As previously stated, the DM-nucleon cross section is only relevant at one-loop order due to a cancellation that renders the tree-level cross section proportional to the DM velocity and therefore negligible [4,5]. However, the one-loop corrections to the DM-nucleon spin-independent cross section have to be below the present experimentally measured result from XENON1T [8], as discussed in [6,7]. We will come back to this important constraint in the next section. Higgs Decay into Dark Matter The CxSM has two CP-even scalars, h₁ and h₂, and either of them can play the role of the 125 GeV SM-like Higgs boson, denoted h₁₂₅ in the following. The non-SM-like Higgs can be either heavier or lighter than 125 GeV. In order to optimize the analysis we fixed h₁ to always be the lighter of the two and considered two distinct scenarios: • m_{h₁} = m_{h₁₂₅} (scenario I): the width is calculated from h₁ → AA and the process h₂ → AA is chosen for the renormalization of v_S. • m_{h₂} = m_{h₁₂₅} (scenario II): the width is calculated from h₂ → AA and the process h₁ → AA is chosen for the renormalization of v_S. The decay width is given by

Γ_{h_i → AA} = √( λ(m²_{h_i}, m²_A, m²_A) ) / (32 π m³_{h_i}) |A|² ,

with λ(x, y, z) = x² + y² + z² − 2xy − 2xz − 2yz and A denoting the LO or NLO amplitude, A^{LO} or A^{NLO}, respectively. The LO amplitude is simply the coupling constant λ_{h_i AA}, and therefore the LO decay width follows directly, where both h₁ and h₂ can be the SM-like Higgs h₁₂₅. For the NLO amplitude we need to compute the vertex corrections together with the counterterm contributions. The vertex corrections are just the sum of all irreducible contributions at one-loop order, while the vertex counterterm can be read off the Lagrangian, with i, j ∈ {1, 2} and i ≠ j. We finally arrive at the overall NLO contributions for the processes, which are calculated numerically using Eq. (57). The value obtained for the width depends on the renormalization scheme used, which will be discussed in the next section. We have explicitly checked that for all scenarios the NLO width is UV-finite and gauge independent. Allowed Parameter Space For our numerical investigation we performed a scan of the CxSM parameter space using ScannerS [20-22] and kept only those points that are compatible with the theoretical and experimental constraints described above. The scan ranges for the input parameters are summarized in Tab. 1. The DM mass has to be below 62.5 GeV for h₁₂₅ → AA to be kinematically allowed. The SM input parameters are taken from [44], and their values are listed in a further table. We have also used the program BSMPT [45,46] to check for the possibility of a strong first-order EW phase transition (SFOEWPT). We found that in the parameter space probed there were no points with a SFOEWPT.
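A minimal numeric sketch of the LO width formula just given, with |A| = λ_{h_iAA}, follows; the coupling value is an illustrative assumption, and the 4.1 MeV SM total width used for the rough branching ratio is the commonly quoted value rather than a number reproduced from this paper.

```python
# LO width for h -> AA (identical final-state scalars):
# Gamma = lam_hAA^2 / (32 pi m_h) * sqrt(1 - 4 m_A^2 / m_h^2).
import math

def gamma_lo(m_h, m_A, lam_hAA):
    if 2.0 * m_A >= m_h:
        return 0.0                                   # kinematically closed
    beta = math.sqrt(1.0 - 4.0 * m_A**2 / m_h**2)    # phase-space factor
    return lam_hAA**2 / (32.0 * math.pi * m_h) * beta

gamma_AA = gamma_lo(m_h=125.0, m_A=55.0, lam_hAA=1.0)  # GeV; coupling assumed
gamma_sm = 4.1e-3                                      # GeV, SM total width
print(gamma_AA, gamma_AA / (gamma_sm + gamma_AA))      # width and rough BR
```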
Before starting the discussion of the allowed parameter space, we again remind the reader that there is a kinematical constraint that applies to the process-dependent scheme but not to the ZEM scheme for the counterterm δv_S. As previously discussed, two of the six parameters are fixed, one by G_F and the other by the 125 GeV Higgs boson mass. This leaves us with the four input parameters m_s, m_A, α and v_S, where m_s denotes the mass of the non-125 GeV Higgs boson. In Fig. 1 we show correlations between α, v_S and m_s. In the upper row a strong correlation can be seen between α and v_S. This is to be expected, since all SM couplings of the h₁₂₅ Higgs boson carry an additional factor c_α in scenario I or s_α in scenario II. These couplings are very well measured and only small deviations are allowed. Thus, the additional factor has to be close to 1, and α has to be close to 0 or ±π/2, respectively. Moreover, the parameters α and v_S are connected through the decay width of the 125 GeV Higgs boson into DM particles. As can be seen in Eq. (59), the LO decay width in scenario I is proportional to sin²α / v_S². Thus, in order for the LO branching ratio of the 125 GeV Higgs into DM particles in the CxSM not to exceed the experimental limits [3], this ratio has to be small. Therefore, if v_S is small, α has to be small. This behavior can be seen in Fig. 1. In scenario II the LO decay width is proportional to cos²α / v_S². Therefore, if v_S is small, α has to be close to ±π/2, which can be seen in Fig. 1 as well. One should also mention that there is a hard bound on α coming from the Higgs coupling measurements. The plots in the lower row of Fig. 1 show the relation between v_S and m_s. The two parameters m_s and v_S can be related via d₂. Because in scenario I m_s = m_{h₂} and α cannot deviate much from zero, we can use the small-angle approximation; with Eq. (11), λ and δ₂ can then be expressed in a simplified form. With these simplified expressions the fourth constraint in Eq. (54) results in a linear relation between m_s and v_S, where d₂ was considered to be positive. This relation explains the line in Fig. 1 (lower left) for scenario I, showing that m_s and v_S are linearly related, with the correctly predicted slope. The same calculation applies to scenario II. In this case m_s = m_{h₁} and the angle α is close to ±π/2. The conclusion is again that m_s and v_S are linearly related. For example, setting m_s to the highest possible value in this scenario, i.e. about 125 GeV, v_S has to be at least 35 GeV. In this scenario only a small part of the parameter space is constrained, but in Fig. 1 (right) we see that the far left side of the plot indeed contains no parameter points in scenario II. Fig. 2 shows the parameter space spanned by m_s and m_A. The blue points (scenario II) are the ones where the kinematical constraint (due to the process-dependent scheme) appears. As expected, the constraint is absent in scenario I (red points). In scenario I the DM mass m_A prefers values close to 125/2 GeV, whereas in scenario II (blue points) m_A has values close to half of m_s, or also close to half of m_{h₁₂₅} in the ZEM scheme, where the kinematic constraint 2m_A < m_s from the renormalization condition on v_S ceases to apply. This behavior results from the DM constraints applied to the DM mass m_A. To visualize the effect of the DM constraints, we show in green the points that passed all constraints except the dark matter ones. The reason for these constraints is the requirement that the relic density obtained in the CxSM must not exceed the observed value of the relic density.
Therefore, the thermal annihilation processes of two DM particles A into one of the scalar particles h_i must be efficient enough. This annihilation is enhanced close to the threshold, so that the DM mass m_A is preferably close to half of 125 GeV or half of m_s. In Fig. 3 (left) we present a histogram showing the frequency of points as a function of the relic density for both scenarios. This plot clearly shows that there are points that saturate the relic density, but most of the points have a low h²Ω_cdm and would need other DM candidates. The percentage of points in the range from −5σ to +2σ around the experimental central value h²Ω^cv_cdm is around 1 %, and the preferred values of the parameters are those of the two resonant regions already discussed. In the right panel we present the relic density as a function of the DM mass, with m_s indicated by the color bar, for the scenario where m_{h₂} = 125 GeV. There are points that saturate the relic density in the entire DM mass range probed. We clearly see that these points all have a DM mass that is half of m_s or half of m_{h₂}. There are also some outliers that saturate the relic density in the region where m_s is roughly between 30 and 50 GeV, for a DM mass above 30 GeV. For the other scenario, since only the resonance at half of 125 GeV is possible, all values of m_{h₂} can in principle saturate the relic density. In Fig. 4 we show a histogram of the frequency of the variable α, without and with the relic density constraint, for scenario I. Without the DM constraints there is a bound on α that forces it to be close to zero, related to the already discussed bounds from colliders. Looking at the Boltzmann equation,

dn/dt + 3 H n = −⟨σv⟩ ( n² − n²_eq ) ,

where n is the DM number density, H is the Hubble parameter, ⟨σv⟩ is the velocity-averaged cross section and n_eq is the density of DM particles in thermal equilibrium with the photon bath, the annihilation cross section σ(AA → SM SM), where SM are SM particles, is proportional to sin α cos α. Hence, if either sin α → 0 or cos α → 0, we get ⟨σv⟩ → 0 and either no freeze-out will occur or the relic density will be extremely high at the end of freeze-out. The interesting feature is then that, as we move closer to the limit where the couplings are all SM-like (α ≈ 0 in scenario I), we lose the DM candidate because of the constraints from DM. This is not surprising, because in this limit the portal coupling vanishes and freeze-out is no longer possible. Let us now move to the last constraint coming from DM, the direct detection process. Since we allow DM not to saturate the relic density, we need to define a DM fraction

f_AA = (Ωh²)_A / (Ωh²)^obs_DM ,

where (Ωh²)_A is the calculated relic density for each point in the CxSM and (Ωh²)^obs_DM is the central value of the experimental measurement. In the comparison with the data we are then actually using an effective DM-nucleon cross section, defined by σ^eff = f_AA σ_AN, where f_AA and σ_AN, the direct detection DM-nucleon cross section, are calculated by MicrOMEGAs. This is because the experimental limits assume the DM candidate to make up all of the DM abundance. This constraint is particularly relevant because it directly probes the portal coupling, just like the invisible decay. Even if, as we have already discussed, the DM-nucleon cross section is only relevant at one-loop order, it could be that the experimental bound from XENON1T [8] would provide a stronger restriction than the one from the invisible Higgs decay. It turns out, however, that it does not.
In Fig. 5 we present the effective spin-independent DM-nucleon cross section [6,7] as a function of the DM mass for scenario I (left) and scenario II (right). The neutrino floor [47] is also shown as a grey shaded region; for the range of masses considered it lies below a line of about 10⁻⁴⁸ cm². We can see that the points are not only below the XENON1T line, but also below the neutrino floor, and therefore have extremely small chances of being detected directly. Therefore, in the near future, and perhaps also in the far future, information about the dark sector of the CxSM will come only from the LHC. This shows the importance of taking into account the radiative corrections for the invisible Higgs decay. Numerical Results and Analysis of the SM Higgs Decay into DM In the following we present and discuss the LO and NLO decay widths for all allowed points in the parameter space, for the two scenarios. There are a total of four schemes, corresponding to the combinations of the choices of the counterterms δα (p* pinched and OS pinched) and δv_S (process-dependent and ZEM). We display results for the relative size of the NLO decay width with respect to the LO result, defined as

∆Γ = ( Γ^{NLO} − Γ^{LO} ) / Γ^{LO} . (71)

Figure 6: ∆Γ plotted against the scalar mass m_s, where h₁₂₅ = h₁ (red points) and h₁₂₅ = h₂ (blue points). All different combinations of possible renormalization schemes are shown. Interesting sections (indicated by the red band) of the two plots in the second row are also shown in more detail. In Fig. 6 we present ∆Γ as a function of m_s for the two scenarios and for the four different possible combinations of renormalization conditions. The relative NLO corrections in scenario II (blue points) are quite small in the process-dependent scheme (denoted by 'pd' in the plot), but become comparatively large in the ZEM scheme with respect to scenario I (red points). Both in scenario I and II, ∆Γ is barely affected by the choice of the renormalization scheme for α. Larger differences occur when changing the renormalization scheme of v_S from the process-dependent to the ZEM scheme, but the results still remain relatively stable in scenario I. Note that the peaks in scenario I in the ZEM scheme that induce larger ∆Γ are related to kinematical thresholds of the B₀ and C₀ functions of the loop integrals. They are better visualized in the zoomed insets in Fig. 6. In scenario II, the change in ∆Γ when turning from the process-dependent to the ZEM scheme has a large effect. Here, ∆Γ can go from −50 % to 10 %, whereas in the process-dependent scheme ∆Γ varies between −3 % and 3 %. Thus, the ZEM scheme can result in relatively large corrections at NLO. These large corrections, however, only occur for a small number of points. These are the points that would be rejected by the additional kinematic constraint that is effective in scenario II in the process-dependent scheme; they hence only occur in the ZEM scheme. One further remark is in order here. One has to be careful when directly comparing the results for ∆Γ in the different renormalization schemes. A consistent comparison would require the proper conversion of the input parameters when going from one scheme to the other. This requires the implementation of the conversion formulae, which is beyond the scope of this paper. Our goal here is primarily to show which sizes of relative corrections can be expected at all in the various schemes. Apart from the ZEM scheme they are all relatively small and numerically stable in the sense defined above.
In Fig. 7 we present ∆Γ as a function of m_s with all other input parameters fixed. The resulting scenarios do not necessarily fulfil all theoretical or experimental constraints any more, but are shown here for illustrative reasons. The peaks that can be seen in the figure originate from thresholds in the loop functions and depend on the chosen scheme, as the two schemes used for the derivation of δα are evaluated at different scales. For example, the peak seen in the OS pinched scheme in Fig. 7 at m_s = 250 GeV appears in the p* pinched scheme at m_s = 330 GeV, since in the p* pinched scheme the self-energies are evaluated at the mean of the squared scalar masses. The peaks only occur in scenario I, because most of the SM masses occurring in the calculation (e.g. the W and Z boson masses) are of the order of 100 GeV. The purpose of this analysis is to improve the precision of the calculation of the Higgs invisible decay width so that it can be used to constrain the parameters of the dark sector. The current observed limit on the branching ratio of the 125 GeV Higgs decay into invisible particles is given by [3] BR(h₁₂₅ → invisible) < 0.11 (+0.04, −0.03), at 95 % confidence level. In order to compare with this result the calculated branching ratio is needed, which in turn means that we need the total decay width of the 125 GeV Higgs boson in the CxSM including the NLO EW corrections. Since the corrections are not available for all decays in the model, we can only estimate the branching ratio using the total decay width of the 125 GeV Higgs boson in the SM without EW corrections, which is taken from [48,49] and is approximately 4.1 MeV. In order to translate this decay width into the CxSM set-up, it is multiplied by the appropriate squared angular factor k_i², where the index i is chosen according to the mass scenario. Also, the NLO h₁₂₅ → AA width is added to obtain the total decay width in the CxSM. Furthermore, in scenario II the 125 GeV Higgs boson is the heavier of the two scalar particles (h₁₂₅ ≡ h₂); if h₁ is light enough, the decay h₂ → h₁h₁ is also allowed and is added to the total decay width. Thus, the LO and approximate NLO branching ratios of the decay h₁₂₅ → AA are given by the corresponding widths divided by the total width constructed in this way. This expression is approximate in the sense that the NLO EW corrections are only included in the Higgs-to-invisible decay, but not in the SM-like CxSM Higgs decays into SM particles. It is justified, however, if the EW corrections to these decay widths are small enough compared to the EW corrections to the h₁₂₅ → AA decay. Moreover, for a better approximation the NLO corrections to the decay h₁₂₅ → h₁h₁ would have to be included as well, unless its contribution to the total width is negligibly small. In Fig. 8 the calculated approximate NLO branching ratios for all generated parameter points are displayed versus the corresponding LO values. The experimental limit on the branching ratio is shown as well; however, the limit is only indicated for the NLO result, since the parameter points are generated with respect to the limit at LO. Almost all parameter points have an NLO branching ratio below the experimental limit. Only about 0.2 % of the points are above the experimental limit. The highest obtained branching ratio is, however, around 0.121 and therefore still lies well within the experimental uncertainty. The relative change of the branching ratio at NLO with respect to LO has been calculated and increases the LO value by up to 7-8 % at most.
Thus, the NLO contributions to the branching ratio are too small to further constrain the model. Moreover, it is interesting to see that the points from scenario II result in smaller branching ratios, especially when using the ZEM scheme. This is to be expected, since many points in that scenario have negative relative NLO contributions to the decay width. Conclusions In this work we have calculated the NLO EW corrections to the Higgs decay into two dark matter particles in the CxSM. We have used four different renormalization schemes, but with all masses and fields renormalized on-shell. Except for very particular regions of the parameter space, corresponding to thresholds in the Passarino-Veltman functions, the corrections were shown to be quite small, at the per cent level, in all renormalization schemes. There is one exception, however, given by the ZEM scheme with h₂ being the SM-like Higgs. Here, points that could not be used in the process-dependent scheme for the renormalization of v_S due to kinematic constraints lead to relatively large corrections that amount to up to a few tens of per cent. The central value of the measured invisible Higgs branching ratio is now at 0.11. The inclusion of the NLO EW corrections to the decay width of the process h₁₂₅ → AA does not lead to extra constraints on the parameter space, because the calculated approximate NLO branching ratios for all allowed parameter points are found to be within the experimental error. Calculating the EW corrections to all decays of the SM-like CxSM Higgs boson into SM particles (and, if kinematically allowed, into a pair of lighter scalars) will further improve the obtained result. More importantly, tighter experimental constraints will be obtained in the near future in the upcoming LHC run [51], and even more so at the high-luminosity stage. We have also shown why it is crucial to have a precise measurement of the invisible width: it is the only direct probe of the portal coupling. In fact, the other possible way to probe the same coupling would be through the DM-nucleon cross section. However, we have shown that this cross section is not only below the present experimental bound from XENON1T [8], but also below the neutrino floor, which makes it virtually unusable. Therefore, in the near future and perhaps also in the far future, information about the dark sector of the CxSM will come only from the LHC. This shows the importance of having the radiative corrections for the invisible Higgs decay. A The Scalar Pinched Self-Energy in the CxSM In this appendix we present the result for the scalar pinched self-energies in the CxSM. We define the quantity O_{ij} (i, j = 1, 2) in order to write all couplings in the CxSM between the scalars and the SM particles X, Y as rescalings of the corresponding SM couplings, where g^{SM}_{XYH} and g^{SM}_{XYHH} are the couplings between the SM particles X and Y and one or two SM Higgs bosons, and k_i is given in Eq. (9). With these definitions the self-energies iΣ^{add}_{h_i h_j} follow. Here m_{W,Z} denote the masses of the W and Z bosons, g = 2 m_W (√2 G_F)^{1/2} is the SU(2) gauge coupling, c_w is the cosine of the weak mixing angle, ξ_V (V = W, Z) are the gauge-fixing parameters and λ_V ≡ 1 − ξ_V. The loop integrals are defined in Eqs. (79a)-(79d).
B Minima of the CxSM Higgs Potential To analyze all possible vacuum configurations, the scalar potential of the CxSM has to be considered with the fields as defined above. Due to the SU(2) invariance we can choose a configuration where only the fields H, S and A can acquire a non-zero VEV, in the following labelled x_H, x_S and x_A. The stationary conditions of the potential, with the scalar fields collected in a vector, yield three nontrivial equations, Eq. (82), from which we read off that for each VEV a possible solution is either to set it to zero or to solve the corresponding equation in brackets. Thus, in general, eight different cases have to be considered. Moreover, if x_S and x_A are simultaneously non-zero, the terms in brackets in Eqs. (84b) and (84c) both have to vanish. Since these two terms only differ in the sign in front of the parameter b₁, this can only be achieved if b₁ is set to zero. Here, however, b₁ is always chosen to be non-zero, and thus these cases cannot result in a minimum of the potential. Furthermore, it has to be checked whether a stationary point is indeed a minimum of the potential, i.e. whether the Hessian matrix of the potential is positive definite. The general form of the Hessian matrix and its diagonal elements follow from the second derivatives of the potential. To start with the remaining cases, first the desired minimum is considered, namely the configuration with the VEVs x_H and x_S non-zero and x_A zero. Since these VEVs are chosen to be input parameters, they are in this case relabelled as v and v_S, and Eqs. (84) can be solved for the other parameters. Next, the positive definiteness of the Hessian matrix has to be checked. For this, Eq. (87) is used to simplify the Hessian matrix in Eq. (85). The matrix is positive definite if the determinants of all leading principal minors are positive, i.e. the relations of Eq. (89) have to be satisfied. If these inequalities hold, the potential is automatically bounded from below (compare with Eq. (52)). Moreover, the Hessian matrix of the potential resembles the mass matrix of the scalar fields, i.e. the eigenvalues of the matrix are the squared masses of the corresponding particles; thus the eigenvalues have to be positive, i.e. the Hessian matrix has to be positive definite. Furthermore, the parameter b₁ is just given by −m_A². This means that if the VEVs v and v_S are given as input parameters, the VEV of the field A is chosen to be zero and the potential parameters fulfil the relations in Eq. (89), then this configuration of VEVs is a minimum of the potential, as desired. The remaining question is whether this minimum is automatically the global minimum of the potential. Thus, the values of the potential at all minimum configurations have to be calculated and compared. For the desired configuration the value of the potential at the minimum, Eq. (90), serves as the reference. Now all other VEV configurations have to be checked for their potential values at the stationary point and for whether or not they are indeed minima of the potential. • case x_H = x_S = x_A = 0: This is the most trivial configuration, and the value of the potential at this point reads V(0, 0, 0) = 0. (91) The difference between the values of the potential at the two configurations then satisfies the required inequality, which holds because of the relation between δ₂, λ and d₂ from Eq. (89). • case x_S = x_A = 0, x_H ≠ 0: Here the nontrivial equation from Eqs. (84) can be solved for x_H; for a real solution, m² has to be negative.
The value of the potential results in where in the second step the relations Eq. (87) were used. The difference between the values of the potential of the two configurations reads The inequality again holds because of the relations Eq. (89). • case x H = x A = 0, x S ≠ 0: Here the nontrivial equation from Eqs. (84) can be solved for x S and results in Here b 1 + b 2 has to be negative. The value of the potential results in where in the second step the relations Eq. (87) were used. The difference between the values of the potential of the two configurations reads The inequality again holds because of the relations Eq. (89). • case x H = x S = 0, x A ≠ 0: Here the nontrivial equation from Eqs. (84) can be solved for x A and results in Here b 2 − b 1 has to be negative. The value of the potential results in where in the second step the relations Eq. (87) were used. Here the parameter b 1 does not cancel, and the difference between the values of the potential of this configuration with respect to the desired minimum state depends additionally on b 1 ; an inequality similar to the other cases cannot be shown as straightforwardly. It is, however, sufficient to look at the Hessian matrix. It results in where E is a combination of potential parameters. It can be seen that b 1 is a negative eigenvalue of the matrix. Thus, it cannot be positive definite and this VEV configuration cannot be a minimum. • case x S = 0, x H ≠ 0, x A ≠ 0: The last case is a bit more complicated, since now two VEVs are non-zero. Here it is easier to redo the same steps as in the desired minimum configuration. First, the VEVs are relabeled as w and w A . Next, the stationary conditions from Eqs. (84) are solved for other parameters to obtain the relations Similar to the last case, the value of the potential of this configuration will again depend on b 1 , so comparing values with the desired minimum configuration will not lead to a simple inequality. Thus, the Hessian matrix is again considered. With the help of Eqs. (102) it can be simplified to
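Since each case above reduces to testing positive definiteness of a Hessian, a small numerical sketch can illustrate the check. The helper below applies Sylvester's criterion (all leading principal minors positive); the example matrix is a placeholder with illustrative numbers, not the full CxSM expressions in terms of λ, δ 2 , d 2 , b 1 , v and v S .

```python
import numpy as np

# Minimal sketch (not the paper's code): check whether a candidate stationary
# point of a scalar potential is a local minimum by testing positive
# definiteness of its Hessian.

def is_positive_definite(H, tol=1e-12):
    """Sylvester's criterion: all leading principal minors must be positive."""
    H = np.asarray(H, dtype=float)
    return all(np.linalg.det(H[:k, :k]) > tol for k in range(1, H.shape[0] + 1))

# Example: a 3x3 Hessian at a stationary point with x_A = 0. Entries are
# placeholder numbers standing in for the parameter combinations in the text.
H_example = np.array([
    [0.52, 0.11, 0.0],   # d^2V/dH^2 and H-S mixing; no H-A mixing at x_A = 0
    [0.11, 0.34, 0.0],   # d^2V/dS^2
    [0.0,  0.0,  0.08],  # d^2V/dA^2 = -b_1 = m_A^2 at this configuration
])

print(is_positive_definite(H_example))            # True -> local minimum
print(np.all(np.linalg.eigvalsh(H_example) > 0))  # equivalent eigenvalue test
```

The eigenvalue test gives the same answer and is numerically more robust for larger matrices.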
11,950.4
2022-02-08T00:00:00.000
[ "Physics" ]
Fed-batch like cultivation in a micro-bioreactor: screening conditions relevant for Escherichia coli based production processes Objectives Recombinant protein production processes in Escherichia coli are usually operated in fed-batch mode; therefore, the elaboration of a fed-batch cultivation protocol in microtiter plates that allows for screening under production-like conditions is particularly appealing. Results A highly reproducible fed-batch like microtiter plate cultivation protocol for E. coli in a micro-bioreactor system with advanced online monitoring capabilities was developed. A synthetic enzymatic glucose release medium was employed to provide carbon-limited growth conditions without external substrate feed and the required buffer capacity to keep the pH value within 7 ± 1. Accurate process design allowed for cultivation up to cell densities of 10 g biomass L −1 without any limitations in oxygen supply [dissolved oxygen (DO) level above 30 %]. In the micro-bioreactor system (BioLector), online monitoring of cell growth, DO and pH was performed. Furthermore, the influence of the cultivation temperature, the applicability for different host strains as well as the transferability of results to lab-scale bioreactor cultivations was evaluated. Conclusion This robust microtiter plate cultivation protocol allows for screening of E. coli systems under conditions comparable to lab-scale bioreactor cultivations. In contrast to these screening conditions, production is usually conducted in fed-batch mode with carbon limitation for growth rate control. This strategy allows for higher cell densities and product yields and prevents by-product formation (Larsson et al. 1997). The differences between screening and production conditions frequently result in selection of clones that behave poorly under production conditions (Jeude et al. 2006). Consequently, conditions in screening approaches should be approximated to production conditions. The goal of our work was to develop a simple, robust and reproducible protocol for fed-batch cultivation of E. coli in MTP. We focused on growth conditions similar to those in a lab-scale bioreactor. Thus, the requirements were carbon-limited growth in a defined synthetic medium to high cell densities and sufficient oxygen supply throughout the cultivation. The BioLector micro-fermentation system, providing online access to cell density, pH and pO 2 , represents a good compromise between very complex miniaturized stirred tank reactors and simple titer plates (Samorski et al. 2005). More advanced concepts of this technology with glucose feed and pH control based on microfluidics (Funke et al. 2010a, b) were excluded because of increased complexity, reduced number of cultivation wells and high costs for plates. An alternative way to implement fed-batch like cultivation conditions in titer plate format is to use enzyme-based glucose release media (Krause et al. 2010; Panula-Perälä et al. 2008). In our approach we selected a synthetic enzymatic glucose release medium in combination with the BioLector system. Media and solutions For BioLector cultivation experiments the synthetic Feed in Time (FIT) fed-batch medium with glucose and dextran as carbon sources (m2p-labs GmbH, Baesweiler, Germany) (Hemmerich et al. 2011) was used directly or diluted with sterile filtered water to obtain 50, 67 and 80 % (v/v) FIT mixtures. The FIT fed-batch medium was supplemented with 1 % (v/v) VitaMix. Immediately prior to inoculation 1 % (v/v) of the glucose releasing enzyme mix (EnzMix) was added.
The hydrolytic enzyme (glucoamylase) cleaves off glucose residues from a solubilized glucose polymer. The growth rate of the cells is controlled by the activity of the enzyme, and the enzymatic glucose release mimics the substrate feed of lab-scale fermentation processes. All media were tempered to 25 °C. For high cell density (HCD) cultivation experiments, minimal media calculated to produce 80 g cell dry mass (CDM) in the batch phase and 1940 g CDM during the feed phase were used. The batch medium was prepared volumetrically; the components were dissolved in 5 L RO-H 2 O. The fed-batch medium was prepared gravimetrically; the final weight was 10.2 kg. All components for the fed-batch medium were weighed in and dissolved in RO-H 2 O separately. All components (obtained from MERCK) were added in relation to the theoretical grams of CDM to be produced; the composition of the batch and the fed-batch medium is as follows: 94.1 mg g −1 KH 2 PO 4 , 31.8 mg g −1 H 3 PO 4 (85 %), 41.2 mg g −1 C 6 H 5 Na 3 O 7 * 2 H 2 O, 45.3 mg g −1 (NH 4 ) 2 SO 4 , 46.0 mg g −1 MgCl 2 * 2 H 2 O, 20.2 mg g −1 CaCl 2 * 2 H 2 O, 50 μL trace element solution, and 3.3 g g −1 C 6 H 12 O 6 * H 2 O. The trace element solution was prepared in 5 N HCl and included 40 g L −1 FeSO 4 * 7 H 2 O, 10 g L −1 MnSO 4 * H 2 O, 10 g L −1 AlCl 3 * 6 H 2 O, 4 g L −1 CoCl 2 , 2 g L −1 ZnSO 4 * 7 H 2 O, 2 g L −1 Na 2 MoO 2 * 2 H 2 O, 1 g L −1 CuCl 2 * 2 H 2 O, and 0.5 g L −1 H 3 BO 3 . To accelerate initial growth of the population, the complex component yeast extract (150 mg g −1 calculated CDM) was added to the batch medium. The nitrogen level was maintained by adding 25 % ammonium hydroxide solution (w/w) for pH control. Pre-cultures for inoculation were grown in synthetic media calculated to produce 3 g L −1 . BioLector cultivations All experiments were performed in the BioLector micro-fermentation system in 48-well Flowerplates ® (m2p-labs) equipped with optodes for on-line measurement of dissolved oxygen (DO) and pH value (Funke et al. 2009). Bacterial growth was monitored in the BioLector via a backward scattered light measurement at 620 nm. The biomass concentration (g L −1 ) was calculated from light scatter signals with calibration settings obtained by linear regression analysis. The pH-dependent fluorescence signal was excited at 470 nm and monitored at an emission of 525 nm. The DO-dependent fluorescence signal was excited at 520 nm and detected at an emission of 600 nm. The signals were converted by the BioLector software (BioLection 2.2.0.3) to DO and pH values using predefined calibration settings specific for the Flowerplates ® . The green fluorescent protein (GFP) expression level was monitored at an excitation of 488 nm and an emission of 520 nm. The signal is given in arbitrary units [A.U.] and was also correlated to the specific amount of recombinant protein in mg GFP g −1 CDM with a GFP-specific ELISA (Reischer et al. 2004). The cycle time for all parameters was 15 min. Gas-permeable sealing films (m2p-labs) were used to ensure aseptic conditions and to reduce evaporation. The humidity in the incubation chamber was controlled (rH ≥ 85 %) and the shaking frequency was 1400 rpm. The total cultivation volume was 800 or 1000 µL. The initial cell density (OD ini ) was equivalent to an optical density of OD 600 = 0.5. For inoculation, a deep-frozen (−80 °C) working cell bank (WCB) (OD 600 = 2) was thawed and biomass was harvested by centrifugation (7500 rpm, 5 min).
Cells were washed with 500 µL of the corresponding medium to remove residual glycerol and centrifuged; then, pellets were re-suspended in the total cultivation medium. All cultivations were prepared in six replicates at 37 °C for 40 h. To ensure comparability of cultivations, three wells with dilutions of a turbidity standard (NTU200), two wells with pH buffers (pH 5 and pH 7) and one well with medium for sterility control were used as internal plate standards. The cultivation with recombinant protein synthesis was performed with HMS174(DE3) (pET11aGFPmut3.1) in FIT 67 %. Recombinant gene expression was induced with 1 mM IPTG 10 h after the start of cultivation. HCD cultivations All HCD fermentations were performed in a 30 L (23 L net volume, 5 L batch volume) computer-controlled bioreactor (Bioengineering; Wald, Switzerland) equipped with standard control units (Siemens PS7, Intellution iFIX). The pH was maintained at a setpoint of 7.0 ± 0.05 by addition of 25 % ammonia solution (w/w); the temperature was set to 30 ± 0.5 °C. To avoid oxygen limitation, the DO level was held above 30 % saturation by adjusting the stirrer speed and the aeration rate of the process air. The maximum overpressure in the head space was 1.0 bar. Foaming was suppressed by addition of 0.5 mL L −1 antifoam (PPG 2000, Sigma Aldrich) to the batch medium and by pulsed addition of antifoam during the fed-batch phase. The cultivation was inoculated with a pre-culture. The pre-culture was set up by inoculating 300 mL LB medium with 1 mL of a deep-frozen WCB. Cells were grown on an orbital shaker at 200 rpm and 37 °C until the OD 600 reached a value of 3.5. Thereafter, approximately 15 mL of the pre-culture, corresponding to 50 OD units, were aseptically transferred to the bioreactor. At the end of the batch phase, as soon as cells entered the stationary growth phase, an exponential substrate feed was started. During a period of 11 h cells grew exponentially at a constant growth rate of µ = 0.17 h −1 . The substrate feed was controlled by increasing the pump speed according to the exponential growth algorithm, X = X 0 · e µt , with superimposed feedback control of weight loss in the substrate tank; a minimal sketch of such a profile is given below. Thereafter, the feed regime was switched from exponential to linear feed mode. During the linear feed phase (17 h) glucose was fed to the culture at a constant rate of 4.9 g glucose min −1 . Due to the linear feeding profile the growth rate decreased from 0.17 h −1 to 0.04 h −1 . Calibration of light scatter signals The correlation between the light scatter signal and the biomass concentration was determined by measuring the scattered light intensities of a concentrated cell suspension and a series of cell suspension/water dilutions (1:0.3, 1:1, 1:4, 1:9, and 1:19). The cell suspension of an E. coli HMS174 shake flask culture (24 h incubation at 37 °C in FIT 67 %) was used for measurement of the light scatter signal in the BioLector at 30 °C and 1400 rpm until a stable initial value was reached. In parallel, the biomass concentration of the culture was determined by centrifugation of 20 mL of the cell suspension, followed by re-suspension of the pellet in distilled water, centrifugation, and transfer to a pre-weighed beaker. The beaker was dried for 24 h at 105 °C and re-weighed. Glucose release kinetics The glucose release was monitored in diluted FIT 67 % at the default pH of 7.3 or after changing the pH to 6.8 with HCl.
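The feed profile just described can be sketched numerically. The snippet below is illustrative only: the yield coefficient Y_XS and the conversion of glucose demand into a pump rate are assumptions, not values from the paper; only µ = 0.17 h −1 , the 11 h exponential phase and the 4.9 g min −1 linear rate are taken from the text.

```python
import numpy as np

# Sketch of an exponential-then-linear glucose feed (not the authors' control
# code). Biomass follows X = X0 * exp(mu * t) during the exponential phase;
# the glucose demand, and hence the pump rate, is scaled accordingly.

MU = 0.17          # 1/h, growth rate during the exponential feed phase
T_EXP = 11.0       # h, duration of the exponential feed phase
LINEAR_RATE = 4.9  # g glucose / min, constant rate of the linear phase
X0 = 80.0          # g CDM at feed start (batch-phase target from the text)
Y_XS = 0.4         # g CDM / g glucose, assumed yield coefficient

def glucose_feed_rate(t_h):
    """Feed rate in g glucose per minute, t_h hours after feed start."""
    if t_h <= T_EXP:
        # demand = (1/Y_xs) * dX/dt = (mu * X0 / Y_xs) * exp(mu * t), per hour
        return (MU * X0 / Y_XS) * np.exp(MU * t_h) / 60.0
    return LINEAR_RATE

for t in (0, 5, 11, 20):
    print(f"t = {t:4.1f} h  feed = {glucose_feed_rate(t):6.2f} g/min")
```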
Shake flasks (250 mL) with a filling volume of 30 mL were incubated for 30 h on an Infors HT Multitron orbital shaker. The temperature was set to 25, 30, or 37 °C and the shaking frequency was 200 rpm. The glucose concentration was determined periodically with an enzymatic assay specific for d-glucose (Megazyme International Ireland Ltd.). Prior to the measurement, the glucose releasing enzyme was inactivated for 15 min at 85 °C and samples were diluted to 0.05-5.00 mg glucose L −1 . Results In this study we evaluated the suitability of a medium based on enzymatic glucose release (FIT fed-batch medium) for fed-batch like cultivation of E. coli in MTP. The predefined protocol must allow for cultivation at a DO level above 30 %, at pH values within 7 ± 1 throughout the process, and under carbon-limited growth conditions in the "feed" phase. In a first setting, E. coli HMS174 was cultivated in the BioLector in 1000 µL FIT 100 % medium. In these experiments high biomass concentrations of 13 g L −1 were reached, but during a long lag phase at the beginning of the cultivation glucose accumulated and was then metabolized at a high rate in the late batch phase. This caused the DO level to drop close to zero (Fig. 1a, b). Next, the cultivation volume was reduced to 800 µL to increase the oxygen transfer rate (OTR). In parallel, diluted FIT fed-batch media (FIT 50 %, FIT 67 %, FIT 80 %) were used to prevent excessive oxygen consumption rates in the late stage of the batch phase. With FIT 50 % medium a biomass concentration of 12 g L −1 was obtained, the batch phase was shortened by 2.5 h compared to cultivation in undiluted medium, and there was no oxygen limitation at the end of the batch phase. However, after 10 h the pH value reached a minimum of 5.5, and for 7 h no growth was detected. Within this resting phase the pH increased to 6.0 and growth was restored. In FIT 80 % medium the batch phase was shortened by 2.5 h and the pH value ranged between 6.3 and 7.0 throughout the cultivation. The DO level dropped to 5 % at the end of the batch phase. The cultivation with FIT 67 % medium (Fig. 1c, d) reached a final biomass of 11 g L −1 in a shortened batch phase without oxygen limitation or severe pH variations. Based on these results, the recombinant strain HMS174(DE3)(pET11aGFPmut3.1) was cultivated according to the protocol with FIT 67 %, with and without induction of GFP expression, to evaluate the influence of recombinant protein synthesis on cell growth. The growth behavior of the wild type and the non-induced recombinant strain did not show significant differences (Figs. 1c, 2a). However, overexpression of recombinant protein negatively affected cell growth, and the biomass yield was 36 % lower than in the reference experiment without induction (Fig. 2a, b). Ten hours after induction the GFP signal reached a maximum of 500 A.U., which correlates with 90 mg GFP g −1 biomass (Fig. 2b). To evaluate the stability of the µ-scale cultivation protocol, a reproducibility study was conducted. Wild type and recombinant strains were cultivated sixfold on 3 different days (Fig. 3). For biomass, the well-to-well coefficient of variation (CV) on the same plate was 4.5 % for both strains. The day-to-day and plate-to-plate CV was 4.8 % for the wild type strain HMS174 and 9.1 % for HMS174(DE3)(pET11aGFP). To investigate the influence of temperature and pH on the activity of the polymer-degrading enzyme, the glucose release was determined in in vitro experiments at 25, 30, and 37 °C and pH values of 7.3 and 6.8.
The enzyme activity directly correlated with temperature, as the glucose concentration after 28 h incubation at 37 °C was 76 % higher than at 25 °C (Fig. 4a). With respect to the variation of pH, incubation at a lower pH of 6.8 yielded 40 % higher glucose concentrations than at pH 7.3 (Fig. 4c). The strong relationship between temperature and cell growth was shown in cultivation experiments with HMS174 (Fig. 4b). Furthermore, the growth behavior of two strains with different acidification patterns (HMS174 and BL21) was compared (Fig. 4d). During cultivation of HMS174 the pH reached a minimum of 6.1 after 10.5 h, whereas a minimum of 6.4 was reached after 12 h cultivation of BL21 cells. Within the first 25 h of cultivation, higher growth rates were observed for strain HMS174. Thereafter, the growth of both strains was comparable, reaching the same biomass concentration of 11 g L −1 after 40 h (Fig. 4d). To evaluate transferability of results generated in our small scale screening platform to fermentation processes, a series of HCD cultivations was conducted with E. coli HMS174, E. coli RV308 and E. coli BL21 (Fig. 5). Discussion In this work we successfully established a MTP cultivation protocol that allows for E. coli clone screening and characterization under production-like conditions. The application of a synthetic glucose release medium enabled a fed-batch like E. coli cultivation without external substrate feed up to >10 g CDM L −1 .
Fig. 3 Reproducibility of the established small scale fed-batch cultivations: a growth of E. coli HMS174 and b of E. coli HMS174(pET11aGFPmut3.1) in m2p 67 %; "Day 1", "Day 2" and "Day 3" are biological replicates of a cultivation performed as sixfold; the mean value and the standard error of the mean for these six individual parallel experiments is shown.
Fig. 4 a, c Influence of temperature and pH on glucose release in m2p 67 % medium; b corresponding growth of E. coli HMS174 at different temperatures and d of E. coli HMS174 and E. coli BL21 at 37 °C in m2p 67 %. For glucose release and process characteristics, the mean value for triplicates is shown.
The predefined success criteria (DO >30 %, pH = 7 ± 1, carbon-limited growth in the feed phase) were set in order to mimic production conditions. A similar concept has already been used for Pichia pastoris (Wenk et al. 2012), but for E. coli reported results were limited to shake flask experiments (Hemmerich et al. 2011). Based on our results we conclude that the FIT 100 % medium supplemented with the enzyme for glucose release cannot directly be used in our approach, as the DO level dropped close to zero at the end of the batch phase. Even though the oxygen depletion had no influence on further growth or the final biomass, anaerobic conditions are not favorable, especially during a strain screening approach (Büchs 2001; Zimmermann et al. 2006). To avoid the oxygen limitation at the end of the batch phase, the cultivation volume was reduced, as lower volumes allow for increased oxygen transfer in titer plates (Funke et al. 2009). Furthermore, the use of diluted medium (FIT 67 %) with lower glucose concentration and lower osmotic pressure led to a shortened lag phase. This combination met all of the defined success criteria important to mimic production conditions, and the final cell density at the end of the process was only 8 % lower than in cultivations with FIT 100 %. The transition from batch to fed-batch mode was clearly shown in experiments with different strains (HMS174, HMS174(DE3)(pET11aGFPmut3.1), BL21) (Figs. 1, 2a).
The cells switched from unlimited to carbon-limited growth controlled by the constant glucose release rate of the hydrolyzing enzyme. The continuously decreasing growth rate during this carbon-limited phase most likely corresponds to the output of a linear feed in lab-scale bioreactors. A high reproducibility was proven for the established cultivation protocol with the FIT 67 % medium, and consequently an important requirement for an HTP cultivation platform for strain screening was fulfilled. As expected, the HMS174 wild type strain reached a higher final cell density than its recombinant descendant, and the variations in pH and DO were more pronounced, but they were kept within the defined ranges. In cultivations with induced HMS174(DE3) (pET11aGFPmut3.1), product formation kinetics and GFP yield (90 mg GFP g −1 biomass) correspond well to results achieved in fully controlled fed-batch cultivation (129 mg GFP g −1 biomass) (Striedner et al. 2010). Moreover, the influence of the high-level recombinant gene expression on the final biomass yield, with a 30 % decrease in the fermenter and a 36 % decrease in the BioLector, was comparable in both systems (Fig. 2).
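The reproducibility figures quoted above (well-to-well and day-to-day CVs) amount to a simple ratio of standard deviation to mean. A minimal sketch, with made-up biomass readings standing in for six parallel wells:

```python
import numpy as np

# Coefficient of variation (CV) as used in the reproducibility study:
# CV = standard deviation / mean, reported in per cent. The readings below
# are invented placeholders, not measured data.

biomass = np.array([10.8, 11.2, 10.9, 11.5, 11.0, 11.3])  # g/L, six wells

cv = biomass.std(ddof=1) / biomass.mean() * 100  # sample std (n - 1)
print(f"well-to-well CV = {cv:.1f} %")
```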
4,453.4
2015-09-11T00:00:00.000
[ "Engineering", "Biology" ]
AN ASYMPTOTIC-PRESERVING SCHEME FOR THE SEMICONDUCTOR BOLTZMANN EQUATION WITH TWO-SCALE COLLISIONS: A SPLITTING APPROACH. We present a new asymptotic-preserving scheme for the semiconductor Boltzmann equation with two-scale collisions: a leading-order elastic collision together with a lower-order interparticle collision. When the mean free path is small, numerically solving this equation is prohibitively expensive due to the stiff collision terms. Furthermore, since the equilibrium solution is a (zero-momentum) Fermi-Dirac distribution resulting from the joint action of both collisions, the simple BGK penalization designed for the one-scale collision [10] cannot capture the correct energy-transport limit. This problem was addressed in [13], where a thresholded BGK penalization was introduced. Here we propose an alternative based on a splitting approach. It has the advantage of treating the collisions at different scales separately, hence is free of the choice of threshold and easier to implement. Formal asymptotic analysis and numerical results validate the efficiency and accuracy of the proposed scheme. 1. Introduction. The semiconductor Boltzmann equation describes the transport of charge carriers (electrons or holes) in semiconductor devices [20,6,18]. In this paper, we are interested in the following non-dimensionalized form [2,1,7]: where f (x, k, t), a function of position x, wave vector k and time t, is the electron distribution function in the conduction band of a semiconductor. The parameter α is the scaled mean free path, which can be either small (diffusive regime) or large (kinetic regime) depending on the device structure. ε(k) is the energy-band diagram given by (the parabolic band approximation is assumed for simplicity) V (x, t) is the electrostatic potential produced self-consistently through the Poisson equation: where D 0 is the square of the scaled Debye length, ρ(x, t) = ∫ R d f (x, k, t) dk is the electron density, and h(x) is the fixed doping profile that takes into account the impurities due to acceptor and donor ions in the device. The right-hand side of equation (1) models three different collision effects: 1. Q el is the elastic collision combining interactions between electrons and lattice impurities, and the elastic part of electron-phonon interactions; 2. Q ee is the interactions between electrons themselves; 3.
Q inel ph is the remaining inelastic part of electron-phonon interactions. Specifically, Q el and Q ee are given as follows (the exact form of Q inel ph will not be needed in the following discussion and thus is omitted): where δ is the Dirac measure; ε′, ε 1 , ε′ 1 , f′, f 1 , f′ 1 are short notations for ε(k′), ε(k 1 ), ε(k′ 1 ), f (x, k′, t), etc.; and η is the typical distribution function scale characterizing the degree of degeneracy of the system. The scattering matrices Φ el and Φ ee are determined by the underlying interaction laws. They satisfy the symmetry conditions: To date, there are numerous models describing the electron flow through a semiconductor device, ranging from macroscopic equations to kinetic equations, or even microscopic ones [20,6,18]. The equation (1) we consider here is especially useful to capture the hot-electron effects in nanoscale devices [19]. Since the delta function is involved in the collision operator, it is more realistic than many commonly used kinetic models that only deal with smoothed kernels [15,8,17,9]. Furthermore, it includes the electron-electron interaction, which is usually neglected under a low-density assumption [4,5] (not true in our case). Mathematically, the two-scale collisions with delta kernels bring many interesting features: the leading-order elastic collision does not have a unique null space; only when the electron-electron collision at the next level takes effect can the solution be driven to a fixed equilibrium, a (zero-momentum) Fermi-Dirac distribution. As a result, the macroscopic asymptotic limit when the mean free path goes to zero is a system of conservation laws for the electron mass and energy, the so-called energy-transport (ET) model [19]. Our goal in this paper is to design an efficient numerical scheme for the Boltzmann transport equation (1). The emphasis will be on the situation where the parameter α takes a wide range of values: from α ∼ 1 (kinetic regime) to α ≪ 1 (diffusive regime). While individual solvers for both regimes are available (or possible), it is desirable to have a unified scheme working for different α, as in practice α may not be uniformly small or large in the entire domain of interest. To make it precise, we want a numerical scheme that is consistent with the kinetic equation (1), and when α approaches zero it automatically becomes a macroscopic solver for the limiting ET system, i.e., it is asymptotic-preserving (AP) [14]. A natural thought is to use implicit rather than explicit schemes on stiff terms, but this is impractical since the collision operators are too complicated to invert (even forward computation is not an easy task). Inspired by [10], one can seek simple BGK-like operators to penalize these integral operators. However, as the collision possesses three different scales, a closer examination of the asymptotic behavior of the numerical solution reveals that the penalization should be performed wisely; otherwise it will not capture the correct asymptotic limit. The same problem was considered in our recent work [13], where we provided a solution by introducing a threshold on the penalization of the elastic collision. Here we propose an alternative based on a splitting approach. By carefully separating the different scales in equation (1), we are able to recover the correct ET limit at the discrete level. Moreover, compared to the previous idea, the new method is free of choosing a threshold and hence is easier to implement. Finally, we comment that most of the AP schemes were developed for problems
with two scales (the kinetic and the hydrodynamic ones, for example [14]). Here and in [13], the problem contains an additional, intermediate scale, and thus offers new features and challenges for AP schemes. The rest of the paper is organized as follows. In the next section we briefly review the properties of the collision operators and the energy-transport limit of equation (1). Section 3 describes our AP scheme: we first consider the spatially homogeneous case with an emphasis on the two-scale collision terms, and then include the spatial dependence to treat the full problem. In either case, the asymptotic property of the numerical solution is carefully analyzed. We present several numerical examples in Section 4 to illustrate the efficiency, accuracy, and AP properties of the new scheme. The paper is concluded in Section 5. 2. The energy-transport limit of the semiconductor Boltzmann equation. In this section, we give a brief review of the energy-transport limit of the semiconductor Boltzmann equation (1). The (formal) derivation is a combined procedure of the Hilbert expansion and the moment method, which mainly relies on the properties of the collision operators Q el and Q ee . We only list here the basic elements that are necessary to understand the numerical schemes. The complete treatment can be found in [7,13]. Proposition 1. For any "regular" test function g(ε(k)): 1. Conservation of mass and energy: where M is the zero-momentum Fermi-Dirac distribution function. The macroscopic variables z and T are the fugacity and temperature. Remark 1. The collision operator Q ee also conserves momentum: ∫ R d Q ee (f ) k dk = 0. Only when restricted to ε-dependent functions is its null space given by (5). To derive the asymptotic limit of (1), one inserts the Hilbert expansion f = f 0 + αf 1 + α 2 f 2 + . . . and collects equal powers of α: Using the properties of Q el and Q ee , one can show that An important observation here: f 0 = M is not completely determined by (6), since the null space of Q el is not unique. It is the joint action of (6) and (7) that yields the above result. Now plugging f into (1) and taking the moments ∫ R d · (1, ε) T dk on both sides, to the leading order we have Theorem 2.1. In equation (1), when α → 0, the solution f formally tends to a Fermi-Dirac distribution (5), with the position- and time-dependent fugacity z(x, t) and temperature T (x, t) satisfying the so-called energy-transport (ET) model: where the density ρ and energy E are defined as the fluxes j ρ and j E are given by with the diffusion matrices and the energy relaxation operator W inel ph is Remark 2. Unlike the classical statistics, given macroscopic quantities ρ and E, finding the corresponding Fermi-Dirac distribution (5) is not trivial. In fact, ρ and E are related to z and T via ([11]), where Γ(ν) is the Gamma function.
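The inversion in Remark 2 is easy to sketch numerically. In the snippet below the closed-form relations (10) are replaced by direct quadrature of an assumed Fermi-Dirac form M = 1/(z −1 e ε/T + η) on a 2-D k-grid; this functional form and all parameter values are illustrative stand-ins, not taken from the paper.

```python
import numpy as np
from scipy.optimize import fsolve

# Given (rho, E), recover fugacity z and temperature T of a zero-momentum
# Fermi-Dirac distribution. M = 1/(exp(eps/T)/z + eta) is an assumed
# stand-in for Eq. (5); quadrature replaces the closed-form relations (10).

ETA = 1.0                      # degeneracy parameter (assumed value)
L, N = 9.2, 64                 # k-domain half-width and grid size
k = np.linspace(-L, L, N)
K1, K2 = np.meshgrid(k, k, indexing="ij")
EPS = 0.5 * (K1**2 + K2**2)    # parabolic band approximation
DK = (k[1] - k[0])**2          # quadrature weight

def moments(z, T):
    M = 1.0 / (np.exp(np.minimum(EPS / T, 700.0)) / z + ETA)
    return M.sum() * DK, (M * EPS).sum() * DK        # (rho, E)

def invert(rho_t, E_t, guess=(0.5, 1.0)):
    res = lambda p: np.subtract(moments(*p), (rho_t, E_t))
    return fsolve(res, guess)

# Round-trip check: moments of a known (z, T) invert back to it.
rho0, E0 = moments(0.8, 1.3)
print(invert(rho0, E0))        # approximately [0.8, 1.3]
```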
3. Asymptotic-preserving schemes for the semiconductor Boltzmann equation based on operator splitting. Equation (1) contains three different scales, wherein explicit schemes become extremely expensive when α is small (the convection part requires the CFL condition ∆t = O(α∆x); the collision part requires at least ∆t = O(α 2 )). To design a scheme whose stability is immune from the restriction induced by stiff terms, one usually expects a fully implicit scheme. However, as mentioned in the Introduction, this may result in a large algebraic system that is hard to invert. Our goal is to design an efficient numerical scheme that is uniformly stable regardless of the magnitude of α: it only requires the parabolic CFL condition ∆t = O(∆x 2 ), and the implicit terms can be treated in an explicit manner. While the stiff convection term has been successfully handled via an even-odd decomposition [15,16], the two-scale collisions were treated recently in [13] using a thresholded penalization. There, a spatially dependent threshold is placed on the stiffer collision operator such that the evolution of the numerical solution resembles the Hilbert expansion at the continuous level. However, the choice of threshold depends on the accuracy of the elastic collision solver, so its optimal value is not easy to determine. Here we propose a different scheme based on a splitting approach (free of threshold). We will start with the spatially homogeneous case, whose splitting is somewhat straightforward. We then consider the inhomogeneous case, where the splitting needs to be done carefully in order to capture the correct ET limit. This is indeed inspired by [13]. To facilitate the presentation, we always make the following assumptions without further notice: 1. The inelastic collision operator Q inel ph in (1) is assumed to be zero, since it is the weakest effect and its appearance will not bring extra difficulties to temporal and spatial discretizations. 2. The scattering matrix Φ Then the elastic collision (3) can be written as where In particular, for any odd function f (k), 3. The collision operators (3) and (4) can be written symbolically as where the forms of Q + el , Q ± ee should be clear from the definition. 3.1. The spatially homogeneous case. In the spatially homogeneous case, equation (1) reduces to where f only depends on k and t. Following [13], we use a BGK-like penalization [10] to remove the stiffness on the collision terms: where M is simply chosen as the Fermi-Dirac distribution (5), since it belongs to the intersection of N (Q el ) and N (Q ee ). The essence of penalization is to choose the penalty as small as possible so that it is non-stiff or less stiff. Viewing (12), we may choose This choice is sufficient to guarantee the AP property and stability, as illustrated by our later analysis and numerical results. Now, to handle the two-scale collisions, a direct separation of scales in (13) suggests the following first-order splitting scheme (a toy numerical illustration is given below): Note that in the spatially homogeneous case ρ and E are conserved, so M = M (ε) is an absolute Maxwellian and can be obtained from the initial condition (first find ρ = ∫ R d f 0 dk, E = ∫ R d f 0 ε dk, and then invert (10) to get z and T so as to define M ).
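As a concrete (if greatly simplified) illustration of the two-stage splitting with BGK penalization, the sketch below works in one velocity dimension and replaces both integral operators by BGK stand-ins: Q el relaxes f to its even part in v (the 1-D analogue of a radially symmetric function), and Q ee relaxes f to a classical Maxwellian with matching moments instead of the Fermi-Dirac statistics of the paper. All parameter values are illustrative.

```python
import numpy as np

# Toy 1-D version of the two-stage splitting with BGK penalization. The
# penalized implicit updates reduce to explicit ones with damped coefficients.

V = np.linspace(-8, 8, 256); DV = V[1] - V[0]
ALPHA, DT, LAM_EL, BETA_EL, BETA_EE = 1e-3, 1e-2, 1.0, 1.0, 1.0

def maxwellian(rho, u, T):
    return rho / np.sqrt(2 * np.pi * T) * np.exp(-(V - u) ** 2 / (2 * T))

def moments(f):
    rho = f.sum() * DV
    u = (f * V).sum() * DV / rho
    T = (f * (V - u) ** 2).sum() * DV / rho
    return rho, u, T

def step(f):
    # Stage 1: stiff O(1/alpha^2) elastic relaxation toward the even part.
    q_el = LAM_EL * (0.5 * (f + f[::-1]) - f)
    f_star = f + (DT / ALPHA**2) * q_el / (1 + DT * BETA_EL / ALPHA**2)
    # Stage 2: O(1/alpha) interparticle relaxation toward local equilibrium.
    q_ee = maxwellian(*moments(f_star)) - f_star
    return f_star + (DT / ALPHA) * q_ee / (1 + DT * BETA_EE / ALPHA)

f = maxwellian(1.0, 1.5, 0.7) + 0.1 * np.exp(-(V - 3) ** 2)  # non-equilibrium
for n in range(50):
    f = step(f)
rho, u, T = moments(f)
print(f"rho={rho:.3f}  u={u:.2e}  T={T:.3f}")   # momentum u is driven to ~0
```

One observes the alternating behavior described in Remark 4 below: stage 1 removes the odd part (momentum decays), while stage 2 drives f toward the local equilibrium without changing its moments.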
3.1.1. Asymptotic properties of the numerical solution for the BGK model. In this subsection, we study the asymptotic behavior of the numerical solution to the scheme (15)-(16). In the BGK model, where M ee is a general Fermi-Dirac distribution defined by with z, u, T determined through the moments of f : Note that ρ, e are related to z, T via the same system (10), but with the energy E replaced by the internal energy e (in fact, e and E have the relation e = E − u 2 /2; thus in the zero-momentum case E = e, and we have E rather than e in (10)). Remark 3. The operator (17) is a simple, yet practical approximation to the complicated integral operator. The AP analysis for the full Boltzmann operator (4) is still lacking, but the analysis here sheds some light on the behavior of the solution. We still use (4) in numerical simulations. Now (15)-(16) can be rewritten as where Notice that Q + el (f n ) only depends on ε, so when α is small, (18) drives f * toward a radially symmetric function as long as |γ 1 | < 1, i.e. β el > λ el /2; M * ee computed from f * then has a decreasing momentum and thus approaches M (M and M * ee share the same density and energy); and (19). Therefore the loop containing these two stages resembles a Hilbert expansion within one time step. To be more precise, we need the following Lemma (its proof is given in the Appendix) characterizing the distance between M * ee and M . Lemma 3.1. Consider two Fermi-Dirac distribution functions M ee,1 and M ee,2 with the form and assume T 1,2 (x, t), z 1,2 (x, t) > 0 and Here Recall that the scheme (15)-(16) conserves mass and energy: If we take the moments ∫ · k dk on both sides of (18)-(19), we have where the moments of Q + el and M vanish due to their radial symmetry. Therefore, thanks to Lemma 3.1. Notice that a combination of (18)-(19) implies Applying the linear operator Q el on both sides of (22) yields where the first inequality is based on the facts and the second inequality is obtained using (21), since As a result of (21) and (23), for any m > 0 there exists an integer N such that for all n > N we have On the other hand, (18)-(19) can be written as and thus, for n > N, because of (24). Since |γ 2 | < 1, no matter what the initial condition is, f n+1 will eventually be driven to the desired Fermi-Dirac distribution M , and the convergence rate depends on the magnitude of γ 2 . Remark 4. We would like to point out that in this splitting framework, the two stages in each time step alternate; that is, the solution is driven closer to its radially symmetric counterpart (with momentum decreased) in the first stage and then slightly toward the Fermi-Dirac distribution (with momentum preserved) in the second stage, which makes the relaxation toward the final (zero-momentum) Fermi-Dirac distribution 'oscillatory'. This is different from the thresholded penalization scheme in [13], where the solution is driven all the way to a radially symmetric function at the beginning, and then toward the final Fermi-Dirac distribution after the onset of the threshold. 3.2. The spatially inhomogeneous case. We now include the spatial dependence to treat the full equation (1). Following [13], we reformulate it into a set of parity equations [15]. Denote f + = f (x, k, t), f − = f (x, −k, t); then they solve we have where we have used the fact that j is an odd function in k, and thus Q el (j) = −λ el j.
For (25)-(26), the same penalization as in the homogeneous case suggests Note that here M = M (x, ε, t) is the local Fermi-Dirac distribution. The coefficients β el and β ee are chosen the same as in (14), except that β ee can also be made spatially dependent. Rewriting the above equations as a diffusive relaxation system [16], we have where 0 ≤ θ(α) ≤ 1/α 2 is a control parameter chosen as θ(α) = min{1, 1/α 2 }. Unlike the simple separation of the O(1/α) and O(1/α 2 ) terms in the spatially homogeneous case, we propose the following first-order splitting scheme for (27)-(28): • stage 1 where M * = M n , since Q el conserves mass and energy. • stage 2 where M n+1 is computed via (10). Here ρ n+1 and E n+1 are first computed by taking the moments ∫ · (1, ε) T dk of (31), wherein the right-hand side vanishes owing to the conservation of mass and energy. 3.2.1. Asymptotic properties of the numerical solution for the BGK model. Let us again assume that Q ee (f ) = M ee (f ) − f . A reformulation of (29)-(30) leads to Rearranging (31)-(32), we have Here M * ,± ee = M ee (f * ,± ) = M ee (r * ± αj * ), and γ 1 , γ 2 are defined the same as in (20). First we point out that j n+1 (n ≥ 0) has magnitude O(1) since, from (34), (36) Notice that j * is an odd function in k; thus M ee (r * ) and M ee (r * ± αj * ) share the same mass and energy. Since r * has zero momentum, we have M ee (r * ) = M * , and therefore Lemma 3.1 implies (37) Henceforth, when α ≪ 1 (so θ = 1), we have from (36) if all functions are smooth. Similar to the homogeneous case, applying the operator Q el , there exists N such that for all n > N , Q el (r n ) = O(α). On the other hand, a reformulation of (33) and (35) yields Therefore, we have From the above discussion, we know that when α → 0, no matter what the initial condition is, immediately and after an initial transient time These are the desired AP properties (compare with (8)): plugging them into (29), (31) and taking the moments ∫ · (1, ε) T dk, we get exactly a first-order time discretization for the ET system (9). 4. Numerical examples. In this section, we present several numerical examples using our AP scheme (29)-(32). Here the space discretization follows the upwind scheme with a flux limiter, and the wave vector discretization uses the spectral method. More details can be found in [13]. In what follows, we always take k ∈ [−L k , L k ] 2 with L k = 9.2, and x ∈ [0, L x ] with L x = 1. N k is the number of points in each k direction, N x is the number of points in the x direction. We assume periodic boundary conditions in x and choose L k large enough so that f ≈ 0 at |k| = L k . The time step ∆t is chosen to satisfy only the parabolic CFL condition: ∆t = O(∆x 2 ) (independent of α). 4.1. AP property. Consider equation (1) with non-equilibrium initial data The electric field ∂ x V is set to one. We check the asymptotic property by looking at the distance between r and M at each time step, i.e., error n AP in L ∞ = max x,k 1 ,k 2 The results are gathered in Figure 1, where we observe that, unlike the thresholded penalization, which experiences a clear two-stage convergence, the solution by the splitting approach undergoes a faster convergence during the first few steps and smoothly transits to the final equilibrium. This is because the two collisions, even though they are not in the same scaling, act together all along.
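The AP diagnostic just described is straightforward to implement; a minimal sketch follows, in which the array shapes and test data are assumptions, not the paper's setup.

```python
import numpy as np

# L-infinity distance between the even part r(x, k1, k2) and the local
# equilibrium M(x, k1, k2), plus the control parameter theta(alpha) of the
# diffusive relaxation system. Shapes and test data are placeholders.

def error_ap(r, M):
    """max over (x, k1, k2) of |r - M|."""
    return np.max(np.abs(r - M))

def theta(alpha):
    """Control parameter: theta(alpha) = min(1, 1/alpha^2)."""
    return min(1.0, 1.0 / alpha**2)

rng = np.random.default_rng(0)
r = rng.random((32, 16, 16))                   # (Nx, Nk, Nk) placeholder
M = r + 1e-3 * rng.standard_normal(r.shape)    # nearby "equilibrium"
print(error_ap(r, M), theta(1e-3))
```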
4.2. 1-D n + -n-n + ballistic silicon diode. We next simulate a 1-D n + -n-n + ballistic silicon diode, which is a simple model of the channel of a MOS transistor. The initial data is taken to be For the Poisson equation (2), we choose h(x) = ρ 0 (x) = ∫ f 0 dk, D 0 = 1/1000, with boundary conditions V (0) = 0, V (L x ) = 1. The doping profile h(x) is shown in Figure 2. We consider two regimes: one is the kinetic regime with α = 1, where we compare our solution with the one obtained by the explicit scheme (forward Euler); the other is the diffusive regime with α = 1e-3, where our solution is compared with that of the ET system using the kinetic solver [13]. Good agreement is obtained in Figures 3, 4. Here the macroscopic quantities plotted are the density ρ, energy E, temperature T , fugacity z, electric field E = −∂ x V , and mean velocity ū defined as ū = j ρ /ρ. This example is borrowed directly from [13] and the results are very similar to those computed using the thresholded penalization. The last example takes a spatially varying α(x) with α 0 = 1e-3; it increases smoothly from α 0 to 1, and then suddenly drops back to α 0 . The plot of α is shown in Figure 5, which involves both kinetic and diffusive regimes. The initial condition is taken as in (38). We compare the solution using ∆x = 1/40 and ∆t = 0.2∆x 2 with a more refined solution with ∆x = 1/160 and ∆t = 0.05∆x 2 in Figures 6, 7 with η = 0.01 and 1, respectively. 5. Conclusion. We designed an asymptotic-preserving scheme for a Boltzmann-Poisson system that characterizes the transport of charge carriers in semiconductors. In a diffusive regime where the collisions are not on the same scale, the system approaches an energy-transport system. Besides the stiff convection terms, the two-scale collision operators pose new difficulties, since the simple BGK penalization fails to capture the correct limit. Inspired by the two stages in the convergence to the equilibrium, we propose a splitting scheme such that the relaxation of the numerical solution to the local equilibrium resembles the Hilbert expansion at the continuous level at each time step. Therefore, the numerical solution follows an alternating path toward the equilibrium, with one direction heading to the radially symmetric function and the other to the Fermi-Dirac distribution. The main advantage compared to the thresholded penalization in [13] is that it is free from the choice of threshold. We analyzed this asymptotic behavior using a simplified BGK model. Several numerical results confirmed the asymptotic-preserving property for any non-equilibrium initial data, as well as the uniform stability of the scheme with respect to the mean free path, from the kinetic regime to the diffusive regime.
5,157.6
2015-07-01T00:00:00.000
[ "Physics", "Computer Science" ]
Origin of Self-preservation Effect for Hydrate Decomposition: Coupling of Mass and Heat Transfer Resistances Gas hydrates can show an unexpectedly high stability at conditions out of thermodynamic equilibrium, which is called the self-preservation effect. The mechanism of the effect for methane hydrates is here investigated via molecular dynamics simulations, in which an NVT/E method is introduced to represent different levels of heat transfer resistance. Our simulations suggest a coupling between the mass transfer resistance and heat transfer resistance as the driving mechanism for the self-preservation effect. We found that the hydrate is initially melted from the interface, and then a solid-like water layer with temperature-dependent structures is formed next to the hydrate interface, which exhibits a fractal feature, followed by an increase of mass transfer resistance for the diffusion of methane from the hydrate region. Furthermore, our results indicate that heat transfer resistance is the more fundamental factor, since it facilitates the formation of the solid-like layer and hence inhibits the further dissociation of the hydrates. The self-preservation effect is found to be enhanced with the increase of pressure and particularly the decrease of temperature. A kinetic equation based on heat balance calculations is also developed to describe the self-preservation effect, which reproduces our simulation results well and provides an association between microscopic and macroscopic properties. The unexpected stability has been ascribed to a reduction of the diffusion of guest molecules within the surrounding ice layer. Takeya and Ripmeester 16 suggested that the self-preservation is related to the interaction between guest and water molecules. A two-step dissociation model was first reported by Handa 4 : the destruction of the host lattice followed by the desorption of the guest molecules. Very recently, a consecutive desorption and melting (CDM) model was proposed by Windmeier and Oellrich 17 in their theoretical study. They suggested that the desorption of guest molecules near the interface is the reason for hydrate decomposition, which leads to the melting of the superheated empty hydrate lattice. Although the above investigations provide important insights into the anomalous preservation effect, the molecular mechanism for this kind of self-preservation effect remains unclear. Given the temporal and spatial limitations of the monitoring techniques, molecular simulation becomes a powerful method of providing molecular details of hydrate decomposition. By performing molecular dynamics (MD) simulations, Báez et al. 18 and English et al. 19 found that the surface layer of hydrate is composed of partial cages with guest molecules, and that the dissociation near the hydrate-fluid interface proceeds roughly in a stepwise manner. A subsequent MD study further indicated that the decomposition occurs in a heterogeneous layer-by-layer manner 20 . Tse et al. 21 investigated the decomposition process of sI xenon hydrate with MD simulations, and they found that the melted water molecules reassemble into a solid-like structure near the hydrate surface to block its further decomposition. Considering the heat transfer during hydrate melting, Baghel et al. 22 performed micro-canonical MD simulations at temperatures higher than 273 K, and fitted their results to the heat balance equations.
In general, the unexpected hydrate stability is presumably ascribed to the formation of an ice layer on the hydrate surface that reduces the diffusion of guest molecules 4,6,15,17 , but the origin of the self-preservation effect, i.e. the molecular mechanism that causes the anomalous effect, remains unclear. The molecular simulation studies mentioned above provide microscopic insights into the decomposition of gas hydrate; however, the self-preservation effect is clearly of distinct origin, as it in fact prevents hydrates from decomposing above their melting point. Inspired by molecular simulation studies on hydrate decomposition [18][19][20][21][22] , in this work we investigated the mechanism of the self-preservation effect for methane hydrates via molecular dynamics simulations, and the simulation results suggested a coupling between the mass transfer resistance and heat transfer resistance as the driving mechanism for the self-preservation effect. Results The general feature of the self-preservation effect. To investigate the molecular origin of the self-preservation effect, in this work a series of NVT/E MD simulations at different temperatures and pressures (see Methods section for details) were performed to explore systematically what is responsible for the effect in hydrate melting. Some general characteristics found from our simulations of hydrate decomposition are discussed below with respect to the self-preservation effect. Taking the NVT simulation at 265 K and 5 atm as an example (see the initial configuration in Fig. 1), the time evolution of the H 2 O and CH 4 densities during the hydrate decomposition process is given in Fig. 2. It clearly shows that the hydrate melted to liquid water first, and then a solid-like water structure formed and grew continuously on the outer side of the liquid water as the simulation proceeded. No significant resistance to CH 4 mass transfer is found until a complete solid layer of water is formed, as indicated by the gradual accumulation of CH 4 between the undecomposed hydrate and the solid-like water layer. As the hydrate melts, the surface of the hydrate phase shrinks. Since the hydrate decomposition is not in a layer-by-layer manner, it is difficult to determine the hydrate surface exactly by using the density profile as shown in Fig. 2. Here we used a wave packet method to determine the phase interface, locating the interface through a threshold condition on the density ρ.
Figure 1. The initial configuration for a typical simulation run. The system was divided into two regions along the x-direction: the hydrate region (region A) and the rest of the system (region B). In the figure, red wireframe models represent H 2 O molecules and the hydrogen bonds formed between them, while gray spheres represent CH 4 molecules.
After the determination of the phase interface, the surface area can be easily calculated. If the melting surface of the hydrate remained flat during the whole process, the surface area would be expected to be proportional to the square of the side length. However, Fig. 3a indicates that the surface shows a fractal feature instead. In this work, we assume that the interface area can be written as A = kr D , where k is a constant, r is the side length of the cross section and D is the surface fractal dimension. D can be determined by fitting the above equation to the simulation data, and the results are shown in Fig. 3b. A fractal dimension of D ~ 2.62 is obtained for our systems with a cross section of 4 × 4 unit cells.
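The fit A = kr D reduces to a straight line in log-log coordinates, with slope D. A minimal sketch (the (r, A) pairs are synthetic placeholders, not the paper's data):

```python
import numpy as np

# Fractal-dimension fit A = k * r**D via ordinary least squares in log-log
# coordinates: the slope is D and the intercept gives log(k).

r = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])  # side length of cross section
A = 0.9 * r**2.62 * np.exp(0.02 * np.random.default_rng(1).standard_normal(r.size))

D, log_k = np.polyfit(np.log(r), np.log(A), 1)
print(f"fitted D = {D:.2f}, k = {np.exp(log_k):.2f}")  # D close to 2.62
```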
We should note that the size of the cross section, however, can affect the fractal feature (Fig. 3b), since a fractal structure is itself a nested structure across different scales 23 . The occurrence of the fractal feature facilitates the decomposition of gas hydrate owing to the obvious increase of surface area. Combining Figs. 2 and 3, one can find that D is greater than 2 although the decomposition slows down and even stops, indicating a high interfacial energy. Therefore, the formation of the solid-like layer results not only in the resistance of mass transfer, which increases the chemical potential term µ B dn B and prevents the hydrate from further dissociation, but also in a higher interfacial energy σ dA that comes from the fractal surface of the melted hydrate. To give direct evidence of the self-preservation effect, we calculated the time evolution of the hydrate melting rate (Fig. 4a). The figure clearly shows that after a short period of rapid melting, the melting slows down gradually and finally stops. The reaction order fitted from the rate equation for hydrate dissociation, in which L is the actual length and L 0 = 4 is the initial length of the hydrate crystal in the x-direction, shows a value of n ~ 2.16 under these thermodynamic conditions (see the inset of Fig. 4a). The effect of temperature and pressure. Experimental study shows that natural gas hydrates can be stored for more than 1 year at temperatures between 255 and 272 K 24 . It is worthwhile to study how temperature and pressure affect the self-preservation effect on the molecular level. During the decomposition process, the components of H 2 O in our simulations (hydrate, liquid-like, and solid-like) were analyzed, and the results are shown in a ternary diagram (Fig. 5). The figure clearly indicates that at the end of the 100 ns NVT simulations, the amount of liquid-like water increases with increasing temperature or with decreasing pressure, indicating a weak self-preservation effect under those conditions. The figure also shows that the solidification of liquid water (i.e., the formation of ice) is more sensitive to temperature than to pressure. The strong self-preservation effect at decreasing temperature or increasing pressure is partly manifested by an enhanced memory effect when hydrate melting slows down. By using the topological algorithm developed in our previous study 25 , we calculated the percentage of residual rings in the newly formed solid-like structures, which is shown in Fig. 6a. We note that the residual ring refers to a ring left from a melted hydrate cage, in which the water molecules remain connected to each other all the time although the cage is broken. This is the typical feature of the memory effect in the microscopic dynamics of hydrate melting. From Fig. 6a, we can see that both the number of solid-like water molecules and the percentage of residual rings within the solid-like layer increase with decreasing temperature, indicating the enhanced memory effect and the enhanced self-preservation effect. Moreover, ice-like water molecules in the solid-like layer were also identified by the algorithm mentioned in the Methods section, and the percentage of ice-like water molecules is given in Fig. 6b. Interestingly, the ice motif occurs only at moderate temperature. At high temperature, the melted water molecules are hard to nucleate into ice; at low temperature, however, water molecules are also difficult to transform into the ice structure because of their low mobility. In general, Fig. 6
indicates that at sufficiently high temperature the solid-like structure does not appear, and as the temperature decreases the structure gradually appears near the hydrate surface. Only in a suitable temperature range, in which the memory effect is weakened by the intermediate mobility of water molecules, is the solid structure ice-like. For a deeply cooled system, the structure is amorphous instead. Our simulation results provide a microscopic explanation for the experimental observations 7,26 that the stability of hydrate has a maximum value as temperature changes. The effect of heat transfer. Since the latent heat of hydrate decomposition is always removed immediately by the thermostat in the NVT ensemble, it is difficult to discuss the role of heat transfer in the self-preservation effect with NVT MD simulations. To illustrate the role of the resistance of heat transfer, we performed NVΘ simulations instead (see Methods section for details). We still take the system at 265 K and 5 atm as an example, and the time evolution of the components of H 2 O is plotted in Fig. 7a. With decreasing n of the NVΘ n simulations, which indicates an enhanced resistance of heat transfer, more melted water molecules show a tendency to form the solid-like structure. Thus, the resistance of heat transfer strengthens the self-preservation effect: the resistance of heat transfer facilitates the formation of the solid-like layer, which in turn provides the resistance of mass transfer for the diffusion of guest molecules. In contrast, if the resistance of heat transfer is totally removed, as shown in NVT simulations at a high temperature of 275 K and 1 atm (Fig. 7b), the hydrate will decompose completely, and only water molecules in the liquid-like structure are left. To consider the effect of mass transfer, we performed an additional NVE simulation at 265 K and 5 atm without the resistance of mass transfer by artificially controlling the released methane: we removed the methane molecules from the system immediately after they were released from the hydrate region during the decomposition process. The initial configuration we used is the same as in the other simulations, and the simulation time is also 100 ns. To make the comparison clear, the result is also given in Fig. 7a. One can find that without mass transfer resistance, the hydrate melts more thoroughly, with fewer solid-like water molecules in the system. This might be ascribed to the weakened memory effect, since the quick removal of methane promotes the dissociation of the residual rings. For the same reason, the nucleation and growth of hydrate have been found to begin with adsorption of guest molecules on the faces of pentagonal or hexagonal rings 27,28 . Consistently, Jacobson et al. 29 suggested that the adsorbed guest molecules should be considered as a part of the crystal nucleus. On the molecular level, the mass transfer resistance comes mainly from the difficulty of methane diffusion. For the self-preservation effect, the dynamics is considered as a diffusion-controlled process in which the resistance of mass transfer is necessary. But we emphasize that the fundamental origin of the self-preservation is the resistance of heat transfer, rather than that of mass transfer. The heat transfer resistance induces the formation of the solid-like layer, which further enhances the mass transfer resistance. A schematic illustration of the microscopic mechanism is given in Fig. 8. Kinetic equations for the self-preservation effect.
Based on the heat balance calculations, we developed a kinetic equation to describe the self-preservation effect, which contains contributions from both heat transfer and mass transfer. The information obtained from the simulations allows us to fit the MD results to the kinetic equation directly. We marked the hydrate region as region A and the rest of the system as region B, as shown in Fig. 1. Note that the extent of the two regions varies during a decomposition process. In an adiabatic NVE system, the latent heat of phase change (fusion of hydrate and solidification of melted water) may locally change the system temperature. Hence, a heat balance equation can be written as follows. To understand the equation intuitively, a schematic illustration of the different terms in the kinetic equation is given in Fig. 8. The two terms on the left side are the latent heat of hydrate melting in region A, which includes the hydrate and the hydrate-liquid water interface. The first term represents the heat for melting the hydrate lattice, with A 1 being the fractal surface area of the hydrate-liquid water interface, and the second term is the heat for desorption of CH 4 from the phase interface. For the second term, we assumed that the heat needed is a function of the rate of CH 4 molecules escaping from the hydrate cages, i.e., the mass transfer of methane, with k an arbitrary quantity independent of time. On the right side of the equation, the first term is the exothermic rate in region B, far from the hydrate and the hydrate-liquid water interface. In region B, we omitted the exothermic rate from methane molecules due to the much smaller number of methane molecules in the region compared to water molecules. Note that the H 2 O-CH 4 interaction at the hydrate-liquid water interface should be included (namely, dN CH 4 A /dt in eq. (1)) due to the enrichment of methane molecules near the interface. The last term of the equation is the latent heat of water solidification, with A 2 the fractal area of the interface between the solid-like water and liquid water. Note that the two latent heat terms in our model are time dependent. Only if the melting rate of hydrate and the freezing rate of liquid water reach zero after the initial melting (see Fig. 4a) do the two latent heat terms become time independent, and correspondingly the equation simplifies to a quasi-stationary equation. In this case, the self-preservation effect dominates the hydrate melting. We further ignored the difference in structural motif between the hydrate crystal and the solid-like water, and then we obtain ∆h l→s = −∆h s→l . Equation (1) can then be rearranged as equation (2), and the time evolution of the fractal surface areas A 1 (hydrate surface) and A 2 (solid-like water surface) can be obtained from the MD results. As an example, Fig. 9 gives those variables from an NVE simulation at 265 K and 5 atm. By integrating equation (3), we can obtain the temperature of the system versus time (Fig. 10). On the other hand, the temperature can also be calculated from the simulations by using the equi-partition theorem. We can see from Fig. 10 that the two temperatures agree well. We note that if we use r 2 instead of the fractal area to represent the interface, a considerable deviation is observed in the calculated temperature, confirming the fractal feature of the interface. For NVΘ systems, a heat flow term from the thermostat should be added on the right-hand side of the heat balance equation (1).
Since the energy difference between the initial value E_0 and the current value E of the system can be regarded as the thermal energy supplied from the environment, the heat flow term can be written in terms of (E − E_0), where A is the heat exchange area and equals r^2. Hence, the term λ_2(t) in equation (2) changes accordingly. Given the time evolution of the total energy of the system (Fig. 11), the temperature of the NVΘ systems can again be obtained from equation (3) (see also Fig. 10). The figure shows that the temperature from the NVT system is in better agreement than the others. A possible interpretation of the deviation is the temperature gradient that exists in the NVΘ systems: our kinetic equation does not include the spatial distribution of temperature, and a more accurate equation should include a contribution from ∇T.
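To illustrate how a kinetic equation of this kind can be integrated numerically to recover T(t) from the MD observables, the following is a minimal sketch. The functional form of rhs(), its coefficient names and the toy time series are all hypothetical placeholders, since the full equations (1)-(3) are not reproduced here; only the explicit-Euler integration loop reflects the procedure described above.

```python
import numpy as np

def rhs(A1, A2, dN_dt, c_melt=1.0, c_des=0.5, c_sol=0.8, C_heat=100.0):
    """Assumed right-hand side of a heat-balance ODE in the spirit of eq. (3):
    dT/dt = (heat released by water solidification
             - heat consumed by hydrate melting and CH4 desorption) / C.
    Coefficients lump together latent heats, heat capacity and the constant k."""
    q_melt = c_melt * A1           # melting of the hydrate lattice (region A)
    q_des = c_des * dN_dt          # desorption of CH4 from the interface
    q_sol = c_sol * A2             # solidification of melted water
    return (q_sol - q_melt - q_des) / C_heat

# Toy time series standing in for the MD observables of Fig. 9.
t = np.linspace(0.0, 100.0, 1001)                  # time, ns
A1 = 50.0 * np.exp(-t / 30.0)                      # shrinking hydrate surface
A2 = 40.0 * (1.0 - np.exp(-t / 20.0))              # growing solid-like layer
dN_dt = 5.0 * np.exp(-t / 10.0)                    # CH4 release rate

T = np.empty_like(t)
T[0] = 265.0                                       # initial temperature, K
for i in range(1, len(t)):                         # explicit Euler integration
    dt = t[i] - t[i - 1]
    T[i] = T[i - 1] + dt * rhs(A1[i - 1], A2[i - 1], dN_dt[i - 1])

print(f"final temperature: {T[-1]:.2f} K")         # compare with Fig. 10
```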
Discussion

In this work we investigated the molecular mechanism of the self-preservation effect with a combined NVT/E MD method, designed in particular to include different levels of heat transfer resistance. Our simulations indicate that the methane hydrate first melts at the interface; a solid-like structure then forms and grows continuously from the melted liquid water, followed by an increase of the mass transfer resistance for the diffusion of CH4 molecules. This observation agrees with the suggestion from experimental studies 30. More importantly, our simulations indicate a coupling between the mass transfer resistance and the heat transfer resistance as the driving mechanism of the self-preservation effect. We found that the phase interface of the melting hydrate exhibits fractal characteristics, with a fractal dimension greater than the topological dimension of 2. Fractal characteristics of the phase interface of melting hydrate have also been reported experimentally in porous media 23. Furthermore, our simulations show that the fractal dimension of the interface decreases with increasing cross section (Fig. 3b), indicating that a larger hydrate crystal has a lower interfacial energy and hence shows enhanced stability. This agrees with the stability study of CO2 hydrate 13, which suggested that the hydrate is more stable when it has a larger particle size. We show that the self-preservation effect is enhanced with decreasing temperature or increasing pressure, and is more sensitive to temperature. To compare directly with experimental results, we calculated the half-time of hydrate dissociation, t_1/2 (see Fig. 4a); the results are given in Fig. 4b. The variation of t_1/2 with T and p is in good agreement with experimental studies 31, in which the authors similarly observed the anomalous preservation effect at 242-271 K. Note that the order of magnitude of t_1/2 in our work is obviously smaller than in the experiments, since we only considered the melting of hydrate over a thickness of two lattices. According to our simulations, the preservation effect is caused by the combined effects of heat transfer and mass transfer. Moreover, our simulations indicate that the ice-like motif is observed only in a moderate temperature range (Fig. 6), as demonstrated by experimental studies 7,26,32.

[Figure 11. Time evolution of the total energy of the system at 265 K and 5 atm. To facilitate comparison, the energy of the system at t = 0 ns is scaled to 0 kJ/mol.]

Note that at temperatures below 200 K, the Ic ice structure observed in experimental measurements 14,33,34 was not observed here; an amorphous structure forms for the new solid layer instead. This may be partly due to the much shorter time scale covered by our simulations. Our simulations also suggest a coupling between the mass transfer resistance and the heat transfer resistance that induces the self-preservation effect. Furthermore, the heat transfer resistance is a more fundamental factor controlling the self-preservation effect than the mass transfer resistance. This is partly because the solidification of the melted water, the first step in the occurrence of the self-preservation effect, is caused mainly by the release of latent heat rather than by mass transfer. The heat transfer resistance promotes the formation of the solid-like layer, which in turn enhances the resistance of mass transfer. We note that the solid-like layer forms from the melted liquid water rather than directly from the hydrate, because of the considerably higher energy barrier for the direct hydrate-to-ice transition (Figs. 2 and 5). This observation is in good agreement with the experimental study 35, in which the authors found that the hydrate always melts to liquid water before solidification and never undergoes a solid-solid phase transformation. Finally, based on the heat balance calculations we developed a kinetic equation, containing contributions from both heat transfer and mass transfer, to describe the self-preservation effect. The equation provides a way to connect the molecular-level knowledge with the macroscopic information.

Methods

Model and simulation procedure. The MD simulations were performed with LAMMPS 36. To obtain the initial configuration, we first prepared a perfect sI CH4 hydrate and placed it into a simulation box of 4 × 4 × 4 unit cells (1 unit cell = 1.245 nm). For the structure of the hydrate, the positions of the oxygen atoms of H2O and the carbon atoms of CH4 were taken from the space-group data of the crystal structure 37. This hydrate crystal was relaxed for 2 ns in an NpT run at 260 K and 5 atm. Then we replicated the crystal along the x-direction to obtain an 8 × 4 × 4 repetition (5888 H2O molecules and 1024 CH4 molecules in total), and a vacuum layer of 2 × 4 × 4 unit cells was introduced along the x-direction at each of the two hydrate interfaces, as shown in Fig. 1. To investigate the effect of pressure on hydrate decomposition, we added a certain number of CH4 molecules into the vacuum layer to reach the specified pressure of the CH4 gas (e.g. 5 atm). The amounts of CH4 added are summarized in Table 1 and were determined from the equation of state developed by Vennix and Kobayashi 38. In our simulations, the TIP4P water model 39 was used with its rigidity enforced by the SHAKE algorithm 40, while a single-point model 41 was used for the CH4 molecules. The unlike-pair Lennard-Jones parameters were obtained with the Lorentz-Berthelot mixing rule. A cutoff radius of 12 Å was used for the short-ranged (Lennard-Jones) interactions, while the long-ranged (Coulomb) interactions were evaluated with the PPPM algorithm 42. Periodic boundary conditions were imposed in all three Cartesian directions.
Since hydrate decomposition involves dissociation and rearrangement of hydrogen bonds at the microscopic scale, it is usually not an isothermal process. Hydrate decomposition is endothermic 43, which causes a temperature drop 20 during melting. If heat is supplied from the outside while the hydrate melts, however, the temperature drop is weakened. To investigate the effect of heat transfer on the self-preservation effect, we performed NVT simulations (closed system), NVE simulations (isolated system) and several mixed NVT/E simulations at different temperature and pressure conditions (see Table 1).

[Table 1. Numbers of CH4 molecules added at each temperature; the four value columns correspond to the four simulated pressure conditions, in increasing order:]
T (K)  |    |    |    |
200    |  5 |  - |  - |  -
250    |  4 | 18 | 37 |  -
260    |  3 | 18 | 36 |  -
265    |  3 | 17 | 35 |  -
270    |  3 | 17 | 34 | 71
275    |  3 | 17 | 34 | 69

The NVT simulations do not capture heat transfer resistance, since the heat needed for hydrate decomposition is supplied immediately; in other words, the thermal effect of the phase transition is not considered in the NVT ensemble. On the contrary, NVE simulations represent the maximal resistance to heat transfer. In this work, NVT simulations were performed with the Nosé-Hoover algorithm 44 and a relaxation parameter of 0.1 ps, while NVE simulations were performed directly. For the mixed NVT/E simulations, a series of NVT and NVE runs were conducted alternately. By adjusting the ratio of the step numbers of the two alternating runs, we obtain different resistances to heat transfer. The numbers of steps used for the different mixed NVT/E runs are listed in Table 2. To simplify the description, we introduce the uniform symbol NVΘ_n for the different ensembles, in which the subscript n indicates the ratio of NVT to NVE steps within one cycle. Thus an NVT simulation can be written as NVΘ_∞; through a series of mixed NVT/E runs marked as NVΘ_4, NVΘ_1 and NVΘ_0.25 with increasing resistance to heat transfer, we finally reach NVΘ_0 for NVE. As n increases, the resistance of heat transfer decreases. A timestep of 1 fs was applied in all our simulations, and all NVΘ simulations were run on a timescale of 100 ns.
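The alternating NVT/NVE protocol can be driven, for example, through the LAMMPS Python interface. The sketch below is a minimal illustration under the assumption that the system, force field and initial configuration have already been set up in an input script named in.hydrate; the file name and the step counts are placeholders, and only the alternation pattern itself follows the NVΘ_n definition above.

```python
from lammps import lammps

def run_nvtheta(n_ratio, cycle_steps=10000, n_cycles=100, T=265.0):
    """Alternate NVT and NVE segments so that the NVT:NVE step ratio
    within one cycle equals n_ratio (the NVTheta_n protocol of the text)."""
    nvt_steps = int(cycle_steps * n_ratio / (1.0 + n_ratio))
    nve_steps = cycle_steps - nvt_steps

    lmp = lammps()
    lmp.file("in.hydrate")            # placeholder system-setup script

    for _ in range(n_cycles):
        if nvt_steps > 0:             # thermostatted segment (heat supplied)
            # 100 fs = 0.1 ps damping, assuming LAMMPS 'real' units
            lmp.command(f"fix integ all nvt temp {T} {T} 100.0")
            lmp.command(f"run {nvt_steps}")
            lmp.command("unfix integ")
        if nve_steps > 0:             # adiabatic segment (no heat supplied)
            lmp.command("fix integ all nve")
            lmp.command(f"run {nve_steps}")
            lmp.command("unfix integ")

# NVTheta_4, NVTheta_1 and NVTheta_0.25 correspond to decreasing heat supply:
run_nvtheta(n_ratio=1.0)
```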
Identification of the ice. Because the structure of the solid-like layer formed at the hydrate surface is an important issue, a meticulous analysis scheme is required. Although the widely used F4 and Q6 order parameters 45,46 can identify ice clusters within their environments, they provide little topological information about the ice structure. Here we introduce a vertex perception algorithm to obtain ice domains. In ice crystals, each water molecule is tetra-coordinated to four other water molecules by hydrogen bonds, which means each water vertex is shared by four polyhedral cages. For both the Ih and Ic structures, one water molecule may be involved in (i) one 6^3 cage (all three hexagons in chair conformations) and three 6^5 cages (two hexagons in chair and three in boat conformations); or, equivalently, in (ii) six groups of twelve six-membered rings in total (three boat-boat groups and three boat-chair groups), as shown in Fig. 12. In the present work, the melted water molecules are first classified as liquid-like or solid-like by their mean square displacement (MSD); a minimal sketch of this classification step is given at the end of this section. Then we use the ring perception algorithm 47 and the cage identification algorithm 48 to search for the hexagonal structures and the cage structures within the solid-like water molecules. Unlike in the hydrate phase, the cages in ice are concave polyhedra. If a water molecule satisfies either of the above conditions (i) and (ii), it is considered an ice-like water molecule.

A wave packet method to determine the phase interface. Since hydrate melting does not take place in a layer-by-layer manner, we developed a wave packet method to determine the phase interface. The advantage of this method is that, compared with the x-density profile method, it gives more detail about the shape of the interface.
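As referenced above, a minimal sketch of the first step of the ice-identification procedure, the MSD-based split of melted water molecules into liquid-like and solid-like, is given below. The window length and the threshold are illustrative assumptions; in practice they would be calibrated against bulk ice and bulk water references.

```python
import numpy as np

def classify_by_msd(positions, window, threshold):
    """Split water molecules into solid-like and liquid-like by mobility.

    positions : array of shape (n_frames, n_molecules, 3), unwrapped oxygen
                coordinates (unwrapped so diffusion is not hidden by PBC).
    window    : number of frames over which displacement is accumulated.
    threshold : squared displacement (e.g. in A^2) separating the classes;
                an assumed, system-specific parameter.
    Uses the squared displacement over the window as a simple proxy for
    the MSD. Returns a boolean array, True for solid-like molecules."""
    ref = positions[-window]                 # start of the time window
    disp = positions[-1] - ref               # displacement per molecule
    sq_disp = np.sum(disp**2, axis=-1)       # squared displacement
    return sq_disp < threshold               # slow movers are solid-like

# Toy example: 200 molecules, 50 frames of random-walk trajectories,
# half with small steps (solid-like), half with large steps (liquid-like).
rng = np.random.default_rng(0)
steps = np.concatenate([0.05 * rng.standard_normal((50, 100, 3)),
                        0.50 * rng.standard_normal((50, 100, 3))], axis=1)
traj = np.cumsum(steps, axis=0)
solid = classify_by_msd(traj, window=50, threshold=1.0)
print(f"solid-like fraction: {solid.mean():.2f}")
```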
Loss Investigation for Multiphase Induction Machine under Open-Circuit Fault Using Field–Circuit Coupling Finite Element Method

Abstract: This paper focuses on loss estimation for a multiphase induction machine (IM) operating under fault-tolerant conditions through the field–circuit coupling finite element method (FEM). Both one-phase and two-phase open-circuit faults of a seven-phase IM are studied, and different spatial positions of the fault phases are taken into consideration. The magnitudes and phase angles of the residual phase currents are deduced based on the principles of equal magnitudes of the residual phase currents and an unchanged fundamental magnetomotive force (MMF). The coupling between the fundamental and harmonic magnetic-field planes is analyzed. Then, time-stepping electromagnetic field calculations of the seven-phase IM are carried out in the commercial Simplorer–Maxwell environment. The transient and steady-state performance under both healthy and fault conditions is obtained based on the rotor field-oriented control (RFOC) strategy. The Joule loss and iron loss are calculated for torque step and slope responses. A seven-phase motor drive platform is established to verify the numerical results. The proposed method is effective for predicting the losses and for designing a reasonable operating range for multiphase IMs operating under fault-tolerant conditions with thermal balance taken into account.

Introduction

Multiphase induction machines show remarkable advantages in industrial applications requiring high power, high reliability and high torque density, such as vessel electric propulsion, more-electric aircraft, locomotive traction and electric vehicles [1,2]. Their inherent features of power splitting, better fault tolerance and additional degrees of control freedom have driven growing research on multiphase IMs. Abundant efforts have been devoted to designing, modelling and driving multiphase machines for several decades, and recent research supports their future widespread application [3-7]. There has been considerable research on control strategies [8-14] and fault detection [15-17] for open-circuit faults; however, there are few studies on loss and efficiency under fault-tolerant conditions. Closed-loop control of the phase currents is essential under fault-tolerant conditions because of the asymmetry of the electrical and magnetic structure, so the field-circuit coupling FEM is adopted for this investigation. The field-circuit coupling FEM has been successfully employed to solve motor problems [18-22]. A code for field-circuit coupled finite element (FE) calculation was presented to compute the transient performance of a single-phase IM, with results compared against the commercial software FLUX [18]. The FEM with external circuit coupling has also been used to estimate stray loss [21]. This paper focuses on loss calculation and estimation for a multiphase IM operating under open-circuit fault-tolerant conditions. Numerical and experimental work is carried out to assess the iron loss, rotor Joule loss and total loss. The residual phase currents for various open-circuit fault conditions are deduced based on the principles of an unchanged fundamental MMF and equal amplitudes of the residual phase currents.
The torque step and slope responses are simulated with the field-circuit FEM for a seven-phase IM under both healthy and fault-tolerant conditions in this paper. The iron loss is calculated based on the classic iron-loss model, and the time harmonics caused by the inverter are taken into consideration [22-26]. The stator and rotor Joule losses are calculated from the RMS values of the stator and rotor currents. The spatial MMF vectors produced by the coupling between the fundamental and harmonic planes are analyzed to explain the increased rotor Joule loss. The total increase in loss and the efficiencies under fault-tolerant conditions obtained by the numerical method are compared with experimental results. A reasonable load-torque operating range is provided for the seven-phase IM under different fault-tolerant conditions considering thermal balance.

Residual Phase Currents Reconstitution

Unlike common three-phase machines, multiphase machines can still deliver smooth and steady torque under open-circuit fault conditions. The principle for the reconstitution of the residual phase currents is to keep the MMF of the fundamental plane consistent with the healthy condition, so that smooth electromagnetic torque can be expected on the fundamental plane. Both one-phase and two-phase open-circuit fault conditions of a seven-phase IM are discussed in this section. Several relative spatial positions of the fault phases are taken into account, as described in Figure 1. First, the seven phase currents can be written as Equation (1), where i_1, i_2, ..., i_7 are the seven phase currents, k_1, k_2, ..., k_7 are the amplitudes of the phase currents and ω is the angular frequency. The constraint for a circular magnetomotive force (MMF) can be expressed as Equation (2), where a = e^{j(2π/7)} = cos(2π/7) + j sin(2π/7). The coordinate components of the MMF, obtained by simplifying Equation (2), are described by Equation (3). Equation (3) can be used as one of the constraints when reconstructing the fault-tolerant currents. The constraint of constant amplitudes of the residual phase currents is also employed in this paper, given as Equation (5). The magnitudes and phase positions of the residual phase currents can then be deduced from Equations (1)-(5); the optimized results are shown in per-unit form in Table 1, from which it can be seen that the phase positions become unsymmetrical under fault-tolerant conditions while the amplitudes of the residual phase currents remain equal. A numerical sketch of this reconstitution idea is given below.
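The reconstitution can be reproduced numerically: enforce the healthy fundamental MMF, suppress the backward-rotating component and require equal residual amplitudes, then solve for the amplitude and phase angles. The sketch below is a simplified illustration of that idea for one open phase (phase 1), not the paper's exact derivation; the least-squares formulation and the starting values are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

M = 7                                    # number of phases
a = np.exp(2j * np.pi / M)               # spatial operator of Equation (2)
healthy = np.exp(-1j * 2 * np.pi * np.arange(M) / M)   # unit current phasors
open_ph = [0]                            # open-circuited phase (phase 1)
active = [v for v in range(M) if v not in open_ph]

F_fwd = np.sum(healthy * a ** np.arange(M))   # healthy fundamental MMF

def residual(x):
    """x = [k, phi_2, ..., phi_7]: common amplitude and phase angles."""
    k, phi = x[0], x[1:]
    ph = np.zeros(M, dtype=complex)
    ph[active] = k * np.exp(1j * phi)
    fwd = np.sum(ph * a ** np.arange(M)) - F_fwd      # keep fundamental MMF
    bwd = np.sum(np.conj(ph) * a ** np.arange(M))     # suppress backward MMF
    return [fwd.real, fwd.imag, bwd.real, bwd.imag]

x0 = np.concatenate([[1.0], np.angle(healthy)[active]])  # start from healthy
sol = least_squares(residual, x0)                        # underdetermined fit
print(f"residual-phase amplitude (p.u.): {sol.x[0]:.3f}")
print("phase angles (deg):", np.degrees(sol.x[1:]).round(1))
```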
Coupling between the Fundamental and Harmonic Planes

The healthy condition is symmetrical in both the spatial and the electrical quantities, so the fundamental and harmonic planes are independent of each other. Under fault conditions, however, the fundamental and harmonic planes become coupled because of the asymmetrical fault-tolerant currents. Asynchronously rotating MMFs are produced by the interaction between the asymmetrical fundamental currents and the spatial winding harmonics. The seven-phase synthetic magnetomotive force can be expressed as Equation (6), where µ = 1, 3, 5 is the order of the spatial harmonics, v is the phase number (cf. Table 1), θ is the spatial electrical angle, N_µ = N_φ/(µπ) is the magnitude of the winding function and N_φ is the number of series turns per phase. The coupled harmonic MMF components for the 3rd and 5th harmonic planes are listed in the first column of Table 2, and their magnitudes are listed in the other columns. The speeds of the harmonic magnetic fields are one third and one fifth of the fundamental one, in the forward and reverse directions respectively.

Field-Circuit Coupling FE Model

The field-circuit coupling FEM model is shown in Figure 2; it is composed of a seven-phase half-bridge inverter and a seven-phase IM. A redesigned seven-phase IM with concentrated windings is used as the prototype for the finite element calculation and the subsequent experimental validation. The specifications of the seven-phase IM are listed in Table 3. The fault-tolerant operation of the prototype is based on the RFOC strategy, implemented as a C program model; its block diagram is drawn in Figure 3. The traditional voltage-source supply mode is infeasible under fault-tolerant conditions because of the asymmetry of the residual phase currents, so a closed-loop current control strategy is adopted in this simulation. A common dual closed-loop control is employed, with a hysteresis controller in the current loop for tracking the reconstructed phase currents and a traditional proportional-integral (PI) controller in the speed loop. The constant magnetizing current is used as the d-axis reference current, and the q-axis reference current, the output of the speed loop, is calculated from the electromagnetic torque equation given by Equation (7), where T_e is the electromagnetic torque, p the number of pole pairs, L_m the magnetizing inductance, L_r the rotor self-inductance, i_sd the d-axis stator current and i_sq the q-axis stator current. The reference fault-tolerant phase currents are calculated from Table 1, and the IGBT signals are then obtained from the reference and feedback currents through the hysteresis current controller. The rated speed and rated torque of the prototype motor are 1460 rpm and 30 Nm. The transient field equation, the field-circuit coupling equation and the mechanical equation are given as Equations (8)-(10), where µ, A_z, J_s and σ are, respectively, the magnetic permeability, the z component of the magnetic vector potential, the stator current density and the rotor bar conductivity; ψ_s, L_e, U_s, i_s and R_s are, respectively, the stator flux, the stator end-leakage inductance, the stator terminal voltage, the stator phase current and the stator winding resistance; and J, D, T_e, T_L and ω_m are, respectively, the moment of inertia, the damping coefficient, the electromagnetic torque, the load torque and the mechanical angular velocity.

Transient and Steady Performance

The simulation results of the field-circuit coupling FEM are presented in this section. Both the step and the slope responses of the load torque are simulated for the seven-phase IM under healthy and fault-tolerant circumstances. For the load step process, the seven-phase IM is first operated at the rated speed, and the load torque steps from 10 Nm to 20 Nm. For the slope response, the load torque is increased gradually from 0 Nm to 30 Nm. The stator phase currents under the torque step and slope responses are shown in Figures 4 and 5, respectively, where the corresponding load conditions are noted.
The transient processes from the healthy condition to the fault conditions are depicted in the zoomed areas of Figure 4. The currents of the fault phases are forced to zero when the machine enters the fault-tolerant condition, and the amplitudes of the residual phase currents increase compared with the healthy condition. The current waveforms for the gradually increasing torque process under both healthy and fault conditions are magnified in the zoomed areas of Figure 5.

Loss Calculation and Analysis

The loss increases when the machine operates under fault-tolerant conditions because of the increased RMS currents and the additional harmonic MMFs, so derating should be considered to limit the loss and the temperature rise; the calculation of the losses under fault conditions is therefore of great concern. The total losses of an induction machine include the iron loss, stator copper loss, rotor Joule loss and stray loss, of which the iron loss and rotor Joule loss are calculated in detail in this section. As the stator currents are closed-loop controlled, the increased stator copper loss can be obtained directly from Table 1. The stray loss is taken as 2% of the output power. The classic iron-loss model is used in this paper, given as Equation (11), where k_h, k_e and k_a are the hysteresis, classical eddy-current and excess loss coefficients, obtained from the manufacturer's iron-loss curves; f is the supply frequency and B is the amplitude of the flux density. The FE results for the iron loss and rotor Joule loss under both healthy and fault-tolerant conditions at 20 Nm load torque are shown in Figures 6 and 7. Both the iron loss and the rotor Joule loss increase under the fault conditions compared with the healthy condition. The iron loss increases only slightly, because the amplitude of the fundamental magnetic field is kept unchanged and the frequencies of the harmonic magnetic fields are relatively low (one third or one fifth of the fundamental). The rotor Joule loss, in contrast, increases remarkably owing to the asynchronous harmonic magnetic fields with elliptic trajectories. Furthermore, the rotor Joule loss oscillates in time, because the periodic harmonic loss is superposed on the fundamental loss; the steady-state values are summarized in Table 4. The rotor Joule loss for the torque slope response is also calculated, and the numerical results for the healthy and fault conditions are shown in the right part of Figure 8, from which the rotor Joule loss curves as a function of load torque can be obtained. The pulsation of the rotor Joule loss becomes larger as the torque increases. The trajectories of the MMF spatial vectors on both the fundamental and the harmonic planes are drawn in the left part of Figure 8 to further illustrate the increased loss. The MMF vector trajectories on the fundamental plane are unit circles for the healthy condition and for each fault-tolerant condition, satisfying the principle of an unchanged fundamental MMF. There are no harmonic MMFs under the healthy condition, while the MMF vector trajectories on the harmonic planes are ellipses under fault conditions, caused by the reconstructed stator currents mapping onto the spatial harmonic planes. The increased iron loss and rotor Joule loss under fault-tolerant conditions caused by the harmonic MMFs can be regarded as stray loss.
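The per-frequency iron-loss evaluation of Equation (11) can be sketched as below. The exponent choices follow the common Bertotti-type form (hysteresis ∝ fB^2, eddy ∝ (fB)^2, excess ∝ (fB)^1.5); since the paper does not reproduce the exponents, they are assumptions here, as are the coefficient values, which would normally be fitted to the manufacturer's curves.

```python
def iron_loss(f, B, k_h=0.018, k_e=8.0e-5, k_a=6.0e-4):
    """Classic three-term iron-loss density (W/kg), Bertotti-type form
    assumed: hysteresis + classical eddy-current + excess loss.
    f : supply frequency in Hz; B : flux-density amplitude in T.
    Coefficients are illustrative placeholders."""
    p_hyst = k_h * f * B**2            # hysteresis loss
    p_eddy = k_e * (f * B)**2          # classical eddy-current loss
    p_excess = k_a * (f * B)**1.5      # excess (anomalous) loss
    return p_hyst + p_eddy + p_excess

# Fundamental plus the low-frequency harmonic-plane fields noted above
# (one third and one fifth of the fundamental frequency, small amplitudes):
f1 = 50.0
for name, f, B in [("fundamental", f1, 1.5),
                   ("3rd-plane field", f1 / 3.0, 0.10),
                   ("5th-plane field", f1 / 5.0, 0.05)]:
    print(f"{name}: {iron_loss(f, B):.3f} W/kg")
```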
The rotor Joule loss and the total loss deteriorate most seriously for the fault-tolerant condition with two adjacent phases open-circuited. The rotor Joule losses obtained by the FEM for the fault-tolerant conditions and the healthy condition under the torque slope response are shown as solid lines in Figure 9. The distinct losses are caused by the different fault conditions, which induce different spatial MMFs.

Experimental Validation and Analysis

The seven-phase IM drive system is depicted in Figure 10; it is composed of a seven-phase IM with concentrated windings, a torque sensor, a dSPACE controller, a multiphase inverter, a DC power supply and an alternating-current servo load. Power analyzers with 12 channels in total are used for the efficiency test. The multiphase inverter is composed of IGBT power modules, and the nine-phase half-bridge modules share a common bus voltage. The control signals from the dSPACE controller are transferred through optical fiber to drive the IGBT switches. The operating procedure is built in the MATLAB/Simulink environment and then compiled for the dSPACE processor. The host computer serves as the real-time control interface and displays the real-time data. The load torque is supplied by a servo machine operating in constant-torque mode. Two six-channel power analyzers are used to measure the efficiency of the seven-phase machine: the input power is obtained from the phase voltages and phase currents, while the output power is obtained from the speed and torque. A close-up of the seven-phase prototype and the load platform is shown in Figure 11. The machine windings are Y-connected, with seven phase lead-out lines. The winding distribution of the seven-phase IM is shown in Figure 12. The fault-tolerant control strategy designed for the FE calculation is also employed in the experimental validation (cf. Figure 3). A no-load test is carried out to obtain the mechanical loss, and the result is incorporated in the FE calculations. The load torque is reduced under the fault-tolerant conditions, in comparison with the healthy condition, to ensure that the machine does not overheat during long-term running. The load torque is imposed after the multiphase IM has switched from the healthy condition to the fault-tolerant conditions. The process for the torque step response is shown in Figure 13: the load torque is increased from 0 Nm to 10 Nm and then decreased back to 0 Nm. The transient and steady-state experimental seven-phase stator currents for the one-phase and two-phase open-circuit fault-tolerant operations under 10 Nm load torque at 1500 rpm are shown in Figure 14. The experimental results are compared with the FE curves using scatter diagrams to verify the numerically calculated results indirectly. The efficiencies obtained from the FEM and the experiment are compared in Figure 15, where the extra stray loss is taken as 2% of the output power in the FE results. The efficiency decreases by about 10% for the most serious condition, Fault 2, which shows that the loss is severe for this fault-tolerant condition. A slight discrepancy exists between the simulated and experimental results, which may be caused by current harmonics present in the experiment but not in the simulation; nevertheless, the simulation and experimental results show similar tendencies.
The load-torque limits for the different fault-tolerant conditions at different speeds are shown in Table 5, where the total loss under the healthy condition at the rated speed is taken as the limit. For the most serious condition, Fault 2, the load torque should be reduced to about 60% of the rated value to keep the total loss, and hence the thermal balance, within bounds.

Conclusions

The residual phase currents are reconstructed based on the principles of an unchanged fundamental MMF and equal amplitudes of the phase currents. The field-circuit coupling FE model is successfully established, and numerical calculations of the fault-tolerant operations are performed. The basic iron loss and rotor Joule loss are consistent between the healthy and fault conditions on account of the unchanged fundamental magnetic field. The coupling between the fundamental and harmonic planes under fault-tolerant conditions is analyzed, and the harmonic MMF vectors are deduced. The increased loss under fault-tolerant conditions is caused mainly by the asynchronously rotating harmonic MMFs and the increased root-mean-square values of the reconstructed stator currents. The increased rotor Joule loss, which depends on the rotational directions and amplitudes of the asynchronously rotating MMFs, contributes the most to the total increase in loss. The increase in rotor Joule loss for the two-phase fault conditions with different relative positions differs because of the different spatial MMF vectors reflected on the harmonic planes. Derating for the different fault-tolerant conditions can be designed according to the estimated losses, so that utilization and reliability of the faulted machine can be achieved simultaneously.
Advances in neural architecture search

ABSTRACT

Automated machine learning (AutoML) has achieved remarkable success in automating the non-trivial process of designing machine learning models. Among the focal areas of AutoML, neural architecture search (NAS) stands out, aiming to systematically explore the complex architecture space and discover optimal neural architecture configurations without intensive manual intervention. NAS has demonstrated dramatic performance improvements across a large number of real-world tasks. The core components of NAS methodologies normally include (i) defining an appropriate search space, (ii) designing the right search strategy and (iii) developing an effective evaluation mechanism. Although early NAS endeavors were characterized by groundbreaking architecture designs, the exorbitant computational demands they imposed prompted a shift towards more efficient paradigms such as weight sharing and evaluation estimation. Concurrently, the introduction of specialized benchmarks has paved the way for standardized comparisons of NAS techniques. Notably, the adaptability of NAS is evidenced by its capability of extending to diverse datasets, including graphs, tabular data and videos, each of which requires a tailored configuration. This paper delves into the multifaceted aspects of NAS, elaborating on its recent advances, applications, tools, benchmarks and prospective research directions.

INTRODUCTION

Automated machine learning (AutoML) aims to automate the process of developing and deploying machine learning models [1-3]. Since AutoML is able to achieve or surpass human-level performance with little human guidance, it has gained tremendous attention and has been widely applied in numerous areas. A complete AutoML pipeline involves various stages of machine learning (ML), including data preparation, feature engineering, model configuration and performance evaluation. The most widely studied research interests in AutoML are hyperparameter optimization (HPO) [4-8] and neural architecture search (NAS) [9-15]: the former is a well-documented classic topic focusing on hyperparameter configuration, while the latter is a more recent topic concentrating on architecture customization. In this paper, we mainly explore the development and advancement of NAS, which has long been a challenging and trending topic.

In general, NAS plays a crucial role in discovering optimal neural architectures automatically, saving human effort on manual design. Since it was first proposed by Zoph et al. [16], NAS has achieved excellent performance on various tasks, including image classification [17-19], object detection [20-22], semantic segmentation [23-26], text representation [27], graph learning [28,29], neural machine translation [30] and language modeling [16,31,32].
NAS methods can generally be classified by tailored designs along the following aspects [9]: (i) search space, (ii) search strategy and (iii) evaluation strategy. In particular, the search space can be further categorized into two types: (1) macro space for the entire network and (2) micro space for modules or blocks of the neural network, where the choice of operators is determined by the given data. For example, a convolution operator may be best suited to image data, an attention operator can be the best fit for sequence data, and an aggregation operator tends to find its most appropriate position for graph data. The search strategy is used to discover the optimal architecture within the search space, and needs to balance effectiveness and efficiency simultaneously. Take the following two representative approaches as examples: reinforcement learning (RL) chooses operators based on the potential performance gain, while the evolutionary algorithm (EA) selects architectures by simulating the process of biological evolution. The evaluation strategy decides how to estimate the performance of different architectures. For instance, we can use multiple trials of training from scratch to assess architectures stably and accurately at the cost of a huge amount of computation, or employ the family of supernet-based methods to estimate performance approximately with greatly reduced training resources.

The research focus of NAS has been constantly changing and developing over time. In the beginning, the emphasis was on automatic architecture design and outstanding performance [16]. With the help of RL, the family of NAS approaches manages to find good architectures for various multimedia data, including images [33], texts [34], videos [35] and tabular data [36]. However, the computational cost of NAS is extremely high in most scenarios, which motivated later works to reduce the cost, resulting in the emergence of different strategies such as weight sharing [31,32] and evaluation estimation [37,38]. Meanwhile, relevant benchmarks [29,39-41] have been published for time-saving, convenient and fair comparison of NAS algorithms. As a growing amount of attention has been given to NAS, adaptations of NAS to new problems and data, e.g. graphs [28,42], have become cutting-edge topics. Works on these new problems have pushed research towards tailored designs of search spaces, search strategies and evaluation strategies for various important and trending problems, thus popularizing NAS in more areas.

This paper is organized as follows. First, we discuss recent developments in NAS from the perspectives of the search space, search strategy and evaluation strategy. The relationship between these three aspects is illustrated in Fig. 1. Then we introduce graph NAS (GraphNAS), i.e. NAS on graphs, a trending research direction involving the adaptation of NAS to structured graph data with complex topologies and properties. Next, we present recent advances regarding tools and benchmarks for both NAS and GraphNAS. Last but not least, we summarize the paper and provide promising future research directions for NAS.
DEVELOPMENT IN NEURAL ARCHITECTURE SEARCH

NAS aims to discover the optimal architecture for a particular dataset, which can be formulated as

$a^{*} = \arg\max_{a \in \mathcal{A}} \ \mathrm{Performance}(a, \mathcal{D}),$

where a is the architecture to be searched within a designed architecture search space A, and Performance(a, D) denotes the architecture's performance on dataset D. Generally, NAS consists of three key modules. (1) The search space, which defines the architecture components to be searched, e.g. architecture operations and operation connections. A sophisticated search space may introduce suitable inductive bias and simplify the search process. (2) The search strategy, which aims to explore the potentially large search space and discover optimal architectures with as few architecture samples as possible. (3) The evaluation strategy, which aims to estimate an architecture's performance and is then utilized in the searching process.

Search space

The search space is very important for NAS. A well-designed search space can greatly improve the search cost and the performance of the final architecture. A neural architecture is composed of two parts, namely the operators and their connection modes. The operators can be neural network layers (e.g. convolution, various nonlinear activation functions), complex blocks (e.g. ConvBNReLU) and simple computations (e.g. addition and multiplication). Besides the choice of operators, the connections between them have a great impact on the performance of the neural architecture. For example, manually designed architectures like ResNet [43] have demonstrated the effectiveness of skip connections. At present, the search spaces in NAS mainly consist of the different choices of operators and the possible ways to connect them. Zoph and Le [16] designed the first search space for NAS, a sequential search space composed of layers, each containing many convolution operators with different kernel sizes, channel numbers and strides. In addition to connecting directly with the layer above, it also allows skip connections between different layers. Later work by Ma et al. [44] explored more operator options to improve the performance of the architecture, such as channel shuffle operators.

It is well accepted that a sufficiently large search space that covers good architecture choices is important for successful NAS. However, larger search spaces come with higher search costs, which can be unacceptable in many cases. To tackle this problem, several approaches have been developed for different application scenarios to design good search spaces of acceptable size.

Cell-based search space

In order to make the searched architecture portable across different datasets, Zoph et al. [17] proposed the first cell-based search space, namely the 'NASNet search space'. In cell-based search spaces, neural architectures are dissected into a small set of reusable cells, which can be combined in different ways to produce architectures for different datasets. In NASNet, the cells are found by searching on CIFAR-10, while also performing well when transferred to ImageNet. Subsequent works further focus on the way cells are arranged into whole architectures.
Efficient search space

Another hot direction is to design efficient search spaces for resource-constrained scenarios. By carefully designing search spaces with operators suitable for specific use cases, it is possible to save considerable search cost and improve the quality of the final architectures [45,46]. Moreover, such scenarios often impose requirements on the size of the model, so some works add the sizes of operators as part of the search space to facilitate the search for more efficient architectures [47]. FBNetV1 [48] proposed a lightweight layer-wise search space for mobile-device application scenarios, and FBNetV2 [49] added dimension operators to search for the shape of the architecture.

Search strategy

The search strategy is a critical component of neural architecture search, aiming to explore the potentially large architecture space efficiently [50]. Given a search space, the search strategy faces an exploration-exploitation trade-off: it has to find optimal architectures quickly while avoiding local sub-optimality. Based on the way architectures are encoded, search strategies can be roughly classified into discrete and continuous search. Discrete search adopts hard encodings of architectures, and the output architectures are the final ones; continuous search adopts soft encodings, e.g. a probability distribution over the architecture components, and the final architecture is derived by discretizing the soft encodings, e.g. using argmax.

Discrete search

A simple solution is random search, which randomly samples architectures from the search space and selects the best-performing one. However, it cannot exploit the relationship between architectures and their performance to accelerate the search process.
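As a baseline for the search strategies discussed below, the following is a minimal sketch of random search over a toy sequential search space. The operator names and the synthetic scoring function are placeholders standing in for real training-and-validation, which is the expensive step in practice.

```python
import random

# Toy sequential search space: one operator choice per layer.
OPS = ["conv3x3", "conv5x5", "maxpool", "skip"]
N_LAYERS = 6

def sample_architecture():
    return [random.choice(OPS) for _ in range(N_LAYERS)]

def evaluate(arch):
    """Placeholder for train-and-validate; a synthetic score that mildly
    prefers convolutions plus an occasional skip connection, with noise."""
    score = sum(op.startswith("conv") for op in arch) / len(arch)
    score += 0.2 * ("skip" in arch)
    return score + random.gauss(0.0, 0.05)

random.seed(0)
best_arch, best_score = None, float("-inf")
for _ in range(100):                       # independent architecture samples
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score
print(f"best score {best_score:.3f} for {best_arch}")
```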
RL-based NAS [51] frames the problem as a sequential decision-making process, where an agent optimizes its reward and improves its behavior by interacting with the environment. Specifically, an architecture is constructed by a sequence of actions, e.g. adding a layer of neural network operations or altering the hidden dimension. The architecture is then assessed by the evaluation strategy, and the performance result, e.g. accuracy, is taken as the reward. The process is repeated many times to train the reinforcement learning controller to obtain the optimal distribution of actions given the data and states, so that it can discover the optimal architecture for an arbitrary dataset. Several representative RL methods have been adopted in RL-based NAS.

Baker et al. [52] adopted Q-learning in NAS. The actions include adding layers as well as finishing building the architecture and declaring it complete. The early architectures serve as the states, and the trajectories sampled from this space correspond to models that are subsequently trained to determine their validation accuracy. The Q function is updated by employing experience replay. To balance exploration and exploitation, they employed an ε-greedy approach, where random trajectories are chosen with probability ε. By selecting a trajectory comprising several decision steps, the algorithm eventually reaches a terminal state, then trains the corresponding model and updates the action-value function, as defined in the Q-learning algorithm.

Zoph and Le [16] optimized the problem with policy gradient methods, where a stochastic policy is parameterized by an auto-regressive RNN controller that predicts actions based on prior actions. The RNN controller sequentially samples layers, which are appended to form the final network, from a probability distribution obtained through a softmax operation. The final network is trained to obtain a performance estimate, while the parameters of the controller are updated using the REINFORCE [53] algorithm.

Negrinho and Gordon [54] solved the problem via Monte Carlo tree search. Adopting a tree-structured state-action space that can be explored and expanded incrementally, they utilized the UCT [55] algorithm to explore the tree based on the upper confidence bound. Other works [56,57] extend this solution by introducing surrogate models to accelerate the search process.

Evolutionary algorithm-based NAS [58] treats the architecture's performance as a black-box function and adopts evolutionary algorithms [59] to discover the best-performing architectures. These commonly include the following key components: (1) initialization to generate the initial population, (2) parent selection to choose parents from the population for reproduction, (3) mutation to generate new individuals and (4) survivor selection to select the individuals that will survive. Here, the population consists of a pool of individuals, i.e. neural network architectures. The evolutionary process starts from an initialized population; fitness functions, e.g. accuracy, then guide the parent selection to breed the next generation. The process is repeated iteratively, and the final population is expected to be diverse and to optimize the fitness function. Representative works include the following.

Real et al. [60] focused on discovering competitive convolutional neural network architectures for image classification with evolutionary algorithms. The initial population is constructed by generating thousands of the simplest possible architectures, and tournament selection is adopted for parent selection: several pairs of architectures are first randomly sampled, and the superior one in each pair is retained along with its weights, mutated and trained before being added back to the population. The mutations include adding and removing convolutions and skip connections, and changing the kernel size, the number of channels, the stride and the learning rate.

Xie and Yuille [61] described their search space with an adjacency matrix, where the numbers in the matrix denote the choices between operations, so that each architecture can be encoded by a matrix. The method adopts a cross-over operation to conduct the mutation, in which a pair of architectures randomly swaps bits of their matrix encodings with some probability. The fitness function is defined as the difference between an architecture's validation accuracy and the minimum accuracy in the population, so that the weakest individual has a survival rate of zero.

Real et al. [62] incorporated age into survivor selection: individuals with better performance but a longer time spent in the population may also be removed. This adds a regularization term to the objective, so that the searched architectures are expected to have high performance as well as to appear frequently in the population.
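A minimal sketch in the spirit of the REINFORCE-based controller of Zoph and Le [16] is given below, with a per-layer categorical policy instead of an RNN and the same kind of synthetic evaluate() as in the random-search sketch above; the learning rate, baseline and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
OPS = ["conv3x3", "conv5x5", "maxpool", "skip"]
N_LAYERS, N_OPS = 6, len(OPS)
logits = np.zeros((N_LAYERS, N_OPS))        # per-layer policy parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def evaluate(arch):
    """Synthetic reward standing in for train-and-validate."""
    score = np.mean([OPS[i].startswith("conv") for i in arch])
    score += 0.2 * (OPS.index("skip") in arch)
    return score + rng.normal(0.0, 0.05)

baseline, lr = 0.0, 0.5
for step in range(200):
    probs = np.array([softmax(l) for l in logits])
    arch = [rng.choice(N_OPS, p=p) for p in probs]      # sample actions
    reward = evaluate(arch)
    baseline = 0.9 * baseline + 0.1 * reward            # moving-average baseline
    adv = reward - baseline
    for layer, op in enumerate(arch):                   # REINFORCE update:
        grad = -probs[layer]                            # grad of log-prob w.r.t.
        grad[op] += 1.0                                 # logits is one-hot - probs
        logits[layer] += lr * adv * grad

best = [OPS[int(np.argmax(l))] for l in logits]
print("most probable architecture:", best)
```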
Continuous search

Continuous search relaxes the operation choices of architectures into continuous encodings so that the search process becomes differentiable.

Gradient-based NAS optimizes the operation choices by gradient descent. DARTS [32] relaxes the operation choices through mixed operations, where each operation choice is represented as a probability distribution obtained by a softmax over learnable vectors, and its output is the weighted average of the outputs of all candidate operations (a minimal sketch of this mixed operation is given at the end of this subsection). The authors propose optimizing the model weights and the architecture parameters by minimizing the loss on the training dataset and the validation dataset, respectively, with gradient-based optimization methods. At the end of the search process, the mixed operations are discretized to obtain the final architecture by choosing the operations with maximum probabilities. The drawback is that the mixed operations require keeping all candidate operations and their internal outputs in memory, which limits the size of the search space. To tackle the memory issue, SNAS [63] proposes a factorizable distribution to represent the operation choice, so that only one path of the super-network is activated for training, avoiding keeping all operations in memory. Towards a similar goal, ProxylessNAS [64] also adopts a parameterized distribution over the operations and optimizes with a gating mechanism, where each gate chooses a path based on the learned probability distribution. Xu et al. [65] proposed partial channel connections that randomly sample a subset of channels, instead of sending all channels into the operation selection, to save memory. TangleNAS [66] proposes a strategy to adapt gradient-based approaches to weight-entangled spaces.

Architecture decoding. As architectures are continuously encoded, a further architecture decoding step is needed to obtain the final architecture, in contrast to discrete search. It has been shown that simply decoding the architecture with the maximum probability magnitude is sometimes inconsistent and fails to obtain the optimal architecture [67]. A classic group of methods [68-70] tackles this issue with progressive search-space shrinking, gradually pruning out weak operations and connections during the search to reduce the performance gap caused by discretization. Wang et al. [71] evaluated an operation's strength by its contribution to the super-network's performance, estimated by the performance drop after perturbing the operation. Similarly, Xiao et al. [72] estimated the operation contribution with Shapley values. Ye et al. [73] added an extra β-decay loss to alleviate the inconsistency problem by regularizing the search process.
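As referenced above, the mixed operation at the heart of DARTS can be sketched in PyTorch as follows. The candidate operations, the dummy losses and the alternating update schedule are simplifications for illustration; the bi-level optimization in DARTS uses real training and validation losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style mixed operation: a softmax over architecture parameters
    alpha weights the outputs of all candidate operations."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),   # conv3x3
            nn.Conv2d(channels, channels, 5, padding=2),   # conv5x5
            nn.Identity(),                                  # skip connection
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Alternating updates: model weights on "training" batches, architecture
# parameters alpha on "validation" batches (dummy losses for illustration).
op = MixedOp(channels=8)
w_opt = torch.optim.SGD((p for n, p in op.named_parameters() if n != "alpha"),
                        lr=0.01)
a_opt = torch.optim.Adam([op.alpha], lr=0.1)
for step in range(10):
    x_train, x_val = torch.randn(4, 8, 16, 16), torch.randn(4, 8, 16, 16)
    w_opt.zero_grad(); op(x_train).pow(2).mean().backward(); w_opt.step()
    a_opt.zero_grad(); op(x_val).pow(2).mean().backward(); a_opt.step()

print("operation probabilities:", F.softmax(op.alpha, 0).tolist())
# Discretization at the end of the search keeps the argmax operation.
```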
Evaluation strategy

The evaluation strategy estimates an architecture's performance, which includes its expressiveness and generalization abilities [74]. A brute-force solution, as adopted in multi-trial search, is to simply train the architecture from scratch on the training data and take the validation results as the estimated performance [16]. However, this solution is extremely computationally expensive, limiting its usage in practice.

The weight-sharing mechanism [75] has been commonly adopted in the NAS literature to speed up the performance evaluation of architectures. The idea is to share weights across all architecture candidates so that training from scratch can be avoided. This technique can be adopted in both discrete search [31] and continuous search [32]. In one-shot NAS, a super-network is designed to be trained only once during the search process, and all architecture candidates are viewed as sub-networks of the super-network. In this way, an architecture can be quickly evaluated by selecting the corresponding operation paths and their weights in the super-network. Although this technique can reduce the search time from thousands of GPU days to less than one GPU day [32], it is well known to suffer from inconsistency issues. Given that the weights of sub-networks are highly entangled in the super-network, the training might be severely biased, leading to inaccurate performance estimation [76]. BigNAS [77] finds that training can be biased towards smaller architectures, as they have fewer parameters and converge faster, leading to underestimation of big models. To tackle this issue, the authors propose a sandwich rule enforcing that the architecture samples include the biggest and smallest models, alleviating the training bias with regard to network size. FairNAS [76] takes expectation fairness and strict fairness into consideration and ensures equal optimization opportunities for all architecture candidates to alleviate overestimation and underestimation. Zhao et al. [78] tackled the problem from the perspective of super-networks, using multiple super-networks, each covering a different region of the search space, to alleviate the performance approximation gap.

Predictor-based methods. The weight-sharing mechanism still requires training time. There currently exists a series of NAS tabular benchmarks [29,39-41,79] that document the performance of all architecture candidates, which can be exploited to train predictors [80] of an architecture's performance. ChamNet [81] adopts a Gaussian process with Bayesian optimization and builds predictors for the latency and performance of architectures. MetaQNN [82] proposes to predict an architecture's performance using features from network architectures, hyperparameters and learning-curve data. SemiNAS [83] trains an accuracy predictor with a small set of architecture-accuracy pairs, and the predictor is further improved during the search with newly estimated architectures.

Zero-shot methods. To further accelerate the evaluation, zero-shot methods [84] estimate a model's performance from specially designed metrics and avoid the cost of training. ZenNAS [85] ranks architectures by the proposed Zen score, which is shown to represent network expressivity and to correlate positively with model accuracy. The calculation of the score is fast, requiring only a few forward passes through a randomly initialized network without training. NASWOT [86] measures a network's trained performance by examining the overlap of activations between data points in untrained networks.
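The activation-overlap idea behind NASWOT can be sketched as follows. This simplified score (the log-determinant of a Hamming-similarity kernel over binarized ReLU activation patterns) follows the spirit of the method rather than its exact formulation, and the toy networks and batch are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_codes(x, weights):
    """Forward a batch through random linear+ReLU layers and record,
    for each input, the binary pattern of which units are active."""
    codes, h = [], x
    for W in weights:
        h = np.maximum(h @ W, 0.0)            # ReLU
        codes.append(h > 0.0)                 # active/inactive pattern
    return np.concatenate(codes, axis=1)      # one long binary code per input

def activation_overlap_score(x, weights):
    c = relu_codes(x, weights).astype(float)
    # Kernel entry: number of units on which inputs i and j agree
    # (i.e. total units minus the Hamming distance of their codes).
    K = c @ c.T + (1.0 - c) @ (1.0 - c).T
    sign, logdet = np.linalg.slogdet(K)
    return logdet                             # higher = codes overlap less

batch = rng.standard_normal((16, 32))         # toy mini-batch
for depth in (2, 4):                          # two untrained candidate nets
    ws = [rng.standard_normal((32, 32)) / np.sqrt(32) for _ in range(depth)]
    print(f"depth {depth}: score = {activation_overlap_score(batch, ws):.2f}")
```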
Self-supervised methods. In some areas where labels are scarce or even unavailable, the evaluation of architectures is difficult, since fewer labels may result in inaccurate performance estimation. Some NAS methods replace supervised labels with self-supervised losses during the search process [87-91]. Another approach involves designing specialized metrics that do not rely on labels as proxies for estimating model performance. UnNAS [92] employs pretext tasks such as image rotation, image coloring and puzzle solving. Zhang et al. [93] trained the model with randomly generated labels and utilized the convergence speed as the evaluation metric.

GRAPH NEURAL ARCHITECTURE SEARCH

Besides data in Euclidean space, such as images and natural language, which are commonly studied in NAS, non-Euclidean graph data are ubiquitous and can model complex relationships between objects. Graph neural networks (GNNs) [94] are state-of-the-art models for processing graph data. To automate the architectures of GNNs, GraphNAS has received wide attention recently [28]. In this section, we review advancements in GraphNAS. Since the performance estimation strategy of GraphNAS is similar to previous works, we mainly focus on reviewing the search space and the search strategy.

Generally speaking, the differences between general NAS and GraphNAS stem primarily from their target data types, search spaces and architectural components. General NAS aims to optimize neural network architectures for a wide array of data, including images, videos, text and tabular data, by exploring a broad search space that includes various layer types and configurations to capture spatial or sequential patterns. In contrast, GraphNAS is specifically designed for graph-structured data, focusing on selecting and configuring components such as graph convolutional layers, aggregation functions and neighborhood sampling strategies to effectively capture the relational and topological properties inherent in graphs. While both approaches face challenges such as large search spaces and computational costs, GraphNAS additionally addresses complexities unique to graph data, such as varying graph sizes and sparse connectivity. Consequently, the search algorithms and evaluation metrics are also tailored to the specific needs of the respective data types, with GraphNAS requiring specialized techniques to handle the intricacies of graph neural networks.

Notation and preliminaries

First, we briefly introduce graph data and GNNs. Consider a graph G = (V, E), where V = {v_1, v_2, ..., v_{|V|}} denotes the node set and E ⊆ V × V denotes the edge set. The neighborhood of node v_i is given by N(i) = {v_j : (v_i, v_j) ∈ E}. The node features are denoted by F ∈ R^{|V| × f}, where f is the number of features. Most current GNNs follow a message-passing framework [95], i.e. nodes aggregate messages from their neighborhoods to update their representations, which can be formulated as

$m_i^{(l)} = \mathrm{AGG}^{(l)}\big(\big\{ a_{ij}^{(l)} h_j^{(l-1)} : v_j \in N(i) \big\}\big),$
$h_i^{(l)} = \sigma\big(\mathrm{COMBINE}^{(l)}\big( W^{(l)} m_i^{(l)}, \ h_i^{(l-1)} \big)\big),$

where h_i^{(l)} denotes the representation of node v_i in the lth layer, m_i^{(l)} is the message for node v_i, AGG^{(l)}(·) is the aggregation function, a_{ij}^{(l)} denotes the weight from node v_j to node v_i, COMBINE^{(l)}(·) is the combining function, W^{(l)} represents the learnable weights and σ(·) is an activation function. The node representations are typically initialized with the node features, H^{(0)} = F, and the final representation is obtained after L message-passing layers, H = H^{(L)}.
To derive a graph-level representation, pooling methods are applied to the node representations, i.e.

$h_G = \mathrm{POOL}(H),$

where h_G is the representation of G.

Search space

Since the building blocks of GNNs are distinct from those of other classical deep learning models, e.g. CNNs or RNNs, the search space for GNNs needs to be specifically designed. It can be divided into three main categories: the micro search space, the macro search space and the pooling functions.

Micro search space

Based on the message-passing framework shown in equation (2), the micro search space defines the mechanism by which nodes exchange messages with each other in each layer. A commonly adopted micro search space [96,97] comprises the choices of the components in equation (2), such as the aggregation function AGG(·), the edge weights a_ij (e.g. attention mechanisms), the combining function COMBINE(·) and the activation function σ(·). However, directly searching through all these components leads to thousands of possible choices within a single message-passing layer. It is therefore beneficial to prune the search space and focus on a few crucial components, leveraging applications or domain knowledge to guide the search [98].

Macro search space

Similar to other neural networks, one GNN layer does not necessarily use only its previous layer as input. These more complicated connectivity patterns between layers, such as residual connections and dense connections [99,100], form the macro search space. Formally, the macro search space can be formulated as

$H^{(l+1)} = \sum_{j \le l} F_{jl}\big(H^{(j)}\big),$

where F_{jl}(·) can be the message-passing layer in equation (2), ZERO (i.e. not connecting), IDENTITY or an MLP.

Pooling search space

The pooling search space aims to automate the pooling function in equation (4). For example, Jiang et al. [101] proposed a pooling search space with the following candidates.

• Row-wise sum/mean/maximum: h_G = F_pool(H) applied across the node dimension, with F_pool(·) the sum, mean or maximum, so that h_G ∈ R^d.
• Column-wise sum/mean/maximum: F_pool(·) applied across the feature dimension, so that h_G ∈ R^{|V|}.
• Attention pooling: the node representations are pooled with learnable attention parameters, and the dimensionality of the output can be adjusted.
• Attention sum: a weighted sum of the node representations with learned attention weights.

More advanced methods, e.g. hierarchical pooling [102], can also be incorporated into the search space with tailored designs. A minimal sketch of a message-passing layer with searchable components is given below.
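To make the micro search space concrete, the sketch below shows a message-passing layer whose aggregation and activation are searchable choices; the dense-adjacency formulation, the specific option lists and the linear COMBINE are simplifications for illustration, not the search spaces of [96,97].

```python
import torch
import torch.nn as nn

AGGREGATORS = {
    "sum":  lambda adj, h: adj @ h,
    "mean": lambda adj, h: adj @ h / adj.sum(1, keepdim=True).clamp(min=1),
    "max":  lambda adj, h: torch.stack([
        h[row > 0].max(dim=0).values if (row > 0).any()
        else torch.zeros(h.size(1))                 # isolated node fallback
        for row in adj]),
}
ACTIVATIONS = {"relu": torch.relu, "tanh": torch.tanh, "identity": lambda x: x}

class SearchableMPLayer(nn.Module):
    """One message-passing layer; the (aggregator, activation) pair is the
    point in the micro search space chosen by the search strategy."""
    def __init__(self, dim, aggregator, activation):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)   # COMBINE(m_i, h_i) as linear map
        self.agg = AGGREGATORS[aggregator]
        self.act = ACTIVATIONS[activation]

    def forward(self, adj, h):
        m = self.agg(adj, h)                 # aggregate neighbour messages
        return self.act(self.lin(torch.cat([m, h], dim=-1)))

# Toy graph: 5 nodes, dense binary adjacency, 8-dimensional features.
adj = (torch.rand(5, 5) > 0.5).float()
h = torch.randn(5, 8)
layer = SearchableMPLayer(8, aggregator="mean", activation="relu")
print(layer(adj, h).shape)                   # torch.Size([5, 8])
```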
The graph differentiable architecture search model with structure optimization (GASSO) [103] proposes to jointly search GNN architectures and graph structures, aiming to tackle the problem that the input graph data may contain noise. Specifically, GASSO modifies the bi-level optimization of NAS as

min_α L_val( W*(α), α, G* )
s.t. ( W*(α), G* ) = argmin_{W, G̃} [ L_train( W, α, G̃ ) + λ L_s( G̃ ) ],   (6)

where G* indicates the optimized graph structure and L_s is a smoothing loss function based on the homophily assumption of graphs, which encourages connected nodes in the learned structure to have similar features while keeping the learned structure close to the original one. Here A and Ã represent the adjacency matrices of G and G̃, respectively, and λ is a hyper-parameter. By optimizing equation (6), GASSO can simultaneously obtain the best graph structure and GNN architecture in a differentiable manner.
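In practice, a bi-level objective such as equation (6) is typically approximated by alternating gradient updates, as in differentiable NAS. The sketch below illustrates the control flow only; the parameter shapes and the three loss functions are placeholders, not GASSO's actual losses.

```python
import torch

# Parameter groups for the bi-level problem in equation (6); sizes are arbitrary.
W = torch.randn(32, 32, requires_grad=True)       # supernet weights W
alpha = torch.randn(8, requires_grad=True)        # architecture parameters
A_tilde = torch.rand(10, 10, requires_grad=True)  # relaxed graph structure G-tilde
lam = 0.1                                         # hyper-parameter lambda

inner_opt = torch.optim.SGD([W, A_tilde], lr=1e-2)
outer_opt = torch.optim.Adam([alpha], lr=3e-4)

# Placeholder losses: any differentiable functions of the right arguments.
train_loss = lambda W, a, A: ((W.sum() + a.sum()) * A.mean()) ** 2
val_loss = lambda W, a, A: (W.mean() + a.mean() + A.mean()) ** 2
smooth_loss = lambda A: (A - A.round().detach()).pow(2).sum()  # stands in for L_s

for step in range(100):
    # Inner problem: update W and the graph structure on L_train + lambda * L_s.
    inner_opt.zero_grad()
    (train_loss(W, alpha.detach(), A_tilde) + lam * smooth_loss(A_tilde)).backward()
    inner_opt.step()
    # Outer problem: update the architecture parameters on L_val.
    outer_opt.zero_grad()
    val_loss(W.detach(), alpha, A_tilde.detach()).backward()
    outer_opt.step()
```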
Graph architecture search at scale (GAUSS) [104] further considers the efficiency of searching architectures on large-scale graphs, e.g. graphs with billions of nodes and edges. To reduce computational costs, GAUSS proposes to jointly sample architectures and graphs when training the supernet. To address the potential issues this sampling introduces, an architecture peer-learning mechanism on the sampled subgraphs and an architecture importance sampling algorithm are proposed. Experimental results show that GAUSS can handle graphs with billions of edges within 1 GPU day.

The graph neural architecture customization with disentangled self-supervised learning (GRACES) [105] improves generalization capabilities in the face of distribution shifts by creating a tailored GNN architecture for each graph instance with an unknown distribution. GRACES utilizes a self-supervised disentangled graph encoder to identify invariant factors within various graph structures. It then employs a prototype-based self-customization strategy to generate the optimal GNN architecture weights in a continuous space for each instance. Additionally, GRACES introduces a customized super-network that shares weights among different architectures to enhance training efficiency. Comprehensive experiments on both synthetic and real-world datasets indicate that GRACES can adapt to a variety of graph structures and achieve superior generalization performance in graph classification tasks under distribution shifts [106,107].

The out-of-distribution generalized multimodal GraphNAS (OMG-NAS) method [108] advances the design of multimodal graph neural network (MGNN) architectures by addressing the challenges posed by distribution shifts in multimodal graph data. Unlike traditional MGNAS approaches, OMG-NAS emphasizes optimizing the MGNN architecture to enhance performance on out-of-distribution data, aiming to mitigate the influence of spurious statistical correlations. To this end, OMG-NAS introduces a multimodal graph representation decorrelation strategy, which refines the MGNN model's output by iteratively adjusting feature weights and the controlling mechanism to minimize spurious correlations. Additionally, OMG-NAS incorporates a novel global sample weight estimator designed to facilitate the sharing and optimization of sample weights across different architectures. This approach aids in the precise estimation of sample weights for candidate MGNN architectures, thereby promoting the generation of decorrelated multimodal graph representations that capture the essential predictive relationships between invariant features and target labels. Comprehensive experiments conducted on diverse real-world multimodal graph datasets have validated the effectiveness of OMG-NAS, demonstrating its superior generalization capabilities over state-of-the-art baselines in handling multimodal graph data under distribution shifts.

Data-augmented curriculum GraphNAS (DC-GAS) [109] introduces a novel approach to enhancing GraphNAS for improved generalization in the face of distribution shifts. This method distinguishes itself by integrating data augmentation with architecture customization to address the limitations of existing GraphNAS methods, which struggle to generalize on unseen graph data due to distributional discrepancies. DC-GAS employs an innovative embedding-guided data generator, designed to produce a plethora of training graphs that help the architecture discern critical structural features of graphs. Moreover, DC-GAS introduces a two-factor uncertainty-based curriculum weighting strategy, which assesses and adjusts the significance of data samples during training, ensuring that the model prioritizes learning from the data that most effectively represent real-world distributions. Through a series of rigorous tests on both synthetic and real-world datasets experiencing distribution shifts, DC-GAS has demonstrated its capability to learn robust and generalizable mappings, thereby setting new standards for performance compared to existing methodologies.
The robust NAS framework for GNNs (G-RNA) [110] introduces a pioneering strategy to enhance the robustness of GNNs against adversarial attacks, addressing a critical vulnerability in their application to sensitive areas. G-RNA redefines the architecture search space for GNNs by incorporating graph-structure mask operations, thereby creating a reservoir of defensive operation choices that pave the way for discovering GNN architectures with heightened defense mechanisms. By instituting a novel robustness metric to steer the architecture search, G-RNA not only facilitates the identification of robust architectures, but also provides a deeper understanding of GNN robustness from an architectural standpoint. This approach allows for a systematic and insightful exploration of GNN designs, focusing on their resilience to adversarial challenges. Rigorous testing on benchmark datasets has demonstrated G-RNA's capability to significantly surpass traditional robust GNN designs and conventional GraphNAS methods, showcasing improvements ranging from 12.1% to 23.4% in adversarial settings, thereby establishing a new benchmark for the design of robust GNN architectures.

Disentangled self-supervised GraphNAS (DSGAS) [111] addresses common scenarios where labeled data are unavailable by identifying optimal architectures that capture various latent graph factors, using a self-supervised approach on unlabeled graph data. DSGAS incorporates three specially designed modules: disentangled graph super-networks, self-supervised training with joint architecture-graph disentanglement [112] and contrastive search with architecture augmentations. Experiments conducted on several real-world benchmarks demonstrate that DSGAS achieves state-of-the-art performance compared to existing GraphNAS baselines in an unsupervised manner.

Multi-task GraphNAS with task-aware collaboration and curriculum (MTGC3) [113] addresses the challenge of GraphNAS in multitask scenarios by simultaneously identifying optimal architectures for various tasks and learning the collaborative relationships among them. MTGC3 features a structurally diverse supernet that manages multiple architectures and graph structures within a unified framework. This is complemented by a soft task-collaborative module that learns the transferability relationships between tasks. Additionally, MTGC3 employs a task-wise curriculum training strategy that enhances the architecture search process by reweighing the influence of different tasks based on their difficulties. Several experiments demonstrate that MTGC3 achieves state-of-the-art performance in multitask graph scenarios.

Disentangled continual GraphNAS with invariant modularization (GASIM) [114] addresses GraphNAS in continual learning scenarios by continuously searching for optimal architectures while retaining past knowledge. It begins by designing a modular graph architecture super-network with multiple modules to facilitate the search for architectures with specific factor expertise. It then introduces a factor-based task-module router that identifies latent graph factors and directs incoming tasks to the most appropriate architecture module, thereby mitigating the forgetting problem caused by architecture conflicts. Additionally, GASIM incorporates an invariant architecture search mechanism to capture shared knowledge across tasks. Several experiments on real-world benchmarks show that GASIM achieves state-of-the-art performance compared to baseline methods in continual GraphNAS.
NAS tools

Public libraries are critical to facilitate and advance research and applications of NAS. NAS libraries integrate different search spaces, search strategies and performance evaluation strategies. These different parts are modularly implemented and can be freely combined. Using the features of NAS libraries, users can easily reproduce existing NAS algorithms or extend new ones with a small amount of code, which greatly assists NAS researchers and users who wish to use NAS techniques to optimize neural network architectures.

NNI [115] and AutoGL [116] are two open-source NAS libraries. Specifically, NNI automates feature engineering, NAS, hyperparameter tuning and model compression [117] for deep learning. We report the experimental results of AutoGL and some representative baselines on widely adopted node classification and graph classification benchmarks. The results are shown in Tables 2 and 3, respectively. We observe that the results of AutoGL significantly outperform those of the baselines, including GCN, GAT and GraphSAGE on the node classification task, and top-K pooling and GIN on the graph classification task.

NAS benchmarks

NAS benchmarks consist of a search space, one or several datasets and a unified training pipeline. NAS benchmarks also provide the performance of all possible architectures in the search space under the unified training pipeline setting. The emergence of NAS benchmarks addresses the following three main issues in NAS research.

• Experimental settings, such as dataset splits, hyperparameter configurations and evaluation protocols, vary significantly across different studies. This variability makes it challenging to ensure the comparability of experimental results from different methods.
• The randomness of training can lead to different performance results for the same architecture, making the NAS search process difficult to reproduce.
• The performance estimation procedure requires extensive computation and is therefore highly inefficient. The computational demands of NAS research present a significant barrier, rendering it inaccessible to those without substantial computing resources.

Through NAS benchmarks, different NAS methods can be fairly compared under a unified training protocol. Moreover, NAS methods can obtain consistent performance estimates, making search trials reproducible. Highly efficient access to architecture performance enables one to develop new NAS methods conveniently. As a result, NAS benchmarks dramatically boost NAS research.

NAS benchmarks can mainly be divided into tabular benchmarks and surrogate benchmarks. Tabular NAS benchmarks offer pre-computed evaluations for all possible architectures within the search space through a table lookup. In contrast, surrogate benchmarks provide an efficient surrogate function that predicts the performance of architectures. Tabular benchmarks have better authenticity, since the results come from actual experiments, but running these experiments can cost substantial computational resources and potentially limits the size of the search space. Surrogate benchmarks are more efficient, but the quality of the benchmark depends heavily on the surrogate function.
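The distinction between the two benchmark styles can be made concrete with a small sketch: a tabular benchmark is essentially a lookup table keyed by an architecture encoding, while a surrogate benchmark wraps a learned predictor. The encoding and the numbers below are made up for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# A made-up architecture encoding: a tuple of operation names.
Arch = tuple[str, ...]

@dataclass
class TabularBenchmark:
    table: dict  # Arch -> accuracy, pre-computed for every architecture

    def query(self, arch: Arch) -> float:
        return self.table[arch]          # O(1) lookup; exact but storage-bound

@dataclass
class SurrogateBenchmark:
    predictor: Callable[[Arch], float]   # e.g. a trained regression model

    def query(self, arch: Arch) -> float:
        return self.predictor(arch)      # cheap, but only as good as the surrogate

tab = TabularBenchmark({("gcn", "mean"): 0.81, ("gat", "max"): 0.83})
sur = SurrogateBenchmark(lambda a: 0.80 + 0.01 * len(set(a)))  # toy predictor
print(tab.query(("gat", "max")), sur.query(("gat", "max")))
```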
FUTURE DIRECTIONS AND CONCLUSIONS

Recent advancements in the field of large language models (LLMs) have demonstrated their effectiveness in handling graph tasks [130-132] by leveraging their advantages in in-context learning, textual understanding and reasoning capabilities. One promising future direction is to leverage LLMs for GraphNAS and empower it with more advanced and generalized abilities, such as zero-shot learning and in-context learning. This integration would allow GraphNAS to exploit the contextual understanding and reasoning capabilities of LLMs to discover optimal architectural configurations for graph tasks. By exploiting the strengths of both LLMs and GraphNAS, researchers can unlock new possibilities for improving graph-based learning, enabling more efficient training, and enhancing the overall performance and generalization abilities of graph neural networks. Besides, it is worth studying how to use the coding abilities of LLMs to introduce meaningful variations to the code defining neural network architectures [133]. It also remains to be further explored how to conduct efficient NAS for LLMs, automatically building LLMs at a lower cost [134].

In addition to graph data, NAS techniques for videos and tabular data are also a promising future research direction, involving automating the design of optimal neural network architectures tailored for specific tasks [135]. For video data, NAS focuses on optimizing architectures that efficiently capture temporal and spatial features, often integrating three-dimensional convolutions and recurrent neural networks to handle the complex dynamics of video frames. In the realm of tabular data, NAS seeks to identify architectures that can effectively manage the diverse and structured nature of tabular inputs, often leveraging fully connected networks, embedding layers and attention mechanisms. These NAS techniques employ various strategies, such as reinforcement learning, evolutionary algorithms and gradient-based methods, to explore and refine the search space, ultimately improving model performance and efficiency in handling both video and tabular datasets.

Another promising future research direction is multimodal NAS, which is expected to revolutionize how we approach complex, data-rich problems by integrating diverse data types, such as images, text and structured graph data, into a cohesive learning framework. As we move forward, key areas of focus will include developing advanced algorithms that can efficiently navigate the vast search space of possible architectures while effectively fusing multimodal inputs. This necessitates innovations in architecture design to handle the heterogeneity of data types, and the development of novel training strategies that can leverage the complementary information contained within different modalities.
In summary, the time complexity of NAS techniques is notably high due to the extensive exploration and evaluation of numerous candidate architectures. This complexity is primarily driven by the size of the search space, the computational cost of training and validating each architecture, and the specific search strategy employed. Reinforcement learning-based NAS can be particularly time-intensive, as it requires iterative training of both the controller and the architectures. Evolutionary algorithms also contribute to high complexity through multiple generations of candidate evaluations. Gradient-based methods, while potentially faster, still face significant computational demands due to backpropagation across a large search space. Advancements such as differentiable architecture search (DARTS) and efficient NAS (ENAS) aim to reduce this complexity by streamlining the search process and leveraging weight sharing or proxy tasks. Despite these improvements, NAS techniques generally remain computationally expensive, often necessitating substantial computational resources and time to identify optimal architectures. Studying the efficiency of NAS algorithms thus remains an interesting direction.

Lightweight NAS (LightNAS) is also an interesting research topic [14,15,50,136-139] that focuses on identifying efficient neural network architectures that balance high performance with low computational cost, making them suitable for deployment on resource-constrained devices such as mobile phones and embedded systems. Unlike traditional NAS, which often results in complex and computationally intensive models, LightNAS emphasizes the creation of models that are compact, have fewer parameters and require less computational power without significantly compromising accuracy. Techniques such as pruning, quantization and knowledge distillation are frequently incorporated into the search process to further reduce model size and improve inference speed. LightNAS employs strategies like reinforcement learning, evolutionary algorithms and gradient-based methods, but within a constrained search space tailored to prioritize lightweight operations. This approach ensures that the discovered architectures not only perform well, but are also feasible for real-world applications where computational resources and energy efficiency are critical considerations.

Moreover, addressing challenges in scalability, interpretability, robustness and fairness, as well as developing more training strategies [47,140-142], will also be crucial as these systems are deployed across a wide range of applications, from healthcare diagnostics to social network analysis. Ultimately, NAS aims to create a new paradigm for deep learning systems, capable of understanding and analyzing the complex, interconnected data that mirror the multifaceted nature of the real world.

Figure 1. The three key aspects of NAS: search space, search strategy and evaluation strategy.
Table 1. A common search space for different types of aggregation weights a_ij.
Table 2. The results of node classification.
Table 3. The results of graph classification.
Small Interfering RNA Targeted to IGF-IR Delays Tumor Growth and Induces Proinflammatory Cytokines in a Mouse Breast Cancer Model

Insulin-like growth factor I (IGF-I) and its type I receptor (IGF-IR) play significant roles in tumorigenesis and in the immune response. Here, we asked whether an RNA interference approach targeted to IGF-IR could be used for specific antitumor immunostimulation in a breast cancer model. To that end, we evaluated short interfering RNAs (siRNAs) for inhibition of in vivo tumor growth and for immunological stimulation in immunocompetent mice. We designed 2′-O-methyl-modified siRNAs to inhibit expression of IGF-IR in two murine breast cancer cell lines (EMT6, C4HD). Transfection with IGF-IR siRNAs decreased proliferation, diminished phosphorylation of the downstream signaling proteins AKT and ERK, and caused a G0/G1 cell cycle block. IGF-IR silencing also induced the secretion of two proinflammatory cytokines, TNF-α and IFN-γ. When we transfected C4HD cells with siRNAs targeting IGF-IR, mammary tumor growth was strongly delayed in syngeneic mice. Histology of developing tumors in mice grafted with IGF-IR siRNA-treated C4HD cells revealed a low mitotic index and infiltration of lymphocytes and polymorphonuclear neutrophils, suggesting activation of an antitumor immune response. When we used siRNA-treated C4HD cells as an immunogen, we observed an increase in delayed-type hypersensitivity and the presence of cytotoxic splenocytes directed against wild-type C4HD cells, indicative of an evolving immune response. Our findings show that silencing IGF-IR using synthetic siRNAs bearing 2′-O-methyl nucleotides may offer a new clinical approach for the treatment of mammary tumors expressing IGF-IR. Interestingly, our work also suggests that crosstalk between the IGF-I axis and the antitumor immune response can mobilize proinflammatory cytokines.

Introduction

Insulin-like growth factor type I receptor (IGF-IR) signaling has a significant impact on the development of many normal tissues, and also on the growth of malignant tumors [1]. Epidemiological studies showed that an increased serum concentration of insulin-like growth factor I (IGF-I) is associated with an increased risk of developing tumors, including those of the breast [2]. Moreover, IGF-IR is a potent control point for transformation and is therefore considered a relevant therapeutic target [3]. Indeed, drugs targeting the IGF axis are under development by major companies and include receptor-specific blocking antibodies and tyrosine kinase inhibitors (TKIs) [4]. Other approaches using nucleic acid-based strategies have been used to investigate the IGF-IR/IGF-I pathway, including antisense oligonucleotides, antisense RNA expression plasmids, ribozymes, triplex-forming oligonucleotides and short interfering RNAs (siRNAs) [5,6,7,8,9,10]. Although nucleic acid-based approaches are theoretically specific and selective, they may have the undesirable effect of silencing non-targeted mRNAs, particularly in the case of siRNAs and phosphorothioate antisense oligonucleotides [11]. It was previously found that down-regulation of IGF-IR using antisense expression vectors may block tumor growth in vivo [12,13,14,15]. For example, murine EMT6 breast cancer cells carrying an antisense IGF-IR vector exhibited a significant decrease in cell proliferation in vitro, lost their ability to form colonies in soft agar, and also lost their tumorigenic property when grafted to syngeneic mice [16].
Interestingly, antisense down-regulation of IGF-IR can unexpectedly induce an antitumor host response with several of the characteristics of an immune response. Injection of glioblastoma cells stably expressing antisense IGF-IR transcripts into syngeneic rats elicited a protective host response that inhibited tumor formation upon subsequent injection of wild-type cells, associated with the proliferation of cytotoxic CD8+ lymphocytes [17]. This host immune response was also observed in experiments with other syngeneic models, such as mouse neuroblastoma or melanoma [13,18]. Although few attempts have been made to unravel the mechanisms leading to this host response after IGF-IR down-regulation, it has been hypothesized that it could be due to the induction of in vivo apoptosis or to the secretion of immunopeptides that interact with Major Histocompatibility Complex (MHC) class I antigens and are further recognized by CD8+ cells [18,19,20]. We have shown that in vivo administration of phosphorothioate antisense oligonucleotides targeting IGF-IR decreased receptor protein levels and concomitantly inactivated the AKT and MAPK signaling pathways, leading to C4HD breast tumor growth inhibition [21]. We successfully protected syngeneic mice from tumor development induced by wild-type C4HD cells by inoculating mice with C4HD cells treated with antisense oligonucleotides targeting IGF-IR [22]. Similarly, tumor-specific immunity led to inhibition of tumor growth through the generation of a cellular response and of tumor-specific cytotoxic cells. Down-regulation of IGF-IR up-regulated the co-stimulatory molecule CD86 as well as the peptide chaperone Hsp70 [22]. A significant body of evidence indicates that the IGF-I/IGF-IR axis interferes with immune recognition of tumor cells [13,17,23]. Indeed, triple helix-forming or antisense expression vectors targeting IGF-I induced a host immune response with up-regulation of immunogenic molecules and increased production of apoptotic cells [23,24,25]. Here, we analyzed the effect of transiently silencing IGF-IR in mouse breast cancer cells through transfection of well-defined small molecules, namely siRNAs modified with 2′-O-methyl nucleotides for in vivo use. These short molecules are expected to be more specific than antisense RNA and devoid of undesired effects [11]. Using the most efficient siRNAs, we inhibited IGF-IR downstream signaling proteins and confirmed the receptor's essential role in in vitro cell growth and cell cycle regulation. Remarkably, blocking IGF-IR signaling in breast cancer cells not only decreased tumor growth in syngeneic mice and triggered features of an immune response, but also induced the secretion of proinflammatory cytokines. These results are strong evidence for significant links between IGF-IR and immune response pathways.

Ethics Statements

Animal studies were conducted in accordance with the highest standards of animal care, as outlined by the NIH Guide for the Care and Use of Laboratory Animals [26], and were approved by the IBYME Animal Research Committee. The IBYME is approved by OLAW, NIH (assurance #A5072-01).

Animals and tumors

Experiments were carried out in virgin female BALB/c mice raised at the IBYME. The hormone-dependent ductal tumor line C4HD originated in mice treated with 40 mg medroxyprogesterone acetate (MPA; medrosterona, Laboratorios Gador, Buenos Aires, Argentina) every 3 months for 1 year, and has been maintained by serial transplantation in animals treated with a 40 mg subcutaneous (s.c.)
MPA depot in the flank opposite to the tumor inoculum [21,27]. The C4HD tumor line is of ductal origin, expresses progesterone and estrogen receptors and IGF-I/IGF-IR, lacks glucocorticoid receptor expression, and requires MPA administration to proliferate both in vivo and in vitro [21].

SiRNA and cell transfection

The synthesized siRNAs (nomenclature and design in Methods S1) were transfected into EMT6 cells using a reverse transfection procedure with Lipofectamine RNAiMAX (Invitrogen, Cergy-Pontoise, France). DharmaFECT-I cationic lipid (Thermo Fisher Scientific, USA) was used for siRNA transfection into C4HD cells using a direct transfection protocol. After transfection, cells were incubated at 37 °C for 24 h to 72 h before further analysis. RNA preparation, quantitative RT-PCR, gel electrophoresis and immunoblotting procedures are described in Methods S1.

Proliferation and cell cycle assays

Cell proliferation was evaluated 48 h post-transfection with siRNAs targeting IGF-IR using the CellTiter 96 AQueous Non-Radioactive Cell Proliferation Assay (Promega, Charbonnières, France). Cell proliferation assays, performed in triplicate using three independent transfections, were reproduced in two independent experiments. Cell cycle analysis was done using propidium iodide (2.5 µg/mL) staining of methanol-fixed samples treated with RNase (100 µg/mL). All data were analyzed with the Dean-Jett-Fox cell cycle model using FlowJo 8.8.6 software (Tree Star, Inc., Olten, Switzerland), including doublet discrimination.

In vivo tumor growth experiments

C4HD cells growing in MPA (10 nM) were transiently transfected with 2′-O-methyl-modified siRNA (100 nM) using DharmaFECT-I (Thermo Fisher Scientific, USA). After 48 h, 2×10^6 cells were inoculated s.c. into BALB/c females treated with a 40 mg MPA depot in the flank opposite to the cell inoculum (n = 5 per group). Tumor volume and growth rate were determined as described [22]. At day 26, animals were euthanized and tumors were removed. Tissues were fixed in 10% buffered formalin and embedded in paraffin; 5 µm sections were stained with hematoxylin and eosin (H&E) for microscopy. Immunization of mice with transfected cells is described in Methods S1.

Mouse cytokine antibody array

Conditioned media were harvested from untreated C4HD cells or siRNA-transfected C4HD cells grown in DMEM/F12 with 2.5% ChFCS and 10 nM MPA for 48 h. Mouse cytokine antibody arrays (Panomics, Redwood City, CA) were used to profile the cytokines in 2 mL of conditioned media. This experiment was performed twice. Graphs correspond to densitometric analysis of the chemiluminescent signal, as described in Methods S1.

Statistical analysis

Cell data were derived from at least two independent experiments, each with three independent transfection assays. Statistical analyses were conducted using Prism 5.0a GraphPad software. Comparisons among groups were performed with a one-way analysis of variance (ANOVA) test. If statistically significant, Dunnett's multiple comparison post hoc test was used. Values of P < 0.05 were considered significant, and the indicated asterisks refer to comparisons with samples transfected with control siRNA. Data are presented as the mean ± SEM [30]. Where indicated, Student's t-test was also used for comparison. For in vivo studies, comparison of tumor volumes between the different groups was done by ANOVA followed by the Tukey post hoc test. Linear regression analysis was performed on the tumor growth curves, and the slopes were compared using ANOVA followed by a parallelism test to evaluate the statistical significance of the differences. Values of P < 0.05 were considered significant.
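For readers who prefer an open-source route, the ANOVA-plus-Dunnett analysis described above could be reproduced in Python roughly as follows (SciPy 1.11 or later provides scipy.stats.dunnett). The group readings below are made-up numbers for illustration, not data from this study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control_sirna = rng.normal(1.00, 0.05, 9)  # cells transfected with control siRNA
adt_50nm = rng.normal(0.60, 0.05, 9)       # IGF-IR siRNA, 50 nM (made-up values)
adt_100nm = rng.normal(0.55, 0.05, 9)      # IGF-IR siRNA, 100 nM (made-up values)

# One-way ANOVA across the groups.
f_stat, p_anova = stats.f_oneway(control_sirna, adt_50nm, adt_100nm)

# If significant, Dunnett's post hoc test compares each treatment
# against the control-siRNA group.
if p_anova < 0.05:
    res = stats.dunnett(adt_50nm, adt_100nm, control=control_sirna)
    print(f"ANOVA P = {p_anova:.3g}; Dunnett P values = {res.pvalue}")
```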
Identification of mouse IGF-IR siRNAs

As shown by qRT-PCR, all eight siRNAs targeted to mouse IGF-IR (mIGF-IR) (siRNA sequences in Table S1) efficiently inhibited production of IGF-IR mRNA in mouse breast cancer EMT6 cells [16]. The KSG, DYQ, NNE, ADT and CMV siRNAs inhibited IGF-IR mRNA expression by approximately 70% compared to untreated cells and to cells transfected with control siRNAs (Figure 1A). After careful examination of all siRNAs for their GC content, their potential secondary structures and the presence of potential immunostimulatory motifs, the ADT siRNA was selected for further evaluation [31]. Dose-response experiments performed with ADT siRNA in EMT6 cells showed that 50 nM ADT decreased IGF-IR mRNA by 83% compared to levels in untreated cells (Figure 1B). Furthermore, kinetic experiments demonstrated that increasing the incubation time after transfection improved the silencing efficiency of the anti-IGF-IR siRNA (Figure 1C). The observed IGF-IR silencing remained specific, even though prolonged exposure of cells to high concentrations of siRNAs and cationic lipids led to a decrease in IGF-IR mRNA levels, as shown in Figure 1C with 50 nM CONT1 at 72 h post-transfection. Western blot analysis revealed efficient down-regulation of endogenous IGF-IR expression in ADT-transfected EMT6 cells (94% at 25 nM; Figure 1D). Although unmodified siRNAs worked well in mouse breast cancer cells, higher stability is required for in vivo applications. A range of oligonucleotide modifications, such as the 2′-O-methyl modification, confers nuclease resistance to siRNAs [31,32]. Moreover, because unmodified siRNAs are known to produce non-specific immunostimulatory effects due to sequence motifs [31], we decided to substitute a few uridine residues in the siRNA sense strand with 2′-O-methyl uridines, to obtain efficient silencing without immunostimulation unrelated to IGF-IR silencing (Table S1). The introduction of these two modifications partially attenuated the silencing of IGF-IR in EMT6 cells (Figure 2A). A detectable reduction in the efficacy of the modified siRNA was observed when 25 nM ADT and 25 nM 2′-O-methyl ADT siRNAs were compared (Figures 1D and 2A). To characterize the specificity of the 2′-O-methyl ADT siRNA at a high concentration (100 nM), we measured the levels of the insulin receptor (INS-R), which is highly homologous to IGF-IR (70% amino acid identity), especially in the tyrosine kinase domain, in which they share 84% amino acid identity (Figure S1). Western blot analysis revealed that transfection of 2′-O-methyl ADT siRNA did not induce significant down-regulation of INS-R when compared to 2′-O-methyl CONT2 treatment (Figure 2B).

IGF-IR inhibition blocks downstream signaling, induces cell cycle arrest and decreases cell proliferation

In EMT6 cells, there was a clear inhibition of IGF-IR protein levels 48 h after transfection with 2′-O-methyl ADT siRNA at concentrations of 50 nM and above. Transfected cells were grown under standard growth conditions (10% FCS) before AKT/PKB and ERK1/2 activities were assessed by immunoblotting with phospho-specific antibodies. IGF-IR knockdown induced inhibition of AKT phosphorylation, as previously described using either unmodified siRNAs or antisense phosphorothioate oligonucleotides in other cell lines (Figure 3A) [21,33].
ERK phosphorylation was also inhibited in breast cancer cells treated with 2′-O-methyl ADT siRNA (Figure 3B). Total AKT/PKB and ERK1/2 protein levels remained constant whichever siRNA was transfected. In our assay conditions, down-regulation of p-ERK1/2 and p-AKT/PKB began to reach maximal efficiency at low concentrations of siRNA. For the in vivo analysis, the highest concentration of siRNA was chosen to maximize the effect of IGF-IR silencing on signaling. To assess the effect of IGF-IR down-regulation on cell growth, EMT6 cells were transfected with siRNAs and the growth rate was analyzed using a colorimetric MTS-based assay (Figure 3C). Unmodified and 2′-O-methyl ADT transfection resulted in a 40-50% decrease in cell growth rate at 50-100 nM, as compared to cells transfected with control siRNA. To further elucidate how IGF-IR suppression affects cell proliferation, the cell cycle status of siRNA-transfected cells was determined. Flow cytometric cell cycle analysis indicated that treatment with 100 nM 2′-O-methyl ADT arrested EMT6 cells in the G0/G1 phase of the cell cycle. The proportions of cells in the S and G2 phases were decreased relative to those in cells treated with control siRNA (Figure 3D). Moreover, no sub-G1 peak was detected with 100 nM 2′-O-methyl ADT, suggesting that no apoptosis was triggered after 48 h of IGF-IR down-regulation, similarly to treatment with IGF-IR antisense oligonucleotides [22].

Silencing of IGF-IR decreases in vivo tumor development

To explore the in vivo effect of IGF-IR silencing on breast cancer cell growth, we took advantage of the C4HD breast tumor model. We previously showed that IGF-IR plays a key role in the in vitro and in vivo proliferation of the progestin-dependent C4HD tumor cells [21]. A functional autocrine loop involving IGF-I and IGF-IR participates in the MPA-induced proliferation of C4HD cells. In this model, antisense phosphorothioates targeted to IGF-IR abolished the two main IGF-IR signaling pathways, AKT and MAPK [21]. Here, we transfected C4HD cells growing in 10 nM MPA with control or specific siRNAs targeting mIGF-IR. The silencing efficiency of the mIGF-IR siRNAs on IGF-IR in C4HD cells ranged from 60 to 80% at 100 nM (Figure 4A). After 48 h, 2×10^6 C4HD cells from each experimental group (n = 5) were inoculated s.c. into female BALB/c mice. Tumors in mice that had been given C4HD cells treated with IGF-IR siRNA (2′-O-methyl ADT) had significantly smaller mean tumor volumes and lower tumor growth rates compared with tumors from the control groups (Figure 4B and Table S2). At day 26, a delay of 7 days in tumor growth was observed in the group injected with 2′-O-methyl ADT siRNA-transfected C4HD cells with respect to tumors that developed in mice injected with untreated C4HD cells, and of 10 days with respect to tumors growing in mice injected with 2′-O-methyl CONT2 siRNA-transfected cells. Histopathological analysis was performed by H&E staining of histological sections obtained from tumors excised at day 26. C4HD tumors grown in mice treated with MPA were ductal mammary carcinomas composed of solid pseudolobules of highly cohesive glandular cells that seldom showed tubular differentiation, separated by scanty fibroblastic stroma (Figure 4C). In this experiment, about 30% of the mass of tumors that developed from 2′-O-methyl ADT siRNA-transfected C4HD cells showed fibrosis, as well as infiltration by lymphocytes and polymorphonuclear neutrophils (PMN) (Figure 4C and Table S3).
Tumors developed from C4HD cells transfected with IGF-IR siRNA trigger features of an immune response

C4HD cells growing in 10 nM MPA were transfected with 100 nM 2′-O-methyl ADT or 2′-O-methyl CONT2. After 48 h, cells were inactivated by irradiation and then used for immunization of mice according to previously described procedures (Methods S1) [22]. A delayed-type hypersensitivity (DTH) assay was used to evaluate the immune response [22]. As shown in Figure 5A, DTH reactivity increased strongly in the group of mice immunized with 2′-O-methyl ADT-transfected C4HD cells compared with the control groups, suggesting that a cellular immune response was triggered by IGF-IR down-regulation in C4HD cells. Animals were euthanized and the capacity of isolated splenocytes to proliferate in vitro in the presence of C4HD cells was evaluated. Only splenocytes obtained from mice immunized with IGF-IR siRNA-transfected C4HD cells proliferated significantly in response to mitomycin C-treated C4HD cells, whereas splenocytes from the control groups did not respond (Figure 5B). To study the cytotoxic potential of the stimulated splenocytes isolated from mice immunized with siRNA-treated C4HD cells against untreated C4HD cells, a 51Cr release assay was performed. Splenocytes prepared from mice injected with C4HD cells transfected with 2′-O-methyl ADT effectively induced lysis of C4HD cells, exhibiting the highest (~50%) cytotoxic activity at an effector-to-target (E:T) ratio of 100:1 (Figure 5C). Lower cytotoxicity (20-30%) was observed over the range of E:T ratios tested with the control groups. Based on the results of the DTH assays, splenocyte proliferation assays and cytotoxicity experiments, we conclude that the immunization protocol with ADT siRNA-treated C4HD cells triggered features of a cellular immune response.

Down-regulation of IGF-IR in breast cancer cells induces proinflammatory cytokines

The effect of down-regulation of IGF-IR expression on cytokine production by C4HD breast cancer cells was profiled using a cytokine antibody array. While most of the tested cytokines were not affected by IGF-IR down-regulation, secretion of TNF-α and IFN-γ was significantly stimulated, as demonstrated by enhanced signals in conditioned media of C4HD cells treated with IGF-IR siRNA compared to untreated cells and to cells transfected with 2′-O-methyl control siRNA (Figure 6). Other cytokines, such as RANTES, G-CSF and IL-6, were not significantly affected by the specific silencing of IGF-IR.

Discussion

Small-molecule tyrosine kinase inhibitors and antibodies to IGF-IR aimed at blocking tumor growth in vivo are already being tested in clinical trials. Inhibition of IGF-IR/IGF-I expression by nucleic acid-based strategies could also be clinically useful, as shown by a pilot study of ex vivo treatment of malignant astrocytomas with antisense oligonucleotides targeted to IGF-IR [34]. This is especially underscored by the observation that IGF-I or IGF-IR inhibition with antisense-based approaches leads to an antitumor host response [13,17,22,23]. Other clinical trials have shown that hepatocarcinoma and glioblastoma cells treated with IGF-I antisense RNA constructs can serve as antitumor vaccines, inducing an effective immune response and a significant increase in median survival [35]. The triggering of a host immune response after down-regulation of IGF-IR/IGF-I using nucleic acids was exemplified in three mouse syngeneic models. Injection of melanoma cells pretreated with an IGF-IR antisense oligonucleotide prevented the growth of s.c. injected untreated cells [18].
The presence of an immune response after down-regulation of IGF-IR using an antisense RNA construct was also confirmed in a mouse neuroblastoma model [13]. A similar host response was described with antisense oligonucleotide treatment in a mouse breast cancer model [22]. However, the mechanism linking the immune response to specific inhibition of IGF-I or IGF-IR expression has not yet been determined. It may occur via specific silencing of IGF-IR or via specific motifs present in the nucleic acid sequences used to inhibit IGF-I or IGF-IR. Indeed, plasmids or oligonucleotides carrying non-methylated CpG sequences are known to trigger innate immune responses and can develop strong toxicity in human cells [36,37]. Here, we used siRNAs rather than antisense RNA vectors or antisense oligonucleotides to analyze the effects of IGF-IR inhibition on the triggering of an immune response. Since specific motifs in unmodified siRNA duplexes may activate cellular sensors of foreign RNA, leading to interferon induction and cell death [31], the siRNAs used in the in vivo experiments were modified at uridine positions of the sense strand with 2′-O-methyl-uridine. This modification reportedly blocks the immunostimulatory effect of the RNA duplex without significantly attenuating RNAi [31]. However, we noticed a reduction in efficiency with the modified siRNA, even though 2′-O-methyl residues were not introduced at positions 9 and 10 of the sense strand; modifications at these positions are known to reduce RISC assembly [31]. Similarly, a detectable reduction in silencing efficacy was found with modified IGF-IR siRNAs in which 2′-O-methyl nucleotides were placed at alternating positions on both strands [38]. Moreover, we showed that our modified siRNAs targeting IGF-IR did not inhibit the insulin receptor, which possesses high homology with IGF-IR.

Inhibition of IGF-IR reduced AKT and ERK phosphorylation and consequently reduced the rate of cellular proliferation. Furthermore, down-regulation of IGF-IR arrested cells in the G0/G1 phase of the cell cycle; the major effect probably occurred at the G1-S interface and was presumably mediated through the PI3K-AKT and/or ERK pathways [1]. Interestingly, phosphorylation of AKT was inhibited at 25 nM ADT siRNA, whereas no reduction of IGF-IR levels was observed. We propose that at this concentration IGF-IR was transiently inhibited, leading to a sustained decrease of p-AKT. After recovery of IGF-IR expression due to the instability of the siRNA and to cell division, p-AKT was still strongly inhibited, as previously observed by others [33,39]. To achieve extended silencing of IGF-IR with strong inhibition of AKT phosphorylation, a high concentration of siRNA was therefore chosen for the in vivo experiments. The inhibition of IGF-IR expression in C4HD mammary tumor cells significantly reduced tumor growth in vivo. This suppression of tumor growth might arise from several intracellular mechanisms and/or a host antitumoral immune response, as previously described [18]. The blockade of IGF-IR signaling may decrease cell proliferation or increase apoptosis, as shown with prostate cancer xenografts treated with an IGF-IR antibody [40]. Inhibition of C4HD cell proliferation could occur in vivo, given that IGF-IR silencing decreased in vitro cell proliferation [41]. The absence of in vitro apoptosis after silencing does not preclude the possibility that prolonged IGF-IR silencing could induce massive apoptosis in vivo.
Indeed, it has been shown that antisense IGF-IR treatment can cause a partial growth arrest of glioblastoma cells without strong apoptosis, and at the same time elicit almost complete cell death in vivo [42]. The reduced tumor growth could also result from activation of an antitumoral immune host response, possibly induced by apoptotic cells [18,43]. Histopathological analysis of tumors in mice treated with IGF-IR siRNA-transfected cells showed a lower number of mitotic events, concomitantly with lymphocyte and PMN infiltration, these cells being indicators of good prognosis in cancers. Moreover, we clearly demonstrated the presence of inflammatory features after vaccination with cells down-regulated for IGF-IR, in accordance with our previous observations [22]. Our results confirm that inhibition of IGF-IR can lead to the triggering of an immune response, whether achieved through antisense RNA, antisense oligonucleotides or siRNAs [17,22]. Tumors transfected with 2′-O-methyl ADT siRNA developed at later time points, suggesting a loss of the inhibitory siRNA during in vivo growth or the emergence of tumor cells resistant to the host immune response. Similarly, it was found that tumors may arise in vivo due to loss of the expression plasmid expressing IGF-IR antisense RNA [44]. We cannot exclude a recovery of IGF-IR after silencing, depending on the cell division rate. However, others showed that IGF-IR silencing with unmodified siRNAs can last for six days before re-expression [39]. To determine whether arising tumors escape from the immune response, previous tumor growth studies used immunodeficient mice [13,45]. Interestingly, the growth of glioblastoma or neuroblastoma cells down-regulated for IGF-IR in athymic nude mice was delayed, but to a lesser extent than in syngeneic rodents. This delay in tumor growth of glioblastoma was found to be proportional to the extent of cell apoptosis induced by IGF-IR antisense treatment [45,46]. Nevertheless, it was shown that the antitumoral immune host response triggered by IGF-IR down-regulation in syngeneic animals was highly effective when most of the injected cells underwent massive apoptosis [43]. A similar mechanism may occur with C4HD cells in syngeneic mice, despite the absence of in vitro apoptosis induced by IGF-IR silencing. Our study provides the first demonstration that inhibition of IGF-IR using 2′-O-methyl siRNA increased the secretion of TNF-α and IFN-γ, two proinflammatory cytokines. These two cytokines are multifunctional and are produced mainly by activated macrophages and lymphocytes, although non-hematopoietic cells, such as malignant cells or tumor stroma cells, also synthesize TNF-α [47]. The latter is a key mediator of the inflammatory response and can play a dual role in the tumor environment, inducing paradoxical effects. Due to its proapoptotic activity, TNF-α can inhibit the in vitro growth of some tumor cells, including breast tumor cell lines [48]. Moreover, forced expression of LIGHT, a TNF superfamily member, in tumor tissue induced priming of naive T cells and led to rejection of established tumors in mice [49]. Besides the action of TNF-α in suppressing proliferation and inducing apoptosis in a variety of cancer cells, it can also exert a growth-promoting effect on normal epithelia [50]. Its production by malignant or host cells in the tumor microenvironment has been associated with increased malignancy of tumors and favored metastasis [51,52,53].
We have also shown that an exogenous supply of TNF-α to C4HD cells promoted their in vitro growth through NF-κB-dependent pathways [29]. Consequently, this cytokine is able to exert pleiotropic effects on cells, with the paradoxical outcome of cell death or growth, depending on the context [54]. TNF-α may also act concomitantly with IFN-γ to block tumor stroma formation [55]. Both cytokines can sensitize metastatic colon carcinoma cells to TRAIL-induced apoptosis in vitro [56]. They play a role as immunoadjuvants through the induction of MHC class I molecules, activate immature dendritic cells, and trigger an adaptive immune response through the induction of CD8+ T cells [57,58]. Our data support the existence of crosstalk between the IGF-IR axis, the immune response and the secretion of the TNF-α and IFN-γ cytokines. Interactions between the endocrine and immune systems are well documented [59,60]. Indeed, IGF-I is known to play a prominent role in the regulation of immunity and inflammation [61]. Anti-apoptotic IGF-I can reduce TNF-α cytotoxicity in the inflammatory response to acute renal injury [62], whereas it may also potentiate TNF-α-induced apoptosis in specific cell types [63]. Proinflammatory cytokines often act as negative regulatory signals that temper the action of hormones and growth factors [59]. TNF-α blocked the growth of breast cancer cells by impairing IGF-IR signaling [64]. TNF-α also promoted neurodegeneration through inhibition of the IGF-I survival signal [65]. Interestingly, TNF-α and IFN-γ were shown to affect IGF-IR promoter activity and to decrease IGF-IR protein levels in human sarcoma cell lines [66]. Whereas high expression of TNF-α was thus correlated with diminished IGF-IR levels, our work showed that silencing IGF-IR in the C4HD breast cancer model increased TNF-α and IFN-γ secretion. We may speculate that when IGF-IR is silenced, TNF-α and IFN-γ secretion contributes to decreased tumor growth, either by priming the T cell response, blocking tumor stroma formation or triggering tumor apoptosis. We also cannot exclude that these cytokines elicit an anti-apoptotic effect that counteracts the tumor inhibition induced by IGF-IR silencing. Moreover, in addition to these cytokines, the triggering of the host cell response likely involves multiple factors that warrant further investigation, such as genome-wide expression profiling or complementary cytokine antibody array studies [67,68]. Presently, siRNA-based therapies are being evaluated clinically. However, delivery of these large and highly charged molecules still represents a major barrier to therapeutic application. Novel RNAi delivery methods are therefore under intense investigation [69]. The association of efficient delivery vehicles with suitable siRNA sequences is essential for achieving a specific and efficient antitumor immune response. For example, modifications of siRNAs by introducing micro-RNA motifs, aptamers or polymers could be envisaged to obtain bifunctional molecules combining specific silencing activities with proinflammatory properties, cell targeting or increased cell penetration [70,71,72]. Moreover, vehicles like polyethyleneimine may also harbor intrinsic immunostimulatory activities, which, in association with siRNA, may enhance the antigen-presenting capacity of mouse tumor-associated dendritic cells and induce direct tumoricidal activity [73]. Here, we showed that 2′-O-methyl-modified siRNAs targeted to mouse IGF-IR are able to block IGF-IR signaling and tumor growth by inducing features of an antitumor immune response.
In further studies, siRNAs targeting IGF-IR will be modified and complexed to specific carriers with adjuvant properties, to improve the effect of IGF-IR down-regulation and consequently modulate antitumor immune responses, with the goal of developing local and systemic RNAi therapies. It also remains to be assessed whether TNF-α and IFN-γ are effective biomarkers for the efficacy of anti-IGF-IR therapy.

Supporting Information

Methods S1. Supplementary methods.
CioSy: A Collaborative Blockchain-Based Insurance System

The insurance industry is heavily dependent on several processes executed among multiple entities, such as the insurer, the insured, and third-party services. The increasingly competitive environment is pushing insurance companies to use advanced technologies to address multiple challenges, namely lack of trust, lack of transparency, and economic instability. To this end, blockchain is used as an emerging technology that enables transparent and secure data storage and transmission. In this paper, we propose CioSy, a collaborative blockchain-based insurance system for monitoring and processing insurance transactions. To the best of our knowledge, the existing approaches do not consider collaborative insurance to achieve an automated, transparent, and tamper-proof solution. CioSy aims at automating insurance policy processing, claim handling, and payment using smart contracts. For validation purposes, an experimental prototype was developed on the Ethereum blockchain. Our experimental results show that the proposed approach is both feasible and economical in terms of time and cost.

Introduction

The insurance industry has seen unprecedented growth during the last decade due, at least in part, to advancements in communication and computation technologies. These new technologies have positively impacted our lives in many sectors, such as health, transport, and business. Like other beneficiaries of today's cutting-edge computation and communication technologies, the insurance industry is no exception, and it is harnessing these technologies to keep up with emerging trends. It is worth mentioning that the insurance industry covers many dimensions, among which life, Property and Casualty (P&C), and health are primarily important. Without loss of generality, the processes involved in the insurance industry depend on the transacting entities for the initiation, maintenance, and closure of insurance policies [1]. Each insurance policy, which is a contract between the insurer and the insured (referred to as the policyholder), determines the claims that the insurer is legally required to pay, as well as the premium that the insured promises to pay periodically (e.g., monthly). Generic existing insurance systems require manual interactions across the different transaction processes, resulting in slow processing and lengthy payment settlement times. Moreover, the insurance industry spends tens of millions of dollars each year on processing claims and loses millions of dollars to fraudulent claims [2]. To address these limitations, several researchers have investigated the application of blockchain technology in the insurance industry [3,4]. For instance, insurance policies can be transformed into smart contracts that will eventually help in automating claim processing, verification, and payment. This provides several-fold advantages, for instance, saving time, reducing costs, and preventing potential fraud. Indeed, a smart contract can be written to register customers and manage their policies. The main contributions of this paper are as follows:

1. We highlight the benefits of using blockchain technology and smart contracts to enable peer-to-peer collaborative insurance, where customers rely on each other to meet their insurance needs while eliminating a centralized authority, and we adopt them in a new scheme, called CioSy.
To the best of our knowledge, none of the existing solutions for digitized insurance covers all the steps of the insurance business process (from the teaming up of insurers to claim handling).

2. We introduce machine-readable and self-enforcing insurance policies and claims based on voting mechanisms and external oracles, and we detail the working principles of the proposed blockchain-based insurance system.

3. We evaluate CioSy's performance and scalability as the number of insurance claims and insurers increases. We analyze the performance of the proposed system in terms of execution time, scalability overhead, and consumed gas.

The rest of this paper is organized as follows. Section 2 discusses the existing blockchain-based insurance approaches, and Section 3 introduces some key concepts, followed by our proposed collaborative blockchain-based insurance scheme in Section 4. Section 5 describes the main framework functionality, and Section 6 discusses the implementation details and performance evaluation of the proposed scheme. Finally, Section 7 concludes the paper with possible future directions.

Related Work

Blockchain technology has proved to be a disruptive technology for many sectors, including finance, governance, trade, and ownership. Indeed, blockchain is used for cost reduction, transparency, and the tokenization of objects. For instance, Fallucchi et al. [7] aimed to "certify the data" without the need for a centralized organization, using blockchain to ensure traceability by storing data hashes in blockchain transactions, and IPFS to guarantee data availability. Several governments are launching blockchain technology pilot projects [8,9] to adapt to new technologies. According to the authors, their framework could be easily applied to the Valls city council open data portal project, which publishes datasets on the municipal web portal using blockchain and IPFS technologies [8], or to the blockchain project funded by the Department of Homeland Security (DHS) for secure digital identity management [9]. Similarly, blockchain technology has also penetrated the insurance industry, where it can be a game-changing technology to address the limitations of existing insurance solutions, including claim processing, automated payment, asset transfers, and fraud limitation [2]. In this context, we can distinguish three categories, namely automated claim handling insurances [10,11], pay-per-use insurances [3,4], and peer-to-peer insurances [12,13]. In [10], oracles are used to gather information from the real world and automatically invoke the smart contract pertaining to the claim. For instance, in travel insurance, an oracle can periodically check flight status; a smart contract can then read these external data and trigger a payment to refund the insured travelers in case of a flight delay. Insure chain [11] is another interesting proposal based on a smart contract that includes the rules associated with setting the premium and verifying the settlement. The verification of reimbursement conditions is based on the services of an oracle whose task is to certify the corresponding weather data and ensure its authenticity. Using oracles can speed up claim handling and reduce manual administrative mistakes; however, relying on external oracles is feasible only for a limited number of use cases. The majority of claims processed by insurance companies still need to be evaluated by an expert before being settled.
Smart contract-based payments have enabled new revenue sources, such as pay-per-use insurances [3,4]. Lamberti et al. [3] proposed an on-demand car insurance system using smart contracts and the Internet of Things (IoT) to decrease policy modification costs and limit insurance fraud. Similarly, Vo et al. [4] proposed a blockchain-based pay-as-you-go car insurance application. This application allows drivers who rarely use their cars to pay the insurance premium only for the particular trips they make. Blockchain-based pay-per-use insurances can save the insured money compared to classic insurance offers and give the insurance company a competitive advantage by attracting young and technology-enthusiast customers. However, these pay-per-use mechanisms require near-real-time data about the insured in order to limit insurance fraud. Thus, these approaches must address a key non-functional requirement of insurance applications: privacy protection for the insured.

Peer-to-peer insurance models enable the transfer of an asset without the need for an intermediary. For instance, Friendsurance [12] is a digital, scalable, and modular bank-assurance platform for banks, insurers, and other partners who want to retain and monetize their customers through meaningful services. Moreover, Dynamis [13] is a blockchain-based peer-to-peer insurance system aiming at providing unemployment insurance for a community of self-managed people, in terms of underwriting as well as claims acceptance and processing. However, according to [2], existing peer-to-peer insurances are not "real" peer-to-peer models, because they have a traditional insurance model or risk carrier behind them to support the heavy part of the insurance business.

Permissionless blockchains are decentralized systems designed to allow anyone to join the system, including Bitcoin and Ethereum, whereas permissioned blockchains are blockchain systems in which participation in some or all roles is restricted to a set of users, such as Hyperledger Fabric. In the insurance industry, several distributed ledger technologies, such as Ethereum [10-13], Hyperledger Fabric [14,15], and IOTA [16], have been used. For instance, BlockCIS [14] is a cyber insurance system that aims to provide an automated, real-time, and immutable feedback loop among the involved parties for assessing cyber risks. It has been built using the permissioned Hyperledger blockchain framework to isolate enterprise transactions from public access. While permissioned blockchains address the low performance and limited data confidentiality of permissionless blockchains, they come at the cost of sacrificing complete decentralization.

To this end, we note that the existing blockchain-based solutions for insurance use blockchain technology to automate payment while eliminating intermediary entities. However, they still rely on a traditional insurance model. While our solution also uses the automation feature of smart contracts, it is to be thought of as a continuously processing peer-to-peer insurance system. Such collaborative insurance could be dedicated to electric cars, pet care, etc., but is not proposed as a replacement for the traditional insurance model. Therefore, we aim at proposing a new collaborative insurance model that allows people who have similar profiles to come together in small communities to insure themselves in a more supportive, fair, and transparent way on an untrusted yet tamper-proof network.
We also note that the current solutions do not address the necessity of claim expert views in terms of claim verification, in both automated claim handling and peer-to-peer insurances. Therefore, in this paper, we address this limitation by relying on oracles only to automate claim creation. Then, in order to take the created claim into consideration, authorizations from both the insurer and the insured are required. Blockchain Technology and Smart Contracts for Insurance In this section, we present the key concepts of blockchain and smart contracts, and then we introduce the impact of using blockchain in the insurance industry. Blockchain and Smart Contracts Blockchain is a distributed and chronological database of transactions in which the transactions are stored in blocks. It is almost impossible to tamper with the blocks in a blockchain, and thus it can be trusted: trust is built without the need for a central authority. Such a distributed ledger can contain digital or physical assets that can be shared in a network across many institutions and geographical locations, where all members of the network hold an identical copy of the ledger [17]. Blockchain technology was originally designed to play a role primarily in the financial field, but in recent years it has also been exploited in other areas, such as healthcare information exchange [18], fairness-based packing of industrial IoT data [19], supply chain management for food traceability [20], and blockchain-based solutions for insurance [10][11][12][13]. Smart contracts are computer programs deployed on the blockchain. They are triggered and perform pre-defined actions when specific conditions are met. The smart contract functions will always respond when invoked, and they cannot be censored or altered once deployed [17]. Smart contracts bring automation to the network and the ability to convert paper contracts into digital contracts. Compared to traditional contracts, smart contracts enable users to codify their agreements and trust relations by providing automated transactions without the supervision of a central authority [21]. It is also worth mentioning that smart contracts cannot communicate directly with external systems; this communication is carried out by oracles. An oracle is a third-party information source that provides information to the smart contract on the blockchain through Application Programming Interfaces (APIs). For instance, the Provable Ethereum API is the leading oracle service for smart contracts and blockchain applications [22]. Blockchain at the Service of Insurance Blockchain provides many advantages and benefits to financial engineering and particularly the insurance sector: Automation: Smart contracts provide a high degree of automation by encoding the business rules of an insurance policy in software code deployed on the blockchain. The business processes in the insurance industry become automated and fast (from customer registration all the way to claim handling). Time saving: Without involving banks, asset transfers can be made faster because cryptocurrencies are moved directly from one wallet address to another without intermediate steps. Thus, blockchain-based transactions are quicker than traditional bank transfers (especially in the case of overseas asset transfers). Reduced cost: By removing intermediaries, the cost of money transfers can be reduced (e.g., bank commissions are not needed).
Moreover, by eliminating manual interactions across insurance system entities, administrative and operational costs can be reduced. Improved transparency: Transparency is guaranteed because the blockchain can be accessed worldwide. In addition, the blockchain can become the repository of a huge amount of information that cannot be repudiated and can be used for data analytics in the insurance sector. Such transparency enables regulators and auditors to detect suspicious transaction patterns and market behaviors. System Model In this section, we present the system model and describe the entities involved in the blockchain-based insurance system. Main Goals Although multiple researchers have studied the impact of blockchain technology on the insurance industry, there are still many outstanding challenges to be addressed to enable collaborative insurance. In essence, the collaborative insurance model is inspired by the concept of a collaborative economy. The latter "is an economic model where ownership and access are shared among corporations, startups, and people. This results in market efficiencies that bear new products, services, and business growth" [23]. Therefore, collaborative insurance, as defined/considered in this paper, is a peer-to-peer network where customers rely on each other instead of traditional insurance companies to meet their insurance needs with the help of a web-based middleman. Several profit-driven models of the collaborative economy, such as Airbnb, Uber, etc., exist. For the insurance industry, eliminating the necessity to trust a middleman (e.g., a web-based middleman) is required in order to incite customers to join the peer-to-peer insurance network and share their money. Moreover, distributed data storage is needed to eliminate the single point of trust/failure that arises when all the collaborative insurance transactions are stored by a centralized authority. Furthermore, a machine-readable and self-enforcing insurance policy needs to replace the traditional insurance policy in order to automate and speed up the insurance business process. Finally, a collaborative insurance solution needs to take advantage of digital signatures in order to guarantee the three security properties, i.e., confidentiality, integrity, and the sender's identity (i.e., authentication data), during insurance data exchange, so as to limit any potential fraud. To the best of our knowledge, none of the existing solutions for digitized insurance consider all the aforementioned requirements throughout the insurance business process (from insured registration through claim handling). For this purpose, we propose CioSy, a blockchain-based collaborative insurance solution that harnesses the benefits of both blockchain technology and smart contracts. The main idea is to develop a continuous monitoring and processing collaborative insurance system by (i) managing the money collected from the insurers using a smart contract to eliminate the need to trust the involved insurance parties, (ii) implementing the insurance policies and the claims as smart contracts, (iii) deploying the contracts on a distributed platform using blockchain to automate the execution of the agreement between the insurer and the insured, and (iv) recording all the insurance transactions in a transparent and tamper-proof manner. Functional Entities As depicted in Figure 1, there are mainly four entities in our system: insured, insurer, third-party web APIs, and auditor.
The role of these entities is described as follows. 1. Insured: This entity is interested in purchasing policies offered by an insurer in order to be covered (depending on the type of insurance). In case of a claim request, the insured can receive refunds from the insurer (subject to verification). 2. Insurer: This entity is represented by a smart contract shared among several customers (e.g., people, banks, insurance companies, etc.). These customers collaborate to pool their contributions/premiums and protect each other by refunding any customer facing a situation that warrants a refund. The refunded entity is then required to contribute to the fund that is used to pay future claims. This smart contract provides insurance for the insured by proposing multiple insurance policy types. Each insurance policy determines the claims that the insurer is legally required to pay. To reduce manual interactions, claim requests are invoked automatically after notifications/warnings are sent by third-party web APIs. 3. Third-Party Web APIs: These entities provide specialized services that are useful for invoking claim requests; for instance, an airline's API that notifies the policy smart contract in case of a flight delay, or IoT-equipped vehicles that report accidents in near-real time. In case of a dispute between the insurer and the insured, there may be a need for an auditor. 4. Auditor: This entity investigates and audits the insurance transactions stored on the blockchain to settle legal disputes between the insurer and insured. The blockchain-enabled distributed platform facilitates the auditor's task, thanks to the salient features of blockchain, i.e., transparency, tamper-resistance, and non-repudiation. In the following section, we discuss the proposed blockchain-based framework for insurance in detail. Blockchain-Based Framework for Insurance To exchange insurance-related transactions in a trustless network, we leverage smart contracts to define a blockchain-based insurance framework. The latter aims at automating and speeding up business processes in the insurance industry, improving insurance transaction transparency and non-repudiation, and reducing administrative and operational costs by eliminating manual interactions across insurance system entities. Proposed Smart Contracts In order to automate the execution of the agreement between the insurer and the insured, three smart contracts are proposed: InsurancePool, InsurancePolicy, and Claim. Figure 2 shows the proposed smart contracts, which automatically enforce the agreement between the insurer and the insured. The smart contracts' functions are executed when a set of predefined conditions is satisfied. • InsurancePool smart contract: it is hosted on the blockchain and used by multiple clients interested in proposing multiple insurance offers. The InsurancePool smart contract is designed to enable several insurers to collaborate on a common project that offers collaborative insurance to refund the insured for possible damage(s) during designated incidents.
This smart contract defines a set of functions: (i) a payContribution function that enables each insurer to participate by paying an amount of money into the insurance pool; (ii) an updateAPI function that enables the insurers to update the link to the third-party web API; this function automatically invokes one of the Claim smart contract's functions; (iii) a voteToAuthorize function that enables the insurers to decide whether or not to authorize opening a claim; this function also automatically invokes one of the Claim smart contract's functions; and (iv) a distributeSurplus function that is invoked at the end of the year in order to distribute the surplus of the collected money to all insurers who have not had any claims during the last year (each according to their contribution). • InsurancePolicy smart contract: it is created by the customer interested in purchasing policies offered by the insurer and hosted on the blockchain. The InsurancePolicy smart contract is designed to enable the insured, known as the policyholder, to hold a machine-readable and self-enforcing insurance policy. This smart contract defines a set of functions: (i) a payPremium function that enables the policyholder to periodically pay a premium, which is a fixed amount of money, to the insurer; (ii) a cancelPolicy function that enables the policyholder to cancel a purchased insurance policy, in which case the insurance policy status is updated to "Canceled" and the policy is canceled; (iii) an updateClaimDetectionURL function that is invoked by the updateAPI function defined in the appropriate InsurancePool smart contract instance in order to update the link to the third-party web API; and (iv) a claimCreation function that is connected to an external third-party web API and, upon receiving a claim notification, automatically creates/deploys a new instance of the Claim smart contract. The InsurancePolicy smart contract inherits from the usingProvable smart contract [22], which connects our proposed smart contract with the external data provided by the third-party web APIs. • Claim smart contract: it is created by an InsurancePolicy smart contract and hosted on the blockchain. The Claim smart contract is designed to automate claim processing, verification, and payment. This smart contract defines a set of functions: (i) an authorizeOpen function that enables the insurer to update the status of the claim from "Created" to "Open" or to "Rejected"; this function is invoked by the voteToAuthorize function defined in the InsurancePool smart contract; (ii) a triggerPayment function that is automatically invoked in order to refund the claimed amount and ask for closing the claim; (iii) a closeClaim function that is automatically invoked once the insured is refunded, updating the claim's status to "Closed"; and (iv) a cancelClaim function that enables the insured to cancel a claim, in which case the claim status is updated to "Canceled" and the claim is canceled. Due to the lack of space, we provide the full definition of the smart contracts in our GitHub repository (https://github.com/Floukil/BlockchainBasedInsurance, accessed on 1 June 2021). Before detailing the working principles, the sketch below illustrates the claim lifecycle these three contracts implement.
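The following Python sketch models the claim lifecycle as a state machine. It is an illustrative model only, not the deployed Solidity code: the method names mirror the paper's functions, but the majority-vote rule, the addresses, and all amounts are assumptions made for the example.

```python
from enum import Enum

class ClaimStatus(Enum):
    CREATED = "Created"
    OPEN = "Open"
    REJECTED = "Rejected"
    CANCELED = "Canceled"
    CLOSED = "Closed"

class Claim:
    """Toy model of the Claim contract's state machine (illustrative only)."""

    def __init__(self, insured: str, claimed_amount: float, deductible: float):
        self.insured = insured
        self.claimed_amount = claimed_amount
        self.deductible = deductible
        self.status = ClaimStatus.CREATED

    def cancel_claim(self) -> None:
        # Mirrors cancelClaim: only an undecided claim can be canceled.
        if self.status is ClaimStatus.CREATED:
            self.status = ClaimStatus.CANCELED

    def authorize_open(self, votes: list) -> None:
        # Mirrors authorizeOpen, invoked by the pool's voteToAuthorize.
        # A simple majority rule is assumed here; the paper only states
        # that the insurers vote to authorize or reject the claim.
        if self.status is not ClaimStatus.CREATED:
            raise RuntimeError("claim already decided")
        self.status = (ClaimStatus.OPEN if sum(votes) > len(votes) / 2
                       else ClaimStatus.REJECTED)

    def trigger_payment(self, pool_balance: float) -> float:
        # Mirrors triggerPayment: the deductible flows insured -> pool and
        # the claimed amount flows pool -> insured; closeClaim then fires.
        if self.status is not ClaimStatus.OPEN:
            raise RuntimeError("claim is not open")
        pool_balance += self.deductible - self.claimed_amount
        self.status = ClaimStatus.CLOSED   # closeClaim
        return pool_balance

# End-to-end toy run: five insurers pooled 100 each (payContribution),
# an oracle notification created the claim (claimCreation), three of
# five insurers vote to open it, and payment settles the claim.
pool_balance = 5 * 100.0
claim = Claim(insured="0xAb...", claimed_amount=80.0, deductible=10.0)
claim.authorize_open(votes=[True, True, True, False, False])
if claim.status is ClaimStatus.OPEN:
    pool_balance = claim.trigger_payment(pool_balance)
print(claim.status.value, pool_balance)  # -> Closed 430.0
```

In the deployed system these transitions are enforced by the contracts themselves on the blockchain, rather than by a trusted host running the model.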
Main Functions and Working Principles of the Proposed Blockchain-Based Insurance Based on the proposed smart contracts, our blockchain-based insurance framework includes the following functions: (i) gathering the insurers as a collaborative community, (ii) purchasing an insurance policy offered by an insurer, (iii) creating a claim by the insurance policy, and (iv) automating claim processing and refund payment. Gathering the Insurers as a Collaborative Community In order to gather several insurers as a collaborative community, an interested insurer can initiate a collaborative insurance network through the following steps: • Step 1: Create (i.e., write and compile) an InsurancePool smart contract. • Step 2: Send a transaction to deploy the created smart contract onto the blockchain. Once hosted, the InsurancePool smart contract instance gets a unique blockchain address. In order to participate in the collaborative insurance, other interested customers can contribute to the insurance pool through the following step: • Step 3: Send a transaction to call the payContribution function defined in the created InsurancePool smart contract to periodically pay an amount of money. Then, each insurer can purchase an insurance policy in order to be insured. Purchasing an Insurance Policy Offered by an Insurance Pool In order to facilitate the management of the insurance, an interested customer can purchase an insurance policy offered by an insurance pool through the following steps: • Step 1: Create (i.e., write and compile) an InsurancePolicy smart contract. • Step 2: Send a transaction to deploy the created smart contract onto the blockchain with the designated insurer's blockchain address (i.e., the blockchain address of the InsurancePool smart contract) and the fixed premium payment amount. The sender of this transaction becomes the owner of the InsurancePolicy smart contract instance, known as the policyholder. • Step 3: The designated insurer sends a transaction to call the updateClaimDetectionURL function defined in the InsurancePolicy smart contract to update the link to the third-party web API, which is responsible for claim notifications. Creating a Claim by the Insurance Policy In case of a designated incident (e.g., car accident, flight delay, etc.), an InsurancePolicy smart contract instance can create a new claim through the following steps: • Step 1: Receive a notification from an associated third-party web API that a potential claim event has occurred. • Step 2: The callback function of the usingProvable smart contract [22] internally calls the claimCreation function defined in the InsurancePolicy smart contract in order to deploy a new Claim smart contract instance onto the blockchain. The same InsurancePolicy smart contract instance can create multiple Claim smart contract instances. Once created, both the insurer and the insured receive the blockchain address of the new Claim smart contract instance. Automating Claim Processing and Refund Payment Both the insurer and the insured can interact with the Claim smart contract instance through the following steps: • Step 1: The insured sends a transaction to call the cancelClaim function defined in the Claim smart contract to cancel the claim and update the claim's status from "Created" to "Canceled". One of the most common reasons why the insured might want to cancel a claim is not wanting to pay the deductible, which is the amount of money paid by the insured before the insurer refunds the claimed amount.
The objective of the previous step is two-fold: either the insured abandons the refund by canceling the claim, or the insured authorizes the claim handling. In case of an authorization, the insurer starts the claim handling through the following steps: • Step 1: The insurer sends a transaction to call the authorizeOpen function defined in the Claim smart contract to authorize opening the claim, or to reject it, after claim verification by claim experts and confirmation by the insured. • Step 2: Once the claim is open, the triggerPayment function is called internally in order to transfer the deductible amount from the insured's account to the insurer's account and transfer the claimed amount from the insurer's account to the insured's account. • Step 3: The closeClaim function is called internally to update the claim's status to "Closed". Performance Evaluation This section provides experimental results to evaluate performance and demonstrate the feasibility of the proposed framework. We first introduce the experimental setup, define a use-case for insurance policy handling, and evaluate the performance of the proposed blockchain-based solution. Finally, we discuss the relationship of premiums, refund payments, and risk management in collaborative insurance systems. Experimental Setup Ethereum is currently the most commonly used blockchain platform for the development of smart contracts [17]. Hence, we implemented our proposed smart contracts using the Solidity language [24] and deployed them to the Ethereum test network. The experiments were performed using the Remix tool, which supports testing, debugging, and deploying smart contracts on the Ethereum blockchain. In order to deploy a lightweight blockchain, we used Ganache as a personal blockchain for Ethereum development [25]. We then created a test system using the Truffle development framework [26], the most popular development framework for Ethereum. The reason behind the use of the Ethereum blockchain is that Ethereum enables developing and executing advanced, customized smart contracts using the Solidity programming language; Solidity is also supported by other blockchain platforms, such as Hyperledger Fabric, which supports multi-language smart contracts. Thus, the proposed design can be supported by several blockchain platforms. While alternative blockchains are emerging, Ethereum is considered the most solid and widespread blockchain that allows decentralized applications to be built on top of it. All the experiments were conducted on a computer with an Intel Core i5 CPU (running at 2.30 GHz with 8 GB RAM). Use-Case: Insurance Policy Handling We implemented a test system that consists of several nodes, namely 50 insurers, 1000 insured, 1 third-party Web API, and 1 auditor. We assumed that each node (except the third-party Web API) is represented by an Ethereum address associated with a pair of public/private keys. In our experiments, we used contract events to automate the actions taken by the different nodes; event callbacks were implemented in the test system using the web3.js library [27]. Suppose a home is equipped with sensors monitored by smart applications that can notify a smart contract in case of serious damage or a designated event. Such an application can initiate a claim or contact a repairer for quicker assistance when needed. Let the insurer be a group of individuals, the insured be the home's owner, and the third-party Web API be an API provided by the home applications.
First, an individual creates an instance of the InsurancePool smart contract on the blockchain to create a collaborative group that insures group members against water damage. Each group member can purchase an insurance policy to be insured. Thus, the home's owner creates an instance of the InsurancePolicy smart contract on the blockchain to generate an insurance policy, indicating the blockchain address of the insurer. The created smart contract serves as the home's insurance policy and is connected to the home application API. In case of damage, the home application API notifies the InsurancePolicy smart contract, which waits for authorizations from both the insurer and the insured before automatically deploying an instance of the Claim smart contract. Suppose that the home's owner does not want to pay a large amount if a group member claims damages; he/she then sets a fixed limit per single premium in his/her insurance policy. To ensure fairness, in case of a claim, the home's owner will not receive more than his/her fixed premium limit per group member, even if the other group members accept to pay a higher premium in their insurance policies. At the end of the year, if the insurers have no claims, they recover part of their contribution thanks to the distributeSurplus function. For validation purposes, in the rest of this section we demonstrate the feasibility of insurance policy handling by implementing the smart contracts and interacting with them through a set of transactions. During our experiments, we recorded the computing time, in milliseconds, of each aforementioned step. Each step consisted of one or several transactions that invoked the appropriate smart contract functions in order to read from or write to the deployed smart contract. Performance Evaluation Metrics To evaluate the performance of our proposed scheme, we consider the computation overhead in terms of time cost, the scalability overhead, the consumed gas, and the computational cost of claim authorization. Computation Time Cost We compute the processing time needed to validate a premium payment transaction that invokes the payPremium function, an API link update transaction that invokes the updateClaimDetectionURL function, a claim creation transaction that invokes the claimCreation function, and a policy canceling transaction that invokes the cancelPolicy function. Thus, we measure the processing time of invoking the aforementioned functions defined in the InsurancePolicy smart contract. Figure 3 depicts the computational cost of the invoked functions for one insured entity. Only 222 milliseconds are needed for an insured to deploy a new Claim smart contract instance, instead of the several minutes or hours it usually takes with a traditional insurance policy contract. The processing time of the other invoked functions varies from 100 to 160 ms. Thus, our solution is able to meet our stated objectives, namely fast insurance business processes, reduced administrative and operational costs through the elimination of manual interactions, and tamper-proof recording of the insurance transactions. Scalability Overhead We generated up to 1000 different insured accounts, and compiled and deployed an InsurancePolicy smart contract for each account. Figure 4 depicts the time taken while invoking the InsurancePolicy smart contract functions. We observe that the processing time increases with the number of insured accounts, ranging from 0 to 240 s.
Furthermore, Figure 4 shows that the total time required for smart contract interaction grows linearly with the number of insured accounts. Cost Overhead We also evaluated the gas consumed by a transaction while invoking one of the InsurancePolicy smart contract's functions, namely claimCreation, payPremium, updateClaimDetectionURL, and cancelPolicy, after deploying it on the blockchain. Figure 5 depicts the cost incurred in terms of gas by the different invoked functions of the InsurancePolicy smart contract. We observe that the consumed gas varies with the invoked function: functions that need more computational resources cost more gas than functions that require fewer. As expected, the deployment of a new InsurancePolicy smart contract instance and the invocation of the claimCreation function, which automatically deploys a new Claim smart contract instance, require more gas than the invocation of the remaining functions. Currently, 1 gas costs about 20 Gwei (i.e., 20 × 10−9 Ether) and the exchange rate is about 2211 USD for 1 Ether at the time of writing. Thus, we compute the fiat cost by multiplying the used gas by the gas price for transactions that invoke the smart contract functions (a worked example is given below). Computational Cost for Claim Authorization We compute the confirmation time to authorize opening or rejecting a claim while varying the number of insurer accounts from 1 to 50. Thus, we measure the processing time of invoking the authorizeOpen function defined in the Claim smart contract. As aforementioned, the authorizeOpen function is invoked by the voteToAuthorize function defined in the InsurancePool smart contract once all the insurers have given their votes. Figure 6 depicts the correlation between the number of collaborative insurance community members (i.e., insurers) and the corresponding time taken for opening or rejecting one claim. We observe that the authorization time grows with the community size: the larger the number of insurers, the higher the computational cost for claim authorization, since a larger number of insurer votes is required. Even with a significant number of collaborative insurance community members, the computational cost for claim handling is lower than that of a traditional insurance system that requires manual interaction and human confirmation.
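As a quick worked illustration of the cost arithmetic above, the sketch below converts a transaction's gas consumption into USD using the gas price (20 Gwei) and exchange rate (2211 USD per Ether) quoted in the text. The 100,000-gas figure is a hypothetical placeholder, not a value measured in Figure 5.

```python
GWEI_PER_ETHER = 1e9

def tx_cost_usd(gas_used: int, gas_price_gwei: float = 20.0,
                usd_per_ether: float = 2211.0) -> float:
    """Fiat cost of a transaction: used gas x gas price x exchange rate."""
    ether_spent = gas_used * gas_price_gwei / GWEI_PER_ETHER
    return ether_spent * usd_per_ether

# Hypothetical function call consuming 100,000 gas:
print(round(tx_cost_usd(100_000), 2))  # 0.002 ETH -> 4.42 USD
```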
Discussion The proposed framework is a peer-to-peer insurance in which the insured are comfortable simply sharing risk with each other. Thus, CioSy is more suitable for insurance products with a very low expected value of risk. Indeed, solidarity is a major rule in such collaborative insurance: through the premiums, everybody pays for the damages of the others, and members may not wish to generate profit. However, catastrophes may happen, such as a hurricane that disrupts flights for a few days or a volcanic eruption that closes down numerous airports for several days. In such cases, premiums could be insufficient to fulfill all the claims. Therefore, a collaborative insurance requires a multi-layer risk management strategy to offload and manage the risks involved. For instance, villagers setting up a collaborative insurance need to manage the catastrophe risk of a disaster that hits their village and affects everyone. Two possible solutions could be considered: reinsurance by traditional insurance companies and collateralization with cryptographic tokens. The first relies on insuring the actual insurance pool with another insurance company, which in turn is often insured by yet another, usually state-owned, insurer. The second consists of tokenizing one or more of the insurance pools and making them available to any investor seeking opportunities to earn passive income on their crypto assets [28]. In order to keep the decentralized character of our proposed framework and avoid relying on a single traditional reinsurance company, CioSy could be extended to include a risk pool, where a portion of the premium originally paid by the insureds is paid to investors who are willing to accept a catastrophe risk. In this context, risk pool tokens, proposed as part of the Etherisc Protocol [29], could be used. Investors could buy risk pool tokens by paying ethers to an appropriate smart contract. In the absence of a catastrophe, the token holder periodically receives a portion of all premiums paid by the insureds as compensation for carrying the catastrophe risk, and on the expiration date of the token, the smart contract returns the original investment to the token holder's wallet. If a catastrophe occurs, the token holders may lose some or all of their original investment. These extensions to the proposed framework require further investigation in future work. Conclusions In this paper, we proposed CioSy, a collaborative blockchain-based insurance system for monitoring and processing insurance transactions. We discussed the proposed framework's main functionality and implemented it on the Ethereum blockchain with smart contracts. We also conducted experiments to evaluate the performance of the proposed scheme. The obtained results showed that our approach is feasible and enables time and cost savings. The most relevant open issue on the Ethereum blockchain is the gas price [30]. Developments are in progress, and several improvement proposals aim to solve this problem: one of the most well-known is the Ethereum Improvement Proposal EIP-1559 [31], while Ethereum 2.0 will use proof-of-stake consensus instead of proof-of-work, which is less expensive and more energy-efficient. As future work, we plan to provide a formal security proof for the proposed model. Furthermore, we plan to examine the possibility of investing the money collected by an insurance pool using blockchain technology, in order to incentivize banks and insurance companies to join the proposed collaborative insurance system.
Evaluation of Acetylcholinesterase Biosensor Based on Carbon Nanotube Paste in the Determination of Chlorphenvinphos An amperometric biosensor for chlorphenvinphos (an organophosphorus pesticide) based on carbon nanotube paste and the acetylcholinesterase enzyme (CNTs-AChE biosensor) is described herein. This CNTs-AChE biosensor was characterized by scanning electron microscopy (SEM) and electrochemical impedance spectroscopy (EIS). The SEM results show the presence of CNTs and small lumps, due to the enzyme AChE, which forms cauliflower-like structures. From the EIS analysis, it is possible to observe an increased charge-transfer resistance R_ct for the CNTs-AChE biosensor when compared to the carbon nanotube paste electrode for the [Fe(CN)6]4−/3− reaction. Using a chronoamperometric procedure, a linear analytical curve was observed in the 4.90 × 10−7–7.46 × 10−6 M range, with a limit of detection of 1.15 × 10−7 M. The determination of chlorphenvinphos in an insecticide sample proved to be in agreement with the standard spectrophotometric method, at a 95% confidence level and with a relative error lower than 3%. The CNTs-AChE biosensor thus offers easy preparation, fast response, sensitivity, durability, and good repeatability and reproducibility. Introduction Carbon nanotubes (CNTs) consist of cylindrical graphene sheets with nanometer diameters and present many unique characteristics, such as a large ratio of surface area to mass, high electrical conductivity, and remarkable mechanical strength. CNTs include both single-walled and multiwalled structures. Single-wall CNTs (SWCNTs) are composed of cylindrical graphite sheets of nanoscale diameter capped by hemispherical ends. Multi-wall CNTs (MWCNTs) consist of several to tens of incommensurate concentric cylinders of these graphitic shells with a layer spacing of 0.3–0.4 nm. MWCNTs tend to have diameters in the 2–100 nm range and can be considered as a mesoscale graphite system [1]. Since their discovery in 1991 [2], extensive applications have been found in the physical, chemical, and material science fields. The advantages of CNTs, such as their high surface area, favorable electronic properties, and electrocatalytic effect, have recently attracted considerable attention for the construction of electrochemical biosensors. Electrochemical biosensors, particularly enzyme electrodes, have benefited greatly from the ability of CNT-based transducers to promote the electron-transfer reactions of enzymatically generated species, such as hydrogen peroxide [3,4] or NADH [5], and from the transducers' resistance to surface fouling. Electrochemistry is a powerful tool for real-time detection compared to fluorescence and spectrophotometry, which involve expensive detection systems. The combination of enzymatic reactions with electrochemical monitoring of electroactive enzymatic products has allowed the development of enzyme-based electrochemical biosensors for the sensitive and rapid determination of important environmental pollutants. Chlorphenvinphos is an organophosphate (OP) compound used as an agricultural and household pesticide [6]. The toxic action of chlorphenvinphos is based on its ability to irreversibly modify the catalytic serine residue in acetylcholinesterase (AChE), effectively preventing nerve transmission by blocking the breakdown of the neurotransmitter acetylcholine [7]. For these reasons, the rapid determination and reliable quantification of trace levels of chlorphenvinphos are important for health and environmental reasons.
Biosensors based on the inhibition of AChE have been widely used for the detection of OP compounds. The methodology involves measuring the uninhibited activity of the enzyme, followed by an incubation period for the reaction between the enzyme and the inhibitor, and then measuring the enzyme activity after inhibition. Recently, CNTs have been used for the construction of biosensors based on the inhibition of AChE activity for the determination of OP compounds [8]. The biosensor was prepared by mixing CNTs with mineral oil. Such composite electrodes combine the ability of CNTs to promote electron-transfer reactions with the attractive advantages of paste electrode materials. These materials allow easy immobilization, reproducible electrochemical behavior, and useful physical characteristics [9][10][11]. This study describes the preparation and application of an acetylcholinesterase biosensor based on carbon nanotube paste (CNTs-AChE biosensor) for the amperometric detection of chlorphenvinphos. Compared to other analytical techniques, such as gas and liquid chromatography, enzyme-based electrochemical biosensors offer good selectivity, sensitivity, rapid response, and reduced size in pesticide determination. Reagents and Solutions. All reagents were of analytical grade and used as received. The solutions were prepared with reverse osmosis water Gehaka (OS20 LX FARMA). The multiwall carbon nanotubes (MWCNT, 20–40 nm diameter, 5–15 μm length) came from Shenzhen Nanotech Port Co. Ltd. (Shenzhen, China). Acetylcholinesterase (AChE) (0.3 U mg−1) came from bovine erythrocytes. Acetylthiocholine iodide was purchased from Aldrich, and a 1.2 × 10−2 M stock solution was prepared in phosphate buffer pH 7.4. Chlorphenvinphos was purchased from Aldrich, and a 1.0 × 10−3 M stock solution was prepared in methanol. 2.2. Apparatus. The electrochemical measurements were performed using a model PGSTAT20 Autolab (Eco Chemie, Utrecht, Netherlands) potentiostat/galvanostat coupled to a personal computer and controlled with GPES 5.8 software. The electrochemical impedance spectroscopy (EIS) data were obtained using a PGSTAT30 Autolab (Eco Chemie, Utrecht, Netherlands) potentiostat/galvanostat controlled with FRA software. The electrochemical cell was assembled using a conventional three-electrode system: an Ag/AgCl in KCl (3 mol L−1) reference electrode, a Pt counter electrode, and a CNTs-AChE biosensor working electrode (1.2 mm diameter). All experiments were carried out at room temperature. An NIR Cary Model 5G spectrophotometer, coupled to a personal computer, controlled with Cary Win UV software, and fitted with a quartz cell (optical path of 1.00 cm), was used for the comparative method. Scanning electron microscopy was performed in an FEG-VP Zeiss Supra 35 microscope, operated at 5 kV, at different magnifications. Preparation of the CNTs-AChE Biosensor. In previous studies carried out by our group, the most successful carbon nanotube paste was found using 6/4 (w/w) CNTs/Nujol. Therefore, this carbon nanotube paste composition was used in the present investigation for the construction of the biosensor using the acetylcholinesterase enzyme. The carbon nanotube paste electrode modified with AChE was prepared by mixing carbon nanotubes and Nujol in an agate mortar with a pestle. Subsequently, 0.050 g of this mixture was modified by adding 6.75 mg of AChE 250 UN and mixing until a uniformly wetted paste was obtained.
After this, the paste was packed into a glass tube (φ = 1.2 mm), and a copper wire was embedded in the paste for electrical connection. Procedures. Square wave voltammograms for 3.0 × 10−4 M acetylthiocholine iodide in phosphate buffer pH 7.4 and 0.14 U of acetylcholinesterase were obtained between 0 and 1.0 V at increments of 2 mV and a frequency of 50 Hz, in order to evaluate the oxidation process of thiocholine, the product of the enzymatic reaction, using the 60% (w/w) carbon nanotube paste electrode. Amperometric analysis was performed using 3.0 × 10−3 M acetylthiocholine iodide in phosphate buffer pH 7.4 and the CNTs-AChE biosensor. The applied potential was 0.3 V. For the determination of chlorphenvinphos in samples, standard addition experiments were carried out using the amperometric method. Insecticide samples were prepared by dissolving in methanol and diluting to volume with phosphate buffer pH 7.4. An aliquot of this solution was then transferred into the cell, and amperometric measurements were recorded in triplicate. Next, three successive additions of 100 μL of a standard 2.5 × 10−4 M chlorphenvinphos solution in phosphate buffer pH 7.4 were performed. After each addition, amperometric measurements were recorded and the mean current was determined. A spectrophotometric method for the determination of chlorphenvinphos was used to compare the analytical results obtained with the proposed method. The electrochemical impedance spectroscopy (EIS) data were obtained for frequencies from 10,000 Hz to 0.01 Hz at an amplitude of 10 mV. The impedance spectra were obtained at the ac potential, in 5 mM potassium ferricyanide in 0.5 M KCl, in the format of Nyquist plots. Results and Discussion CNTs are known to promote electron transfer reactions due to their electronic structure, high electrical conductivity, and redox active sites [12,13]. The electrocatalytic action of CNTs facilitates low-potential measurements of the enzymatic reaction product. For this reason, the CNTs-AChE biosensor was prepared without the introduction of redox mediators (RM), which shuttle electrons between the active site of redox enzymes and an electrode, replacing the natural cosubstrate of the enzyme. The CNTs-AChE biosensor combines the ability of carbon nanotube paste to promote electron-transfer reactions with the attractive advantages of paste electrode materials. These materials allow easy enzyme immobilization, reproducible electrochemical behavior, and useful physical characteristics. The use of CNTs as a matrix for immobilization offers advantages for chemically modified electrodes, primarily in the diversity of preparation methods for sensors and biosensors. Since CNT matrices are effective both in the immobilization process and as transducer material, they are used in composite production as carbon paste. The coupling of the biocatalytic material to the electrode surface can be promoted through the interaction between the functional groups of the materials and the enzyme's terminal amino acids [14]. For this reason, the CNTs-AChE biosensor was prepared without the introduction of a solid support for the immobilization of the AChE. Micrographs of the carbon nanotube paste electrode (60% w/w) and CNTs-AChE biosensor surfaces after polishing with 600 grit sandpaper are presented in Figure 1 at different magnifications. The comparison of images shows a significant difference in the morphology of the materials.
In Figure 1(d), it is possible to observe the presence of CNTs and small lumps, due to the enzyme AChE, which forms cauliflower-like structures. The EIS experiments, through Nyquist plots, allow charge-transfer resistance values to be obtained for the electrode process under study. The experiments were carried out under the following conditions: (A) carbon nanotube paste electrode (60% w/w), at 0.346 V (ac potential); (B) CNTs-AChE biosensor, at 0.192 V; both in 5 mM Fe(CN)6 3− in 0.5 M KCl solution. All responses (Figure 2) presented typical semicircles at high frequencies and a straight line at low frequencies, corresponding to kinetic and diffusional processes, respectively. To fit the EIS data, the corresponding spectra were modeled using a Randles equivalent circuit of mixed kinetic and diffusional control (insets in Figure 2(a)), where R_s is the electrolyte resistance, C the interface capacitance, W the Warburg impedance (domain of diffusional control) resulting from the diffusion of Fe(CN)6 3− towards the electrode surface from the bulk of the electrolyte, and R_ct the charge-transfer resistance (domain of kinetic control). As evidenced in the Nyquist plots, the simulated symbols (crosses) based on the model agree with the experimental results. The estimated parameters obtained by assuming the Randles model are listed in Table 1. The value of R_ct for the CNTs-AChE biosensor increased threefold compared to the carbon nanotube paste electrode for the [Fe(CN)6]4−/3− reaction. This is probably due to the presence of the AChE enzyme, which is not conductive, at the electrode surface, as seen in the micrographs. Optimization of the CNTs-AChE Biosensor Response. In the current study, the voltammetric characteristics of acetylthiocholine iodide (substrate) and thiocholine (the enzymatically generated product) on the carbon nanotube paste electrode were investigated by square wave voltammetry in phosphate buffer. Typical square wave voltammograms are shown in Figure 3. Two oxidation processes (peaks I and II) of thiocholine are observed at 0.045 and 0.250 V (versus Ag/AgCl), as also observed by Liu et al. [13] for a glassy carbon electrode modified with a carbon nanotube film. The anodic oxidation peak of acetylthiocholine iodide was observed at 0.620 V, and its oxidation process begins at 0.500 V. As the oxidation potential of thiocholine and the onset of acetylthiocholine iodide oxidation are close to one another, a working-potential study was carried out using chronoamperometry. The optimum potential for biosensor operation was determined from current-time responses for thiocholine on the carbon nanotube paste electrode. The potential range evaluated was from 0 to 350 mV, and the chronoamperometric responses obtained are presented in Figure 4. The maximum current responses were observed at 300 and 350 mV, and the potential of 300 mV was selected for the amperometric measurements of thiocholine with the CNTs-AChE biosensor, since the higher potential lies closer to the onset of substrate oxidation. In order to optimize the CNTs-AChE biosensor's performance, the following experimental variables were investigated: substrate concentration and incubation time. The influence of substrate concentration on the biosensor response was studied with the purpose of increasing the signal obtained for the enzymatic reaction. Thus, the effect of the substrate concentration on the CNTs-AChE biosensor's response was investigated between 0.5 and 3.0 × 10−3 M acetylthiocholine iodide solution in phosphate buffer pH 7.4.
The highest analytical signal was obtained at 3.0 × 10−3 M. Following this, the incubation time was studied, ranging from 2 to 20 min. The incubation time is the time the biosensor remains immersed in the solution containing the pesticide, and it must be long enough for inhibition to take place. The incubation time selected was 10 min. However, the maximum value of the inhibition was not 100%, which can likely be attributed to the binding equilibrium between the pesticide and the binding sites of the enzyme. Thus, these experimental conditions were selected for further experiments. Table 2 summarizes the range over which each variable was investigated and the optimum value found in the optimization of the proposed method. The repeatability of the CNTs-AChE biosensor was determined from five different measurements in the same solution containing 3.0 × 10−3 M acetylthiocholine iodide in phosphate buffer pH 7.4. The electrode surface was renewed after each determination, resulting in a mean peak current of (1.199 ± 0.013) × 10−7 A (n = 5). This result indicates good repeatability. Reproducibility was investigated considering three biosensors prepared independently. An acceptable reproducibility was obtained, with a relative standard deviation of 10% for measurements carried out in 3.0 × 10−3 M acetylthiocholine iodide in phosphate buffer pH 7.4. The stability and life span of the biosensor are very important parameters in analytical determinations. For this reason, these parameters were investigated for the proposed biosensor in consecutive measurements, without surface renewal, for over 20 days. When the CNTs-AChE biosensor was stored at −4 °C with measurements taken every day for 20 days, no noticeable change was observed in the response obtained in 3.0 × 10−3 M acetylthiocholine iodide in phosphate buffer pH 7.4. Electroanalytical Method. The electrochemical determination of chlorphenvinphos was performed through the inhibition of the reaction of AChE with the substrate, acetylthiocholine, allowing the maximum inhibition to be achieved. The percentage of inhibition caused by chlorphenvinphos on the enzymatic activity of the biosensor was calculated using the following equation: I(%) = 100 × (ΔI_0 − ΔI_1)/ΔI_0, where ΔI_0 and ΔI_1 are the biosensor responses before and after the incubation procedure, respectively. Under the optimized conditions, a linear response between the percentage of inhibition and the chlorphenvinphos concentration was obtained in the range investigated (4.90 × 10−7 to 7.46 × 10−6 M). The result is presented in Figure 5, which shows the curve obtained with surface renewal between successive determinations. The favorable characteristics presented by the proposed biosensor allowed its application to the direct determination of chlorphenvinphos in real samples. Consequently, the performance of the CNTs-AChE biosensor was tested by applying it to the determination of chlorphenvinphos in an insecticide sample using the standard addition method. The results obtained by the CNTs-AChE biosensor were compared with those obtained by the spectrophotometric method. The results are summarized in Table 3. Applying a paired t-test to the results obtained by this procedure and the spectrophotometric method, it was found that all results are in agreement at a 95% confidence level and with a relative error lower than 3%. These results therefore suggest that the CNTs-AChE biosensor is suitable for the determination of chlorphenvinphos in an insecticide sample.
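A minimal numerical sketch of the electroanalytical calculation above. Only the inhibition formula and the linear working range come from the text; the current readings and the calibration coefficients are invented for illustration.

```python
def inhibition_percent(delta_i0: float, delta_i1: float) -> float:
    """I(%) = 100 * (dI0 - dI1) / dI0, responses before/after incubation."""
    return 100.0 * (delta_i0 - delta_i1) / delta_i0

# Hypothetical biosensor currents (A) before and after a 10 min incubation.
i_pct = inhibition_percent(delta_i0=1.20e-7, delta_i1=0.84e-7)
print(f"{i_pct:.1f}% inhibition")  # -> 30.0% inhibition

# With an invented linear calibration I(%) = a + b*[chlorphenvinphos],
# the unknown concentration follows by inversion; the estimate is valid
# only inside the reported linear range, 4.90e-7 to 7.46e-6 M.
a, b = 5.0, 4.0e6                  # intercept (%) and slope (% per M)
concentration = (i_pct - a) / b
print(f"{concentration:.2e} M")    # -> 6.25e-06 M
```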
Conclusions In this paper, we have described the use of a CNTs-AChE biosensor for the determination of chlorphenvinphos. The proposed system does not require any complicated immobilization procedure for its construction, and the CNTs-AChE biosensor was prepared without the introduction of redox mediators. It is therefore concluded that the CNTs-AChE biosensor offers easy preparation, fast response, sensitivity, durability, and good repeatability and reproducibility. Furthermore, this biosensor can be used in chronoamperometry for the determination of chlorphenvinphos in an insecticide sample, producing a relative error lower than 3%. The proposed method is, therefore, simple, fast, and sensitive.
Contributions of modern Gobi Desert to the Badain Jaran Desert and the Chinese Loess Plateau It is well known that the Gobi Desert is the dominant source area of the Badain Jaran Desert (BJD) and the Chinese Loess Plateau (CLP). However, due to the absence of quantitative analyses, there are nearly no exact assessments of its actual contribution. Combined field investigations, wind tunnel experiments, and wind field analyses revealed that the potential erosion depth on the modern Gobi Desert varies between 0.41 and 0.89 mm a−1. The results indicate that it would take an average theoretical time of 80.8 ka and 4,475.9 ka to form the current dimensions of the BJD and CLP, respectively, which means the Gobi Desert may provide substantial sand sources to the modern BJD, while its contribution to the loess of the modern CLP might be overestimated, despite being a key source of the CLP in the Quaternary. The erosion rates on the Gobi Desert varied between 0.41 and 0.89 mm a−1, representing potential sand sources for the BJD formation with relatively low contributions to the loess of the CLP. Materials and Methods On the Gobi Desert, gravels cover most of the surface, leaving only about 10% covered by other landforms such as mobile sand sheets, dunes, wadis, and residual hills. The primordial underlying landforms of gobi deserts are alluvial fans, playas, and wadis, and the dominant sediment sources are the adjacent Gobi Altai Mountains, the Heihe River, and the highlands of southern Mongolia, transported by intermittent floods from the upper Pleistocene to the early Holocene [21,22]. Fifteen intact gobi surface samples were collected for the wind tunnel experiments; more details of the sampling strategies are provided in Supplementary Information S3. Wind tunnel experiments were performed in the Key Laboratory of Desert and Desertification, Chinese Academy of Sciences, China. Details of the experimental processes are described in Supplementary Information S4. After all wind-tunnel experiments, the collected aeolian materials were weighed and processed by particle size analysis; more details of the particle size analyses are described in Supplementary Information S5. Once the particle size analyses were finished, the aeolian transports under different wind speeds could be acquired from the results of the wind tunnel experiments (Supplementary Information S6). Additionally, wind velocity observations from 1951 to 2015 from 2 weather stations (Ejin and Guaizihu, Fig. 1) within the Gobi Desert were employed for further analyses. These data were recorded in accordance with the World Meteorological Organization (WMO) and China National Meteorological Center (CNMC) standards. Because most datasets started after 1960, only the wind records from 1960 to 2015 were used to evaluate the temporal variation in the aeolian transport potentials. More detailed descriptions of the data processing are given in Supplementary Information S7. Results and Discussion The wind tunnel experiments and the particle size analysis showed that the averaged sand fractions (50~2000 μm in diameter) and loess-sized fractions (<50 μm in diameter) were 93.9% and 6.1%, respectively.
The average total transports of the 12 samples under wind velocities of 8 to 22 m s−1 spanned between 6.06 and 152.65 g m−2, and, when considering the proportions of fine fractions (<50 μm in diameter) in the transported materials, the average sand transports under wind velocities of 8 to 22 m s−1 varied between 5.66 and 143.59 g m−2 (Fig. 2). From 1960 to 2015, the potential sand transports in Ejin varied between 112 and 2,722 g m−2 a−1 with an average of 1,047 g m−2 a−1, while in Guaizihu those figures were 1,471, 3,399, and 2,265 g m−2 a−1, respectively (Supplementary Fig. S7). When considering the proportions of the fine fractions emitted from the source regions and settled in situ again (Supplementary Information S7), and considering the areas of the upwind Gobi which have potential effects on the BJD and the CLP (Supplementary Information S8), the results showed that the average annual sand transports from 1960 to 2015 were 0.16 and 0.34 km3 a−1, and the loess-sized transports 0.0029 and 0.0062 km3 a−1, in Ejin and Guaizihu, respectively (Table 1). At present, the total sand volume of the BJD is about (1.292 ± 0.362) × 10^4 km3 (Supplementary Information S9), while that of the modern CLP varies from 8,908 to 18,180 km3 with an average value of 12,980 km3 (Supplementary Information S10). Therefore, assuming that the Gobi Desert is the sole source of the sand and loess, it would take at most 103.4 ka and 6,269.0 ka (i.e., theoretical times computed with the upper bounds of the volume estimates) to reach the modern dimensions of the BJD and CLP, respectively (Table 2). Given that the Gobi Desert developed in the Upper Pleistocene (circa 420 ka B.P.), and with the knowledge of the age of the BJD (Supplementary Information S11), the experimental and statistical results showed that under the modern wind regime the Gobi Desert could provide nearly all sand sources for the BJD formation. By comparison, fluvial processes, taking the Heihe River for instance (Fig. 1), would require at least 1,681 ka to supply adequate sand material to achieve the modern dimensions of the BJD, based on the evidence that the Heihe River can only transport 40,000 tons of sand per day [23] from the Qilian Mountains to the alluvial fan west of the BJD; this is much longer than most of the known ages of the BJD (Supplementary Information S11). In addition, assuming that the Gobi Desert is also the sole source of the loess in the CLP, our experimental results show that at least 3.07 Ma would be needed for the CLP development. In fact, however, the acknowledged age of the basal loess-paleosol sequence (L33) in the CLP is about 2.6~2.8 Ma [24,25], and the wind patterns changed during glacial-interglacial cycles with diverse loess sources as well [26,27], which suggests that the Gobi Desert cannot be reckoned as the sole source. Besides, the results on the erosion rates of loess over the past 25 ka in the CLP (Supplementary Information S12) and the partial redeposition of the fine fractions indicate that the Gobi Desert may not be the main source of loess, with very low contributions to the CLP under the modern wind regime. Conclusions Although some previous studies had acknowledged that in the Quaternary the Gobi Desert was the key source area of the Badain Jaran Desert (BJD) and the Chinese Loess Plateau (CLP), there are no quantitative estimations of the potential contributions of the Gobi Desert.
Results of comprehensive field investigations, wind tunnel experiments, and modern wind regime analyses showed that the modern wind erosion depth on the Gobi Desert varied between 0.41 and 0.89 mm a−1, and that the average theoretical times needed to form the current dimensions of the BJD and CLP are 80.8 ka and 4,475.9 ka, respectively, based on the rates of transported sand- and loess-sized fractions. The aeolian processes of the adjacent Gobi Desert may provide substantial sand sources for the formation of the BJD, while its contributions to the loess of the CLP were relatively low. However, given the uncertainties in the calculated potential transport rates, the changes in wind regime, and the formation and development history of the Gobi Desert, there might be some differences between the estimated and actual times to form the current dimensions of the BJD and CLP, and further research is expected to fill these gaps with more precise estimates. Table 2. Scales of the theoretical time needed for the BJD and CLP formation with the Gobi Desert as the sole source. The estimation was based on the modern wind regime recorded at Ejin Station, as this station is more representative than Guaizihu Station, which sits in front of a mountain pass with funneling effects (Supplementary Fig. S7); Ejin Station is located in the Gobi Desert zone (Supplementary Fig. S9).
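The theoretical-time figures above amount to dividing a deposit's volume by the annual transport rate. The short sketch below reproduces the reported numbers from the Ejin transports and the volume estimates quoted in the text; it is a restatement of the paper's arithmetic, not an independent model.

```python
# Average annual transports at Ejin, 1960-2015 (km^3 per year, Table 1).
SAND_TRANSPORT = 0.16      # sand-sized fractions (50-2000 um)
LOESS_TRANSPORT = 0.0029   # loess-sized fractions (<50 um)

def formation_time_ka(volume_km3: float, transport_km3_a: float) -> float:
    """Theoretical time (ka) to accumulate a deposit, Gobi as sole source."""
    return volume_km3 / transport_km3_a / 1000.0

print(formation_time_ka(1.292e4, SAND_TRANSPORT))   # BJD mean:    ~80.8 ka
print(formation_time_ka(1.654e4, SAND_TRANSPORT))   # BJD upper:  ~103.4 ka
print(formation_time_ka(12_980, LOESS_TRANSPORT))   # CLP mean:  ~4475.9 ka
print(formation_time_ka(18_180, LOESS_TRANSPORT))   # CLP upper: ~6269.0 ka
print(formation_time_ka(8_908, LOESS_TRANSPORT))    # CLP lower: ~3071.7 ka (~3.07 Ma)
```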
Heterotic supersymmetry, anomaly cancellation and equations of motion We show that the heterotic supersymmetry (Killing spinor equations) and the anomaly cancellation imply the heterotic equations of motion in dimensions five, six, seven, and eight if and only if the connection on the tangent bundle is an instanton. For heterotic compactifications in dimension six this reduces the choice of that connection to the unique SU(3) instanton on a manifold with stable tangent bundle of degree zero. The bosonic fields of the ten-dimensional supergravity which arises as the low energy effective theory of the heterotic string are the spacetime metric $g$, the NS three-form field strength $H$, the dilaton $\phi$ and the gauge connection $A$ with curvature $F^A$. The bosonic geometry considered in this paper is of the form $\mathbb{R}^{1,9-d} \times M^d$, where the bosonic fields are non-trivial only on $M^d$, $d \le 8$. One considers the two connections $\nabla^{\pm} = \nabla^g \pm \frac{1}{2}H$, where $\nabla^g$ is the Levi-Civita connection of the Riemannian metric $g$. Both connections preserve the metric, $\nabla^{\pm} g = 0$, and have totally skew-symmetric torsion $T^{\pm}_{ijk} = g_{sk}(T^{\pm})^s{}_{ij} = \pm H_{ijk}$, respectively. The bosonic part of the ten-dimensional supergravity action in the string frame is [1]
$$S = \frac{1}{2k^2}\int d^{10}x\,\sqrt{-g}\,e^{-2\phi}\Big[\mathrm{Scal}^g + 4(\nabla^g\phi)^2 - \frac{1}{12}H_{ijk}H^{ijk} - \frac{\alpha'}{4}\big(\mathrm{Tr}\,|F^A|^2 - \mathrm{Tr}\,|R|^2\big)\Big], \qquad (1.1)$$
where $R$ is the curvature of a connection $\nabla$ on the tangent bundle and $F^A$ is the curvature of a connection $A$ on a vector bundle $E$. The string frame field equations (the equations of motion induced from the action (1.1)) of the heterotic string up to two loops [2] in sigma model perturbation theory are (we use the notations in [3])
$$\mathrm{Ric}^g_{ij} - \frac{1}{4}H_{imn}H_j{}^{mn} + 2\nabla^g_i\nabla^g_j\phi - \frac{\alpha'}{4}\Big[R_{imnq}R_j{}^{mnq} - (F^A)_{imab}(F^A)_j{}^{m\,ab}\Big] = 0; \quad \nabla^g_i\big(e^{-2\phi}H^i{}_{jk}\big) = 0; \quad \nabla^+_i\big(e^{-2\phi}(F^A)^i{}_j\big) = 0. \qquad (1.2)$$
The field equation of the dilaton $\phi$ is implied by the first two equations above. The supersymmetry (Killing spinor) equations read
$$\nabla^+\epsilon = 0, \qquad \Big(d\phi - \frac{1}{2}H\Big)\cdot\epsilon = 0, \qquad F^A\cdot\epsilon = 0. \qquad (1.3)$$
The instanton equation, the last equation in (1.3), means that the curvature 2-form $F^A$ is contained in the Lie algebra of the Lie group which is the stabilizer of the spinor $\epsilon$. It is known that in dimensions 5, 6, 7 and 8 the stabilizer is the group SU(2), SU(3), $G_2$ and Spin(7), respectively. An instanton (a solution to the last equation in (1.3)) in dimension 5, 6, 7 and 8 is a connection whose curvature 2-form is contained in the Lie algebra $\mathfrak{su}(2)$, $\mathfrak{su}(3)$, $\mathfrak{g}_2$ and $\mathfrak{spin}(7)$, respectively [5,4,6,7,8,9,10]. The Green-Schwarz anomaly cancellation mechanism requires that the three-form Bianchi identity receives an $\alpha'$ correction of the form
$$dH = \frac{\alpha'}{4}\big(\mathrm{Tr}\,R\wedge R - \mathrm{Tr}\,F^A\wedge F^A\big). \qquad (1.4)$$
A class of heterotic-string backgrounds for which the Bianchi identity of the three-form $H$ receives a correction of type (1.4) are those with (2,0) world-volume supersymmetry. Such models were considered in [11]. The target-space geometry of (2,0)-supersymmetric sigma models has been extensively investigated in [11,4,12]. Recently, there has been revived interest in these models [13,14,15,9,3] as string backgrounds and in connection with heterotic-string compactifications with fluxes [16,17,18,19,20,21,22,23]. In writing (1.4) there is a subtlety in the choice of connection $\nabla$ on $M^d$, since anomalies can be cancelled independently of the choice [24]. Different connections correspond to different regularization schemes in the two-dimensional worldsheet non-linear sigma model. Hence the background fields given for a particular choice of $\nabla$ must be related to those for a different choice by a field redefinition [25]. Connections on $M^d$ proposed to investigate the anomaly cancellation (1.4) are $\nabla^g$ [4,9], $\nabla^+$ [14], $\nabla^-$ [1,16,3,26], and the Chern connection $\nabla^c$ when $d = 6$ [4,20,21,22,23].
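For concreteness, the instanton conditions in the even-dimensional cases admit the following well-known reformulations, given here as a reference sketch; sign conventions for the $G_2$ and Spin(7) projections vary across the literature cited above.

```latex
% SU(3)-instanton in d = 6 (Omega the Kaehler form):
F^{(0,2)} = F^{(2,0)} = 0, \qquad F \wedge \Omega \wedge \Omega = 0.
% G_2-instanton in d = 7 (varphi the fundamental 3-form):
F \wedge \ast_7\varphi = 0, \quad\text{equivalently}\quad \ast_7(\varphi \wedge F) = -F.
% Spin(7)-instanton in d = 8 (Psi the fundamental 4-form):
\ast_8(\Psi \wedge F) = -F.
```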
It is known [27,15] ([3] for dimension d = 6) that the equations of motion of type I supergravity (1.2) with R = 0 are automatically satisfied if one imposes, in addition to the supersymmetry-preserving equations (1.3), the three-form Bianchi identity (1.4) taken with respect to a flat connection on TM, R = 0. According to no-go (vanishing) theorems (a consequence of the equations of motion [28,27]; a consequence of the supersymmetry [29,30] for the SU(n) case and [9] for the general case), there are no compact solutions with non-zero flux and non-constant dilaton satisfying simultaneously the supersymmetry equations (1.3) and the three-form Bianchi identity (1.4) if one takes a flat connection on TM, more precisely a connection satisfying Tr(R ∧ R) = 0. Therefore, in the compact case one necessarily has to have a non-zero term Tr(R ∧ R). However, in the presence of a non-zero curvature 4-form Tr(R ∧ R), a solution of the supersymmetry equations (1.3) and the anomaly cancellation condition (1.4) obeys the second and third equations of motion but does not always satisfy the Einstein equation of motion (the first equation in (1.2)) [3]. A quadratic expression for R which is a necessary and sufficient condition for (1.3) and (1.4) to imply (1.2) in dimensions five, six, seven and eight is presented in [31,32,33]. In particular, if R is an instanton, the supersymmetry equations together with the anomaly cancellation condition imply the equations of motion. In this note we show that the converse statement holds; the main goal of the paper is to prove this (Theorem 1.1). In the compact case in dimension six, it is shown in [32, Theorem 1.1b] that the no-go theorems in [29,30] force the flux H to vanish and the dilaton φ to be constant for any compact solution to the heterotic supersymmetry (1.3) such that the (−)-connection on the tangent bundle is an SU(3)-instanton, i.e. such a solution is a Calabi-Yau manifold. This result combined with Theorem 1.1 leads to Corollary 1.2, which suggests that in order to find compact heterotic supersymmetric solutions to the equations of motion (1.2) in dimension six one needs to start with a conformally balanced hermitian six-manifold admitting a holomorphic complex volume form, with stable tangent bundle of degree zero, and take the corresponding unique SU(3)-instanton in (1.4) and (1.1). Six-dimensional compact supersymmetric solutions with non-zero flux H and constant dilaton of this kind are presented in [32]. In the context of perturbation theory the curvature R^− of the (−)-connection is a one-loop instanton due to the well known identity $R^{+}_{ijkl} - R^{-}_{klij} = \frac{1}{2}\,dT_{ijkl}$, the first equation in (1.3) and (1.4) taken with respect to the (−)-connection. We thank the referee for reminding us of this point. In this case, according to Theorem 1.1, the supersymmetry (1.3) together with the anomaly cancellation (1.4) implies the heterotic equations of motion (1.2) up to two loops. In fact the SU(3) case in dimension six was originally dealt with in [3]. The G_2 case in dimension seven was investigated in [37, Section 6] when the anomaly cancellation has no zeroth order terms in α′. Compact solutions up to two loops in dimension six with non-zero flux H and non-constant dilaton involving the (−)-connection are constructed in [38]. If the anomaly cancellation has a zeroth order term in α′ (for example in heterotic near horizons associated with AdS_3, investigated in the very recent paper [39]), then R^− is no longer a one-loop instanton.
In particular, in dimension six, Corollary 1.2 and Remark 1.3 are applicable, suggesting possible lines for further investigation: one can take the anomaly contribution which appears at order α′ as exact. Suppose that (1.4) is exact at first order in α′. Then, in dimension six, Corollary 1.2 applies and the arguments in Remark 1.3 could be helpful in further developments. Conventions: We choose a local orthonormal frame e_1, ..., e_d, identifying it with the dual basis via the metric, and write $e_{i_1 i_2 \dots i_p}$ for the monomial $e_{i_1} \wedge e_{i_2} \wedge \dots \wedge e_{i_p}$. We raise and lower indices with the metric and use the summation convention on repeated indices. The Hodge star operator on a d-dimensional manifold is denoted by *_d. A consequence of the gravitino and dilatino Killing spinor equations is an expression of the Ricci tensor $\mathrm{Ric}^{+}_{mn} = R^{+}_{imnj}g^{ij}$ of the (+)-connection, and therefore an expression of the Ricci tensor Ric^g of the Levi-Civita connection, in terms of a suitable trace of the torsion three-form T = H (the Lee form) and the exterior derivative of the torsion form dT = dH (see [40] for dimensions 5 and 7, [29] for dimension 6 (more precisely for any even dimension) and [44] for dimension 8, as well as [31,32,33]). We recall that the Ricci tensors of ∇^g and ∇^+ are connected by a well known relation (see e.g. [40,33]). The existence of a ∇^+-parallel spinor in dimension 5 determines an almost contact metric structure whose properties, as well as solutions to the gravitino and dilatino Killing spinor equations, are investigated in [40,41,31]. We recall that an almost contact metric structure consists of an odd dimensional manifold M^{2k+1} equipped with a Riemannian metric g, a vector field ξ of length one, its dual 1-form η, as well as an endomorphism ψ of the tangent bundle such that $\psi(\xi) = 0$, $\psi^{2} = -\mathrm{Id} + \eta \otimes \xi$ and $g(\psi\,\cdot\,, \psi\,\cdot\,) = g(\cdot\,,\cdot) - \eta \otimes \eta$. The Reeb vector field ξ is determined by the equations $\eta(\xi) = \eta_{s}\xi^{s} = 1$, $(\xi \lrcorner\, d\eta)_{i} = d\eta_{si}\xi^{s} = 0$, where $\lrcorner$ denotes the interior multiplication. The fundamental form F is defined by $F(\cdot\,,\cdot) = g(\cdot\,, \psi\,\cdot)$, $F_{ij} = g_{is}\psi^{s}{}_{j}$, and the Nijenhuis tensor N of an almost contact metric structure is defined in the usual way. An almost contact metric structure is called normal if N = 0; contact if dη = 2F; quasi-Sasaki if N = dF = 0; Sasaki if N = 0, dη = 2F. The Reeb vector field ξ is Killing in the last two cases [49]. 2.2. Dimension six. Proof of Theorem 1.1 in d = 6. The necessary and sufficient conditions for the existence of solutions to the first two equations in (1.3) in even dimensions were derived by Strominger [4] and have been investigated by many authors since then. Solutions are complex conformally balanced manifolds with a non-vanishing holomorphic volume form satisfying an additional condition. In dimension six any solution to the gravitino Killing spinor equation reduces the holonomy group to Hol(∇^+) ⊂ SU(3). This defines an almost hermitian structure (g, J) with non-vanishing complex volume form [4] which is preserved by the torsion connection. We adopt for the Kähler form the convention $\Omega_{ij} = g_{is}J^{s}{}_{j}$. The Lee form θ_6 is defined as recalled below. An almost hermitian structure admits a (unique) linear connection ∇^+ with torsion 3-form preserving the structure, i.e. ∇^+g = ∇^+J = 0, if and only if the Nijenhuis tensor is totally skew-symmetric [40]. In addition, the dilatino equation forces the almost complex structure to be integrable and the Lee form to be exact, determined by the dilaton.
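For reference, the Lee form convention in this setting is typically the following; a hedged reconstruction (the displayed definition was lost in extraction, and sign and scale normalizations vary across the literature):

$$d\Omega \wedge \Omega = \theta_{6} \wedge \Omega \wedge \Omega,$$

so that θ_6 measures the failure of the hermitian structure to be balanced; the exactness condition forced by the dilatino equation is then of the form $\theta_{6} = 2\,d\phi$ (the normalization constant here is an assumption).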
The torsion (the NS three-form H) is given by the expression (2.10) of [4]. Since ∇^+g = ∇^+J = 0, the restricted holonomy group Hol(∇^+) of ∇^+ is contained in U(k), and Hol(∇^+) ⊂ SU(k) is equivalent to the curvature condition (2.11) found in [29, Proposition 3.1]. In addition to these equations, the vanishing of the gaugino variation requires the 2-form F^A to be of instanton type. In dimension six, an SU(3)-instanton (or hermitian-Yang-Mills connection) is a connection A with curvature two-form F^A ∈ su(3). In complex coordinates the SU(3)-instanton condition (2.12) reads $F^{A}_{\mu\nu} = F^{A}_{\bar\mu\bar\nu} = 0$, $F^{A}_{\mu\bar\nu}\Omega^{\mu\bar\nu} = 0$, which is the well known Donaldson-Uhlenbeck-Yau instanton condition. Theorem 1.1 in dimension 6. Proof. We need to investigate the Einstein equation of motion in dimension 6. Substitute the second equation of (2.10) into (2.11), insert the obtained equality into (2.1), and use the relation $\nabla^{+} = \nabla^{g} + \frac{1}{2}T$ to get the identity of [29], where we used that on a complex manifold $dT = 2\sqrt{-1}\,\partial\bar\partial\Omega$ is a (2,2)-form and therefore $J^{s}{}_{j}\,dT_{islm}\Omega^{lm}$ is symmetric in i and j. We briefly recall the notion of a G_2-structure. Consider the three-form Θ on R^7 given by $\Theta = e_{127} - e_{236} + e_{347} + e_{567} - e_{146} - e_{245} + e_{135}$. The subgroup of GL(7, R) fixing Θ is the Lie group G_2, of dimension 14. The 3-form Θ corresponds to a real spinor, and therefore G_2 can be identified as the isotropy group of a non-trivial real spinor. The Hodge star operator supplies the 4-form *_7Θ given by $*_7\Theta = e_{3456} + e_{1457} + e_{1256} + e_{1234} + e_{2357} + e_{1367} - e_{2467}$. A well known formula relating these forms can be found in e.g. [53,9,54,55]. A 7-dimensional Riemannian manifold M is called a G_2-manifold if its structure group reduces to the exceptional Lie group G_2. The existence of a G_2-structure is equivalent to the existence of a global nondegenerate three-form which can be locally written as (2.3). In dimension eight one considers instead the fundamental 4-form Φ of a Spin(7)-structure: the 4-form Φ is self-dual, and the 8-form Φ ∧ Φ coincides (up to a constant) with the volume form of R^8. The subgroup of GL(8, R) which fixes Φ is isomorphic to the double covering Spin(7) of SO(7). The 4-form Φ corresponds to a real spinor, and therefore Spin(7) can be identified as the isotropy group of a non-trivial real spinor. A corresponding well known formula can be found in e.g. [9]. A Spin(7)-structure on an 8-manifold M is by definition a reduction of the structure group of the tangent bundle to Spin(7). This can be described geometrically by saying that there exists a nowhere vanishing global differential 4-form Φ on M which can be locally written as (2.29).
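For completeness, the pointwise instanton conditions in dimensions seven and eight admit well known reformulations in terms of Θ and Φ; a hedged sketch (sign conventions depend on the chosen orientation and on the normalizations of Θ and Φ):

$$F \in \mathfrak{g}_{2} \iff *_7(\Theta \wedge F) = -F \iff F \wedge *_7\Theta = 0, \qquad F \in \mathfrak{spin}(7) \iff *_8(\Phi \wedge F) = -F,$$

reflecting the decompositions $\Lambda^{2}(\mathbb{R}^{7}) = \Lambda^{2}_{7} \oplus \Lambda^{2}_{14}$ and $\Lambda^{2}(\mathbb{R}^{8}) = \Lambda^{2}_{7} \oplus \Lambda^{2}_{21}$, with $\mathfrak{g}_{2} \cong \Lambda^{2}_{14}$ and $\mathfrak{spin}(7) \cong \Lambda^{2}_{21}$.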
Thermally Activated Delayed Fluorescent Gain Materials: Harvesting Triplet Excitons for Lasing Abstract Thermally activated delayed fluorescent (TADF) materials have attracted increasing attention because of their ability to harvest triplet excitons via a reverse intersystem crossing process. TADF gain materials that can recycle triplet excitons for stimulated emission are considered a route to solving the triplet accumulation problem in electrically pumped organic solid-state lasers (OSSLs). In this mini review, recent progress in TADF gain materials is summarized, and design principles are extracted from existing reports. The construction methods of resonators based on TADF gain materials are also introduced, and the challenges and perspectives for the future development of TADF gain materials are presented. It is hoped that this review will aid the advances in TADF gain materials and thus promote the development of electrically pumped OSSLs. Introduction Laser technology has been one of the most important inventions of the last century and has been widely applied in various fields, such as telecommunications, medical diagnosis, industrial manufacturing and academic research. Since the first demonstration of the ruby laser in the 1960s, [1] organic materials have been considered as gain materials for achieving lasing owing to their advantages of a large stimulated emission cross-section, tunable emission wavelength and low-cost manufacturing process. [2,3] In 1966, Sorokin and Lankard reported the first dye laser. [4] Since then, organic lasers have undergone rapid development. In 1967, Soffer and McFarland doped rhodamine 6G in polymethylmethacrylate as an organic gain medium, which led to the development of the first organic solid-state laser (OSSL). [5] Compared with liquid dye lasers, OSSLs are more attractive because of their advantages of stable operation, low weight, and easy miniaturization and integration. [6] The innovation of gain materials plays an important role in the development of OSSLs. [7] Organic semiconducting materials exhibit excellent luminescence properties and carrier mobility. Owing to their excellent photoelectric properties, organic semiconductor materials have been widely used in organic optoelectronic devices, such as organic light-emitting diodes (OLEDs), [8][9][10][11] organic solar cells, [12][13][14] and organic thin-film transistors. [15,16] In 1996, the organic conjugated polymer poly(p-phenylenevinylene) (PPV) and PPV derivatives with semiconductor properties were first used as gain media by Heeger and Friend (and their coworkers), respectively. [17,18] Since then, organic semiconductor materials have received extensive attention as gain media. [19,20] The successful application of organic semiconductor materials in OSSLs facilitates combining OSSLs with electrical pumping. For organic gain materials, stimulated emission is mainly achieved through a population inversion mechanism. [20] In contrast to inorganic materials, organic molecules have several vibrational energy levels in both the ground and excited states. An inherent four-level energy system can be formed by these vibrational energy levels, which guarantees an effective population inversion and facilitates stimulated emission.
[21] For some organic molecules, phototautomerization reactions, such as excited-state intramolecular proton transfer (ESIPT), can occur during excitation, in which more effective four-level energy systems are formed by the ground and first excited states of the normal and tautomer forms. [22][23][24][25] This type of more effective four-level energy system can lead to easier population inversion and further result in a lower laser threshold. To date, various organic materials, such as dyes, polymers, metal complexes, and semiconductive molecules, have been developed and investigated as gain materials in OSSLs. However, most of these materials exhibit high optical gain properties only under photopumped conditions. Electrically pumped OSSLs remain a challenge. [26,27] It is well known that 25% of singlet excitons and 75% of triplet excitons are generated during the recombination of electrons and holes under current-injection conditions in organic electroluminescent devices, according to spin quantum statistics (Figure 1a). [28] Phosphorescent emission from triplet excitons is usually of relatively low efficiency owing to its spin-forbidden nature, which makes it difficult to achieve optical gain. Although a recent example of light amplification by stimulated emission in phosphorescent materials has been reported, in most cases only singlet excitons can be used in the stimulated emission process of organic materials. [29] Therefore, under current-injection conditions, the lasing threshold would be three times higher than it would otherwise be, even though it is already predicted to be very high. [30] In addition, as the lifetime of the first triplet state (T_1) is usually much longer (by over three orders of magnitude) than that of the first singlet state (S_1), serious triplet accumulation occurs under a high current density. Triplet accumulation can lead to triplet-triplet annihilation and triplet-singlet annihilation through T_1 absorption, which reduces the S_1 population and aggravates optical losses in the process of light amplification. [31] Therefore, minimizing the optical losses caused by T_1 excitons is an important problem for the realization of electrically pumped OSSLs. To achieve this, strategies for managing T_1 excitons have been employed. For example, T_1 excitons can be eliminated by introducing triplet annihilators such as oxygen, [32] cyclooctatetraene, [32][33][34] and anthracene derivatives. [34,35] In this way most triplet excitons are destroyed, but the stability of the laser devices is significantly reduced by the reactive additives or by the heat generated by triplet annihilation. [32,36] Thermally activated delayed fluorescent (TADF) materials can transfer T_1 excitons back to S_1 excitons through a reverse intersystem crossing (RISC) process with the aid of thermal activation. [9,10,37] The proposed energy diagram of TADF gain materials under current-injection conditions is shown in Figure 1b. Both S_1 and T_1 excitons can be generated under current-injection conditions. The S_1 excitons can return to the ground state (S_0) by fluorescence emission or stimulated emission. For T_1 excitons, the direct T_1 → S_0 transition is forbidden. As the singlet-triplet energy gap (ΔE_ST) is very small for TADF materials, long-lifetime T_1 excitons are converted into S_1 excitons via RISC with the aid of thermal activation. The regenerated S_1 excitons can then return to the S_0 state by fluorescence emission or stimulated emission, which is known as delayed fluorescence or lasing.
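The thermal-activation step admits a simple quantitative illustration via a Boltzmann-type estimate of k_RISC; a minimal sketch in Python (the prefactor and the ΔE_ST values are illustrative assumptions, not data from this review):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def risc_rate(delta_e_st_ev, temperature_k=300.0, prefactor_s=1e7):
    """Boltzmann-type estimate k_RISC ~ A * exp(-dE_ST / (k_B * T)).

    delta_e_st_ev: singlet-triplet gap in eV
    prefactor_s:   attempt frequency A in s^-1 (illustrative assumption)
    """
    return prefactor_s * math.exp(-delta_e_st_ev / (K_B * temperature_k))

# A small gap (typical TADF) boosts RISC by orders of magnitude over a large gap:
for gap_ev in (0.05, 0.15, 0.50):
    print(f"dE_ST = {gap_ev:.2f} eV -> k_RISC ~ {risc_rate(gap_ev):.2e} s^-1")
```

The exponential sensitivity to ΔE_ST is why the review emphasizes HOMO-LUMO separation as the central design lever for efficient triplet harvesting.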
Thus, TADF materials can effectively utilize triplet excitons for fluorescence or lasing. In theory, 100% of the excitons produced by electrical excitation in TADF materials can be used. Accordingly, in TADF OLEDs an internal quantum efficiency close to 100% and an external quantum efficiency (EQE) over 25% have been achieved, exceeding the theoretical limit of conventional fluorescent OLEDs. [38,39] Although the realization of electrically pumped OSSLs is still difficult, photopumped continuous-wave or quasi-continuous-wave lasing, which also faces triplet accumulation problems, has already been achieved through the reuse of T_1 excitons. [40][41][42] Based on these previous studies, TADF molecules with laser activity have become the most likely organic gain materials for electrically pumped organic lasers. Further progress has been made in recent years. [43][44][45][46][47][48][49][50][51][52] Since the first attempts to use TADF materials in OLEDs, [53,54] various TADF materials have been designed and synthesized. [55][56][57] However, only some of these materials have been reported to exhibit laser activity. [43][44][45][46][47][48][49][50][51][52] The relationship between the molecular structures of TADF materials and their laser properties remains unclear. Recent progress in TADF gain materials is reviewed in this paper to raise awareness of this new field and to extract the fundamental design principles and strategies of TADF gain materials. First, the fundamental photoelectric properties and laser performance of the reported TADF materials are summarized, based on which design principles for TADF gain materials are proposed. Subsequently, methods for achieving lasing based on TADF materials are introduced. Finally, the challenges and perspectives for developing novel TADF gain materials are presented. TADF Gain Materials In recent years, TADF gain materials have drawn increasing attention owing to their potential applications in electrically pumped OSSLs. Several pioneering studies have been conducted to date. [43][44][45][46][47][48][49][50][51][52] The molecular structures of the reported TADF gain materials are summarized in Figure 2 and sorted by laser wavelength from blue to near-infrared. The photoelectric properties and laser performance under optically pumped conditions are summarized in Table 1. As previously reported, in classical TADF molecules sterically bulky electron-donor and electron-acceptor units are used to induce spatial separation of the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO, respectively) and thereby reduce ΔE_ST. [10,55,56] Because of their strong electron-donating ability, nitrogen-containing aromatic groups, such as carbazole, diphenylamine, phenoxazine, and their derivatives, are often used as donors. Meanwhile, several types of acceptor groups can be used to tune the photoelectric performance of TADF materials. These rules can also be applied to TADF gain materials. In this section, representative materials are discussed in detail, organized by acceptor type. TADF Gain Materials with Triarylboron-Type Acceptors Owing to the vacant p-orbital of the central boron atom, triarylboron groups exhibit strong electron-attracting properties. Donor-acceptor (D-A)-type molecules containing amine-based donors and triarylboron acceptors often exhibit strong intramolecular charge transfer properties.
In addition, nitrogen and boron atoms have opposite resonance effects, which can facilitate the efficient separation of HOMOs and LUMOs. [58] In 2017, Adachi et al. reported two triphenylboron-based materials (compounds 1a and 1b), in which two nitrogen atoms joined with adjacent phenyl groups to form a rigid polycyclic aromatic framework. [43] For both compounds, the ΔE_ST values are very small (0.18 and 0.14 eV, respectively), which ensured their TADF activities. In addition, both compounds 1a and 1b showed high oscillator strengths for the S_0-S_1 transition (f = 0.205 and 0.415, respectively), which are beneficial for obtaining large stimulated emission cross-sections and thus for realizing stimulated emission. For compound 1b, the introduction of substituents improved the oscillator strength without affecting the localization of the molecular orbitals. Accordingly, a lower amplified spontaneous emission (ASE) threshold was achieved for compound 1b than for compound 1a under excitation by a pulsed laser. However, because of the low rate of RISC (k_RISC), which is more than four orders of magnitude lower than the radiative decay rate (k_r) of S_1, the contribution of RISC to the light amplification process is negligible. Nevertheless, this study demonstrated that TADF materials can be used as gain media for OSSLs. More recently, building on previously reported work, Liao et al. developed compounds 2a and 2b. [44] As shown in Figure 2, a mesitylboron group was used as the acceptor, and phenoxazine and acridine derivatives were used as donors. Bulky mesityl groups were introduced into the acceptor segment to improve the stability of the aromatic boron group. For the donor segments, a single intramolecular lock was introduced to increase molecular rigidity and planarity. Both compounds exhibited TADF activity and ASE characteristics with low thresholds. Theoretical calculations showed that the S_1 states of the locked structures have higher oscillator strengths (f = 0.37 and 0.32 for compounds 2a and 2b, respectively) and more locally excited character compared with the lock-free counterparts, which show no ASE activity. As evident from the above study, increasing molecular rigidity contributes to improving the oscillator strength of the S_0-S_1 transition, thus leading to easier lasing. TADF Gain Materials with Cyano-Based Acceptors The cyano group has a strong electron-withdrawing ability. 2,3,5,6-Tetrakis(carbazol-9-yl)-1,4-dicyanobenzene (4CzTPN, compound 3), with two cyano groups and four carbazole groups connected to a benzene skeleton, is a classical TADF molecule. [9] The use of 4CzTPN in OLEDs to achieve a high EQE was reported ten years ago. [9] Owing to its high photoluminescence quantum yield and short delayed fluorescence lifetime, 4CzTPN was chosen as the gain material by Zhao et al. to develop an OSSL. [45] When 4CzTPN was doped in polystyrene (PS) microspheres, lasing was achieved under photopumped conditions, even though the threshold was slightly high. Notably, the lasing threshold could be reduced by increasing the pump laser frequency, indicating that T_1 excitons can be effectively used in the RISC process. Therefore, a high k_RISC is a prerequisite for the efficient utilization of triplet excitons to achieve lasing in TADF materials. TADF Gain Materials with Difluoroboron-Based Acceptors Difluoroboron complexes such as acetylacetonate boron difluoride can be used as acceptor units in TADF materials because of their electron-deficient properties.
In 2018, a boron difluoride curcuminoid complex (compound 7 in Figure 2), with two triphenylamine groups as donors and an acetylacetonate boron difluoride derivative as an acceptor, was designed to achieve near-infrared emission. [48] Using this material, a high EQE was obtained in OLED devices, and light amplification was realized via ASE in doped 4,4′-bis(N-carbazolyl)-1,1′-biphenyl (mCBP) films. By increasing the doping concentration from 4 to 60 wt%, the ASE peak wavelength could be tuned from 740 to 799 nm, with the threshold increasing from 4.7 to 36.7 μJ cm −2 . Unlike other TADF materials with small ΔE_ST values, compound 7 exhibited a relatively high ΔE_ST (as high as 0.37 eV). The authors suggested that nonadiabatic coupling effects between low-lying excited states may facilitate the RISC process. This unusual mechanism endows the molecule with a high oscillator strength, which enables the achievement of a low-threshold ASE. Based on compound 7, a dimeric boron difluoride curcuminoid derivative (compound 8) was designed to achieve ASE above 800 nm with a low threshold in doped thin films. [49] In 2019, Fu et al. reported a new material (compound 6, CAZ-A) with a molecular structure similar to that of compound 7. [50] Lasing from 650 to 725 nm was achieved in doped microrings. Very recently, the authors also developed a set of difluoroboron-based TADF gain materials (compounds 4a-4d) in which the boron atom is directly connected to an oxygen atom and a nitrogen atom in addition to the two fluorine atoms. [46,47] The lasing emission wavelengths of these materials can be modulated from green to yellow to red by changing the substituents. In contrast to classical TADF materials, compounds 4a-4d have large ΔE_ST (S_1-T_1) values (0.74, 0.72, 0.56, and 0.57 eV, respectively), which would hinder the RISC processes. However, the ΔE_ST (S_1-T_2) values were calculated to be small enough (0.19, 0.02, 0.12, and 0.12 eV, respectively) to permit the RISC processes to occur, and the large energy gaps between T_1 and T_2 make internal conversion between them difficult. Moreover, these compounds were calculated to have high spin-orbit coupling strengths between S_1 and T_2 compared with those between S_1 and T_1. According to these analyses, the RISC processes of compounds 4a-4d are believed to occur between the S_1 and T_2 states. Therefore, in Table 1, the ΔE_ST (S_1-T_2) data are provided instead of ΔE_ST (S_1-T_1) for these four compounds. Design Principles of TADF Gain Materials As only a few cases of TADF gain materials have been reported, the design principles of these materials need to be extracted from the existing reports. However, extracting general design principles from such a small number of cases is difficult. Therefore, only a few partial views are listed here based on existing reports and the design principles of TADF luminescent materials. In traditional TADF materials, twisting the donor and acceptor moieties with large steric hindrance is important for achieving HOMO and LUMO separation to reduce ΔE_ST (Figure 3a). [10,55,56] In general, increasing the rigidity of the donor and acceptor skeletons helps reduce the nonradiative losses caused by molecular vibration and thus improves the luminescence efficiency. However, for general organic gain materials, a four-level system constructed from molecular vibrational levels is an important prerequisite for realizing population inversion.
Therefore, a reasonable balance in restricting molecular vibrations should be considered when designing TADF gain materials. For example, for compounds 2a and 2b, enhancing the rigidity of the molecular structures resulted in lower ASE thresholds. [44] For the materials shown in Figure 2, the molecular rigidities are not too high, except for compounds 1a and 1b. Compounds 1a and 1b belong to another category of TADF material: multi-resonant TADF (MR-TADF) materials. [58,59] MR-TADF materials are a class of fused polycyclic aromatic materials having mutually ortho-disposed electron-donating atoms (donor) and electron-deficient atoms (acceptor) in a rigid planar polycyclic aromatic framework (Figure 3b). [59] Due to the complementary resonance effects of the donor and acceptor, effective HOMO and LUMO separation and thus a small ΔE_ST can be achieved. More importantly, the multiple resonance effect can produce a relatively high oscillator strength, and a high oscillator strength can lead to a large stimulated emission cross-section, which is an important factor in achieving low-threshold lasing. [60] Hence, MR-TADF materials may have unique advantages in achieving optical amplification. In addition to increasing molecular rigidity, the introduction of suitable substituents can also improve the oscillator strength. In conclusion, appropriate molecular rigidity and substituents are important when designing TADF gain materials. Molecular aggregation should be carefully considered in laser devices, particularly in non-doped devices. As most TADF materials adopt a D-A-type molecular structure, intermolecular charge transfer (CT) interactions, which can cause intermolecular orbital delocalization, may affect the effective HOMO-LUMO separation in the aggregate state. [61] For example, Fu et al. reported a gain material (compound 5) with TADF properties in one aggregated state but without TADF properties in another. [51] As shown in Figure 3c,d, in J-aggregation the intermolecular CT interaction is minimized, whereas in H-aggregation it is enhanced. For the J-aggregated crystal, the ASE threshold was calculated to be 23.6 μJ cm −2 , roughly a quarter of that of the H-aggregated crystal (93.2 μJ cm −2 ). This supports the view that the thresholds of TADF gain materials can be lower than those of traditional fluorescent gain materials owing to the effective utilization of T_1 excitons. OSSLs Based on TADF Gain Materials Like any other laser, an OSSL consists of three basic units: a pumping source, a gain medium, and a resonant cavity. Currently reported OSSLs are mainly operated under optically pumped conditions, and OSSLs under electrically pumped conditions are still difficult to achieve. [26] Therefore, this section mainly discusses OSSLs based on different types of resonators. Apart from studies on the ASE properties of TADF gain materials without resonators, reports on OSSLs based on this type of material are limited. As summarized in Figure 4, only a few studies have been conducted. It is worth noting that all reported OSSLs of this kind are constructed from microcavities, such as microspheres and microcrystals of doped or pure gain materials. Microspheres and microcrystals with ultrasmooth surfaces and regular morphologies can work as gain media and resonators simultaneously, without extra reflective mirrors. As mentioned in Section 2, Zhao et al.
doped 4CzTPN (compound 3) in PS and prepared PS microspheres with controllable and uniform sizes through solution self-assembly using a surfactant (Figure 4a). [45] These microspheres, with perfect circular boundaries and ultrasmooth surfaces, can be used as whispering-gallery-mode (WGM) resonators, in which photons are strongly confined by successive total internal reflection along the sphere circumference. Under excitation by a pulsed laser, laser emission was achieved from these 4CzTPN-doped microspheres (Figure 4b). The WGM-type cavity resonance was proved by the linear relationship of λ²/Δλ versus the diameter of the microspheres at λ = 562 nm, where Δλ is the laser mode spacing. The cavity quality (Q) factor was calculated to be as high as 10 3 . PS microspheres are inexpensive and easy to prepare but are not suitable for electrically pumped lasers because PS is nonconducting. To achieve good electrical conductivity, organic semiconductive small-molecule materials, such as mCBP, are usually used as host materials for the active layers in OLEDs. [62,63] mCBP was used as a host material to study the ASE properties of organic gain materials by Adachi et al. [48,49] More importantly, mCBP microcrystals with specific morphologies can be easily fabricated using a poly(dimethylsiloxane) (PDMS) template-confined solution-growth method (Figure 4c). [64] Microlasers can be constructed by doping gain molecules into mCBP microcrystals, which then serve as resonators. In 2019, Fu et al. realized TADF lasers by doping CAZ-A (compound 6) into microring arrays with a high Q factor of ≈1300 (Figure 4d). [50] Very recently, another study was reported by Zhao et al. using the same strategy. [52] By doping compounds 1a, 3, and 7, respectively, into mCBP microplates using the aforementioned method, TADF microlasers with blue, green, and red emission colors were achieved. Vivid laser displays have also been demonstrated using microlaser arrays as display panels under programmable excitation (Figure 4e). Organic micro/nanocrystals, with high purity, minimal defects, ordered structures, and potential applications in on-chip devices, have been widely used simultaneously as active media and optical resonators to develop organic lasers. [65][66][67] Last year, Fu et al. reported four types of nondoped TADF microcrystals of compounds 4a-4d. [46,47] One-dimensional microcrystals that can function as Fabry-Pérot (FP) cavities can be formed via a simple solution self-assembly method (Figure 5a). [66,68] Lasing emission in different colors was realized with low thresholds (Figure 5b). For the microrods of compound 4a, the Q factor was estimated to be as high as ≈1313 at a wavelength of 525 nm. For the microribbons of compound 4d, the Q factor was calculated to be as high as 2161. These studies provide an effective route to nondoped organic TADF laser diodes.
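The cavity diagnostics quoted above follow from standard free-spectral-range relations; a minimal Python sketch (the refractive indices and dimensions are illustrative assumptions, not values from the cited studies):

```python
import math

def wgm_mode_spacing_nm(wavelength_nm, n_eff, diameter_um):
    """WGM free spectral range: d_lambda = lambda^2 / (pi * n * D)."""
    return wavelength_nm**2 / (n_eff * math.pi * diameter_um * 1e3)

def fp_mode_spacing_nm(wavelength_nm, n_eff, length_um):
    """Fabry-Perot free spectral range: d_lambda = lambda^2 / (2 * n * L)."""
    return wavelength_nm**2 / (2 * n_eff * length_um * 1e3)

# lambda^2/d_lambda grows linearly with diameter -- the WGM signature cited above
for d_um in (5, 10, 20):
    dl = wgm_mode_spacing_nm(562.0, 1.59, d_um)  # n ~ 1.59 for polystyrene (assumption)
    print(f"D = {d_um:2d} um -> spacing {dl:5.2f} nm, lambda^2/d_lambda = {562.0**2 / dl:.3e} nm")

print(f"FP microrod, 20 um long: spacing {fp_mode_spacing_nm(525.0, 1.8, 20.0):.2f} nm")
```

The Q factors quoted in the text (≈10 3 -2×10 3 ) are correspondingly read off as λ divided by the measured mode linewidth.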
These studies provide good references for the realization of optical amplification using TADF materials. However, certain aspects should be considered to realize electrically driven OSSLs with TADF gain materials. First, for most TADF materials, T_1 excitons are effectively recycled through the RISC process only in the thermodynamic sense. Kinetically, the rate of RISC (k_RISC) is very low compared with the radiative decay rate of S_1 and with the generation rate of T_1 excitons under a high current density. For example, for compound 1b, k_RISC is only on the order of 10 3 s −1 , which is four orders of magnitude lower than k_r. [43] The rate of intersystem crossing (k_ISC) in TADF molecules is usually on the order of 10 6 -10 11 s −1 . [10] In this case, most of the T_1 excitons cannot be converted into S_1 excitons in time. Therefore, the accumulation of T_1 excitons cannot be completely avoided, and OLEDs based on TADF materials always face a serious roll-off of the EQE. Second, the contradiction between a rapid RISC process and a high oscillator strength in molecular design may be another problem. To enhance the RISC rate, a twisted D-A structure is usually adopted to reduce ΔE_ST through HOMO-LUMO separation. However, this type of structure always leads to a low oscillator strength. Partial orbital overlap, instead of complete orthogonality, may help balance this contradiction. In addition, for new types of TADF materials such as MR-TADF materials, this problem may be easier to overcome. Moreover, the poor photostability of TADF gain materials owing to thermal degradation at high current densities is another challenge. For electrically pumped OSSLs, the threshold is predicted to be very high; for example, the threshold of the first reported electrically pumped organic laser device is as high as 600 A cm −2 . [26] On the one hand, to improve photostability, the structural stability of TADF gain materials should be fully considered in molecular design; on the other hand, a lower laser threshold will be more helpful by reducing the pumping energy, considering the nature of organic materials. In theory, for TADF gain materials the lasing threshold can be reduced dramatically, because most of the energy losses caused by triplet excitons in traditional fluorescent gain materials can be avoided. However, the development of TADF gain materials is still at an early stage, and the lasing thresholds of the reported TADF materials are not yet lower than those of fluorescent gain materials. Nevertheless, TADF gain materials still have huge potential.
[Figure 4, parts c-e: c) schematics of the poly(dimethylsiloxane) (PDMS) template-confined solution-growth method; d) bright-field (left) and fluorescence (right) microscopy images of compound-6-doped mCBP microring arrays (Reproduced with permission. [50] Copyright 2019, American Chemical Society); e) fluorescence microscopy images of mCBP microplates doped with compounds 1b, 3, and 7 (left to right) used as microlaser displays (Reproduced with permission. [52] Copyright 2021, American Chemical Society).]
[Figure 5: a) Schematics of the solution self-assembly method and fluorescence microscopy images of microcrystals of compounds 4a-4c; b) normalized lasing spectra of microcrystals of compounds 4a-4c, inset: corresponding fluorescence microscopy images (Reproduced with permission. [46] Copyright 2021, American Chemical Society).]
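The triplet-accumulation argument discussed above can be made concrete with a toy steady-state rate model; a minimal sketch (the rate constants are illustrative values in the ranges quoted above, not measurements):

```python
# Toy steady-state model under current injection: 25% singlets, 75% triplets.
# S1 decays radiatively (k_r) or crosses to T1 (k_isc);
# T1 returns to S1 via RISC (k_risc) or is lost non-radiatively (k_nr_t).

def steady_state_populations(g_total, k_r, k_isc, k_risc, k_nr_t):
    """Solve 0 = 0.25*g - (k_r + k_isc)*S + k_risc*T
             0 = 0.75*g + k_isc*S - (k_risc + k_nr_t)*T   for S and T."""
    a, b = k_r + k_isc, k_risc
    c, d = k_isc, k_risc + k_nr_t
    det = a * d - b * c
    s1 = g_total * (0.25 * d + 0.75 * b) / det
    t1 = g_total * (0.75 * a + 0.25 * c) / det
    return s1, t1

# Slow RISC (~1e3 s^-1, compound-1b-like) vs fast singlet decay (~1e7 s^-1):
s1, t1 = steady_state_populations(1e21, 1e7, 1e7, 1e3, 1e4)
print(f"T1/S1 population ratio ~ {t1 / s1:.1e}  (severe triplet accumulation)")
```

With these illustrative rates the triplet population exceeds the singlet population by more than three orders of magnitude, which is precisely the regime in which triplet-triplet and triplet-singlet annihilation dominate the optical losses.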
We believe that with the development of TADF gain materials the lasing threshold can be lowered greatly. As mentioned above, to realize electrically driven OSSLs, some decisive factors should be considered when designing TADF gain materials, in addition to the previously reported design principles of TADF luminescent materials:
(1) High k_RISC and short TADF lifetime. The faster the RISC process, the more T_1 excitons can be recycled.
(2) Short lifetime of prompt fluorescence. The fluorescence lifetime should be as short as a few nanoseconds to avoid the nonradiative decay of singlet excitons; in fact, the lifetimes of most effective gain molecules are shorter than 1 ns.
(3) High oscillator strength. Theoretical calculations of oscillator strength can be employed to predict the lasing properties of designed molecules before tedious synthetic work.
(4) Robust molecular structure. The thermal degradation of organic gain materials at high current densities is still one of the most important problems compared with inorganic materials.
For conventional TADF materials, introducing other photochemical processes, such as ESIPT, may help improve their laser activities, because a more effective four-energy-level system can easily be formed in ESIPT materials. This results in easier population inversion and thus benefits stimulated emission. [24] In addition to TADF materials, other kinds of materials can utilize T_1 excitons, such as hybridized local and charge-transfer (HLCT) materials and triplet-triplet annihilation upconversion (TTA-UC) materials. [69,70] Studies using the TTA-UC process to recycle T_1 excitons have been reported, and quasi-continuous-wave organic lasers have been achieved. [42] Although the theoretical efficiency of TTA-UC materials (a maximum of 62.5%) is lower than that of TADF materials (a maximum of 100%), the TTA process can be very fast at high concentrations of T_1 excitons. [71] Therefore, the utilization efficiency of triplet excitons may be higher for TTA-UC materials, and OLED devices based on TTA-UC are more stable. [72] However, TTA-UC materials with laser activity have not yet been reported. In summary, considerable work remains to be done to achieve electrically pumped OSSLs based on TADF gain materials. The most important problem is the lack of effective TADF gain materials. Only when more TADF gain materials are designed and synthesized can the structure-property relationship be studied. A greater choice of materials will provide more opportunities to overcome the existing problems in realizing practical current-injected OSSLs. In addition to material design, carefully designed resonators may help achieve a lower threshold. For example, in the first reported electrically pumped OSSL, a mixed-order distributed feedback (DFB) structure was used. [26] DFB and distributed Bragg reflector (DBR) structures have been demonstrated to be more effective resonators. [73][74][75][76] Therefore, the use of DFB and DBR structures may accelerate the development of TADF lasers. We believe that this review will attract more scientists to the field, aiding advances in TADF gain materials and facilitating the development of electrically driven OSSLs.
Effective fabrication of zinc-oxide (ZnO) nanoparticles using Achyranthes aspera leaf extract and their potent biological activities against the bacterial poultry pathogens Nanotechnology is one of the most significant areas of research worldwide because of its tremendous applications linked to the high surface area to volume ratio, improved pharmacokinetic profiles and targeted drug delivery. In the current study, zinc oxide nanoparticles (ZnO NPs) were synthesized from Achyranthes aspera leaf extract, characterized by UV-visible spectroscopy, XRD, SEM, FTIR and AFM, and evaluated for antibacterial efficacy against poultry pathogenic bacterial strains. The UV-visible absorption peak was found at 370 nm. XRD showed the hexagonal wurtzite structure of the ZnO NPs, while SEM results indicated an average size of less than 100 nm, with minimum and maximum sizes of 28.63 and 61.42 nm, respectively. Further analysis of the synthesized nanoparticles by FTIR showed stretching frequencies at 3393.14 cm−1, 2830.99 cm−1, 2285.23 cm−1, and 2108.78 cm−1. The antibacterial activity of the synthesized nanoparticles was investigated against the common poultry pathogens Salmonella gallinarum and Salmonella enteritidis by the agar well diffusion method. Zones of inhibition with diameters of 31 mm against S. enteritidis and 30 mm against S. gallinarum were observed, both greater than those of the antibiotic (tetracycline) used. The minimum inhibitory concentration (MIC) was 0.195 and 0.390 mg ml−1 for S. enteritidis and S. gallinarum, respectively. Characterization with different techniques showed a uniform and stable synthesis of ZnO NPs. Furthermore, the findings confirm the higher antibacterial activity of the nanoconjugate in comparison to the leaf extract and the pure drug against pathogenic bacteria. Introduction Nanoparticles are atomic or molecular aggregates with sizes of less than 100 nm that act as active carriers toward their targeted sites. In recent years, the synthesis and use of nanoparticles have increased rapidly in every field of life, including healthcare, cosmetics, electronics, etc. The fabrication of nanoparticles has been attempted using several chemical and physical methods. Similarly, an array of biological systems and natural products, such as bacteria, yeast, fungi, and plant bioactive compounds, have been used in the synthesis of nanoparticles [1]. Among the various nanoparticles, the synthesis of zinc oxide nanoparticles (ZnO NPs) using green technology (natural products) has progressively emerged as an area of interest because of its efficiency, using inexpensive natural products as reducing agents in comparison to other conventional methods of synthesis [2]. Plant extracts are rich in phytochemicals that act as reducing and stabilizing agents for the synthesis of ZnO nanoparticles [3]. The ZnO NPs, like other metal oxides, exhibit significant antibacterial activities against a broad spectrum of bacterial strains. The nanoparticles come into direct contact with the cell wall and, as a result, destroy the integrity of the bacterial cell. In this context, the present study was planned to synthesize ZnO NPs using leaf extracts of the Achyranthes aspera (A. aspera) plant. A. aspera belongs to the family Amaranthaceae and is distributed in many tropical and temperate regions. The plant is an annual or perennial herb and has several common names, such as chaff-flower, prickly chaff flower, Chirchra, Kutri and Grootklits.
It is a medicinal plant and has been an important part of herbal medicine systems throughout Asian and African countries. This herb has numerous pharmacological activities, such as anti-inflammatory, analgesic, and antipyretic activities. It is used to treat many disorders, such as piles, fever, cough, digestive problems, dysentery, paralysis, spleen enlargement, control of fertility and post-partum bleeding. It has been employed for the treatment of bites of mad dogs and snakes. It is also used to treat oral diseases, as some people use the root of this plant as a toothbrush [4]. The world is facing the issue of indiscriminate use of antimicrobials in the poultry industry, which has accelerated the increase of antimicrobial resistance in pathogens. Antimicrobial resistance in poultry pathogens has resulted in financial loss, due to the expenditure on less efficient antimicrobials, as well as many untreated poultry diseases [5]. The poultry industry is a dynamic and flourishing area, contributing 5.76%, 26.9%, and 1.27% to the total meat production, farming sector and total GDP, respectively. The poultry sector in Pakistan has flourished with tremendous output within the last few years and functions as a protein-rich nutrient source for the general public. Poultry meat consumption is progressively increasing worldwide and has reached 14.2 kg/person/year [6]. In developing countries, poultry is the fastest-growing sub-sector of the food industry. The poultry sector is expanding across the globe as the production of meat and eggs is spurred by a growing population and urbanization [7]. In this perspective of increasing consumption and production, the microbial safety of poultry meat products is of prime concern. Ingestion of microbially infected poultry meat and eggs was shown to be the major source of foodborne outbreaks in the USA between 1998 and 2012 [8]. The present study was designed to synthesize ZnO nanoparticles from A. aspera leaf extract, characterize them by various advanced spectral techniques, and then evaluate their antibacterial activity against the most common poultry pathogens Salmonella gallinarum and Salmonella enteritidis, to combat the issue of microbial resistance and support the poultry industry worldwide. Materials and methods Synthesis of ZnO NPs with A. aspera leaf extract was performed at the Department of Chemistry, UMT Lahore, Pakistan, and the antibacterial activity assays were carried out at the Institute of Biochemistry and Biotechnology, UVAS, Lahore, Pakistan. Chemicals The chemicals used in the study were purchased from Sigma-Aldrich, while distilled water was obtained from the laboratory of the university. Collection and processing of leaves of A. aspera Fresh leaves of the A. aspera plant were collected from the University of the Punjab, Quaid-e-Azam campus, Lahore. Identification of the plant was done by the Department of Botany, GC University, Lahore. The collected leaves were washed properly with distilled water and then dried in an oven at 60°C for 1 h. The dried leaves were ground to acquire the powdered material [9]. Extract was prepared by adding 20 g of leaf powder to 200 ml distilled water and heating for 20-30 min on a heating mantle. The obtained extract was filtered using Whatman filter paper No. 1 and kept in the refrigerator for further experimentation [10]. Preparation and characterization of zinc oxide nanoparticles For the synthesis of nanoparticles, 10 ml of aqueous extract obtained from leaves of A.
aspera was mixed with 0.1 mM zinc acetate [Zn(CH3COO)2] solution and placed on a magnetic stirrer. After 30 min, NaOH (2 g/50 ml) was added dropwise. This mixture was stirred continuously for 4 h and centrifuged at 10,000 rpm for 30 min. The supernatant was discarded, and the pellet was washed by centrifugation with deionized water followed by absolute ethanol three to four times. Finally, the pellet was placed in a hot air oven at 60°C for 24 h to obtain the zinc oxide nanoparticles. Characterization of the zinc oxide nanoparticles was carried out using a UV-visible spectrophotometer (Shimadzu, UV-1280), x-ray diffraction (XRD; Rigaku, XtaLAB mini II), Fourier transform infrared spectroscopy (FTIR; Shimadzu, Prestige-21), scanning electron microscopy (SEM; Agilent Technologies, TV 1001 SEM) and atomic force microscopy (AFM; CoreAFM) [11]. Preparation of bacterial culture and stock solutions The characterized bacterial strains of Salmonella gallinarum and Salmonella enteritidis were obtained from the Department of Microbiology, UVAS Lahore, Pakistan. Bacterial cultures were maintained on nutrient agar plates at 4°C. The stock solution of ZnO nanoparticles was prepared by adding 100 mg of ZnO per ml of DMSO. Similarly, a solution of pure antibiotic was prepared by adding 100 mg of antibiotic (tetracycline) to 1 ml of distilled water. Preparation of inoculum The inoculum was prepared by the direct colony suspension method, using colonies picked from an 18 to 24 h agar plate. Normal saline solution (10 ml) was taken in test tubes and autoclaved at 121°C, 15 psi for 15 min. A loopful of culture of S. enteritidis and of S. gallinarum was inoculated into two different test tubes containing the autoclaved normal saline. To standardize the inoculum density, the turbidity of the inoculums was compared with a 0.5 McFarland standard. Assessment of antibacterial activity of ZnO NPs by well diffusion method For the antibacterial assay, a sterile cotton swab was immersed in bacterial inoculum and evenly swabbed on a nutrient agar plate in 3 to 4 planes. Finally, the edge of the agar was swabbed, and the plates were left undisturbed for 10 to 15 min. Then wells of 5 mm diameter and 4 mm depth were made in the inoculated nutrient agar plates using a sterile glass borer. The bases of the wells were sealed with molten agar. Nanoparticle suspension (50 μl) from the stock solution (100 mg ml −1 ) was dispensed into the respective wells. Tetracycline was used as a positive control, whereas DMSO and plant extract served as negative controls. All the experiments were run in triplicate to get precise and accurate results. The plates were incubated for 24 h at 37°C, and the zone of inhibition produced around each well was measured in mm [13]. Determination of minimum inhibitory concentration by broth micro-dilution method After the well diffusion method, the MIC was determined by the broth micro-dilution method to confirm the concentration of ZnO nanoparticles that inhibited bacterial growth after overnight incubation. In a 96-well microtitre plate, 50 μl of nutrient broth was added to all wells using a sterile micropipette. Then 100 μl of prepared bacterial inoculum was added to wells 1 through 11. ZnO nanoparticle solution (50 μl) containing 2500 μg ml −1 nanoparticles was added to the first well and mixed with the media. Using a micropipette, 50 μl was transferred from the 1st well to the 2nd well to perform a two-fold serial dilution. The procedure was continued up to the 10th well to obtain concentrations of 1250, 625, 312.5, 156.25, 78.125, 39.06, 19.53, 9.76 and 4.88 μg ZnO NPs/ml.
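The reported concentration series follows directly from repeated two-fold halving of the 2500 μg ml −1 stock; a minimal sketch of the arithmetic (these are nominal concentrations of the transferred solution, ignoring the broth and inoculum volumes):

```python
def twofold_series(start_ug_ml, n_dilutions):
    """Nominal concentrations after successive two-fold serial dilutions."""
    series = []
    conc = start_ug_ml
    for _ in range(n_dilutions):
        conc /= 2.0
        series.append(conc)
    return series

# Starting from 2500 ug/ml in well 1, wells 2-10 hold:
print(twofold_series(2500.0, 9))
# [1250.0, 625.0, 312.5, 156.25, 78.125, 39.0625, 19.53125, 9.765625, 4.8828125]
```

The paper's quoted values (39.06, 19.53, 9.76, 4.88) are simply these numbers truncated to two decimal places.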
In the positive control (11th well), only the organism was grown, without nanoparticles, while the negative control (12th well) contained only nutrient media. These steps were repeated for each bacterial strain under test. All the plates were incubated at 37°C for 24 h. Finally, the absorbance was read at 630 nm by means of an ELISA reader. The minimum concentration that showed the least growth was termed the MIC value and reported [14]. Statistical analysis All experiments were run in triplicate. Data were analyzed using descriptive statistics (mean and standard deviation) in SPSS-20 [13,14]. UV visible spectroscopy UV/visible spectroscopy was utilized to indicate the presence of ZnO nanoparticles. When light hits nanoparticles, the oscillation of its electromagnetic field produces a collective oscillation of the metal's electrons, which induces charge separation with respect to the ionic lattice. At a specific frequency the amplitude of oscillation is maximum; this is called surface plasmon resonance (SPR) [15]. UV-visible analysis of the synthesized ZnO nanoparticles showed the surface plasmon resonance (SPR) absorption peak of ZnO nanoparticles at 370 nm (figure 1), which is a characteristic peak of ZnO nanoparticles. X-Ray diffraction analysis The x-ray diffraction pattern of the biosynthesized ZnO nanoparticles is given in figure 2. It indicated the hexagonal wurtzite structure of the ZnO nanoparticles. The average crystalline size of the ZnO nanoparticles, calculated by the Debye-Scherrer formula, was 34 nm. Atomic force microscopy AFM was performed to find the topographical appearance, size, height distribution and morphology of the biosynthesized ZnO nanoparticles. The 2D and 3D images of the ZnO nanoparticles were obtained and the average roughness revealed. The results showed the hexagonal shape of the ZnO NPs, as depicted in figure 4. SEM analysis SEM was carried out to find the structure, size and shape of the ZnO NPs. SEM images proved that the synthesized particles were in the nano range, with hexagonal arrangement and uniform distribution. Figure 5 shows the particle size distribution of the synthesized nanoparticles, with sizes ranging from 28.63 to 61.42 nm. 3.6. Antimicrobial potential of zinc oxide nanoparticles The antibacterial activity of the biosynthesized ZnO nanoparticles was observed against S. enteritidis and S. gallinarum. The results for the zones of inhibition (mm ± S.D.) are displayed in figure 6 and table 1. Determination of MIC by broth dilution method The MIC assay was done to determine the minimum inhibitory concentration of ZnO nanoparticles that inhibits bacterial growth. The standard protocol was followed to observe the MIC of the nanoparticles for S. gallinarum and S. enteritidis, which was 0.390 mg±0.00 and 0.195 mg±0.00, respectively, as indicated in figure 7 and table 1. Discussion Microorganisms, especially bacteria, cause many health issues and chronic infections. Antibiotics are used as effective agents against bacterial infections due to their cost-efficacy and powerful effects. Antibiotic resistance in Gram-negative bacteria is a major problem because they have double-layered cell membranes, making it more difficult for drugs to penetrate. Organisms such as S. gallinarum and S. enteritidis are poultry pathogens that cause infection in humans as well as in poultry. Various nanoparticles have been synthesized and their antimicrobial activity determined, but this is the first study on the preparation of ZnO NPs using A. aspera and their utilization against these poultry pathogens.
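The 34 nm crystallite size quoted from XRD follows from the Scherrer equation; a minimal Python sketch (the peak position, width and shape factor below are illustrative assumptions, not values reported in the paper):

```python
import math

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)),
    where beta is the peak FWHM in radians and theta the Bragg angle."""
    beta_rad = math.radians(fwhm_deg)
    theta_rad = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta_rad * math.cos(theta_rad))

# Cu K-alpha radiation (0.15406 nm); an assumed ~0.25 deg FWHM near the
# wurtzite (101) reflection (~36.2 deg 2-theta) gives a size of the reported order:
print(f"D ~ {scherrer_size_nm(0.15406, 0.25, 36.2):.1f} nm")
```

In practice such estimates are averaged over several indexed wurtzite reflections to obtain the single quoted figure.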
In the present study, A. aspera leaf extract was used as a reducing agent to produce zinc oxide nanoparticles. The synthesized nanoparticles were characterized using UV-visible spectroscopy, XRD, FTIR, SEM and AFM. UV-visible analysis confirmed the presence of ZnO nanoparticles in a colloidal solution. The absorption by ZnO nanoparticles produced a specific peak at 370 nm (figure 1). As indicated in the literature, ZnO NPs display a characteristic peak between 320 and 380 nm [14,16]. FTIR analysis of the ZnO nanoparticles showed an absorption band at 3393.14 cm −1 , indicating O-H stretches, while the band at 2830.99 cm −1 is due to C-H stretches; further bands were observed at 2285.23 cm −1 and 2108.78 cm −1 .
Table 1. Antimicrobial activity and MIC of ZnO NPs produced by A. aspera leaf extract against pathogenic strains (columns: microbial strain; aqueous extract of A. aspera leaves (100 mg ml −1 ); ZnO NPs of A. aspera leaves (100 mg ml −1 ); tetracycline +ve control (100 mg ml −1 ); MIC±SD).
The authors of [19] also reported the synthesis of ZnO nanoparticles using aqueous leaf extract of Cochlospermum religiosum, where the average crystalline size was less than 100 nm, which supports the present results. AFM describes the roughness and shape of the ZnO nanoparticles. It was observed by tip-corrected AFM measurement that the ZnO nanoparticles were hexagonal in shape. Gupta et al [20] also reported similar AFM results for nanoparticles. Nagarajan et al [21] showed 2D and 3D images of ZnO NPs using a line profile; the result revealed a size of 36 nm and a spherical-hexagonal shape. SEM analysis of the ZnO nanoparticles revealed the size and shape of the sample. The current study showed that the ZnO NPs have spherical and radial shapes. The minimum and maximum particle sizes of the ZnO NPs were 28.63 nm and 61.42 nm, respectively. The present results are in good agreement with Chauhan et al [22], who documented the green synthesis of ZnO NPs from Cassia siamea leaf extracts, whereas Datta et al [10] stated that biosynthesized nanoparticles were radial, cylindrical and spherical, accumulated in small clusters. Yung et al [23] showed that ZnO NPs have sizes in the range of 50-70 nm, which is close to the results presented in the current investigation. Siddiqi et al [24] and Jones et al [25] stated that zinc oxide nanoparticles have a significant antimicrobial effect on a number of Gram-positive as well as Gram-negative bacteria. The antibacterial activity of the biologically synthesized ZnO nanoparticles was studied against S. enteritidis and S. gallinarum in the current research (table 1). The ZnO nanoparticles showed better antibacterial activity than the antibiotic and the plant extract. The diameter of the zone of inhibition produced by the ZnO nanoparticles (100 mg ml −1 ) against the S. enteritidis strain was 31 mm±0.37, while that against S. gallinarum was 30 mm±0.26. The results indicated that the use of nanoparticles is a better option to control the problem of antibiotic resistance. The ZnO NPs act by penetrating the bacterial cell wall and causing lipid peroxidation through the generation of reactive oxygen species. ZnO nanoparticles also disturb bacterial cell integrity, which causes the death of the bacterial cell. The results for the antibacterial activity of ZnO nanoparticles against different bacterial strains as reported in the literature are compared in table 2.
Irshad et al [14] reported a similar trend: the aqueous extract of Camellia sinensis leaves did not show any antimicrobial activity against the microbial strains under study, while zinc oxide produced the largest zone of inhibition in comparison with the standard antibiotic (gentamycin) and the plant extract at the same concentration. Although Yadav et al [4] reported antimicrobial activity of A. aspera against Streptococcus sp., the extract was not an effective antimicrobial agent against the poultry pathogens tested in the present report. The likely reason is that the plant extract and the zinc oxide nanoparticles produced from it have different modes of action on bacteria. In the present work, the antibacterial activity of ZnO nanoparticles against Salmonella enteritidis and Salmonella gallinarum was tested, which has not previously been reported in the literature. The MIC of the ZnO nanoparticles for S. gallinarum and S. enteritidis was 0.390 mg±0.00 and 0.195 mg±0.00, respectively (a sketch of the corresponding two-fold dilution series is given below). MIC values of ZnO nanoparticles against different bacterial strains reported in the literature are compared in table 3. Conclusions The utilization of nanoparticles is a promising alternative against antimicrobial-resistant pathogens. Biosynthesis of ZnO NPs through the aqueous extract of A. aspera leaves is a safe, effective and ecofriendly approach to fight infectious microbes. Characterization techniques such as UV-vis, FTIR, SEM, AFM and XRD are rapid and sensitive tools for identifying and describing the stability, shape, roughness and integrity of the synthesized nanomaterial. The excellent antimicrobial activity of the ZnO NPs against poultry pathogens in comparison with the standard drug encourages the use of these nanoparticles as antibacterial agents. Further in vivo testing is required before they can be applied in pharmaceutical formulations.
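As an illustration of how the reported MIC values line up with a standard two-fold broth-dilution series, here is a minimal Python sketch. The 100 mg ml−1 starting concentration follows the paper; treating the series as successive halvings is an assumption about the protocol.

```python
# Two-fold serial dilution series starting from an assumed 100 mg/ml stock.
stock_mg_per_ml = 100.0
series = [stock_mg_per_ml / 2**i for i in range(1, 11)]

for step, conc in enumerate(series, start=1):
    print(f"well {step:2d}: {conc:8.3f} mg/ml")

# Wells 8 and 9 of such a series hold ~0.391 and ~0.195 mg/ml, matching the
# MIC values reported for S. gallinarum and S. enteritidis, respectively.
```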
4,183.2
2021-02-26T00:00:00.000
[ "Materials Science" ]
A Need Analysis Survey on the Development of On-The-Job Training Web Application in Vocational College On-the-Job Training (OJT) is best for skill development and attitude change. Implementation of OJT focuses on the transition of students to working life. Vocational colleges, under the branch of TVET, need to be based on recognized job standards, with an emphasis on practical components, psychomotor skills and exposure to training in industry. Industrial management in vocational colleges should be systematic and efficient, in line with the technology era. Therefore, a needs analysis survey was conducted to identify the need for an OJT web application in vocational colleges and to identify several key elements that need to be prioritized in the development of the OJT web application. This needs analysis survey was conducted on 70 vocational college OJT supervisors, 2 Technical and Vocational Education Division (BPLTV) officers under the vocational curriculum development unit of the TVET curriculum sector, 30 OJT students and 10 construction industry companies. The results of the study found that vocational college OJT management requires an OJT web application for more effective and systematic management. The results also identified several important elements that should be emphasized during the development of the OJT web application, including curriculum, time management, screen display and an industry database. Introduction Referring to the 2021 OJT Guidelines issued by the Technical and Vocational Training Education Division (BPLTV), industrial training for vocational college students is known as OJT. OJT is mandatory for Malaysian Vocational Diploma Program (DVM) students in all vocational colleges. However, the management of OJT at vocational colleges should be more systematic and efficient, in line with today's digital revolution. This research focuses on a needs analysis survey for the development of an OJT web application and its applicability to vocational college OJT management. The OJT web application can be used in OJT management to improve management efficiency. The use of computers and technological tools in education makes an educational institution more advanced and helps it grow rapidly in line with current progress. In addition, today's rapid development of information and communication technology (ICT) has also breathed new life into the development of education. According to Nicodemus Kalugho Mwambela & Simon Nyaga Mwendia (2019), digitization in education management refers to the ability to use digital technology to generate, process, share, and transact information. Mohd Faizal Md Karim (2018) also stated that many public and private education sectors in Malaysia have applied web-based information system management. Therefore, web applications are developed to be fast loading, interactive and mobile (Lawal Olarotimi Badru, Vani Vasudevan & Govinda Ishwar Lingam, 2022). Web technology is a revolution in the use of the internet and the web, characterized by information sharing, ease of operation, and user-friendly design on the World Wide Web (Zuki & Khalid, 2016). This is supported by He Zhao (2022), who found that system integration through web applications in education information management not only facilitates the sharing of information but also improves the management quality of an education department. Web applications also facilitate the storage of large amounts of student data and
minimize the use of paper (Pangestu & Samsinar, 2018). According to Adam et al (2022), the starting point for the success of industrial training depends on three main elements: robust planning, implementation and evaluation. Based on a study of effective system functionality in public education institutions in Malaysia, UTM, UKM and UPSI already use web application systems widely in the field of industrial training management (Tan & Arshad, 2016), whereas vocational colleges in Malaysia still do not have an online management system for OJT. Literature Review Vocational education and training (VET) is commonly known as skill-based education that provides hands-on, job-specific training and occupational experience. According to Ali Rizwan et al (2021), ensuring the sustainability of these programs is highly dependent on the inclusion of vocational education and training (VET) as a crucial element. With the rise of Industry 4.0, mobile applications have become an essential part of our daily lives. It is imperative for all parties to adapt to these changes to remain competitive (Ali & Ibrahim, 2018). Abdul Rahman et al (2020) suggest that the education industry can utilize the widespread popularity of the internet to introduce digital learning. Additionally, the authors argue that implementing digital learning through the internet not only helps to achieve national objectives but also enhances the country's competitiveness in the knowledge-based global era. According to Nur Aiman bin Zainudin (2021), wireless technology in education can help bridge the digital divide among developing countries. This study also highlights how the Ministry of Education Malaysia (KPM) is promoting the Digital Education Learning Initiative Malaysia (DELIMa) through the distribution of Google ID MOE-DL. DELIMa empowers digital education among students and the management of digital education at the college level. This is supported by the increased positive perception of school management documentation practices through the utilization of an application system. However, referring to the results of the Ministry of Higher Education's survey in the book TVET Empowerment in Malaysia: A Review (2019 Edition), it was found that there is no standard system to measure the capabilities of an institution clearly and consistently. Hence, it becomes challenging for the government to allocate funds based on Outcome-Based Budgeting. This refers to efforts to improve performance management in the public sector and to assess the efficiency and effectiveness of a TVET institution. Thus, graduates need to be prepared with relevant and appropriate technical knowledge and skills to overcome challenges in the industry through high resilience and good judgment. In order to prepare for world innovation, appropriate education needs to be provided to the new generation (Fadel & Ishar, 2022). Therefore, the use of computer technology that combines various media such as text, graphics, animation, video and audio controlled by computers is necessary in the world of educational management (Ismail et al., 2022). In line with this statement, the development of the OJT web application not only improves the efficiency of vocational college OJT management but also benefits its users, consisting of students, lecturers and industry. This is supported by Rita Irviani and Pontianus Setiawan (2017), who stated that web applications speed up a communication process that initially happened traditionally
(conventionally) and make it more modern (based on web technology). Indirectly, communication happens more effectively, quickly, and efficiently. Web technology is also needed as a web-based information service medium (Chumairoh et al., 2014). Several studies have also been conducted to identify the effectiveness of existing OJT applications or systems in government institutions. The results of these studies are shown in Table 1. The studies show that an OJT web application facilitates the OJT management process and helps to coordinate between the college, students, and industry. Methodology The use of mixed methods enables researchers to answer research questions with sufficient depth and breadth (Enosh et al., 2014). For example, the quantitative approach helps a researcher to collect data from a large number of participants, thus increasing the possibility of generalising the findings to a wider population. Dawadi et al (2021) stated that the qualitative approach, on the other hand, provides a deeper understanding of the issue being investigated, honouring the voices of its participants. In other words, quantitative data bring breadth to the study and qualitative data provide depth to it. Therefore, both qualitative and quantitative techniques were used by the researcher to obtain data. In the context of this study, three constructs are studied for the OJT web application development needs: the views of students, college OJT supervisors, and industry towards current OJT management. Interviews were conducted with 2 BPLTV officers under the vocational curriculum development unit of the TVET curriculum sector and 5 OJT supervisors who were randomly selected from vocational colleges. The quantitative research was done using questionnaire instruments. Questionnaires were given randomly to 70 OJT supervisors selected from the construction technology course, 30 OJT students from the construction technology course, and 10 construction industry companies.
Results Interviews with several representatives of polytechnic and ILP lecturers found that those institutions already use a web-based system in the management of student industrial training. Apart from that, the results of the needs analysis survey show that public and private educational institutions such as USIM, USM, UPM, UNISZA, UNISEL and UTHM also have web application systems for the management of industrial training students. Based on the above, it can be concluded that the use of web applications is a necessity in creating a systematic industrial training management system in the field of education. Yusof and Mohiddin (2018) stated that industrial relations and training units should have an efficient and systematic administrative management system. The results of surveys and interviews with students, lecturers, and industry found that vocational colleges do not have a systematic system or application to manage the OJT course. Apart from that, the survey also found that students who do not get an industrial training place need to contact the OJT management directly for application matters. Based on a survey conducted on 10 construction industry companies, it was also found that students' personal information, OJT guidelines and college supervisor information are quite difficult to access and refer to. Curriculum, educational opportunities and employer needs should be continuously triangulated and aligned to ensure students' ability to explore pathways from college to the workplace. The vocational college is responsible for ensuring that graduates get the necessary industry exposure as preparation before stepping into the world of work, and that they manifest the knowledge learned while at the college. Lecturers, students and industry are directly involved in OJT management. The interview conducted with the OJT coordinator of the vocational college found that the vocational college does not have a system or access to a systematic OJT management web application. The OJT coordinator also informed us that the filling in of marks is still carried out manually on Excel Pre-Adop and Adop templates. The workload increases further because the vocational college OJT management has to send reports to the director of the Vocational College and the Technical and Vocational Training Education Division, and this has to be done manually. Therefore, the development of an application is needed to give access to the college, industry, and students. Interviews with 2 BPLTV officials from the curriculum development unit found that BPLTV still does not have a systematic online application system for OJT management. Apart from that, attendance records and daily reports of students are difficult to obtain through manual methods, and student monitoring is very difficult. They also stated that by developing the OJT web application, OJT student management would become more systematic. BPLTV officials also stated that this OJT web application could be used as a reference in curriculum development, given the existence of an industry database. A needs analysis survey was conducted on 70 Malaysian vocational college OJT supervisors through questionnaires. 80% of lecturers stated that up to 15 OJT forms are used in OJT management. Apart from that, 89% of the lecturers stated that the OJT evaluation marks were still calculated manually, while 60% of the lecturers stated that it was not easy to get an OJT offer. 62.8% of the lecturers stated that
a briefing on the use of the forms was not given to the industry before the OJT training begins. 71.5% of lecturers stated that the application process for industrial training takes time. This clearly shows that the OJT management of the vocational college needs a web application that can facilitate the implementation of the OJT course at the vocational college. Referring to Table 3, a needs analysis survey was conducted on 30 randomly selected vocational college students from the 2017 to 2020 intakes. A total of 63.3% of students stated that it was quite difficult to obtain industry offers, and 86.7% of students stated that the process of applying for placements took a long time. 60% of students stated that they did not receive a list of industrial training places as a reference. A total of 76.7% of students stated that the OJT form was somewhat confusing. Besides, 93.4% of students stated that the OJT handbook is always taken to the industry as a reference. According to Fazeera Syuhada Abdullah et al (2019), the manual method not only inconveniences the students, but also burdens the lecturers who manage industrial training. This is supported by Satrio (2019), who states that students have to spend time preparing important documents to be sent to an organization even before knowing whether their application will be accepted or rejected. Table 4 shows the needs analysis findings from 10 construction industry companies. Based on this survey, it was found that student profile information, OJT guidelines, and college supervisor information are quite difficult to access and refer to. 87% of companies stated that the industry needs a platform to share information about the job market with the vocational college OJT management. The industry also suggested the use of mobile web applications to improve the effectiveness of industrial training management. This clearly shows that the OJT web application is necessary to create systematic and effective digitalized management. Apart from that, there is a gap between employers' performance expectations and students' performance in the context of work efficiency (Siddoo et al., 2018). This is supported by Piah and Haron (2018), who state that the existence of vocational colleges is one of the approaches taken by the education system in Malaysia to reduce the mismatch between students' skills and the workforce needs of the industrial sector. Therefore, by developing the OJT web application, OJT management can fill that gap by creating a database of construction industry companies that can be reached by students, industry and college lecturers. Triangulation between curriculum, educational opportunities and employer needs will help to ensure students' ability to explore pathways from college to the workplace. Therefore, the development of web applications as one of the industry's information channels can also strengthen this relationship and have a positive impact.
Elements that need to be prioritized in the development of the OJT web application OJT Assessment Rubric OJT is introduced to build competency and improve students' employability. Therefore, OJT assessment is vital to ensure the effectiveness of OJT in Malaysian vocational colleges. Lecturers and industry professionals should work together to produce a reliable and valid assessment rubric to measure students' performance, to ensure that student quality is on par with industry requirements (Abdul Musid et al., 2020). A good assessment rubric should have sufficient criteria for assessing students' performance in the OJT. It will help in producing competent graduates who have high potential to become skilled workers in the future. However, only little attention has been given to the criteria of the OJT assessment rubric. Therefore, curriculum development should be in line with industry needs. A database of industrial companies and job market information should be emphasized during web application development (Celarta & Esponilla, 2021). Developing an OJT rubric that is aligned with the needs of the industrial market is essential in ensuring that students are adequately prepared for the workplace. Collaboration between educational institutions, industry partners, and the curriculum development department can help ensure that the OJT rubric is comprehensive and covers all the necessary competencies and skills required in the industry. Educational institutions, on the other hand, can provide insights into the academic and practical requirements of students. They can also ensure that the rubric is aligned with the curriculum and the learning outcomes of the program. The curriculum development department can provide guidance and support in the development of the OJT rubric. They can also ensure that the rubric is in line with the national or regional standards and benchmarks for vocational education. In conclusion, collaboration between all stakeholders is crucial in the development of an effective OJT rubric that meets the needs of the industry and adequately prepares students for the workforce. Elements in the web application screen display The Industrial and Alumni Relations Unit (UPIA) delivers information related to industrial training to students who will undergo industrial training as well as to students currently undergoing it. Among the information that OJT management should share are the industrial training syllabus, the industrial training calendar, the industry list, and the latest information and documents that can be downloaded by students (Tawyer et al., 2021). In order to achieve the objective of producing graduates with high marketability, potential employers can also advertise job offers through web networking. Referring to Table 5, there is a gap in the current industrial training system (Tan & Arshad, 2016). Thus, the development of the OJT web application could indirectly create a triangulation network between students, lecturers, and the industry (a minimal data-model sketch for such an application is given below).
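To make the recurring elements concrete (student, company and supervisor records, placements, and rubric-based assessment), here is a minimal Python data-model sketch. The entity names, fields and sample values are illustrative assumptions, not a specification derived from the surveyed colleges.

```python
from dataclasses import dataclass, field

@dataclass
class Company:
    name: str
    sector: str            # e.g. "construction"
    vacancies: int = 0     # open OJT placements

@dataclass
class Student:
    student_id: str
    programme: str         # e.g. "DVM Construction Technology"
    intake_year: int

@dataclass
class Placement:
    student: Student
    company: Company
    supervisor: str                                   # college supervisor
    rubric_marks: dict = field(default_factory=dict)  # criterion -> mark

    def total_mark(self) -> float:
        """Aggregate rubric marks, replacing the manual Excel templates."""
        return sum(self.rubric_marks.values())

# Usage: register a placement and record assessment marks online.
s = Student("VC2020-001", "DVM Construction Technology", 2020)
c = Company("Bina Jaya Sdn Bhd", "construction", vacancies=3)  # hypothetical
p = Placement(s, c, supervisor="Pn. Aminah")                   # hypothetical
p.rubric_marks.update({"technical skill": 38.0, "attitude": 27.5})
print(p.total_mark())  # 65.5
```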
Job Market Database The effectiveness of vocational education and training (VET) depends on the quality of interactions between actors from the education and employment systems, which ensure the correspondence of skills supply and demand. According to Bolli et al (2018), a survey of VET experts from 18 countries suggests that countries with dual VET have the highest education-employment linkage, while the included Asian countries score lowest in terms of education-employment linkage. The analysis further reveals that the three most important sub-processes are employer involvement in the definition of qualification standards; employer involvement in deciding the timing of curriculum updates; and the combination of workplace training with classroom education. Rapid research progress in science and technology (S&T) and continuously shifting workforce needs exert pressure on each other and on the educational and training systems that link them. Higher education institutions aim to equip new generations of students with skills and expertise relevant to workforce participation for decades to come, but their offerings sometimes misalign with commercial needs and new techniques forged at the frontiers of research (Katy Börner, Olga Scrivner & Mike Gallant, 2018). Therefore, an industrial database should be included as a key element during the development of the OJT web application. Recommendation & Conclusion The mismatch between industry demand and the skills possessed by TVET graduates is also caused by the lack of industry involvement in curriculum development and expertise sharing with TVET institutions (KPT, 2020). The development of the OJT web application will make the management of OJT more organized and efficient. The use of digital technology can change the atmosphere of teaching and learning to be more modern and interesting compared to traditional methods. The results of the study found that the strategy of using web applications in education in the era of Industrial Revolution 4.0 is based on four main elements, namely creativity, reflectivity, reciprocity, and responsibility. In addition, the technological progress of a country is a benchmark for the level of progress it has achieved; it shows the ability to apply technology towards the universal good.
It is recommended to develop the OJT web application as a curriculum evaluation reference. Besides, education institutions can get feedback from industry and professional bodies and improve relationships with existing industry partners. Finally, it will help to achieve a meaningful two-way relationship and to find a meeting point between the supply and demand of human capital. Overall, it can be concluded that the development of the OJT web application provides a better management system. The OJT web application can be managed easily and reduces complexity between the college coordinator and the industry. The system can manage all the documents needed for every student and training coordinator; indirectly, it will reduce the OJT management workload. Besides, it will simplify communication between the company and the Industrial Training Coordinator. This web-based system will provide clear guidelines to students, companies and college supervisors. There has been much concern about the quality of Malaysian graduates. Employers in the country generally feel there is a gap in graduate skills, suggesting that universities do not necessarily provide enough opportunities for students to develop the abilities critical to the labour market. The Education Minister of Malaysia reported that nearly 60 percent of degree holders and above remain unemployed one year after graduating. With today's rapidly changing job market, employees need to keep abreast of new knowledge and technology. It is heartening to note that many companies value industrial placement as a way to train future employees and consider offering such training programs part of their corporate social responsibility. Therefore, the development of a web application with an industrial database helps students obtain hands-on experience and learn about real job scenarios, besides strengthening the relationship between industry, students and OJT management.
Table 1. Effectiveness of existing OJT web applications/systems.
Table 2. Needs analysis findings from vocational college OJT supervisors.
Table 3. Needs analysis findings from OJT students from the 2017-2020 intakes.
Table 4. Needs analysis findings from 10 construction industry companies.
Table 5. Comparison of system functions (adapted from Nurul Asyikin Zamri Tan and Marina Md. Arshad, 2016).
(Tables 2, 3 and 4 show the findings of the needs analysis from OJT supervisors, students and construction industry companies engaged with the vocational college.)
4,771.2
2023-10-07T00:00:00.000
[ "Education", "Computer Science", "Engineering" ]
Observing microscopic structures of a relativistic object using a time-stretch strategy Emission of light by a single electron moving on a curved trajectory (synchrotron radiation) is one of the most well-known fundamental radiation phenomena. However, experimental situations are more complex, as they involve many electrons, each being exposed to the radiation of its neighbors. This interaction has dramatic consequences, one of the most spectacular being the spontaneous formation of spatial structures inside electron bunches. This fundamental effect is actively studied, as it represents one of the most fundamental limitations in electron accelerators and, at the same time, a source of intense terahertz radiation (Coherent Synchrotron Radiation, or CSR). Here we demonstrate the possibility to directly observe the electron bunch microstructures with subpicosecond resolution in a storage ring accelerator. The principle is to monitor the terahertz pulses emitted by the structures using a strategy from photonics, time-stretch, which consists in slowing down the phenomena before recording them. This opens the way to unprecedented possibilities for analyzing and mastering new-generation high-power coherent synchrotron sources. Interaction of a relativistic electron bunch with its own created electromagnetic field can lead to the so-called microbunching instability. It is encountered in systems based on linear accelerators 1, solar flares [2][3][4], as well as in the widely used storage ring facilities [5][6][7][8][9][10][11][12][13][14] (synchrotron radiation facilities), where electron bunches are forced to circulate on a closed-loop trajectory. Above a threshold electron bunch density, a longitudinal modulation or pattern appears with a characteristic period at the millimeter or sub-millimeter scale [5][6][7]15. This structure emits intense pulses of terahertz radiation (typically more than 10000 times more intense than normal synchrotron radiation), called Coherent Synchrotron Radiation (CSR). Each CSR pulse shape may be viewed as an "image" of the electron bunch microstructure. As a consequence, a particularly efficient way to study this fundamental physical effect consists in using existing user-oriented storage rings (i.e., synchrotron radiation facilities). Indeed, recording the CSR pulses emitted at each turn in such a storage ring would in principle be sufficient to follow the electron microstructure evolution over a long time. This has recently been demonstrated in a special case where the microstructure wavelength is in the centimeter range and CSR emission occurs in the tens-of-GHz range 16, thus being accessible to conventional electronics. However, in most storage rings, such as ALS 6, ANKA 10, BESSY 7, DIAMOND 11,12, ELETTRA 11, or SOLEIL 14, the CSR emission occurs at frequencies that are so high (above 300 GHz for the present case of SOLEIL) that no suitable recording electronics is available at the moment, nor expected in the near future. Here, we propose a strategy that overcomes these limitations and thus enables such fundamental studies in many storage ring facilities. It consists in "slowing down" the signals so that they can be recorded by conventional oscilloscopes (Fig. 1). This is a two-step process, as shown in Fig. 2: first, the THz CSR pulse is encoded into a laser pulse, using the well-established technique of THz electro-optic sampling 10,[17][18][19].
Then the key point is to use a setup based on the so-called photonic time-stretch strategy [20][21][22], which consists in dispersing the pulses in a long fiber. Under a condition on the input fiber length L1, the output pulse is a replica of the original signal, except that it is slowed down by a magnification factor (or stretch factor) 23 M = 1 + L2/L1, where L1 and L2 are the input and output fiber lengths (L1 = 10.7 m and L2 = 2 km for the results presented hereafter, leading to a stretch factor M = 190). Using such a strategy instead of the classical spectral encoding method [17][18][19] presents two advantages: (i) a much higher acquisition rate (i.e., the number of recorded terahertz pulses per second), which is directly linked to the laser repetition rate, and (ii) the possibility of performing balanced detection 24, a crucial point for reaching high sensitivity. The time-stretch THz recording system (detailed in the Methods section) was able to acquire 88 × 10^6 terahertz pulses per second, and its sensitivity was measured to be 37 V/cm inside the crystal (see Methods). This allowed us to record the terahertz CSR pulses (electric field, including envelope and carrier) emitted at each turn (i.e., every 1.2 μs) at the AILES beamline of the SOLEIL storage ring. First experiments with the time-stretch acquisition setup allowed us to record the THz CSR at each revolution in the ring, and to visualize the predicted microstructure in the electron bunch circulating at Synchrotron SOLEIL. The structure is clearly visible in Fig. 3, which represents a typical series of individual pulses recorded at successive round-trips. Furthermore, the new type of data thus obtained contains extremely detailed information on the long-term evolution of the structures. In order to summarize the dynamical features, we displayed the pulse evolution of Fig. 3 as a colormap versus the longitudinal coordinate and the number of turns in the storage ring (Fig. 4). It clearly appears that, though their evolutions are very complex, the structures consist of oscillations with a characteristic wavelength in the millimeter range (more precisely a wavenumber of 10 cm−1, or 0.3 THz, as shown in the inset of Fig. 3). This is consistent with previous indirect observations, by spectrometric measurements, of a strong terahertz emission peak at 10 cm−1 (Ref. 14). Moreover, the recordings (as in Fig. 4) systematically revealed a complex drifting evolution over the revolutions, reminiscent of the ubiquitous irregular behaviors that occur in fluid dynamics 25. We believe that this new detailed data will provide a real platform for testing and refining theoretical models of relativistic electron bunch dynamics.
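As a quick numerical check of the stretch factor defined above, here is a minimal Python sketch using the fiber lengths given in the text (L1 = 10.7 m, L2 = 2 km); the ≈4.5 ns stretched-pulse duration used for the second estimate is the figure quoted later in the Methods.

```python
# Photonic time-stretch magnification: M = 1 + L2/L1.
L1 = 10.7      # input dispersion length (m), including in-laser dispersion
L2 = 2000.0    # output dispersion fiber length (m)

M = 1 + L2 / L1
print(f"stretch factor M = {M:.0f}")   # ~188, quoted as ~190 in the text

# With stretched pulses of ~4.5 ns (Methods), the original analysis window is
t_stretched_ns = 4.5
print(f"input window = {t_stretched_ns / M * 1e3:.0f} ps")  # a few tens of ps
```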
Several important features of the experimental observations can be reproduced with already existing theoretical models 5,14,26,27, where only the longitudinal dynamics of the electrons is taken into account. Each electron i is characterized by its instantaneous position z_i and energy variable δ_i. A map (at each turn n) can then be written for the evolutions of z_i^n and δ_i^n, and the continuous limit can be taken for the number of round-trips n 28,29.
Figure 1. General principle of the experiment. A relativistic electron bunch circulating in the SOLEIL storage ring (2.75 GeV) presents microstructures that evolve in a complex way. The coherent THz radiation emitted at a bending magnet carries the information on the microstructure shape, but is too fast (at the picosecond scale) to be recorded by traditional means. The present strategy consists in "slowing down" the information in order to obtain an optical replica at the nanosecond scale, so that a conventional oscilloscope can be used for the recording. The soft clock has been adapted from https://openclipart.org/detail/16605/clock-sportstudio-design-by-fzap (public domain CC0 1.0 Universal).
Here t is a continuous time associated with the number of round-trips n; the energy variable δ_i is defined from the electron energy E_i(t) relative to the reference energy E_R corresponding to the synchronous electron (2.75 GeV here). F(z_i) characterizes the electric field at position z_i created by the whole electron bunch, and is the main ingredient of the instability. We use here the same form as in previous studies of SOLEIL 14. ω_s/2π is the synchrotron frequency (not to be confused with the storage ring revolution frequency), and τ_s is the synchrotron damping time. c is the speed of light in vacuum and η measures the dependence of the round-trip time on the electron energy. η_N ξ is a Gaussian white noise term. Parameter values are given in the Methods section. Typical numerical results are presented in Fig. 5. When the electron bunch charge exceeds a threshold, finger-like structures are spontaneously formed in the electron phase space (Fig. 5a).
Figure 2. Principle of the photonic time-stretch device realized for slowing down the information while keeping a high sensitivity. The THz pulse under investigation is first encoded into a chirped laser pulse, using an electro-optic crystal ("EO modulation device"). This device (see Fig. 6 for details) provides two complementary outputs. The optical information of each output is then simply stretched from picoseconds to nanoseconds by propagation in a long fiber (2 km). Balanced detection is performed between the two stretched laser pulses, thus providing a very high sensitivity for the device by removing the "DC" background. Details of the optical system are presented in the Methods section.
Furthermore, the longitudinal electron bunch shape (Fig. 5b) is deduced from the vertical projection of the phase space distribution (Fig. 5a). Then the CSR THz field at the electron bunch location (Fig. 5c) is deduced from the electron bunch shape (Fig. 5b). As can be seen in Fig. 5c, only the fast variations lead to a significant coherent terahertz field. This natural "AC-coupling" is an advantage for the observation, as it removes the global (slow) electron bunch shape and passes only the important information. Because the electron bunch distribution rotates counter-clockwise in phase space (see Fig. 5a and supplementary movie), the microstructures drift along the longitudinal position toward the head of the electron bunch (Fig. 5c). The drift of the structures (in Figs. 4 and 5d) can thus be interpreted as a consequence of the formation of "fingers" in the lower part of phase space (Fig. 5a). The main features of the theoretical predictions are in reasonably good agreement with the new experimental findings. In conclusion, the present time-stretch strategy allows one to perform a "time-lapse observation" of microscopic structures that appear within charged relativistic objects. The advantages over classic single-shot electro-optic sampling strategies are a simultaneous improvement in both the acquisition rate and the sensitivity. Such quantitative studies open up a new level of understanding of electron beam dynamics and allow severe tests of theoretical models.
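The one-dimensional model described above (position z_i, energy variable δ_i, synchrotron frequency ω_s/2π, damping time τ_s, a collective force F(z) and a white-noise term) can be illustrated with a toy stochastic integrator. The force model and all numerical values below are illustrative assumptions only; they are not the CSR interaction kernel of Ref. 14 that the authors actually use.

```python
import numpy as np

# Toy longitudinal phase-space model: macro-particles (z, delta) undergoing
# synchrotron rotation, weak damping/diffusion and an ILLUSTRATIVE collective
# force F(z). Structural sketch only, not the authors' CSR wake model.
rng = np.random.default_rng(0)

n_e = 20_000                      # number of macro-particles
omega_s = 2.0 * np.pi             # synchrotron angular frequency (arb. units)
tau_s = 500.0                     # synchrotron damping time (arb. units)
sigma = np.sqrt(4.0 / tau_s)      # noise level giving unit equilibrium spread
dt = 1e-3

z = rng.normal(0.0, 1.0, n_e)     # positions (units of the bunch length)
d = rng.normal(0.0, 1.0, n_e)     # energy deviations (units of energy spread)

def F(z, z_centre, amp=0.05, k=12.0):
    """Stand-in collective force: a short-wavelength modulation around the
    bunch centre, purely for demonstration purposes."""
    return amp * np.sin(k * (z - z_centre))

for _ in range(5_000):
    z += -omega_s * d * dt                                   # rotation (position)
    d += (omega_s * z + F(z, z.mean()) - 2.0 * d / tau_s) * dt \
         + sigma * np.sqrt(dt) * rng.normal(0.0, 1.0, n_e)   # kick+damping+noise

# The histogram of z is the analogue of the bunch profile in Fig. 5b.
profile, _ = np.histogram(z, bins=200, range=(-4.0, 4.0))
print("rms bunch length:", round(float(z.std()), 3))
```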
We believe that this strategy may be a key contribution in situations where high acquisition rate measurements are needed. Straightforward applications concern the investigation of the THz pulses emitted by other storage rings and by terahertz free-electron lasers. The technique can also be transferred to high-repetition-rate linear accelerators, provided an electro-optic sampling system can be used 18. Perspectives in ultrafast spectroscopy are also envisaged, as the instantaneous THz spectrum can be straightforwardly deduced from the electric field shape (inset of Fig. 3). In addition to the THz domain, the present time-stretch strategy also opens new possibilities at short wavelengths. Important perspectives to be explored concern the monitoring of optical pulses from high-repetition-rate EUV and X-ray Free-Electron Lasers, for instance by combining the time-stretch strategy with transient reflectivity setups 30,31. Methods Ultrafast recording setup. The detailed setup is presented in Fig. 6. It exclusively uses off-the-shelf components and is composed of three parts: • A classical system for generation of chirped laser pulses, using a femtosecond laser and a dispersive fiber. • A classical electro-optic modulation system, based on the Pockels effect in a GaP crystal. • A specially designed balanced time-stretch device. This setup disperses the optical pulses up to the nanosecond range. Thus we achieve simultaneously a high repetition rate and a high sensitivity, thanks to the possibility of balanced detection. This is the key point of the setup. Production of the stretched laser pulses. We use a femtosecond Ytterbium-doped fiber laser (Orange) from MENLO GmbH. The emitted pulses have a spectral bandwidth of 50 nm, and the total output average power is 40 mW. The repetition rate is chosen at 88 MHz, which corresponds to one quarter of the RF frequency of Synchrotron SOLEIL and 104 times the electron revolution frequency. It is synchronized to the RF clock of the storage ring using an RRE-Synchro system from MENLO GmbH. The pulses are chirped by a polarization-maintaining fiber (PM 980), whose length determines the temporal window of the acquisition (typically a few tens of picoseconds). The length L1 used in the calculation of the stretch factor M = 1 + L2/L1 is the sum of two components: the actual length of the fiber used after the laser (10 m here) and a small contribution due to pulse dispersion inside the laser (estimated at 0.7 m of propagation in a PM980 fiber). Thus we took L1 = 10.7 m, leading to M ≈ 190. A polarizer is placed at the output in order to remove possible spurious components with the wrong polarization. Electro-optic modulation device. This part corresponds to the EO modulation device in Fig. 2 and extends from (P) to (PBS) in Fig. 6. The THz radiation available at the focusing point of the beamline is first collimated using an off-axis parabolic mirror (not shown in Fig. 6) with 101.6 mm focal length. It is then focused onto the GaP crystal using an off-axis gold-coated parabolic mirror (OAPM in Fig. 6) with 50.8 mm focal length and a 3 mm hole. The laser and the THz radiation interact in a [110]-cut GaP crystal of 5 mm length (10 × 10 × 5 mm³). The [-110] axis is parallel to the polarizations of the laser and the THz beam. An achromatic quarter-wave plate (QWP) is inserted after the GaP crystal, with its optical axis oriented at π/4 with respect to the laser polarization.
Finally, a polarizing cube beam-splitter (PBS) provides the two outputs of the EO modulation device. At the outputs of the PBS, the pulses contain an intensity modulation which is a "replica" of the THz pulses, and the two outputs are modulated in opposite phase. Time-stretching of the two optical pulses, using a single fiber. Instead of using physically different fibers for the final dispersion (Fig. 2), we use a variant that is much more robust from the experimental point of view. As can be seen in Fig. 6, the two output pulses of the polarizing cube beam-splitter are sent into the same fiber, in opposite directions. Finally, two beam-splitters (BS) extract the two pulses, which are sent to a fast balanced photodetector. This technique (reminiscent of a Sagnac loop) ensures the same path for the two laser pulses even when the fiber optical length fluctuates. The L2 fiber is an HI1060 from Corning with 2 km length (and an overall attenuation of the order of 3 dB), and the beam-splitters are chosen to have low polarization-dependent losses (Newport 05BC17MB.2). This choice for L2 leads to stretched pulses of ≈4.5 ns, and 1.35 mW peak power is typically detected at each balanced photodetector channel input. Recording electronics. The detection and subtraction of the two stretched signals is performed using a DSC-R412 InGaAs amplified balanced photodetector (photoreceiver) from Discovery Semiconductors, with 20 GHz bandwidth and 2800 V/W gain (specified at 1500 nm). The two differential outputs of the detector are sent to a LeCroy LabMaster 10i oscilloscope with 36 GHz bandwidth, 80 GS/s sample rate on each channel, and a memory of 256 megasamples. Data processing. In the absence of a THz signal, the recorded balanced signal presents a non-zero shape which corresponds to imperfections of the setup, in particular small polarization-dependent losses (that depend on wavelength). Since this "background" signal is deterministic (i.e., it is the same for each laser pulse), it is easily removed from the signal by a simple subtraction. Transport of the terahertz beam. We operated the time-stretch setup at the A branch of the AILES beamline, just before the interferometer (see Ref. 32 for the beamline details). The focusing point was imaged onto the GaP crystal using a telescope composed of a 101.6 mm focal length off-axis parabolic mirror (not shown in Fig. 6) and a 50.8 mm off-axis parabolic mirror (OAPM in Fig. 6). Performance of the setup. The setup presented in Fig. 6 provides higher acquisition rates (in terms of number of pulses per second) than classical spectral encoding methods. The reason is that oscilloscopes can nowadays reach much higher data acquisition rates (80 gigasamples/s here) than the cameras that are necessary for spectral encoding. Here, the acquisition rate capability is limited by the repetition rate of the laser, namely 88 MHz. At the same time, the setup allows us to perform acquisition with higher sensitivity than traditional single-shot electro-optic sampling, because of the possibility of achieving balanced detection at the analog level. The sensitivity here is mainly limited by the noise of the amplified balanced detector. The RMS noise on the finally recorded signal can be easily measured, and this gives a measure of the system sensitivity. The RMS noise (over the 20 GHz bandwidth of the photodetector) corresponds to a birefringence-induced phase shift in the GaP of 3.2 × 10−3 radian.
Assuming that the relevant electro-optic coefficient of GaP is r41 = 0.97 pm/V, and neglecting the THz frequency dependence of r41, the sensitivity is estimated to be 37 V/cm (inside the crystal) near the laser pulse peak. The SOLEIL storage ring was operated in single-bunch, normal-alpha mode, with natural bunch length σz = 4.59 mm, relative energy spread σδ = 1.017 × 10−3, and a momentum compaction factor α = 4.38 × 10−4. The ring was operated at a current of 15 mA, which is above the microbunching instability threshold (≈10 mA). Other parameters are described in Ref. 14. The model parameters η and ηN are defined by
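As a cross-check of the 37 V/cm sensitivity figure quoted above, the following Python sketch inverts the standard electro-optic phase relation Δφ = (2π/λ) n³ r41 E L for the field. The GaP refractive index n ≈ 3.1 near 1 µm and the laser wavelength λ ≈ 1.05 µm are assumptions, since the text only gives r41, the crystal length, and the measured phase noise.

```python
import math

# Invert the EO phase shift dphi = (2*pi/lam) * n**3 * r41 * E * L for E.
dphi = 3.2e-3        # measured RMS phase noise (rad), from the text
lam = 1.05e-6        # Yb-laser wavelength (m) -- assumed value
n = 3.1              # GaP refractive index near 1 um -- assumed value
r41 = 0.97e-12       # GaP electro-optic coefficient (m/V), from the text
L = 5e-3             # GaP crystal length (m), from the text

E = dphi * lam / (2 * math.pi * n**3 * r41 * L)  # field inside the crystal, V/m
print(f"sensitivity = {E / 100:.0f} V/cm")       # ~37 V/cm, as quoted
```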
4,047.4
2014-10-26T00:00:00.000
[ "Physics" ]
Effects of Different Temperatures on the Chemical Structure and Antitumor Activities of Polysaccharides from Cordyceps militaris The effects of different extraction temperatures (4 and 80 °C) on the physicochemical properties and antitumor activity of water-soluble polysaccharides (CMPs-4 and CMPs-80) from Cordyceps militaris (C. militaris) were evaluated in this study. The results of gas chromatography (GC) and high-performance gel permeation chromatography (HPGPC) showed that a higher extraction temperature could degrade the 188 kDa polysaccharides, mainly composed of glucose, and increase the dissolution rate of polysaccharides of about 308 kDa, mainly consisting of rhamnose and galactose. In addition, the CMPs displayed the same sugar ring and category of glycosidic linkage based on Fourier-transform infrared spectroscopy (FTIR) analysis; however, subtle structural differences appeared in the specific rotation and conformational characteristics, according to the results of specific optical rotation measurement and the Congo red test. In vitro antitumor experiments indicated that CMPs-4 possessed stronger inhibitory effects on human esophageal cancer Eca-109 cells than CMPs-80, by inducing cell apoptosis. These findings demonstrate that polysaccharides extracted with cold water (4 °C) could be applied as a novel alternative chemotherapeutic agent or dietary supplement owing to their underlying antitumor properties. Introduction C. militaris, an entomogenous fungus belonging to the class Ascomycetes, has been widely used as a folk tonic food and therapeutic drug for various diseases in East Asia [1]. C. militaris, known as the Chinese rare caterpillar fungus, has pharmacological effects similar to those of the well-known traditional Chinese fungus Cordyceps sinensis [2]. Although a variety of bioactive components, such as cordycepic acid, cordycepin, adenine, adenosine, etc., have been reported, polysaccharides are usually regarded as among the most abundant bioactive substances, with the highest content in C. militaris [3,4]. Modern research has shown that polysaccharides from C. militaris possess multiple biological properties, such as antioxidant [5][6][7], antitumor [8,9], immunoregulatory [10][11][12], anti-hyperlipidemic [13], and anti-inflammatory activities [14]. In general, the physicochemical properties of polysaccharides obtained by conventional water extraction are strongly affected by multiple parameters, such as the source of raw materials, water temperature, extraction time, extraction solvent, ratio of liquid to solid, and so forth [15,16]. In addition, previous research has shown that different extraction methods influence the structures and pharmacological activities of polysaccharides [17]. Zhu et al. compared the influence of various extraction methods on the chemical structure and antitumor activity of Cordyceps gunnii mycelium polysaccharides, and remarkable differences were observed in the antitumor effects and physicochemical properties, such as scanning electron microscopy appearance, intrinsic viscosity, and specific rotation [18]. Zhang et al. revealed that ultrasonic treatments may induce the degradation of polysaccharides and further changes in their chemical structure, thus resulting in poor antioxidant capability [19]. Zhao et al. reported that a higher extraction temperature and longer extraction time could cause the hydrolysis of polysaccharides, and further lead to a reduction of the polysaccharide yield and variation of the microstructure [20].
Therefore, tremendous efforts should be made to explore extraction technologies that yield the strongest biological activities of polysaccharides. Recently, research concerning C. militaris polysaccharides has mainly focused on the optimization of extraction conditions for maximum yield and good bioactivity; however, knowledge of the influence of extraction temperature on the physicochemical properties and antitumor activities of the polysaccharides remains limited. In this study, the main objective was to investigate the effects of cold- and hot-water extraction from C. militaris on the relationship between the structural characterization and antitumor activities of the polysaccharides, and to further evaluate their antitumor capacities by comparing the in vitro inhibition effects on human esophageal cancer Eca-109 cells. The obtained CMPs were named CMPs-4 and CMPs-80, respectively, according to their extraction temperature (4 and 80 °C). Chemical Properties and Monosaccharide Composition The sugar and protein contents of the polysaccharide samples, and their molar ratios, are shown in Table 1. The proportions of total sugar in CMPs-4 and CMPs-80 reached 87.63% and 85.42%, respectively, and little protein was detected. The monosaccharide composition of the CMPs was measured and identified by comparing retention times with those of standard monosaccharides using GC analysis (Figure 1). Both were mainly composed of rhamnose, arabinose, xylose, mannose, glucose, and galactose, in molar ratios of 0.24:0.57:0.48:1.00:12.41:1.63 and 3.98:0.62:0.42:1.00:6.70:3.18, respectively. The results showed that CMPs-4 and CMPs-80 had similar monosaccharide constituents, with a higher proportion of glucose in CMPs-4, while the opposite trend was found for the proportions of rhamnose and galactose, indicating that a higher extraction temperature could degrade the polysaccharide mainly composed of glucose and increase the dissolution rate of polysaccharides consisting of rhamnose and galactose. Molecular Weight Distribution The regression equation of the standard dextran calibration curve was established as y = −0.3866x + 8.8472, R² = 0.9904 (y = lg Mw, x = Rt); a worked inversion of this calibration is sketched below. The retention times, molecular weight populations, and relative contents of CMPs-4 and CMPs-80 are displayed in Table 2. The HPGPC profiles of CMPs-4 and CMPs-80 presented two peaks, indicating that the polysaccharide fractions of the two samples contained two major molecular weight populations. A low-molecular-weight component (approximately 2.5 kDa) was observed, with different relative contents, in both CMPs. However, a higher-molecular-weight component (about 308 kDa) with a higher percentage area (62.46%) was detected in CMPs-80, compared to a lower-molecular-weight component (about 188 kDa) with a lower percentage area (53.23%) detected in CMPs-4. This suggests that a higher extraction temperature might result in cell disruption of the material and further speed up the diffusion of macromolecular substances into the water. Therefore, different temperature treatments during the extraction process can influence the conformation of the polysaccharides and change the distribution of polysaccharide compositions. IR Spectral Characteristics The IR spectra of CMPs extracted at different temperatures showed very similar absorption bands (Figure 2).
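Returning to the dextran calibration stated above, here is a minimal Python sketch that applies it in both directions: converting a retention time to a molecular weight, and predicting the retention time expected for the ~188 kDa, ~308 kDa and ~2.5 kDa components. The retention-time unit (minutes) is an assumption; the paper does not state it explicitly.

```python
import math

# Dextran calibration from the text: lg(Mw) = -0.3866 * Rt + 8.8472
SLOPE, INTERCEPT = -0.3866, 8.8472

def mw_from_rt(rt: float) -> float:
    """Molecular weight (Da) from retention time."""
    return 10 ** (SLOPE * rt + INTERCEPT)

def rt_from_mw(mw: float) -> float:
    """Retention time from molecular weight (Da)."""
    return (math.log10(mw) - INTERCEPT) / SLOPE

for mw in (188_000, 308_000, 2_500):
    print(f"Mw {mw/1000:6.1f} kDa -> Rt = {rt_from_mw(mw):.2f} (assumed min)")
```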
Three characteristic absorption bands of the polysaccharides were displayed in the IR spectra: a broad band at around 3416 cm−1 attributed to O-H stretching vibrations, a weak band at around 2932 cm−1 due to C-H stretching vibrations, and a band at around 1404 cm−1 attributed to C-H bending vibrations. In addition, the strong absorption band at around 1625 cm−1 derives from C=O stretching vibrations. The absorption bands in the range of 1144-1048 cm−1 can be assigned to the stretching and bending vibrations of C-O, C-C and C-O-C, and C-O-H, indicating the presence of pyranose rings, and the band at around 845 cm−1 suggests the presence of α-type glycosidic linkages in the CMPs. This observation is consistent with the results of Zhu et al. [21], who indicated that the polysaccharides (CMPS-II and CBPS-II) isolated from the cultivated fruiting body and mycelium of C. militaris mainly contained mannose, glucose, and galactose with α-type glycosidic linkages. The IR spectra of CMPs-4 and CMPs-80 were almost identical, with only a slight difference in band intensities, indicating that different extraction temperatures had no distinct effect on the sugar rings or the category of glycosidic linkages of the polysaccharides, which coincides with the analysis of the monosaccharide composition. Specific Optical Rotation Analysis of Polysaccharides The specific optical rotations of CMPs-4 and CMPs-80 were +49° and +52°, respectively, which showed the presence of α-glycosidic bonds in these two polysaccharides. The difference in specific optical rotation between CMPs-4 and CMPs-80 also indicated that different extraction temperatures could cause changes in the stereochemical structure of the polysaccharides, which might influence the strength of their bioactivities. Conformational Characteristics of Polysaccharides Congo red, an acid dye, can interact with polysaccharides with triple-helical conformations to form complexes, leading to a bathochromic shift of the absorption maximum [22]. Figure 3 displays the maximum absorption wavelength (λmax) of Congo red in the absence or presence of CMPs-4 and CMPs-80. The results showed that the λmax of the Congo red-CMP complexes exhibited a significant shift to longer wavelengths in weakly alkaline solution (<0.20 M NaOH), indicating the existence of a triple-helix configuration in both CMPs-4 and CMPs-80. However, when the NaOH concentration was higher than 0.2 M, the λmax of the Congo red-CMPs-80 complexes declined sharply, which might be due to the transition of the helical structures to single flexible chains [23]. For CMPs-4, only a small fluctuation was observed in the λmax transition as the alkaline concentration increased, probably because the CMPs-4 fractions extracted with cold water can maintain a stable triple-helical structure in highly concentrated alkaline solutions. Further research is still needed to explain this phenomenon. In Vitro Antitumor Activity Analysis of Polysaccharides An MTT assay was applied to compare the inhibitory effects of CMPs-4 and CMPs-80 on human esophageal cancer Eca-109 cells. As shown in Figure 4, after incubation for 24 h, CMPs-4 had a significant growth-inhibitory effect on Eca-109 cells in a concentration-dependent manner: the inhibitory ratios were 11.70%, 23.44%, 41.47%, 55.54%, 61.84%, and 66.54%, respectively, at concentrations from 100 to 1000 µg/mL (a rough interpolation check of the corresponding IC50 is sketched below).
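As a rough cross-check of the IC50 value reported in the next paragraph, here is a minimal Python sketch that linearly interpolates the dose-response data above. The exact dose spacing is not stated in the text, so the six concentrations are assumed to be 100, 200, 400, 600, 800 and 1000 µg/mL; a probit or logistic fit, as typically used, would give a somewhat different value.

```python
import numpy as np

# Assumed dose levels (ug/mL) paired with the reported inhibition ratios (%).
conc = np.array([100, 200, 400, 600, 800, 1000], dtype=float)
inhib = np.array([11.70, 23.44, 41.47, 55.54, 61.84, 66.54])

# Linear interpolation of the concentration giving 50% inhibition.
ic50 = np.interp(50.0, inhib, conc)
print(f"IC50 = {ic50:.0f} ug/mL")   # ~521 ug/mL; the paper reports 532.9
```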
Used as the positive control, 5-Fu exhibited the strongest inhibition effect (71.70%) on Eca-109 cells at 50 µg/mL. The IC50 value of CMPs-4 at 24 h was calculated to be 532.9 µg/mL. In contrast, CMPs-80 showed weaker inhibition (below about 30% inhibitory rate) of Eca-109 cells than CMPs-4. These data suggest that the inhibitory ability of these polysaccharides is dose-dependent and that higher temperatures reduce the antitumor activity of the polysaccharides owing to damage to their structures. To examine whether the suppression of tumor cell growth was caused by the induction of cell apoptosis, Eca-109 cells were treated with CMPs-4 for 24 h and the morphological changes were observed under an inverted light microscope (Figure 5A). The results showed that CMPs-4-treated Eca-109 cells revealed typical apoptotic features with increasing concentrations of polysaccharides, including cell rounding and detachment, membrane blebbing, gradual loss of nuclear structure, and significant chromatin condensation, whereas these apoptotic characteristics were not noticed in the untreated Eca-109 cells. The CMPs-4-induced cell apoptosis was further evaluated and quantified using flow cytometry analysis after Annexin V-FITC/PI staining. As seen in Figure 5B, the untreated Eca-109 cells had the highest ratio of normal cells (96.53%) without obvious apoptosis. When the cells were treated with CMPs-4 at concentrations of 200, 400, and 600 µg/mL, the proportions of apoptotic cells increased from 3.21% to 7.77%, 11.81%, and 14.44%, respectively. Quantitative analysis suggested that CMPs-4 significantly increased the number of apoptotic cells in a concentration-dependent manner. Nuclear morphology analysis by Hoechst 33258 staining can further confirm cell apoptosis. As shown in Figure 5C, in comparison with the homogeneous and regular stained nuclei in untreated Eca-109 cells, CMPs-4 treatment resulted in remarkable chromatin DNA condensation or nuclear fragmentation, which verified the typical characteristics of cell apoptosis. Discussion Esophageal cancer (EC), a highly malignant carcinoma, is estimated to be the eighth most common cancer and the sixth leading cause of cancer death worldwide [24]. Currently, chemoradiotherapy has dramatically improved the curability of unresectable malignancies compared with traditional chemotherapy or radiotherapy alone and has been widely used to treat EC; however, it is often accompanied by serious side effects, as well as tumor relapse and drug resistance [25,26]. Therefore, more effort should be made to find safe and efficient bioactive substances for treating cancer. Polysaccharides, a kind of biological response modifier, have been extensively used as preventive and therapeutic agents for cancer because of their relatively low toxicity and effective antitumor activities. As reported, the bioactivities of polysaccharides are closely correlated with their structural features, including monosaccharide composition, molecular weight, functional groups, chain conformation in solution, and so on [27,28]. Temperature, as a critical parameter, influences various biological activities in animals and plants [29]. Currently, hot water (80-100 °C) is the most common polysaccharide extraction solvent, but the bioactivities may be reduced at high extraction temperatures owing to degradation [30] and oxidation [31].
It has been reported that cold-water-extracted (4 °C) polysaccharides from Astragalus membranaceus exhibit stronger antitumor and immunoregulatory activities with minimal side effects, since higher temperatures degrade the active constituents [28]. In the present study, the differences in structural characteristics between polysaccharides extracted from C. militaris at two temperatures (4 and 80 °C) were investigated in depth. The results showed that the monosaccharide molar ratios and the molecular weight distributions of the two polysaccharides differed significantly: CMPs-80 had higher proportions of rhamnose and galactose but a lower proportion of glucose. In addition, CMPs-80 showed a higher-molecular-weight component (about 308 kDa) with a relatively high peak-area percentage (62.46%), while CMPs-4 had a component of about 188 kDa accounting for 53.23% of the peak area. These results indicated that a higher extraction temperature degrades the polysaccharide mainly composed of glucose and increases the dissolution of polysaccharides consisting of rhamnose and galactose. The IR spectra, specific optical rotations, and conformational characteristics showed no significant difference between the two polysaccharides.

Apoptosis is an important means by which multicellular organisms maintain homeostasis and a key survival process for the host when it occurs only in tumor cells [32,33]. Characteristic morphological changes can be found in apoptotic cells, such as membrane blebbing, karyopyknosis, and chromatin condensation [34,35]. In this study, CMPs-4 exhibited much stronger inhibitory effects on Eca-109 cells and induced apoptosis, identified via the above characteristics, which indicated that higher temperatures destroy the structure of the polysaccharides and reduce their antitumor activity. Considering the relationship between structural characteristics and antitumor effect, we conclude that the 188 kDa component of the C. militaris polysaccharide is responsible for the antitumor activity, which needs to be investigated further in the future.

Plant Material and Chemical Reagents

C. militaris was obtained from the incubation base of industry-university-research cooperation at YiLi Normal University (Sinkiang, China) and was first washed thoroughly with deionized water to remove all dirt and other contaminants. The C. militaris was dried to constant weight in an oven at 50 °C, screened through an 80 mesh sieve to obtain a homogeneous powder, and finally stored under suitable conditions for subsequent study. Fetal calf serum (FCS) was from Hangzhou Sijiqing Co. (Zhejiang, China). Annexin V-FITC/PI Apoptosis Detection Kits were purchased from Beyotime Biotechnology Co. (Shanghai, China). 5-Fluoro-2,4(1H,3H)-pyrimidinedione (5-Fu), 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT), and dimethyl sulfoxide (DMSO) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Other chemicals and reagents were of analytical grade.

Extraction of Crude Polysaccharides

The extraction procedure used water extraction followed by alcohol precipitation. The dried C. militaris powder (50 g) was extracted with distilled water (1:30 ratio of solid to liquid, w/v) at either 4 or 80 °C for 4 h. After extraction, the water extracts were filtered and centrifuged (4500 rpm, 15 min) to remove the insoluble fractions.
The collected supernatant was concentrated by the freeze-thaw method and then precipitated by the addition of 4 volumes of absolute ethanol overnight at 4 °C. The precipitated polysaccharides were collected by centrifugation at 4500 rpm for 10 min and washed alternately with anhydrous ethanol and acetone three times to remove lipids and pigments thoroughly. Afterwards, the crude samples were re-dissolved in deionized water, and protein was removed by the Sevag method with 1-butanol and chloroform (1:4, v/v) five times [36]. Finally, the mixture was concentrated to remove the Sevag reagents, dialyzed (molecular weight cut-off 1000 Da) against distilled water for three days to exclude salts and low-molecular-weight sugars, and lyophilized to yield the C. militaris polysaccharides (CMPs) for further study.

Determination of Carbohydrate and Protein Content

The carbohydrate contents of CMPs-4 and CMPs-80 were determined by the phenol-sulfuric acid method using D-glucose as the standard [37]. The protein contents of the two samples were estimated by the Bradford method using bovine serum albumin (BSA) as the standard [38].

Monosaccharide Composition Analysis of CMPs

The monosaccharide composition of the polysaccharide samples was determined after acid hydrolysis and derivatization using GC (Shimadzu, Kyoto, Japan) according to procedures described previously [27]. In brief, the dried samples (5.0 mg) were hydrolyzed with 2 mL of TFA (2 M) in sealed tubes at 120 °C for 4 h. After the TFA was removed, the hydrolysates, containing myo-inositol as internal standard, were acetylated and dissolved thoroughly in dichloromethane for GC analysis. In addition, L-rhamnose, D-arabinose, D-xylose, D-mannose, D-glucose, and D-galactose were derivatized as standards to identify the monosaccharide components of the samples by comparison of retention times.

Molecular Weight Distribution of CMPs

The molecular weight distributions of CMPs-4 and CMPs-80 were determined by HPGPC (Agilent-1200, Agilent, Santa Clara, CA, USA) equipped with a TSK-gel G4000 PWxl column (7.8 mm × 300 mm, column temperature 30 °C) and a refractive index detector (RID, detection temperature 35 °C), based on approaches described previously [39]. Twenty µL of sample solution was injected and eluted with distilled water as the mobile phase at a flow rate of 0.6 mL/min. The molecular weight distribution of the polysaccharides was calculated from a calibration curve established with T-series dextran standards (T-10, T-40, T-70, T-500, and T-2000).

FT-IR Spectrum Analysis of CMPs

The characteristic groups of the polysaccharide samples were identified by FT-IR. A total of 0.7 mg of dried polysaccharide sample was blended with 150 mg of dry KBr powder and pressed into a KBr disk. The IR spectra were collected in the range of 4000-400 cm−1 on an FT-IR spectrophotometer (Bruker VECTOR-22, Karlsruhe, Germany).

Determination of Specific Optical Rotation

A digital automatic polarimeter (Precision & Scientific Instrument Co., Ltd., Shanghai, China) was used to determine the optical rotation of the polysaccharide samples. CMPs-4 and CMPs-80 (10 mg each) were dissolved in 10 mL of distilled water and transferred into the polarimeter tubes. The optical rotation of the samples was read directly with the automatic polarimeter at the designated temperature (20 ± 0.1 °C). The specific optical rotation was calculated according to the following equation:

[α]λ^t = α / (L × C),

where α is the observed optical rotation, t is the detection temperature, λ is the wavelength of the light source (λ = 589 nm), L is the length of the polarimeter tube (in dm), and C is the polysaccharide solution concentration (g/mL).
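As a worked check of the formula above, the sketch below reproduces a specific rotation of about +49° (the CMPs-4 value) from the stated solution conditions; the observed rotation reading used here is an assumed number, back-calculated for illustration.

```python
# Worked example of the specific-rotation formula above, using the stated
# conditions: 10 mg sample in 10 mL water (C = 0.001 g/mL) in a 1 dm tube.
# The observed rotation alpha_obs is a hypothetical instrument reading.
alpha_obs = 0.049   # observed optical rotation in degrees (assumed)
L_dm = 1.0          # polarimeter tube length in dm
C = 10e-3 / 10.0    # concentration in g/mL

specific_rotation = alpha_obs / (L_dm * C)
print(f"[alpha] = {specific_rotation:+.0f} degrees")  # ~ +49, as for CMPs-4
```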
Conformational Analysis of CMPs

The conformations of CMPs-4 and CMPs-80 were evaluated following the method reported by Su et al. [22] with slight modifications. Two milliliters of 0.5 mg/mL sample solution were mixed with 2 mL of 50 µM Congo red solution and different volumes of 1 M NaOH solution, to give final NaOH concentrations of 0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, and 0.40 M. The mixtures were allowed to react for 10 min at room temperature, and the absorbance was then scanned from 400 to 600 nm using an ultraviolet spectrophotometer (Infinite M200 PRO, Tecan, Crailsheim, Germany). An equal volume of deionized water, in place of the polysaccharide solution, was used as the blank control.

MTT Assay

The viability of Eca-109 cells was evaluated in vitro using an MTT assay as previously described [40], with minor modifications. Eca-109 cells were cultivated to logarithmic phase in RPMI 1640 medium containing 10% FBS at 37 °C in 5% CO2, and 100 µL of cell suspension (1 × 10^5 cells/mL) was gently seeded into 96-well cell culture plates and incubated overnight. Subsequently, the Eca-109 cells were exposed to different concentrations of sterile polysaccharide solution (0, 100, 200, 400, 600, and 800 µg/mL) and incubated at 37 °C in a humidified 5% CO2 incubator for 24 h, followed by the addition of 20 µL of MTT (5 mg/mL in PBS) to each well. After 4 h of incubation, the supernatant was removed by centrifugation at 1000 rpm for 10 min and the precipitated formazan was dissolved in 150 µL/well of DMSO. 5-Fu was used as the positive control. The absorbance was measured at 570 nm using a microplate reader (Model 680, Bio-Rad, Hercules, CA, USA). The inhibitory effect of the polysaccharides on Eca-109 cells was calculated as follows: Inhibitory ratio (%) = (1 − A_Treated/A_Control) × 100.

Annexin V/PI Double-Staining

Eca-109 cells (8 × 10^4 cells/mL) cultured in 6-well plates were treated with the designated concentrations (0, 200, 400, and 600 µg/mL) of CMPs-4 for 24 h; the cells were then collected and stained with the Annexin V-FITC/PI apoptosis detection kit following the manufacturer's manual. In brief, the harvested cells were incubated with 100 µL of 1× binding buffer containing 5 µL of Annexin V-FITC and 5 µL of propidium iodide (PI) for 10 min at room temperature in the dark. The stained cells were analyzed immediately by flow cytometry (BD FACSCalibur, BD, Franklin Lakes, NJ, USA).

Hoechst 33258 Staining

Eca-109 cells (8 × 10^4 cells/mL) were seeded into 6-well cell culture plates and exposed to various concentrations of CMPs-4 (0, 200, 400, and 600 µg/mL) for 24 h. Subsequently, the cells were collected, washed with cold PBS, and stained with Hoechst 33258 solution at room temperature for 15 min. After that, the stained cells were washed in PBS and observed with an inverted fluorescence microscope (ECLIPSE Ti, Nikon, Tokyo, Japan).

Statistical Analysis

All values are expressed as the mean ± standard deviation (S.D.), and the statistical significance of differences was determined using Student's t-test and one-way analysis of variance (ANOVA).
Differences were considered significant at p < 0.05. All statistical analyses were carried out using SPSS 19.0 (SPSS Inc., Chicago, IL, USA).

Conclusions

In conclusion, we extracted polysaccharides from C. militaris with water at two different temperatures (4 °C, CMPs-4; 80 °C, CMPs-80) and investigated the relationship between their structural characteristics and antitumor effects. Our results showed that a higher extraction temperature degrades the 188 kDa polysaccharide, mainly composed of glucose, and increases the dissolution of the 308 kDa polysaccharide, mainly consisting of rhamnose and galactose. The in vitro antitumor test showed that CMPs-4 possessed stronger antitumor activity than CMPs-80 and could significantly inhibit the proliferation of Eca-109 cells by inducing their apoptosis. This study provides a novel extraction method for obtaining polysaccharides with higher bioactivity and a theoretical basis for the application of CMPs-4 in the food and medical industries.
5,160.4
2018-04-01T00:00:00.000
[ "Chemistry", "Environmental Science", "Medicine" ]
A literature review and interpretation of the properties of high-TiO2 slags

Synopsis

The properties of high-TiO2 slags are markedly different from those of many other metallurgical slags. The objective of this paper is twofold: firstly, to assemble in one document the TiO2 slag properties, including the Ti3+/Ti4+ ratio, liquidus temperatures, heat capacity, density, viscosity, thermal conductivity, and electrical conductivity, all expressed in terms of their temperature and composition dependency. Secondly, this paper attempts to correlate published experimentally measured data with theoretically based parameters, which will enable the application of the measured properties to wider slag compositions within the pseudobrookite-TiO2 slag system.

Introduction

Slag properties are an essential tool in problem solving, both in equipment design and in metallurgical process control. Experimental work on high-TiO2 slags is particularly difficult because of their chemically corrosive nature and compositional sensitivity to oxygen partial pressure. While ample publications are available, and are still being produced, for silicate-based slags, comparatively few results are available for high-TiO2 slags. The objective of this work is firstly to summarize the essential properties of liquid TiO2 slags. Secondly, an attempt is made to derive correlations from the published experimental work which are based on a theoretical concept. In this way, it is believed, the experimental data can be cautiously extrapolated to other high-TiO2 slags. To this end, a set of typical high-TiO2 slags based on a data-set published in earlier work (Kotze and Pistorius, 2010) is used throughout this paper. The chemical composition of this set is given in Table I. Although 94% TiO2 is not considered 'typical', it was included in the example slags because many producers are migrating to higher-TiO2 slags. The experimental data cited in this work is by no means claimed to be all-inclusive. Glimmers of work done in Eastern European countries are present, though not easily interpreted by English-speaking researchers. On occasion experimental data was not used because of a discrepancy in the data or the experimental method.

Ti3+/Ti4+ ratio

Titania slag compositions are typically expressed as TiO2 because analytical methods measuring total titanium are fast and less cumbersome than the wet-chemistry titration method required to measure Ti3+ (Tranell, Ostrovski, and Jahanshahi, 2002). When appreciable amounts of Ti2O3 are included in the TiO2 value, the titania is expressed as equivalent TiO2. In these instances, slag assays exceed 100% because the oxygen is overestimated. The assay total is therefore an indication of the level of reduction. Previous attempts by the author to correlate measured Ti3+ values with assay totals yielded unsatisfactory results. The balance between Ti4+ and Ti3+ follows Reaction [1] and, like other transition elements, it is governed by a redox reaction such as the O-type behavior of Reaction [2] (Tranell, Ostrovski, and Jahanshahi, 2002). The Ti3+/Ti4+ ratio is therefore expected to be dependent on the oxygen partial pressure and the oxygen activity in the slag bath. Tranell and co-workers (Tranell, Ostrovski, and Jahanshahi, 1997, 2002) investigated these relationships in TiO2-Ti2O3-SiO2-CaO slags and found the Ti3+/Ti4+ ratio to depend on the oxygen partial pressure raised to the power of 0.21 (Tranell, Ostrovski, and Jahanshahi, 1997) and 0.23 (Tranell, Ostrovski, and Jahanshahi, 2002).
Using optical basicity to represent the oxygen activity, they derived a correlation using temperature, oxygen partial pressure, and optical basicity to calculate the Ti3+/Ti4+ ratio. This correlation yielded good results for all the slags they investigated, but not for those of other researchers. [1] [2]

In this work, electronic polarizability was investigated as an alternative measure with which to predict the activity of oxygen. Electronic polarizability is described as the 'ability of oxygen to transfer electron density to surrounding cations' (Dimitrov and Komatsu, 2010). It is a measure of the propensity of compounds to donate charge from oxygen ions to the surrounding electron cloud (Duffy and Ingram, 1976). In this context, Ca2+ with a cation polarizability of 1.57 Å3 is 'generous', while Si4+ at 0.284 Å3 is comparatively 'stingy'. A few mutually exclusive data-sets exist for calculating electronic polarizability. The data used in this work is based on the large data-set of dynamic polarizabilities published by Shannon and Fischer (2016), which includes a value for the cation polarizability of Ti3+. The Ti3+/Ti4+ ratios of the CaO-SiO2-TiOx slags from Tranell were found to correlate well with the ratio Ra of the polarizabilities of Ca2+ and Si4+, weighted by their mole fractions (polarizability α in Å3; mole fraction X). This result corroborates the polarizability ratio of Ca2+ and Si4+ as a measure of the oxygen activity in the bath. The Ti3+/Ti4+ ratios of the slags Tranell studied were subsequently calculated using a correlation with two parameters: the experimental oxygen partial pressure and the polarizability ratio Ra. Figure 1 shows that the results of this correlation compare well with the experimental Ti3+/Ti4+ ratios. The validity of the correlation between oxygen partial pressure, electronic polarizability, and Ti valency was tested against industrial TiO2 slags using measurements reported by Geldenhuys and Pistorius (1999). These workers measured the oxygen partial pressure in pilot-plant TiO2 slags and reported the TiO2 and Ti2O3 values of the slags at the time of measurement. Their measured oxygen partial pressures in the slags are plotted as the circles in Figure 2. Since the full assays of the slags used in the Geldenhuys study were not given, their results are compared with the calculated TiO2 and Ti2O3 values of the example slags in Table I using the correlation derived from the work by Tranell. This is not ideal, but a reasonable replacement, as the impurity levels of the ilmenites used in the Geldenhuys study and those used in the Kotze study (Kotze and Pistorius, 2010), on which the example slags were based, were similar. The calculated results, shown with the solid and broken lines in Figure 2, compare well with the experimental data. It is worthwhile to note that several studies found the same Ti3+/Ti4+ ratios in industrial slags as were found in the Geldenhuys pilot-plant study. Few of these results are published, such as the industrial slag data points given in Figures 4 to 6, but many remain unpublished due to intellectual property rights. The dependency of the Ti3+/Ti4+ ratio on the oxygen partial pressure and the CaO/SiO2 ratio (based on the work by Tranell) is summarized in Figure 3. Lower oxygen partial pressures lead to greater reduction and hence higher Ti3+/Ti4+ ratios. Higher CaO/SiO2 ratios create a higher oxygen activity, which in turn decreases the Ti3+/Ti4+ ratio.
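The exact functional form of Ra is not reproduced in this text, so the sketch below assumes the simplest mole-fraction-weighted ratio of the two cation polarizabilities quoted above; it illustrates the bookkeeping only, not the published correlation.

```python
# Minimal sketch of the polarizability ratio Ra used as an oxygen-activity
# proxy. The functional form is an assumption (mole-fraction-weighted ratio);
# the cation polarizabilities are the values quoted in the text.
ALPHA_CA = 1.57    # cation polarizability of Ca2+, Angstrom^3 (from text)
ALPHA_SI = 0.284   # cation polarizability of Si4+, Angstrom^3 (from text)

def polarizability_ratio(x_cao: float, x_sio2: float) -> float:
    """Ra for a slag with given CaO and SiO2 mole fractions (assumed form)."""
    return (x_cao * ALPHA_CA) / (x_sio2 * ALPHA_SI)

# Example: a hypothetical slag with 2 mol% CaO and 4 mol% SiO2.
print(f"Ra = {polarizability_ratio(0.02, 0.04):.2f}")
```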
In a given smelting process the oxygen partial pressure is largely governed by the Fe-FeO equilibrium in the slag (Pistorius, 2007), while the CaO/SiO2 ratio is determined by the raw material quality. In an earlier paper Tranell also reported MgO to affect the Ti3+/Ti4+ ratio (Tranell, Ostrovski, and Jahanshahi, 1997). A correlation including the polarizability of Mg2+ did not fit the industrial data. The industrial data-set was small, though, and it does not exclude the possibility that MgO (and possibly Al2O3) influences the Ti3+/Ti4+ ratio in the same way as CaO and SiO2. Future work will investigate this possibility.

Liquidus temperatures

Pesl and Eric (1999) conducted experimental work on slags of the TiO2-Ti2O3-FeO system, controlling the oxygen partial pressure with CO-H2 gas mixtures and equilibrating slag samples at 1500°C and 1600°C in platinum or molybdenum crucibles for 6 to 8 hours before rapidly quenching them to capture the high-temperature phases. Their predicted liquidus isotherms for 1500°C and 1600°C are reproduced with axes in mass% and given in Figure 4 and Figure 5 as dotted lines. No accuracy or error estimate was reported. Seim melted slags in an induction furnace with a slag freeze-lining to protect the crucible walls (Seim, 2011; Seim and Kolbeinsen, 2010). Experiments were conducted in an Ar atmosphere, using titanium metal and haematite additions to control the level of reduction. Liquidus temperatures were determined from cooling curves captured by a spectropyrometer looking down onto the upper surface of the slag. The tolerances for individual measurements on the iron-saturated samples are given as 4.9°C to 33.3°C. Seim modelled a liquidus surface for the FeTiO3-Ti2O3-TiO2 system using the measured liquidus temperatures of the iron-saturated slags and expanding the thermodynamic data-set with two optimized ternary parameters. The model is considered to predict the measured liquidus temperatures of the iron-saturated slags accurately. The liquidus temperature measurements on the iron-unsaturated samples were unexpectedly and inexplicably lower and were not used in the modelling exercise. Despite this, the measured liquidus temperatures of the iron-unsaturated slags plot on average 108°C below the modelled liquidus and 59°C above the modelled solidus temperatures. The model isotherms for 1500°C, 1600°C, and 1700°C were reproduced with TiO2, Ti2O3, and FeO as the corners of the ternary and are shown as the broken lines in the figures. Thermodynamic data for the binary systems TiO2-FeO and Ti2O3-TiO2 were published by Eriksson and Pelton (1993), who illustrated the validity of using this data to calculate the solid-phase relationships in the ternary system below 1400°C. They did not publish liquidus predictions (Eriksson and Pelton, 1996). The maximum inaccuracy of the data was given as ±20°C. In the present work the thermodynamic data from Eriksson were used in the Modified Quasichemical Approach (Pelton and Blander, 1986) to calculate the liquidus temperatures in the TiO2-Ti2O3-FeO system. A temperature-dependent activity coefficient was calculated for the pseudobrookite M3O5 phase from the experimental data by Pesl (Pesl and Eric, 1999). The calculated liquidus isotherms for 1550°C, 1600°C, and 1700°C are shown in Figure 4, Figure 5, and Figure 6 as solid lines.
The low-temperature trough in these isotherms, along approximately 55% TiO2, is in line with the low melting point measured in argon by Brauer and Littke (1960) for the TiO2-Ti2O3 binary: 1660°C between TiO1.70 and TiO1.75 (43% to 53% TiO2). All three isotherm sets show a liquid slag field with (i) an upper phase boundary with the liquid slag + TiOx phase field and (ii) a lower phase boundary with the liquid slag + liquid iron phase field. Typical industrial slag compositions (Pistorius, 2002) are shown as dark squares on the ternaries. For the high-FeO slags, the dotted 1500°C and 1600°C upper-boundary isotherms by Pesl imply a 100°C drop in liquidus with a ±1% change in FeO (approximately 19% to 18%). This seems an unlikely discontinuity in the known FeO-liquidus temperature relationship of the system. The more likely case is a more gradual temperature gradient, as shown by the calculated 50°C-spaced isotherms in Figure 7. These are also in agreement with the isotherms by Seim, at least up to 20% Ti2O3. According to the calculated isotherms, the industrial slag series have liquidus temperatures ranging from 1550°C for 20% FeOeq up to 1700°C for approximately 9% FeOeq, while the broken-line isotherms (Seim model) indicate a liquidus range starting somewhere above 1600°C and extending to just over 1700°C. Given the stated tolerances and inaccuracies, the different sets of isotherms are regarded as giving similar results, especially in the area of concern for typical industrial TiO2 slags. It will be worthwhile, though, to clarify the discrepancy in the size of the liquid slag phase fields as predicted by the different sets of isotherms. Figure 7 shows how the liquidus increases with decreasing %FeO: higher reduction yields a higher slag liquidus. However, for a given FeO content, e.g. 10 mass%, higher reduction will increase the Ti3+/Ti4+ ratio, which will decrease the liquidus until the Ti3+/Ti4+ ratio crosses the low-liquidus trough between TiO2 and Ti2O3. On the lower-TiO2 and higher-Ti2O3 side of this trough, the liquidus will increase again with increasing Ti3+/Ti4+. In practice, where a process receives a stable ilmenite quality, higher reduction will lead to a simultaneous decrease in FeO and an increase in Ti2O3, and the net effect will be an increase in liquidus, though smaller than what would have resulted from a decrease in FeO only. To apply the phase temperature behaviour of the ternary FeO-TiOx system (Figure 7) to industrial slags, the impurity oxides are expressed as FeO and Ti2O3 equivalents (Borowiec, 2009; Pistorius, 2002). There are indications that MnO does not fully partition to the M3O5 phase but that a potentially significant portion separates to the silicate phase (Seim, 2011; Pesl and Eric, 2002). Without quantitative data on the behaviour of MnO, though, the correlation shown by Pistorius is followed. The liquidus isotherms in Figure 7 can therefore be applied to industrial slags when the FeO and Ti2O3 contents are expressed in their equivalent forms. The FeOeq and Ti2O3 eq values of the example slags (Table I), together with their calculated liquidus temperatures, are listed in Table II. High-TiO2 slags have a narrow gap between liquidus and solidus temperatures. Seim measured the liquidus and solidus temperatures of synthetic slags using differential thermal analysis/thermogravimetric analysis (Seim, 2011). For a TiO2-Ti2O3-5.6%FeO slag the liquidus and solidus were 1696°C and 1682°C respectively, and for a TiO2-Ti2O3-16.9%FeO slag, 1664°C and 1644°C respectively. These results imply a narrow temperature gap of 14°C to 20°C between liquidus and solidus.
Borowiec (2009) measured solidus and liquidus temperatures on slags with 11-12%, 6-7%, and 2% gangue impurities (the publication gives the impurities for the first and third slags, while the impurities for the second slag were deduced from other publications using the same slag). The temperature differences were 136°C, 83°C, and 66°C respectively (Borowiec, 2009). Provided the SiO2 content in the gangue impurities is low (i.e. <5% in >10% total gangue, and <3% in <7% total gangue), these differences give an indication of the solidus temperatures of high-TiO2 slags.

Heat capacity

The heat capacities of liquid TiO2 slags were reported by Kotze and Pistorius (2010) and are repeated here as Equation [5] (Cp in J/kg.K). The heat capacity values for the example slags in Table I are listed in Table III.

Density

With most liquid slag systems, a density calculation needs to incorporate the effects of mixing. For slags with only a small deviation from ideality, such as high-TiO2 slags (Eriksson and Pelton, 1993; Gaskell, 2008, p. 247), a weighted average of the densities of the pure liquids suffices. The density of liquid TiO2 was taken as 3.3 g/cm3 at 1600°C from the measurements by Dingwell (1991). The density of liquid Ti2O3 was measured by Ikemiya et al. (1993) under an Ar-H2 gas mixture; extrapolated to 1600°C, Ti2O3 has a density of 4.0 g/cm3. Finally, the density of liquid FeO was taken as 4.4 g/cm3 at 1600°C based on the measurements by Xin et al. (2019). The resultant calculated densities for the example slags (Table I) are given in Figure 8. It is concluded that typical high-TiO2 slags have a density range of 3.5 g/cm3 to 3.7 g/cm3. This is lower than the temperature-independent value of 3.8 g/cm3 often cited in the literature (Kotze and Pistorius, 2010; Zietsman and Pistorius, 2006). Figure 8 shows that the slag densities decrease with increasing temperature, as well as with increasing total TiO2.
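A minimal sketch of this weighted-average estimate follows, assuming ideal (additive-volume) mixing so that the reciprocal density is mass-fraction weighted; the pure-liquid densities are the 1600°C values quoted above, and the composition shown is hypothetical rather than one of the Table I slags.

```python
# Density estimate for a liquid high-TiO2 slag as a weighted average of the
# pure-liquid densities at 1600 C quoted in the text. Additive volumes are
# assumed (ideal mixing), so the reciprocal density is mass-fraction weighted.
RHO = {"TiO2": 3.3, "Ti2O3": 4.0, "FeO": 4.4}  # g/cm3 at 1600 C

def slag_density(mass_fractions: dict) -> float:
    """Mass fractions must sum to 1 over the three components."""
    return 1.0 / sum(w / RHO[ox] for ox, w in mass_fractions.items())

# Hypothetical example composition:
print(f"{slag_density({'TiO2': 0.60, 'Ti2O3': 0.30, 'FeO': 0.10}):.2f} g/cm3")
# -> about 3.6 g/cm3, within the 3.5-3.7 g/cm3 range concluded above.
```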
Electrical conductivity

Since electrical conduction is the movement of electron charges, the concept of electronic polarizability was also tested against the experimental electrical conductivity measurements. By using the polarizabilities of the oxide components (Table IV), it was possible to reproduce the electrical conductivities of the high-gangue TiO2 slags using a correlation of the form given in Equation [6] (electrical conductivity σ [Ω−1 cm−1]; electronic polarizability of the slag αslag [Å3], calculated with Equation [7] and the oxide polarizabilities given in Table IV; a a constant; and n a temperature-dependent constant). The calculated electrical conductivities for the high-gangue group (lower TiO2 eq) are given by the broken lines in Figure 9: these calculated values correspond remarkably well with the measured data from Desrosiers. Applying Equation [6] to the low-gangue group (high TiO2 eq) yielded fair, but not good, results. The addition of a temperature-dependent valency ratio Rvalency between the Fe and Ti cations, Equation [8], improved the calculation significantly: the results are shown by the solid lines in Figure 9. The electrical conductivities calculated for the example slags (Table I) are shown in Figure 10. The results for the 82%, 85%, and 88% TiO2 eq slags correspond with the measured values of similar slags shown in Figure 9. The predicted electrical conductivities for the 91% and 95% TiO2 slags cannot be verified against experimental data, but since the calculation is based on parameters which support the theory of electrical conductivity in slags, the prediction carries some level of confidence. Judging from the intervals between the lines of the five example slags, the increase in electrical conductivity accelerates with increasing TiO2 eq. A similar phenomenon is shown for high-FeO slags up to 100% FeO (Jiao and Themelis, 1988).

Viscosity

The viscosity of slags is known to be structure-dependent, and high-TiO2 slags are no different: above their liquidus temperatures TiO2 slags dissociate into Ti4+, Ti3+, and Fe2+ ions, which have no structure such as the SiO2 networks provide for silicate slags. Consequently, fully liquid TiO2 slags have very low viscosities. The measured data for 'low-FeO' Sorel slag (Handfield and Charette, 1971) and 95% QMM slag (Borowiec, 2009) are reproduced in Figure 11: above their liquidus temperatures these slags measured 0.3 dPa.s and 0.15 dPa.s. A recent study reports 0.6 dPa.s for fully liquid slags (Hu et al., 2018). Though these numbers differ, they are all very low and not sensitive to composition or temperature. A second point of importance is that the critical temperature at which the viscosity increases sharply is only slightly below the slags' liquidus temperatures (Borowiec, 2009; Hu et al., 2018). Considering these results, the viscosities of the example slags (Table I) are estimated as shown in Figure 12. The dominant feature in these viscosity curves is the sharp upwards turn, the estimated positioning of which was based on the calculated liquidus temperatures of the example slags given in Table II. In industrial smelting the slag bath is likely to contain some unreacted reductant. The Einstein-Roscoe equation was used to quantify the impact thereof on the viscosity of the liquid bath (Equation [9]: viscosity of the liquid with solids η [Pa.s]; viscosity of the fully liquid phase η0 [Pa.s]; f the volume fraction of solids; assumed reductant density 0.9 g/cm3). This is an approximation, because the Einstein-Roscoe equation was derived for solid spheres and reductant particles are often angular. The calculated viscosities for 0%, 5%, and 10 mass% reductant in the 88% TiO2 fully liquid slag are shown in Figure 13. At 10 mass% reductant the viscosity of the slag bath increases from around 0.4 dPa.s to 1.5 dPa.s. This is similar to a change from olive oil at 35°C to maple syrup at 25°C (Nierat, Musameh, and Abdel-Raziq, 2014; Ngadi and Yu, 2004): not unmanageable, as Handfield points out blast furnace slags are still tappable at 5 dPa.s, but also not insignificant considering the foaming tendency of TiO2 slag baths.
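Equation [9] is not reproduced in this text; one common statement of the Einstein-Roscoe relation for uniform spheres is η = η0 (1 − 1.35f)^−2.5, and with the 0.9 g/cm3 reductant density given above, plus an assumed slag density of 3.6 g/cm3, this form reproduces the quoted increase from about 0.4 to 1.5 dPa.s at 10 mass% reductant. The sketch below works through that conversion; the coefficients and slag density are assumptions.

```python
# Effect of suspended reductant on bath viscosity via the Einstein-Roscoe
# relation. The Roscoe form eta = eta0 * (1 - 1.35*f)**-2.5 is assumed; the
# volume fraction f is derived from mass percent using the 0.9 g/cm3
# reductant density given in the text and an assumed slag density.
ETA0 = 0.4          # fully liquid slag viscosity, dPa.s (from text)
RHO_RED = 0.9       # reductant density, g/cm3 (from text)
RHO_SLAG = 3.6      # liquid slag density, g/cm3 (from the density section)

def bath_viscosity(mass_pct_reductant: float) -> float:
    m = mass_pct_reductant / 100.0
    v_red = m / RHO_RED
    v_slag = (1.0 - m) / RHO_SLAG
    f = v_red / (v_red + v_slag)               # solids volume fraction
    return ETA0 * (1.0 - 1.35 * f) ** -2.5     # Einstein-Roscoe (Roscoe form)

for pct in (0, 5, 10):
    print(f"{pct:>2} mass% reductant: {bath_viscosity(pct):.2f} dPa.s")
# 10 mass% gives ~1.5 dPa.s, matching the change quoted in the text.
```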
Thermal conductivity

The thermal conductivity of solid TiO2 slags was reported as 0.00175T + 0.3 W/m°C, which gives a thermal conductivity range of 1.0 to 2.9 W/m°C from 400°C to 1500°C (Kotze and Pistorius, 2010). Thermal conductivity measurements on solid TiO2 slags from room temperature to 1050°C varied from just above 1 W/mK to approximately 2.5 W/mK, with a mostly, but not always, positive relationship with temperature (Heimo, 2018). A single measurement in the same study gave 5 W/m°C at 1400°C. In the absence of data for liquid slag, 1 W/m°C is sometimes used in modelling (Zietsman and Pistorius, 2006). In the following paragraphs an attempt is made to approximate the thermal conductivity of high-TiO2 slags based on the structure of these slags and their optical properties. The thermal conductivity of slags is a combination of lattice conduction klatt, radiation krad, and electronic conduction ke (Mills and Susa, 1992). Lattice conduction is structure-dependent because it relies on the velocity v of phonons through the slag and the mean free path L of phonons through the slag (heat capacity at constant pressure Cp):

k_latt = (1/3) Cp v L. [10]

In solids the crystal lattice provides the structure along which the phonons travel; in silicate slags, the SiO2 network provides the means. Thermal conductivity values measured with the transient hot-wire method, a technique which isolates lattice conductivity, range from 0.03 to 1 W/mK for silicate slags (Glaser and Sichen, 2013; Kang et al., 2012). Lattice thermal conductivities decrease with increasing temperature, reflecting the diminishing slag structure with increasing temperature. In TiO2 slags, judging from the low viscosities above liquidus temperatures, no significant structure exists. Lattice thermal conductivities in high-TiO2 slags are therefore expected to be small. Nevertheless, the lattice conductivity was estimated from a correlation between reported transient hot-wire thermal conductivities and calculated viscosities of silicate slags. The model used for the viscosity calculations is based on the work by Zang, a little-known model for SiO2-Al2O3-CaO-MgO-FeO-MnO-K2O slags, but one with good reproducibility of experimental data and with potential to be expanded to more oxide components (Kotze, 2017). The radiative component of thermal conductivity krad is calculated with Equation [11] (Mills and Susa, 1992):

k_rad = 16 σ n^2 T^3 / (3β), [11]

where σ is the Stefan-Boltzmann constant (5.67 × 10−8 W/m2K4), n the refractive index of the slag (dimensionless), T the temperature [K], and β the absorption coefficient [cm−1] (usually denoted a in the literature, but changed here to avoid confusion with the electronic polarizability mentioned earlier). The refractive indexes of the transition metal oxides are generally higher, 2 and more, compared with 1.535 for SiO2, 1.805 for CaO, 1.715 for MgO, and 1.79 for Al2O3. On this basis the radiative component in TiO2 slags could be expected to be higher than for silicate slags. However, the absorption coefficient of FeO is two orders of magnitude higher (Susa, Nagata, and Mills, 1993), and that of Ti2O3 three orders of magnitude higher (Li, 2016), than those of SiO2, CaO, MgO, and Al2O3. Consequently, the radiative component of high-TiO2 slags is low compared with low-FeO silicate slags. The electronic thermal conductivity ke was calculated with the Wiedemann-Franz formula (Ok et al., 2018), Equation [12]:

k_e = L σ T, [12]

where σ is the electrical conductivity [Ω−1 m−1], L the Lorenz number (2.44 × 10−8 WΩ/K2), and T the temperature [K]. Due to the high electrical conductivity of TiO2 slags, electronic conduction, which is negligible in silicate slags, can be significant for TiO2 slags.
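The radiative and electronic terms follow directly from Equations [11] and [12]. The sketch below evaluates them for illustrative inputs; the refractive index, absorption coefficient, and electrical conductivity values are assumptions of convenient order of magnitude, not Table I data.

```python
# Radiative and electronic contributions to thermal conductivity, using the
# forms given above: k_rad = 16*sigma*n^2*T^3 / (3*beta) and the
# Wiedemann-Franz relation k_e = L*sigma_e*T.
SB = 5.67e-8        # Stefan-Boltzmann constant, W/m2K4
LORENZ = 2.44e-8    # Lorenz number, W.Ohm/K2

def k_radiative(n: float, beta_cm: float, T: float) -> float:
    beta_m = beta_cm * 100.0            # convert absorption from cm-1 to m-1
    return 16.0 * SB * n**2 * T**3 / (3.0 * beta_m)

def k_electronic(sigma_e: float, T: float) -> float:
    """sigma_e in Ohm-1 m-1, T in K."""
    return LORENZ * sigma_e * T

T = 1873.0  # 1600 C
# Illustrative inputs: refractive index ~2, a strongly absorbing slag
# (beta ~ 1000 cm-1), and electrical conductivity of order 1e4 Ohm-1 m-1.
print(f"k_rad ~ {k_radiative(2.0, 1000.0, T):.3f} W/mK")  # small, as argued
print(f"k_e   ~ {k_electronic(1e4, T):.2f} W/mK")          # dominant term
```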
The estimated lattice and the calculated radiative and electronic conductivities of example slags 1, 3, and 5 (94%, 88%, and 82% TiO2 eq) are shown in Figure 14. Total thermal conductivities are shown with the thicker solid lines. The total thermal conductivity of liquid high-TiO2 slags is estimated to range from 0.4 W/mK to close to 1 W/mK over the liquid temperature range. The thermal conductivity is predicted to increase non-linearly with increasing TiO2 eq (Figure 15), which is attributed to the increase in electronic conduction with increasing TiO2 eq.

Summary

Experimentally measured properties of high-TiO2 slags were investigated and correlated with theoretically based parameters. Based on these correlations, the properties of five typical high-TiO2 slags were calculated.

➤ The Ti3+/Ti4+ ratio of the liquid slags depends on the oxygen partial pressure and the oxygen activity in the bath. The polarizability ratio between Ca2+ and Si4+ cations was found to be a good parameter with which to quantify the oxygen activity. Future work will endeavor to test this concept against more TiO2 slag data and other slag systems.

➤ Liquidus temperatures increase with decreasing FeO, but can increase or decrease with increasing Ti2O3 depending on the ratio between Ti3+ and Ti4+. Following on from the Ti3+/Ti4+ correlations, liquidus temperatures therefore depend on FeO, oxygen partial pressure, and the CaO and SiO2 impurities. Other impurities such as MgO, MnO, Al2O3, Cr2O3, and V2O5 also affect liquidus values when they stabilize the M3O5 pseudobrookite phase. The partitioning of MnO between the M3O5 and silicate phases needs to be reviewed.

➤ The heat capacities of liquid high-TiO2 slags cover a narrow range, from 1000 to 1020 J/kgK.

➤ Liquid slag densities range from 3.5 to 3.7 g/cm3, with the lower end of the range applying at higher temperatures and to higher-TiO2 eq slags.

➤ The electrical conductivities of liquid high-TiO2 slags are one to two orders of magnitude higher than those of silicate slags and increase with increasing temperature. This is an important point for both AC and DC furnace design. The electrical conductivity of high-gangue TiO2 slags correlates with the slags' electronic polarizability, while for low-gangue TiO2 slags (high TiO2 eq) the electrical conductivity was reproduced by using the slags' electronic polarizability together with the valency ratio between the Fe and Ti cations.

➤ The viscosities of high-TiO2 slags above their liquidus temperatures are very low but increase sharply just below the liquidus. Though the viscosities of fully liquid slags are low, the presence of unreacted reductant in the bath will increase the bath viscosity.

➤ The thermal conductivities of liquid high-TiO2 slags were derived from their viscosities and optical properties. For the example slags the thermal conductivities are estimated to range from 0.5 to 1 W/mK. Electronic thermal conduction is dominant and contributes the most to the total thermal conductivity.

Though the intention of this paper is to create a basis for extrapolation to TiO2 slags not covered in the specific set of experimental work, the correlations derived here are still only applicable to typical TiO2 (pseudobrookite M3O5) slags. Slag properties are elusive: the need to estimate the thermal conductivity of TiO2 slags from theoretical considerations is a case in point. It is a complex field, but our understanding of slag properties has expanded over approximately the last two decades. It is the author's hope that many a publication on the properties of TiO2 slags is still to see the light.
6,413.4
2020-02-01T00:00:00.000
[ "Materials Science" ]
Language Processing Model Construction and Simulation Based on Hybrid CNN and LSTM

Deep learning is the latest trend in machine learning and artificial intelligence research. As a new field that has developed rapidly over the past decade, it has attracted more and more researchers' attention. The Convolutional Neural Network (CNN) model is one of the most important classical structures in deep learning, and its performance on deep learning tasks has been gradually improved in recent years. Convolutional neural networks have been widely used in image classification, target detection, semantic segmentation, and natural language processing because they can automatically learn the feature representation of sample data. Firstly, this paper analyzes the structure of the typical convolutional neural network model, which increases network depth and width in order to improve performance, analyzes the network structures that further improve model performance by using the attention mechanism, and then summarizes and analyzes current special model structures. To further improve the text language processing effect, a convolutional neural network model, a hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model based on the fusion of text features and language knowledge, is proposed. The text features and language knowledge are integrated into the language processing model, and the accuracy of the text language processing model is improved by parameter optimization. Experimental results on the datasets show that the accuracy of the proposed model reaches 93.0%, which is better than the reference models in the literature.

Introduction

Text language processing is widely used in the fields of network public opinion, crisis public relations, brand marketing, and so on. A large amount of user comment data has accumulated on online media, reflecting netizens' emotions, attitudes, and tendencies toward hot social events, policy implementation, and products and services. Because of the strong practical value of network review data analysis, it has been studied in depth in industry and academia. The language processing problem has been studied on travel blog data to help tourists choose their favorite travel destinations. The classification of comment data into positive, negative, and neutral categories has become one of the key research areas in the field of natural language processing [1]. In the network environment, text expression is characteristically non-standard, often containing acronyms, network neologisms, spelling mistakes, grammar errors, and other problems, which brings great challenges to language processing. Methods for solving language processing problems mainly include dictionary-based methods, traditional machine learning methods, and deep learning methods [2-4].

In order to improve the accuracy of network text language processing and to study the role of language knowledge and affective knowledge in the model, this paper proposes a convolutional neural network model based on the fusion of text features and language knowledge, which integrates words, parts of speech, affective dictionaries, and other external knowledge into the language processing model. Firstly, the word vector training model is used to train the word vectors, and parts of speech and affective words are added to produce a variety of feature data, which is used to eliminate word ambiguity and express emotional information.
Then, the convolutional neural network model was constructed, and the various features were fused into the model through feature-layer fusion and classification-layer fusion. Finally, the neural network model was trained to evaluate the language processing effect of the model [5-9]. The neural network model has been widely used in the fields of image processing, speech recognition, and text analysis, and has achieved better results than traditional machine learning methods. A classical convolutional neural network model (LeNet) was proposed and applied to image classification. In text language processing, neural network models have also achieved very good results. The first use of convolutional neural networks for sentence classification introduced deep learning into the field of text classification. An LSTM recurrent neural network was used to produce sentence vector representations, discourse vectors were then derived from the sentence vectors, and discourse-level sentiment classification was carried out [10].

Since the structure of a neural network determines the effect of the model to a great extent, scholars have conducted in-depth research on neural network structures. For example, by integrating the advantages of hybrid CNN and LSTM, attention, and other models to overcome the defects of a single neural network model, the Cola emotional classification neural network model was proposed. According to the characteristics of short text classification, a character-level neural network model was proposed by combining hybrid CNN and LSTM [11].

In neural network models, word vectors are used to represent text information, and the representation ability of the word vectors is an important factor affecting the model effect. Word2vec, a word vector training tool released by Google in 2013, implements two word embedding models, CBOW and Skip-gram, and has become the basis for deep learning models in the field of natural language processing. According to the need for language processing to express emotional information, researchers have improved the word vector training model. The sentiment word embedding model SSWE is used to train word vectors to improve the effect of the language processing model [12]. Word vectors containing emotional information were trained using emotion dictionaries and distant supervision information. Concatenating words with their parts of speech before training the vectors disambiguates words and improves the text representation ability of word vectors.
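Since several of the cited approaches build on Word2vec-style embeddings, a minimal gensim sketch of Skip-gram training is given below; the toy corpus and hyperparameters are illustrative assumptions, whereas the cited models were trained on large review corpora.

```python
# Minimal sketch of training Skip-gram word vectors with gensim's Word2Vec.
from gensim.models import Word2Vec

corpus = [
    ["the", "service", "was", "excellent", "and", "friendly"],
    ["terrible", "food", "and", "slow", "service"],
    ["excellent", "food", "friendly", "staff"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=128,   # embedding dimension, matching the 128 used later
    window=5,          # context window
    min_count=1,       # keep all words in this tiny corpus
    sg=1,              # 1 = Skip-gram, 0 = CBOW
    epochs=50,
)

print(model.wv["excellent"].shape)        # (128,)
print(model.wv.most_similar("service"))   # nearest neighbours in the space
```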
The representation and use of textual emotional features play an important role in language processing. Scholars have studied a variety of emotional features and their combinations. A multi-attention convolutional neural network model was proposed for the target-specific sentiment analysis task, in which three kinds of attention feature matrices are used: word, part of speech, and word position. A multichannel convolutional neural network model was proposed for the sentiment analysis of Chinese microblogs, which integrates multiple emotion information features such as words, parts of speech, and word positions [13]. A convolutional neural network model combining part-of-speech information and a target attention mechanism was proposed for target-level sentiment classification.

With the research and development of deep learning theory, researchers have proposed a series of convolutional neural network models. In order to compare the quality of different models, the recognition rates reported in the literature for classification tasks were collected and sorted, as shown in Figure 1. Because some models were not tested on the ImageNet dataset, the recognition rates on the CIFAR-100 or MNIST datasets are given instead. The Top-1 recognition rate refers to the probability that the class the CNN model predicts with maximum probability is the correct category. The Top-5 recognition rate refers to the probability that the correct category is among the five classes the CNN model predicts with the highest probability. Thanks to a series of breakthrough research results and continuous improvement for different task requirements, convolutional neural networks have been successfully applied to different tasks such as target detection, semantic segmentation, and natural language processing.

Based on the above understanding, this paper briefly introduces the development history of the convolutional neural network and then analyzes typical convolutional neural network models in terms of the stacking structure, the network-in-network structure, the residual structure, and the attention mechanism, together with the resulting performance improvements, and further introduces special convolutional neural network structures. Finally, the typical applications of convolutional neural networks in target detection, semantic segmentation, and natural language processing are discussed, together with the existing problems and future development directions of deep convolutional neural networks.

This paper proposes a joint architecture of hybrid CNN and LSTM, which takes the local features extracted by the CNN as the input of the RNN to conduct sentiment analysis on short texts. In order to reduce the loss of local information and capture the long-term dependencies of the input sequence, convolutional and recurrent layers are used in the network model, and the long short-term memory model is used to replace the pooling layer. The result is 6.3% better than the support vector machine method and 1.2% better than the hybrid CNN and LSTM with a pooling layer, though the results are sometimes erratic.

In Section 2, related work is discussed. In Section 3, a typical neural network model suitable for the analysis method of this paper is described, introducing the convolutional neural network model and its operating units. In Section 4, a language processing model based on the convolutional neural network is constructed and simulated, and the superiority of the proposed method is demonstrated through data analysis. In Section 5, the work of this paper is concluded and future work is listed.

Related Works

Choi et al. [14] proposed that the neural network model has been successful in sentiment classification, but research on text feature representation, language knowledge representation, emotion knowledge representation, and multi-feature fusion is still insufficient. Titano et al. [15] found that, on the basis of word vector representation, adding external knowledge such as part-of-speech information and affective word information is beneficial. Then, Rao et al.
[16] clarified that when the convolutional neural network model structure is improved and multiple features are fused at the feature layer and the classification layer, the language processing effect improves. Song et al. [17] found that word vectors (WV) trained by a word embedding model can represent the context and semantic information of words. Zhu et al. [18] proposed that the part-of-speech vector (POSV) be trained after combining words and parts of speech, so that different vectors represent the different parts of speech of the same word, thus avoiding the ambiguity of some words. The sentiment word vector (SWV) is trained with sentiment words and text sentiment tags, following the research of Liu et al. [19].

Basic Convolutional Neural Network and Operation Unit

The basic convolutional neural network includes the basic operating units of a CNN: the convolutional layer, the pooling layer, the nonlinear unit, and the fully connected layer. A typical CNN architecture usually alternates convolutional layers with pooling layers and finally outputs the results through one or more fully connected layers. In some cases, a global average pooling layer is used to replace the fully connected layer, and batch normalization and other operations are added to further optimize the performance of the CNN. The structure of the convolutional neural network is shown in Figure 2:

(1) The convolution layer (CONV), also known as the feature extraction layer, is mainly used for extracting features from images. It is composed of a set of convolution kernels, and the weights of the convolution kernels are updated automatically according to the objective function. The convolutional layer is the core of the convolutional neural network.

(2) The pooling layer is also called the down-sampling layer. It generally carries out dimension reduction between two consecutive convolutional layers, which can effectively reduce the number of model parameters and the overfitting of the network. There are maximum pooling layers and average pooling layers [20-22].

(3) The nonlinear unit is composed of nonlinear activation functions, which can be divided into saturating nonlinear activation functions, such as the Sigmoid and Tanh functions, and non-saturating nonlinear activation functions, such as the ReLU and Leaky ReLU functions. The nonlinear unit applies a nonlinear mapping to the output of the convolutional layer, which enables the neural network to approximate any nonlinear function and improves the feature expression ability of the model.

(4) Batch normalization (BN) is the process of transforming the input data into a standard normal distribution so that the input values of the nonlinear units fall into the range where the gradient is large, which avoids the vanishing gradient problem and accelerates the convergence of model training.

The application of CNNs in NLP systems has achieved very good results. The convolutional layer is similar to a sliding window over a matrix, and a CNN contains many nonlinear activation functions, such as ReLU or Tanh. In the classical feedforward neural network, each input of a neuron is connected to each output in the next layer, which is called a fully connected layer.
In image tasks, the CNN extracts image features according to the size of the convolution kernel and condenses them in the pooling layer. For example, in image classification, a CNN might learn to detect edges from the raw pixels in the first layer, then use the edges to detect simple shapes in the second layer, and then use these shapes to detect higher-level features, such as face shapes. The final layer is a classifier that uses these high-level features. As an alternative to image pixels, the input for most NLP tasks consists of sentences and documents represented in matrix form. Each row represents the vector of one word, usually called the word vector. In NLP, the width of a convolution kernel is the dimension of the word vector; that is, the width of the filter is the same as the width of the input matrix. The region size may vary, but it is usually a sliding window over 2-5 words at a time.

The goal of the RNN is to make use of preceding context in text sentiment analysis by exploiting sequence information [23,24]. In a traditional neural network, all the inputs are independent of each other, but this approach is inefficient for many tasks in NLP [25], such as predicting the next word in a sentence. In this case, in order to predict the next word from the context, it is important to know the previous words, so researchers came up with the RNN. The RNN has been a great success on NLP tasks. The RNN has memory to capture information in arbitrarily long sequences. The network structure of the RNN is shown in Figure 3. The RNN is a kind of deep neural network which has been widely used in time series modelling. The goal behind using an RNN for sentence embedding is to find a dense, low-dimensional semantic representation by processing each word in a sentence sequentially and mapping it into a low-dimensional vector. With a simple RNN, the output and hidden state can be calculated as follows [26]:

o_t = g(W_0 h_t), (1)
h_t = f(W_x x_t + W_h h_{t−1}), (2)

where W_0, W_h, and W_x are parameter matrices of the neural network, f and g are nonlinear activation functions, x_t represents the input of the neural network at time t, and h_{t−1} represents the state of the neural network at the previous time step. Equation (2) indicates that the state at time t is related not only to the current input but also to the state at time t − 1. By analyzing the relationships between words, the RNN preserves the semantics of all previous text in a fixed-size hidden layer, at the cost of increased time complexity. The high-level statistics captured by the RNN can be valuable for capturing the semantics of long text. However, the RNN is a biased model in which the most recent words are more influential than earlier words. This can be inefficient when used to capture information about an entire document. So, in order to overcome this difficulty of the RNN, the Long Short-Term Memory (LSTM) model is introduced. This article uses LSTM to capture the long-term dependencies of a sentence. The LSTM network structure is shown in Figure 4.
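A minimal NumPy sketch of the recurrent update in equations (1) and (2) follows; the dimensions, random initialization, and toy sequence are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the recurrent update reconstructed above:
# h_t = tanh(W_x x_t + W_h h_{t-1}),  o_t = softmax(W_0 h_t).
rng = np.random.default_rng(0)
d_in, d_h, d_out = 128, 64, 2
W_x = rng.normal(scale=0.1, size=(d_h, d_in))
W_h = rng.normal(scale=0.1, size=(d_h, d_h))
W_0 = rng.normal(scale=0.1, size=(d_out, d_h))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_step(x_t, h_prev):
    h_t = np.tanh(W_x @ x_t + W_h @ h_prev)   # equation (2)
    return h_t, softmax(W_0 @ h_t)            # equation (1)

# Run over a toy sequence of 5 word vectors.
h = np.zeros(d_h)
for x_t in rng.normal(size=(5, d_in)):
    h, o = rnn_step(x_t, h)
print(o)  # predicted class probabilities after the last step
```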
Typical Convolutional Neural Network Model

In this chapter, four network models are compared and analyzed from three aspects: model mechanism, advantages and disadvantages, and application suggestions, as shown in Table 1. It can be seen from the analysis of Table 1 that, because the mechanisms of the network models differ, it is necessary to select and optimize a network according to its characteristics when applying it.

Stacking Structure Model

The stacking structure model refers to a network model formed only through the stacking of network layers, without other topological connections. The network-in-network (NIN) architecture model was proposed in NIN and has had a profound impact because it uses a small number of parameters to achieve the effect of AlexNet. In classification tasks, input features are usually highly nonlinear. The NIN network introduces a micro-network into each convolutional layer, which can abstract the features of each local block better than the stacking structure. After introducing a micro-network into each convolutional layer, the network depth and width are increased and the feature expression ability of the network is enhanced.

Residual Structure Model

A residual structure model is a model structure in which a shortcut (short-circuit) mechanism is introduced, so that the output of the model is expressed as a linear superposition of the input and its nonlinear transformation. In convolutional neural network models, from the 8-layer AlexNet, to the 19-layer VGGNet, and then to the 22-layer GoogLeNet, the number of layers has gradually increased and model performance has improved; a deeper network model means better nonlinear expression ability, which can better fit complex features. Since the number of channels in the residual unit may be inconsistent with the output, the identity mapping cannot always be used directly. However, if a 1 × 1 convolution layer is used to increase the dimension of the input channels, the information transmission between channels will be hindered. The residual network therefore adopts the residual unit structure shown in Figure 5: in the residual path, channels filled with zero matrices are appended after the input feature channels and directly added to the output channels.

Attention Mechanism Model

The attention mechanism model automatically learns which features need attention and suppresses other, useless features. From the models introduced previously, it can be seen that much of the researchers' work improves model performance in the spatial dimension, while SENet can obtain the importance of each channel feature automatically, increasing the weights of useful feature channels and suppressing useless ones; it is a channel attention model. Combining the channel attention mechanism with the spatial attention mechanism gives better feature expression ability than a single attention mechanism model, and as a lightweight module it can be seamlessly integrated into any current CNN architecture. The attention mechanism gives the neural network the ability to focus on a subset of its input features and alleviates the problem of information overload.

Hybrid CNN and LSTM-Based Language Processing Model Construction and Simulation

Hybrid CNN and LSTM-Based Language Processing Model Construction
Hybrid CNN and LSTM-Based Language Processing Model Construction and Simulation

Hybrid CNN and LSTM-Based Language Processing Model Construction. The framework model is composed of a convolutional neural network and a recurrent neural network. First, the model architecture takes word vectors as input and feeds them into the convolutional neural network, which learns to extract high-dimensional features; these are then passed to a long short-term memory recurrent neural network language model, and finally a classifier layer is added. Figure 6 shows the Hybrid CNN and LSTM framework proposed in this paper:

(1) Word vector: the first layer of the network trains each word of the emotional text into a word vector carrying semantic information. The input of the model is a sequence of words. In the experiment, the maximum sentence length is set to 100; if a sentence does not reach the maximum length, it is padded with zeros. Each emotional sentence can thus be represented as a 100 × 128 matrix carrying both the semantic and the positional information of the words.

(2) Convolution layer: in the first layer structure of the network, word vectors represent each sentence as a 100 × 128 matrix formed by concatenating the individual word vectors. Convolution kernels are used to extract high-dimensional features from the text. The difference between text convolution and image convolution is that the size of an image convolution kernel can be set arbitrarily, while the width of a text convolution kernel must match the length of the word vector. In the experiment, the convolution kernels are set to 3 × 128, 4 × 128, and 5 × 128, respectively. Convolving the sentence in this way yields 98 × 1, 97 × 1, and 96 × 1 sentence features, which are then concatenated and zero-padded to form a 98 × 3 matrix that is fed into the long short-term memory model.

(3) Recurrent layer: the RNN is a neural network structure specialized for sequence modeling. At each moment t it takes an input vector x and a hidden state h and applies the recurrent update of equation (2). Learning long-term dependencies with an ordinary RNN is difficult because of the exploding gradient problem; LSTM overcomes this problem with an input gate, a forget gate, and an output gate [27].

(4) Classification layer: in principle, the classification layer is a logistic regression classifier. A fixed-size input from the layer below is fully connected to the classification layer, followed by a Softmax activation function that computes the predicted probabilities for all classes (a code sketch of this architecture is given below).

Simulation of Language Processing Model Based on Hybrid CNN and LSTM. Convolutional neural networks have produced a series of breakthroughs in image classification. Different applications use a CNN as the feature-extraction backbone and attach different functional units to form new application models; applied across diverse deep learning tasks, CNNs have gradually replaced traditional methods because of their excellent detection performance. At present, they are a research hotspot in fields such as target detection, semantic segmentation, natural language processing, and so on.
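The architecture just described can be sketched as follows; this is a best-effort reading, assuming PyTorch, with the dimensions taken from the text (sentence length 100, word vectors of size 128, kernels 3/4/5, hidden state 128) and all remaining wiring details being our assumptions.

    import torch
    import torch.nn as nn

    class HybridCNNLSTM(nn.Module):
        """Sketch: word vectors -> parallel convolutions -> LSTM -> classifier."""
        def __init__(self, vocab_size, num_classes, emb_dim=128, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Kernels of height 3/4/5 spanning the full 128-dim word vector.
            self.convs = nn.ModuleList(
                nn.Conv2d(1, 1, kernel_size=(k, emb_dim)) for k in (3, 4, 5))
            self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
            self.drop = nn.Dropout(0.5)          # dropout between the CNN and LSTM outputs
            self.fc = nn.Linear(hidden, num_classes)

        def forward(self, tokens):               # tokens: (batch, 100)
            x = self.embed(tokens).unsqueeze(1)  # (batch, 1, 100, 128)
            feats = [torch.relu(c(x)).squeeze(3) for c in self.convs]  # 98/97/96-long maps
            L = feats[0].shape[2]
            # Zero-pad the 97- and 96-long maps to 98 and stack into a (batch, 98, 3) matrix.
            feats = [nn.functional.pad(f, (0, L - f.shape[2])) for f in feats]
            seq = torch.cat(feats, dim=1).transpose(1, 2)
            _, (h, _) = self.lstm(seq)
            return self.fc(self.drop(h[-1]))     # softmax is applied inside the loss

    model = HybridCNNLSTM(vocab_size=20000, num_classes=2)
    logits = model(torch.randint(0, 20000, (4, 100)))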
In this paper, the language processing model is constructed as above. Precision, Recall, and F-score are used as evaluation indexes; their calculation is shown in equations (4)-(6),

    Precision = HW / (HW + Hb),    (4)
    Recall = HW / (HW + Fn),       (5)
    F-score = 2 × Precision × Recall / (Precision + Recall),    (6)

where HW is the number of correct results in the classification output, Hb is the number of errors in the classification output, and Fn is the number of samples of this class in the data set that were classified incorrectly. The F-score is the harmonic value that jointly considers accuracy and recall and reflects the overall effect of the model (a small helper computing these indexes is sketched below).

On the basis of text features, the Hybrid CNN and LSTM model adds linguistic knowledge such as part of speech and sentiment words as input data to enhance the semantic and sentiment information of the text. The experiment tests the effect of the model on sentiment classification by adjusting the number and combination of input features. Combining the word vector (WV), the part-of-speech vector (POSV), and the sentiment word vector (SWV) yields WV, POSV, SWV, WV + POSV, WV + SWV, POSV + SWV, and WV + POSV + SWV. The experimental results of feature fusion for the classification-layer fusion model Hybrid CNN and LSTM are shown in Table 2. It can be seen from Table 2 that the combination of word features, part-of-speech features, and sentiment word features, WV + POSV + SWV, achieves the best sentiment classification result, significantly higher than the combinations of single features or two features. The positive-category F value, negative-category F value, and macro-average F value of the three-feature combination reached 92.8%, 93.2%, and 93.0%, higher than those of the other feature combinations, indicating that combining multiple features in the classification-layer fusion model can improve the classification effect. The experiment proves that the accuracy of the convolutional neural network sentiment classification model can be effectively improved by adding external linguistic knowledge, such as part-of-speech features and sentiment word features, on top of the text features. The fitting curve of Hybrid CNN and LSTM is shown in Figure 7; the curve is relatively smooth and the gap between different stages is small, indicating that the algorithm has strong stability.

Stochastic gradient descent (SGD) is used to train the network, and the backpropagation algorithm is used to compute the gradients. By adding a recurrent layer to the model instead of a pooling layer, the number of convolutional layers needed to capture long-term dependencies can be reduced effectively. Therefore, the convolution layer and the recurrent layer are combined into a single model. The architectural goal is to reduce the stacking of multiple convolutional and pooling layers in the network and thus reduce the loss of detailed local information. Accordingly, in the proposed model, rectified linear units (ReLU) are used as the activation function of the convolutional layer, LSTM is used for the recurrent layer, and the hidden state dimension is d = 128. For the two datasets, the number of training epochs varies between 5 and 20. The model is compared with methods based on word embeddings and convolutional architectures, as well as with different deep learning and traditional methods. Once the regularization, learning rate, and dropout rate parameters are fixed, sentence features are extracted using the convolutional layer, and the recurrent layer demonstrates the robustness of the proposed method across multiple domains. The learning rates of the convolutional layer and the recurrent layer are set to 0.01 and the dropout rate is set to 0.5.
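The evaluation indexes of equations (4)-(6) reduce to a few lines of arithmetic; the helper below uses the quantities HW, Hb, and Fn exactly as defined above, with invented toy numbers.

    def precision_recall_fscore(hw, hb, fn):
        """Equations (4)-(6): hw = correct results, hb = wrong results, fn = missed samples."""
        precision = hw / (hw + hb)
        recall = hw / (hw + fn)
        fscore = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
        return precision, recall, fscore

    # Toy counts only (not the paper's experimental data):
    p, r, f = precision_recall_fscore(hw=928, hb=72, fn=68)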
In this paper, Dropout is used as an effective method to regularize the deep neural network. Dropout prevents the co-adaptation of the hidden units; we also constrain the l2 norms of the weight vectors and insert the Dropout module between the Hybrid CNN and LSTM layers to regularize the model further. The accuracy of the deep learning classification algorithms is generally higher than that of the machine learning algorithms; the support vector machine, the best of the machine learning algorithms, differs from the proposed method by only 1.7%, and both the support vector machine and Hybrid CNN and LSTM perform better than the BOW + CNN model. The reason may be that the BOW + CNN model has not yet been trained to its optimal parameters, possibly owing to fitting problems, etc. As can be seen from Figure 8, the method proposed in this paper achieves the best effect. The accuracy of the proposed classification algorithm is clearly higher than that of the BOW + CNN method over time, and the resulting model is more expressive and intuitive. Among the deep learning models, the model proposed in this paper not only has the highest accuracy but also the fewest parameters.

In terms of feature extraction, unlike Chinese, the Uyghur language has natural word segmentation marks: spaces are used as segmentation marks between words. In the experiment, Unigram and Bigram were used as feature extraction methods, respectively (a minimal extractor is sketched below). From the classification results, the effect of Uyghur text sentiment classification using Bigram features was significantly better than that using Unigram features. The model proposed in this paper also achieves good results on the Uyghur language: it is 6.3% better than the support vector machine method and 1.2% better than the LSTM-CNN with a pooling layer. The validity of the proposed method is thus demonstrated both theoretically and experimentally.

Although the convolutional neural network (CNN) has the advantage of learning to extract locally invariant high-dimensional features, it requires many convolution layers to capture long-term dependencies because of the locality of convolution and pooling. This situation becomes more serious as the length of the input sequence increases; ultimately, a very deep network with many convolutional layers is required. This article proposes a new framework to overcome this problem, in particular to capture the word information in sentences while reducing the number of parameters in the architecture. On top of the input word vectors, this framework combines a convolutional neural network with a recurrent neural network. Even with only one recurrent layer, the ordering information can be retained. Therefore, using a recurrent layer instead of a pooling layer incurs less loss of detailed local information and captures long-term dependencies more efficiently. The proposed Hybrid CNN and LSTM method performs well on the two datasets and achieves competitive classification accuracy while outperforming other methods. Experimental results show that the same level of classification performance can be achieved with a smaller architecture. In future research, this method is expected to be applied to other fields such as information retrieval or machine translation.
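As promised above, a minimal n-gram extractor for space-delimited text might look as follows (the sample "sentence" is a placeholder, not Uyghur data):

    def ngrams(sentence, n):
        """Split on spaces (the word boundary marker) and emit n-grams of words."""
        words = sentence.split()
        return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

    tokens = "w1 w2 w3 w4"        # placeholder for a space-delimited sentence
    unigrams = ngrams(tokens, 1)  # ['w1', 'w2', 'w3', 'w4']
    bigrams = ngrams(tokens, 2)   # ['w1 w2', 'w2 w3', 'w3 w4']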
Conclusion

With the development of research on convolutional neural networks, their performance and their model complexity have both increased. In this paper, a typical convolutional neural network model with excellent performance, Hybrid CNN and LSTM, is analyzed. Typical convolutional neural network models have made remarkable achievements and attain high accuracy in image classification and recognition; the key techniques include increasing the width and depth of the network structure and merging the attention mechanisms of the channel domain and the spatial domain.

However, by adding certain specific noise to an original image, an artificial disturbance can easily make a neural network model misclassify the image. How to solve this problem and improve the generalization ability of the model remains to be addressed. Moreover, as the depth and width of neural networks increase, the training cost gradually grows; if prior knowledge of the specific problem could be incorporated into model construction, training would be greatly accelerated. In addition, there is still large room for structural research on convolutional neural networks. Improving model performance requires more reasonable network structure design, and since the hyperparameters of a network model are set by experiment and experience, the quantitative analysis of parameters is an open problem for convolutional neural networks. Although convolutional neural networks are a very active area of research, a complete mathematical explanation and proof of their behavior is still lacking. Carrying out the related theoretical research is of great significance for the further development of convolutional neural networks and for remedying the defects of current network structures.
The special models listed in this paper also suggest further design ideas for the traditional convolutional neural network model:

(1) Typical convolutional neural network models need lightweight design. In the past, research on convolutional neural networks focused on algorithm design, and the specific deployment platform of the model was not considered. Designing hardware-friendly model structures will help to further improve model performance and is a key research direction for model structure design.

(2) Strengthen convolution model structures for weakly supervised or unsupervised learning. In the real world, unsupervised learning is more common and conforms to the thinking mode of the human brain. Although unsupervised and weakly supervised learning have made some progress in image recognition, there is still a large gap in recognition accuracy between unsupervised or semisupervised learning and supervised learning.

(3) Construct multi-input convolutional neural network model structures. Multi-information input can make full use of the implicit feature expression in the original data and obtain a better recognition effect at a lower training cost, and sharing the network structure in the recognition process can further accelerate recognition.

(4) Study more efficient feature generation methods. Efficient redundant-feature generation can reduce the number of model parameters and generate richer feature maps, which is also worth studying.

The Hybrid CNN and LSTM model is an important research field. Typical and specific convolutional neural network models have been applied in areas such as intelligent security, virtual reality, intelligent healthcare, autonomous driving, wearable devices, and mobile payments. The development of deep neural network models plays a key role in leading the future development of science and technology and of the artificial intelligence industry. Future research can further solve the existing problems and realize their application value.

Figure 1: The recognition rates of models in the literature on classification tasks. Figure 2: The structure of the convolutional neural network. Figure 3: The network structure of the RNN. Figure 5: Dares Net adopts the residual unit structure. Figure 6: The Hybrid CNN and LSTM framework proposed in this paper. Table 1: Comparative analysis of four models. (The table notes that AlexNet extends LeNet and adds Dropout and Local Response Normalization (LRN) to prevent network overfitting; owing to early GPU memory limits, this stacking structure model was split and trained cooperatively on two GPUs, whereas with the development of hardware platforms a single GPU can now train the network without splitting the model structure.) Table 2: The experimental results of feature fusion of the classification layer fusion model Hybrid CNN and LSTM.
Trisubstituted Aryl Cyclohexanecarboxylates (TACC): A Simple, New Molecular Scaffold for Antibiotics Design

A new class of potential antibacterial agents has been synthesized on a new molecular scaffold of cyclohexane carboxylate. We have tagged this new class of compounds TACCs (Trisubstituted Aryl Cyclohexanecarboxylate). These new molecules are structural analogues of Activators of Self-Compartmentalizing Proteases 4 and 5 (ACP 4 and 5) and were synthesized to circumvent the drug-like property (drug-ability) challenges and liabilities noted in ACP 4 and 5. A pseudo-Robinson annulation protocol was used to furnish this new class of potential antibiotics. A structure-activity relationship (SAR) study was done to identify the pharmacophore(s) in this molecular scaffold. A selection of these compounds was used in our preliminary antibacterial inhibitory activity studies on Bacillus mycoides and Bacillus subtilis. These preliminary studies show that the TACCs exhibited equal, and in some cases better, antibacterial activity than ACP 4 and 5.

Introduction

A tremendous increase in the worldwide spread of antibiotic-resistant bacteria has spurred a lot of interest in the search for antibiotics with new modes of action. Current drug discovery and development efforts are focused on modifying existing classes of antibacterial agents to improve potency and efficacy, provide a broader spectrum of activity, reduce resistance, and improve pharmacodynamic properties [1]. Others focus on identifying and screening compounds (natural or synthetic) that could act as inhibitors against unexploited genomic targets [1] [2]. Biologically active molecules with novel chemical structures, acting against previously unexploited bacterial targets, are more likely to be less prone to the existing compound- or target-based resistance mechanisms observed in most multi-drug resistant (MDR) strains of bacteria [3]. In fact, cellular pathways that are paramount to the survival of the bacterium at the early stages of the infection process have been identified as attractive candidates for rational drug design [4]. In these endeavors, Clp protease, which is one of the major cellular proteases responsible for degrading misfolded or damaged proteins and thus plays an essential role in maintaining protein function, has been established as a suitable target for new antibiotics [4]- [11]. The Clp protease clade comprises energy-dependent proteases consisting of ATPases associated with diverse cellular activities (AAA+), like ClpX or ClpA in E. coli, or ClpX, ClpC, or ClpE in B. subtilis, and the subunit ClpP [8] [12] [13]. In the Clp protease complex, the ATPase (ClpC and ClpX) is the regulatory subunit, while the ClpP subunit is the central proteolytic core [4] [8] [14]. Clp protease is an essential factor in controlling protein homeostasis and developmental processes like cell motility, genetic competence, cell differentiation, and sporulation [14] [15] [16]. Therefore, perturbation of the Clp protease complex could lead to severe physiological defects in bacteria, potentially leading to the bacterial demise [8] [17]. The proteolytic subunit of Clp protease, ClpP, was first identified in E. coli by Maurizi et al. [18] [19], and since then hundreds of studies have been done to understand its structure and mechanism of operation. Investigations of the crystal structures of ClpP from different species, including bacteria, human, plants and yeast [20], have revealed that the protease is highly conserved [9] [21] [22] [23].
These crystal structures show that ClpP assembles into a tetradecameric barrel-shaped enzyme having an enclosed chamber that contains 14 serine proteolytic active sites [22]. Access to this ClpP proteolytic chamber is only possible through the two axial pores that are gated by the N-terminal regions of the protomers [22] [24]. Although the proteolytic chamber is large enough to accommodate a 50 kDa protein, the tapered axial pores prevent the entry of even the smallest folded protein [23]. So ClpP protease depends on its partners, the highly specific AAA+ proteases, to recognize native proteins, unfold them, and spool the denatured polypeptide into the proteolytic chamber for degradation [21]. The importance of ClpP protease in the intracellular milieu has made it an attractive target for new antibiotics. Its inhibition by cyclic peptides [25] and β-lactones [17] [26] [27] [28] [29], and its activation by acyldepsipeptides (ADEPs) [5] [6] [12] [30], are detrimental to different bacterial strains. In our continued studies of bactericidal agents and ClpP activation/deactivation [5] [17] [30], we came across new classes of compounds called Activators of Cylindrical Proteases (ACP) reported by Leung et al. [31]. These were four different structural classes of compounds with no structural similarities to previously reported ADEPs [4], but with comparable bactericidal activities against different pathogens [31]. Leung and co-workers attributed the antibacterial activities of these ACPs to ClpP activation, suggesting similarity in the mechanisms of action of ADEPs and ACPs. They proposed that ACPs prevent ClpP from binding to its associated unfoldase while concurrently promoting nonspecific proteolysis, probably via the opening of the axial pores. They also proposed the existence of an additional pocket, the C pocket [31], in conjunction with the previously reported H pocket [4], that helps enhance compound binding. Of the four structural classes reported, our attention was drawn to ACP 5 and 4 (ACP 4 has p-nitro in place of p-bromo) (Scheme 1), since they were considered unsuitable for further structural optimization because of the challenges that access to the structure poses in structure-activity relationship (SAR) studies [31], even though they showed significant antibacterial activities. We herein report the syntheses, structure-activity relationships, and antibacterial activities against B. mycoides and B. subtilis of ACP 5 and 4 and their structural analogues. We also present, herein, evidence suggesting that there is possibly a synergistic mechanism of action of this new class of compounds involving membrane permeabilization and a minimal amount (if at all present) of ClpP activation. Since the core structure of these compounds is a cyclohexane carboxylate, we have chosen to tag this new antibiotic scaffold a trisubstituted aryl cyclohexane carboxylate (TACC). We varied the substituents on both the aryl group and the cyclohexane ring in our SAR studies, and the antibacterial activity results of the different analogues thus obtained are presented herein. To the best of our knowledge, the synthesis and medicinal application of these TACCs have not been reported in the literature before now.

Synthesis of Dichlorovinyl TACCs 20-29 and Preliminary Antibacterial Activity Studies on Bacillus mycoides

Our initial synthetic target was ACP 5 (Scheme 1). The goal was to find a simple way to assemble the core structure in as few steps as possible, to facilitate diversity-oriented synthesis of analogues for SAR.
In our proposed retrosynthesis (Scheme 1), the core structure could be obtained by a pseudo-Robinson annulation reaction (a tandem Michael-Aldol addition reaction) of the conjugated 2,4-dienone 2 with ethyl acetoacetate 3. The conjugated ketone 2 could then be synthesized by cross-Aldol condensation of dichloroacrolein 4 and p-bromoacetophenone. The synthetic challenge here was making dichloroacrolein 4, which was not commercially available. With slight modifications of a previously reported procedure [32], dichloroacrolein 4 was synthesized on a multi-gram scale in good yield by radical reaction of isobutyl vinyl ether with carbon tetrachloride, using benzoyl peroxide as the radical initiator (Scheme 2). The reagents were simple, but the process was elaborate because of the propensity of 4 to polymerize easily (black polymeric tar was seen in some cases). The presence of both intermediates, 1,3,3,3-tetrachloropropyl isobutyl ether 6 and 1,3,3-trichloro-2-propenyl isobutyl ether 7, was confirmed by quick proton NMR of an aliquot of the reaction mixture. With dichloroacrolein 4 in hand, its cross-Aldol condensation was conducted with different aryl ketones, taking into account the potential electronic effect of the aryl substituents on the alcohol functional group of the desired TACCs (Table 1). The chalcones 10-19 thus obtained were then reacted with ethyl acetoacetate 3 in a tandem Michael-Aldol addition reaction to afford the corresponding TACCs 20-29 (Scheme 2). The conditions and yields for the different reactions are presented in Table 1. The same trend was observed for these two compounds in our later antibacterial activity studies against B. subtilis (Table 3), so long as the core ring is maintained.

Evaluation of the Effect of the gem-Dichlorovinyl Substituent on TACCs' Antibacterial Activities: Synthesis of TACCs 40-44 and Their Antibacterial Activities against B. mycoides

To study the effect of the dichlorovinyl handle on the activity of TACC 20, the dichlorovinyl moiety was replaced by dimethylvinyl 40, thiophenyl 41, furanyl 42, 5-methylfuranyl 43, and phenyl 44 substituents (Table 4, entries 1-5). The idea was to study the role of the electrophilic character of the dichlorovinyl substituent in the antibacterial activity of the whole molecule, since the vinyl gem-dihalide functionality is known to be a versatile bidentate electrophile [34]. Thus TACCs 40-44 were synthesized by cross-Aldol condensation of 4'-bromoacetophenone with the appropriate aldehydes under basic conditions to generate the corresponding chalcones 31-35, which were then annulated via the tandem Michael-Aldol reaction (Table 4, entries 1-5). These compounds were tested against B. mycoides, and the antibacterial activity data are presented in Table 4; TACC 43 (Table 2, entry 14, MIC > 50 µg/mL) shows slightly reduced activity compared with TACC 42. These observations from 40, 42, and 43 interestingly point to some sort of synergistic contribution of the electrophilic nature of this side handle to the antibacterial activity of the molecule. Although the introduction of an additional oxygen atom into the molecule by the furanyl moiety increases the nucleophilic character and hydrogen-accepting ability of the side handle, the decrease in activity observed for compound 43 suggests that the electrophilicity of the side handle may play a larger role in the antibacterial activity of the molecule, especially since the dichlorovinyl moiety has more electrophilic character and was observed to be more active.
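Although no computational screening is reported in this work, the drug-like property (drug-ability) considerations mentioned above can be estimated with standard cheminformatics tools; the sketch below uses RDKit on ethyl cyclohexanecarboxylate, a deliberately simplified, hypothetical stand-in for the TACC core (the SMILES is ours, not a TACC from this paper).

    from rdkit import Chem
    from rdkit.Chem import Descriptors, Crippen

    # Ethyl cyclohexanecarboxylate: a bare stand-in for the cyclohexane carboxylate core.
    mol = Chem.MolFromSmiles("CCOC(=O)C1CCCCC1")

    mw = Descriptors.MolWt(mol)           # molecular weight
    logp = Crippen.MolLogP(mol)           # lipophilicity estimate
    hbd = Descriptors.NumHDonors(mol)     # hydrogen-bond donors
    hba = Descriptors.NumHAcceptors(mol)  # hydrogen-bond acceptors

    # Crude Lipinski-style flag for drug-likeness.
    drug_like = mw < 500 and logp < 5 and hbd <= 5 and hba <= 10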
We were excited to notice, though, that TACC 42 has antibacterial activity comparable to ACP 5. Thus TACC 42 was chosen for further structural optimization to improve potency.

Evaluation of the Effect of the Hydroxyl and the Oxo- (Ketone) Functional Groups on TACCs' Antibacterial Activities: Synthesis of TACCs 45-52 and Their Antibacterial Activities against B. mycoides

The tertiary hydroxyl functional group on TACCs is a potential hydrogen-bond donor.

Antibacterial Activity Study of Selected TACCs against B. subtilis

A library of our synthesized TACCs (Figure 1) was tested against B. subtilis for antibacterial activity. The results of these analyses are presented in Table 3. It is interesting to point out that most of the trends recorded for these compounds' activities against B. mycoides (Table 2) carried over: TACC 20 (ACP 5) is more active than TACC 25 (ACP 4) (Table 3, entries 1 and 6, respectively), which is in contrast to what was reported by Leung et al. [31]; antibacterial activity decreases as one goes from electron-deprived aryl substituents to electron-rich aryl groups at the quaternary carbon center bearing the tertiary alcohol (Table 3, entries 1, 3 and 10); substituting the chemically labile, commercially unavailable dichlorovinyl moiety, which possesses poor drug-like properties [31], with a commercially available and stable furanyl analogue resulted in a comparably active compound (Table 3, entries 1 and 12, respectively); and a 2-hydroxycyclohexane carboxylate proved to be more active than the corresponding 2-oxocyclohexane carboxylate (Table 3, entries 15-16, and entries 1 and 3, respectively).

An interesting observation was made while evaluating the TACCs' antibacterial activity: the activity of these compounds seemed to diminish over time. Minimum inhibitory concentrations (MIC) were determined by the agar dilution method [35]. When the agar plates were inspected at 24, 48, and 72 hours, the bacterial growth tended to increase steadily over time. In typical agar dilution MIC assays, there is little change in bacterial growth from 24 to 48 hours and no change from 48 to 72 hours. The peculiar activity of these TACCs is indicative of gradual compound degradation in the growth medium: as the concentration of active compound decreases over time, persistent bacterial cells are eventually able to proliferate. We reasoned that the loss in activity over time could be a result of compound dehydration. Dehydration could occur either by a base-promoted E1cB mechanism or by an acid-promoted E1 mechanism (Figure 2(a)); in the growth media, both mechanisms could be operative. To test the effect of dehydration on TACC activity, compound 55, which was recovered as a byproduct from the synthesis of 22, was tested against B. subtilis and found to be completely inactive. Apparently, the tertiary alcohol is absolutely essential for antibacterial activity. The report by Leung and co-workers [31] suggests that the antibacterial activity of TACC 25 (ACP 4) and TACC 20 (ACP 5) was due to activation of the peptidolytic activity of ClpP. To confirm this mechanism of action, we tested TACC 20 (ACP 5) and compound 22 against a ∆clpP-spx null strain of B. subtilis that is not susceptible to the ADEPs [30] [36] [37]. To our surprise, the TACCs were more active against the B. subtilis ∆spx null strain and the ∆clpP-spx double null strain than against the B. subtilis wild-type strain (Figure 3(a)). These data suggest that TACCs have targets other than ClpP. We also tested TACCs for their ability to activate ClpP in vitro.
We found that TACCs mediated very weak ClpP activation compared to ADEP1 (Figure 3(b)). At the highest concentration tested (1000 µM), TACC 20-induced decapeptide hydrolysis was only slightly higher than in blank samples with no activator; ADEP1, on the other hand, appears to saturate ClpP at 1000 µM. Our experimental results point to the fact that this new antibiotic molecular scaffold has a target other than ClpP that is responsible for its antibacterial activity. In fact, preliminary bacterial cytological profiling (BCP) studies on this new class of antibiotics indicate that they are membrane-active compounds that disrupt membrane integrity through rapid membrane permeabilization. Therefore, the mode of action of these compounds could be a synergistic one, involving very minimal ClpP activation and pronounced membrane permeabilization.

Conclusion

A new class of antibacterial agents has been synthesized on a new molecular scaffold of cyclohexane carboxylate. We have tagged these new compounds TACCs (Trisubstituted Aryl Cyclohexane Carboxylate). These new molecules are structural analogues of ACP 4 and 5 previously reported by Leung et al. [31] and were synthesized to circumvent the drug-like property (drug-ability) challenges and liabilities noted in ACP 4 and 5. The TACCs exhibited equal, and in some cases better, antibacterial activity than ACP 4 and 5 (Table 2 and Table 4). The tertiary alcohol on the quaternary carbon center of the cyclohexane carboxylate was found to be crucial to the antibacterial activity of this class of compounds. It was also discovered, through the extensive bioassay analyses conducted, that the 2-hydroxycyclohexane carboxylates (hydroxy-TACCs) were more active than the corresponding 2-oxocyclohexane carboxylates (oxo-TACCs). While ClpP activation by TACCs is very weak, the preliminary bacterial cytological profiling (BCP) study revealed that this class of compounds exhibits pronounced membrane permeabilization, leading to disruption of bacterial membrane integrity.

General

All chemicals were purchased from Sigma-Aldrich and used without further purification. NMR analyses were conducted on a Bruker Avance Ultrashield spectrometer in DMSO-d6 (400 MHz or 600 MHz for 1H and 100 MHz for 13C NMR). The residual DMSO signal was used as an internal reference (2.52 ppm for 1H and 40 ppm for 13C).

Synthesis of 3,3-Dichloroacrolein (4)

To a solution of carbon tetrachloride (200 mL, 317.34 g, 2.06 mol, 6.8 eq.) and isobutyl vinyl ether (39.5 mL, 30.35 g, 0.30 mol, 1 eq.) in a 1 L two-neck round-bottom flask equipped with a magnetic stirring bar was added a catalytic amount of benzoyl peroxide (0.60 g, 2.50 × 10^-3 mol, 8.2 × 10^-3 eq.). The reaction mixture was then refluxed for 48 hr. Upon cooling, the refluxing setup was replaced with a fractional distillation setup, excess carbon tetrachloride was removed by distillation, and the residual liquid was heated to 170-196 °C, where the evolution of a large amount of HCl was observed. The residue was then slowly heated to 220 °C under slight vacuum and different fractions were collected. 3,3-Dichloroacrolein was obtained pure in 80% yield after redistillation using a short path, bp 124-126 °C.

Synthesis of TACC 20 - Ethyl 4-(4-Bromophenyl)-2-(2,2-dichloroethenyl)-4-hydroxy-6-oxocyclohexanecarboxylate

To a solution of 1.5 eq. of NaOH in 4 mL of water was added a solution of 4'-bromoacetophenone (6.03 mmol, 1.0 eq.) in ethanol (6 mL).
The mixture was stirred for 5-10 minutes and then a solution of 3,3-dichloroacrolein (6.63 mmol, 1.1 eq.) in 2 mL of ethanol was slowly added. The solid chalcone product 10 started forming almost instantaneously. Chalcone 10 was filtered off after 20 minutes, washed with cold ethanol, and dried. To a separate solution of sodium ethoxide (21% NaOEt in EtOH, 1.3 eq.) in ethanol (3 mL) was added ethyl acetoacetate (0.72 mmol, 1.1 eq.). The mixture was stirred for 10 minutes, followed by the addition of chalcone 10 (0.65 mmol, 1.0 eq.). The reaction was then left to stir for 5 hr. Upon completion of the reaction, as monitored by TLC, the ethanol was evaporated in vacuo and the reaction mixture was poured into water, extracted with ethyl acetate (3 × 15 mL), washed with brine, and dried over anhydrous sodium sulfate. The drying agent was filtered off and the organic solvent was evaporated in vacuo to afford TACC 20 in 95% yield as a thick reddish oil.

General Procedure for the Synthesis of TACCs 21-29

To a mixture of the substituted acetophenone (7.2 mmol, 0.9 eq.) and 3,3-dichloroacrolein (8.0 mmol, 1.0 eq.) in acetic acid (10 mL) was added 1.2 mL of H2SO4. After 24 hours, the reaction mixture was poured onto ice (with some water) and the precipitate that formed was filtered and washed with ice-cold water to afford chalcones 11-19. (Note: chalcone 16 did not yield a solid precipitate at this point; its reaction mixture was therefore extracted with dichloromethane (25 mL × 3), and the organic layer obtained was washed with water and brine and then dried over anhydrous sodium sulfate. The drying agent was filtered off, and the organic solution was evaporated in vacuo to furnish chalcone 16.) To a separate solution of sodium ethoxide (21% NaOEt in EtOH, 2.0 eq.) in ethanol (10 mL) was added ethyl acetoacetate (5.15 mmol, 2.0 eq.). The mixture was stirred for 10 minutes, followed by the addition of the appropriate chalcone (2.57 mmol, 1.0 eq.). The reaction was then left to stir for 2-3 hr. Upon completion of the reaction, as monitored by TLC, the reaction mixture was poured onto iced water and then acidified using 1 M HCl. The solid precipitate was filtered from the mixture and washed with ice-cold water to obtain crude TACCs 21-29, which were then purified by flash chromatography.

General Procedure for the Chemoselective Hydride Reduction of Oxo-TACCs - Synthesis of Hydroxy-TACCs 49-52

The corresponding oxo-TACC (1 mol eq.) was dissolved in methanol and treated with sodium borohydride (2 mol eq.). Upon completion, as determined by TLC, the solvent was removed in vacuo. The concentrated residue was diluted with ethyl acetate (~10 mL) and extracted once with saturated aqueous ammonium chloride. The aqueous extract was back-extracted once with ethyl acetate, and the combined organic layers were dried over anhydrous sodium sulfate. The desired product was isolated by flash chromatography on silica gel using a 30% ethyl acetate in hexanes mobile phase.

Minimum inhibitory concentration determinations: B. subtilis MICs were determined using standard agar dilution techniques. Liquid cultures (LB broth) inoculated from a fresh single colony were grown for 6 hours at 37 °C. LB agar plates supplemented with varying concentrations of test compound were inoculated with 5 µL of the liquid culture and then incubated at 37 °C for up to 72 hours. The agar plates were inspected for growth at 24-hour intervals. The MIC was determined to be the lowest concentration of compound able to completely inhibit B. subtilis growth after 48 hours.
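For bookkeeping, reading the MIC off a dilution series as described above is a small exercise; the concentrations and growth flags below are invented placeholders, not data from this study.

    def mic(readings):
        """readings: {concentration: growth observed at 48 h (True/False)}.
        MIC = lowest concentration that completely inhibits growth."""
        inhibitory = [c for c, grew in readings.items() if not grew]
        return min(inhibitory) if inhibitory else None

    plate = {50: False, 25: False, 12.5: True, 6.25: True}  # placeholder ug/mL readings
    print(mic(plate))  # -> 25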
Goldstone bosons on celestial sphere and conformal soft theorems

In this paper, we study celestial amplitudes of Goldstone bosons and conformal soft theorems. Motivated by the success of the soft bootstrap in momentum space and the important role of the soft limit behavior of tree-level amplitudes, our goal is to extend some of these methods to the celestial sphere. The crucial ingredient of the calculation is the Mellin transformation, which maps four-dimensional scattering amplitudes to correlation functions of primary operators in the celestial CFT. The soft behavior of the amplitude is then translated into singularities of the correlator. Only for amplitudes in "UV completed theories" (with sufficiently good high-energy behavior) can the Mellin integration be properly performed; in all other cases, the celestial amplitude is only defined in a distributional sense, with delta functions. We provide many examples of celestial amplitudes in UV-completed models, including linear sigma models and Z-theory, which is a certain completion of the SU(N) non-linear sigma model. We also comment on the BCFW-like and soft recursion relations for celestial amplitudes and the extension of soft bootstrap ideas.

Introduction

In the last two decades, there has been remarkable progress in the study of on-shell scattering amplitudes and the development of efficient computational tools. This has led to many new insights and discoveries completely invisible in the standard formulation of physics, ranging from efficient on-shell computational methods [1][2][3][4][5][6][7][8][9][10][11][12][13][14] to the discovery of hidden symmetries and new profound connections to mathematics. In most of these developments, the central objects are on-shell scattering amplitudes in four-dimensional Minkowski spacetime written in momentum space, which makes translational invariance manifest. Recently, it was proposed to express amplitudes in a boost eigenbasis derived within the celestial holography program from the requirement of conformal covariance of the scattering amplitudes, viewed as correlators in a putative celestial conformal field theory (CCFT) living on the celestial sphere at asymptotic infinity. If successful, this would be major progress in the search for flat space holography, one of the most important open challenges in theoretical physics. We refer to these correlation functions as celestial amplitudes, and they possess some very special properties. While the key question about the existence of the CCFT remains open, we have learned a lot in recent years about various properties of celestial amplitudes. This includes the study of collinear and soft limits [39][40][41][42][43][44][45], color-kinematics duality and double copy relations [46][47][48][49], the role of w_{1+∞} structures [50][51][52][53][54], and the celestial operator product expansion [55][56][57]. Furthermore, the celestial avatar of the dual superconformal symmetry was studied in [58], and explicit formulas for n-pt MHV amplitudes were obtained in [59]. In this paper, we focus on a complementary direction and study celestial duals of amplitudes in effective field theories (EFTs) of scalar particles, and the corresponding soft theorems. Already in the late 1950s and mid-1960s, Low [60] and then Weinberg [61] asked what happens to a scattering amplitude when the null momentum of one massless particle (they considered photons and gravitons) approaches zero.
The answer turned out to be remarkably universal: the amplitude factorizes into a theory-independent soft factor, parametrized just by conserved charges of the particles involved (e.g. momenta, electric charges, spins and angular momenta), and a lower-point amplitude with the soft particle erased. In the particular case of Goldstone bosons, the soft limit of the amplitude vanishes in certain cases, a behavior also referred to as the Adler zero. This property has been used in the context of modern amplitude methods in the soft recursion relations [62][63][64]. The tree-level amplitudes in a number of quantum field theories can be reconstructed from the residues on poles via BCFW recursion relations [1,2] (and generalizations [65][66][67][68][69]). This is based on simple properties of locality and unitarity of momentum space amplitudes, which translate into factorization properties. In effective field theories, this procedure cannot be used, as the amplitudes have additional poles at infinite momenta which are not known from first principles. However, as shown in [63,[70][71][72][73][74][75][76][77]], this lack of information can be traded for knowledge of the behavior in the soft limit. This allowed one to fully fix and then reconstruct amplitudes in a variety of EFTs, including the SU(N) non-linear sigma model (NLSM), Dirac-Born-Infeld theory (DBI), vector Born-Infeld theory (BI) and the more recently discovered special Galileon. The same theories also appear in the context of scattering equations and the CHY formula [78][79][80], ambitwistor strings [81][82][83] and the color-kinematics duality [84][85][86]. The universality of soft theorems has long been an inspiration for further studies and for the search for symmetry arguments behind the scenes. However, the plane wave basis in which the soft limits were derived was not a well-adapted framework for this task. It was not until 2013 that Strominger [87,88] observed a crucial connection between soft theorems and asymptotic symmetries; at that point the celestial holography program was born. This fresh viewpoint made it possible to derive soft theorems as consequences of asymptotic symmetries and to express them as Ward identities among correlation functions. An immediate prediction of the newly developed formalism was the existence of a sub-sub-leading soft graviton theorem, which was indeed confirmed in [89]. It did not take long to discover that the correlation functions related by "soft" Ward identities have many properties appropriate for 2D CFT correlators. Inspired by these findings, a basis transform from translation eigenstates (plane waves) to boost eigenstates, which makes the conformal properties and "soft" Ward identities manifest, was proposed soon after in [90]. As a matter of fact, the above-described relation between soft theorems and asymptotic symmetries works best for gauge theories (gravity included, as a gauge theory of diffeomorphisms). In that case soft theorems take the transparent form of Ward identities for Kac-Moody or Virasoro currents (the latter formed from the stress tensor). These conserved currents and associated charges require the symmetry transformations to depend on the position on the celestial sphere; such transformations are naturally interpreted as large gauge transformations, which exist only in gauge theories. For this reason, it is fair to say that for scalar theories not embedded into gauge theories, the asymptotic-symmetry interpretation of soft theorems is less understood, despite some progress in this direction [91][92][93][94].
Nevertheless, effective theories (EFTs) of scalars provide a highly intriguing web of soft theorems. In this paper, we investigate the form of soft theorems for theories of Goldstone bosons in the celestial basis. We do not attempt to make a connection with asymptotic symmetries here; this is left to experts or to future analysis. The aim of this work is to collect a representative selection of soft behaviors of scalar particles and transform it to the celestial basis, thus producing a representative database of conformally soft theorems for scalar Goldstone bosons. In particular (see section 6), we provide soft theorems whose right-hand side is a highly non-trivial linear combination of lower-point amplitudes. The soft behavior of Nambu-Goldstone EFTs is captured by non-linear sigma models. Their effective action (up to two derivatives and to all orders in fields) is completely determined in terms of a (pseudo-)Riemannian metric on the target space of the NLSM. However, celestial amplitudes of NLSMs are not very well behaved. For massless particles, the transform to the celestial basis is a Mellin transform (see section 3) that exchanges light-cone energies in Minkowski spacetime for scaling dimensions in the CCFT. An essential object associated with any Mellin transform is its fundamental strip, where it converges. For NLSMs the integrals are marginally convergent (i.e. their fundamental strip is empty), which results in generalized delta functions (with complex arguments) of the scaling dimensions. The left border of the fundamental strip is controlled by the IR expansion of the amplitude, the right border by the UV expansion. When we want to analyze a particular soft behavior, we consider the NLSM as fixed and thus cannot improve the IR expansion (the left border of the fundamental strip is fixed). However, to gain better control over the celestial amplitude, we can improve the high-energy behavior. This is done by moving one step up (against the RG flow) in the hierarchy of EFTs. For this reason we always construct a pair consisting of an NLSM and its linearized version. The linear sigma models always have non-empty fundamental strips, and therefore their celestial amplitudes do not have a distributional character in the scaling dimensions.

The paper is organized as follows. In section 2, we review soft theorems for scalar EFTs and some recent developments on their momentum space on-shell scattering amplitudes. In sections 3 and 4, we review the basic properties of celestial amplitudes for scalars and their conformal properties, and discuss in detail the four-point and five-point cases. In section 5, we discuss soft limits and provide celestial analogues of the Adler zero and of enhanced soft behavior. In section 6, we provide many explicit examples of Goldstone boson models, linear and non-linear sigma models and their UV completions. In section 7, we briefly comment on the celestial recursion relations and the possibility of the soft bootstrap program. We give some concluding remarks in section 8. The more technical parts and calculations are moved to the appendices.

Review of soft limits in momentum space

In quantum field theory, scattering amplitudes describe the probability amplitudes for particle scattering processes. They are important because, by squaring them, one obtains the probabilities of the corresponding scatterings, which can be compared with experimental data. A given scattering amplitude can be obtained by summing over all Feynman diagrams that connect the initial and final states of the given process.
Calculating Feynman diagrams is a straightforward way to obtain the result. However, there are other techniques for calculating scattering amplitudes. Notably, the BCFW recursion [2] reduces a full amplitude to a sum of products of two simpler amplitudes by changing (shifting) two momenta. This process continues until the entire scattering amplitude has been expressed in terms of the seed amplitudes, typically the three-point vertices. The BCFW recursion is particularly useful for theories with massless particles, such as gauge theories like QCD; these theories can have amplitudes with many Feynman diagrams, but BCFW allows one to simplify the process and calculate them more efficiently. Application of the recursion procedure to the effective theories of Goldstone bosons is spoiled by the bad high-energy behavior of the amplitudes. This can be overcome by supplying information about the amplitude at some other kinematical point(s). Typically, effective field theories focus on the low-energy limit, and it is thus natural to consider the behavior at zero momentum or momenta. We summarize below the main results of the so-called soft bootstrap program, first introduced in [62,71,72] and developed further e.g. in [63, 73-77, 95, 96]. We start with the well-known example of the single-soft limit, the Adler zero, connected with the SU(N) NLSM [70,97].

Shift symmetry and Adler zero

Shift symmetry and, consequently, the Adler zero are important concepts in particle physics. In fact, the so-called Adler zero, i.e. the vanishing of the amplitude with one external soft particle, was discovered in connection with the strong interaction, more precisely with the SU(N) NLSM. Let us consider the invariance of a theory at the quantum level with a corresponding non-anomalous Noether current N^µ(x). Now we assume that this symmetry is spontaneously broken. The necessary and sufficient condition for spontaneous symmetry breaking is the coupling of the Noether current to the set of Goldstone bosons φ^a,

    ⟨0| N^{aµ}(x) |φ^b(p)⟩ = i F p^µ δ^{ab} e^{-ip·x} .    (2.1)

Sandwiching the current between the in- and out-states (α and β, respectively), we find that this matrix element develops a pole for p² → 0, with residue given by the amplitude of Goldstone boson emission,

    ⟨β| N^{aµ}(0) |α⟩ = i F (p^µ/p²) ⟨β + φ^a(p)|α⟩ + R^{aµ}(p) ,    (2.2)

where p = p_α − p_β. Assuming current conservation and a remnant R^{aµ} regular in the soft Goldstone boson limit, we get

    lim_{p→0} F ⟨β + φ^a(p)|α⟩ = lim_{p→0} p_µ R^{aµ}(p) = 0 .    (2.3)

We can verify this generic statement, which is based solely on symmetry arguments and is valid non-perturbatively, on explicit tree-level amplitudes. The leading-order Lagrangian of the massless theory, which encodes the SU(N)×SU(N)/SU(N) chiral symmetry breaking, can be written (in a standard normalization) as L = (F²/4) Tr(∂_µU ∂^µU†) with U = exp(i√2 φ^a t^a / F), where the t^a are generators of SU(N). We have introduced a multiplet of Goldstone bosons (pions) φ^a and can easily calculate their scattering amplitudes. We can focus on the ordered amplitudes only and thus strip the flavor indices. For the 4pt amplitude we get (in our convention, with all momenta assumed incoming) an expression linear in the Mandelstam variables s_ij; for the 6pt amplitude the structure is clear: we have the factorization terms corresponding to 4pt-vertex insertions and a last term, linear in s_ij, corresponding to the 6pt vertex. One can easily verify that setting, for example, p_6 → 0, the factorization terms collapse to s_24 and, together with the leftover from the last term, we indeed get 0. The 6pt amplitude was calculated from the above Lagrangian, and all terms were fixed.
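The 4pt Adler zero can be made explicit without committing to a normalization convention (the following worked check is ours, not a quote from the paper). For four massless particles,
\[
s_{ij} \equiv (p_i+p_j)^2, \qquad s_{12}+s_{13}+s_{23}=0 ,
\]
and the ordered 4pt amplitude above is a linear combination of the $s_{ij}$. Taking $p_4 \to 0$ with all momenta incoming, momentum conservation gives $p_1+p_2+p_3 \to 0$, hence
\[
s_{12} \to p_3^2 = 0, \qquad s_{13} \to p_2^2 = 0, \qquad s_{23} \to p_1^2 = 0 ,
\]
so the 4pt amplitude vanishes kinematically. At six points the Adler zero is no longer automatic: it is precisely what fixes the relative coefficient between the factorization terms and the contact term.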
But we can get to the result without the Lagrangian, using only the information of the Adler zero. Starting with a generic 4pt vertex, we would obtain the factorization terms; the requirement that the amplitude vanish in the soft limit then forces us to include a 6pt contact term of exactly the same form as above. This is the basis of the soft bootstrap program, or bottom-up reconstruction (see [98] and [99] for more details and higher orders).

Enhanced soft limits and exceptional EFTs

In the above example, we saw that the structure of the theory could be reconstructed solely from the given power counting and the information of the Adler zero. We briefly describe here how this can be extended to more generic situations (for more details, please refer to [62,71,72]). The power counting can be characterized by the parameter ρ = (m − 2)/(n − 2), defined for each n-pt vertex with m derivatives. In the above NLSM example, ρ = 0 for all vertices. It is easiest to focus first on single-ρ theories. The spontaneous symmetry breaking above is connected with the shift symmetry, for a single field given by

    φ → φ + a .    (2.8)

We can consider the following generalization of the shift symmetry,

    φ → φ + θ_{µ1···µr} x^{µ1} · · · x^{µr} + O[φ](x) ,

where θ is a constant traceless tensor and the local operator O[φ](x) is at least quadratic in the field φ and its derivatives. If we assume that the Lagrangian is invariant under such a polynomial symmetry and there are no trilinear vertices, the amplitudes will vanish as

    A(p_1, . . . , p_n) = O(p_i^σ), with p_i → 0 and σ ≡ r + 1 .    (2.10)

That is, we have obtained an Adler zero of order σ = r + 1. The order σ is an important parameter and, together with ρ, enables a classification of effective theories. In the (ρ, σ) parameter space we get the so-called exceptional single-ρ theories: (0, 1): NLSM, (1, 2): DBI, (2, 2): Galileon, and finally (2, 3): special Galileon (sGal). Possible generalizations can address the above assumptions or limitations, namely the restriction to scalar particles, the absence of trilinear interactions, or the focus on single-ρ theories only; a few attempts beyond these cases can be found, for example, in [63,73,74,76,96].

Fundamentals of celestial amplitudes for scalars

A fundamental piece in any holographic correspondence is the matching between bulk and boundary symmetries. In the celestial holography context, it amounts to the isomorphism between the bulk Lorentz group SO(1, 3) and the global conformal group PSL(2, C) of the boundary celestial sphere. The bulk momentum-space S-matrix makes translation invariance manifest, as it is expressed in translation eigenstates (plane waves). Under the above isomorphism, a Lorentz boost (in the z-direction) gets mapped to a dilatation, and it is the basis of boost/dilatation eigenstates that makes the conformal covariance of the boundary CCFT correlators manifest. Thus the kinematic part of the holographic dictionary is implemented by a change of basis between scattering states (see [100][101][102] for early results and observations):

    plane wave (translation eigenstate) ⟺ conformal primary wavefunction (boost/dilatation eigenstate).

The precise form of the transformation was worked out in [90,103] (see also [104] for an alternative presentation).
For massive particles it is realized as an integral transform (3.1) over a hyperbolic slice H3 of the dual momentum space to Minkowski spacetime, defined by the constraint p̂² = 1, with an integration kernel given by the H3 bulk-to-boundary propagator, where Φ is the massive conformal primary wavefunction (of a scalar particle, for simplicity). It depends on a spacetime point X and an auxiliary null momentum q(w, w̄) = ½ (1 + ww̄, w + w̄, i(w − w̄), ww̄ − 1), which has the interpretation of a point at the asymptotic boundary of the mass-shell hyperboloid H3. The bulk-to-boundary propagator acts in momentum space and captures propagation between this auxiliary boundary point q(w, w̄) and a point p̂ on the upper sheet of the unit mass-shell hyperboloid (which is integrated over in (3.1)). Equivalently, a particle with momentum q(w, w̄) would pierce the future celestial sphere (i.e. the boundary of Minkowski spacetime) at the point (w, w̄) defined in stereographic coordinates. Outgoing/incoming particles are distinguished by ε = ±, and the unitary² irreducible representation of PSL(2, C) is characterized by a scaling dimension ∆ lying on the principal continuous series, ∆ ∈ 1 + iR, as required by a normalization condition [90,103]. For massless particles this expression reduces to a Mellin transform with respect to the particle's light-front energy ω (see Appendix A for a summary of definitions),

    φ^ε_∆(X; w, w̄) = ∫₀^∞ dω ω^{∆−1} e^{iεω q·X} ,    (3.2)

where φ is the massless conformal primary wavefunction, ωq(w, w̄) is the null momentum of a particle that pierces the celestial sphere at (w, w̄), and ∆ = 1 + iλ with λ ∈ R. It can be obtained either by direct computation or by a limiting procedure (for a review see [106]).

² Unitarity is required with respect to the Klein-Gordon scalar product inherited from the bulk, not the usual scalar product used for Euclidean 2D CFTs that is equivalent to radial quantization; see e.g. [105] for a discussion.

Celestial amplitudes

The formulas (3.1), (3.2) constitute an essential piece of the holographic dictionary and allow one to express any momentum-space S-matrix element as a celestial amplitude (CCFT correlator). The map is very simple: one just applies the appropriate integral transform to every external leg based on its mass³, with i ∪ j = {1, . . . , n} the division of external particles into massless and massive, acting on the momentum-space scattering amplitude A_n (which includes the momentum-conserving delta function). Here we parameterize the on-shell momenta of massless and massive particles as p_i = ω_i q(z_i, z̄_i) and p_j = m_j p̂_j, respectively, where p̂²_j = 1. For external particles, ε_i = ±1 for outgoing and incoming particles, respectively. For our conventions and further details see Appendix A. From now on, we concentrate on tree-level amplitudes with massless external states⁴. In that case, the on-shell amplitude A_n is a rational function of the ω_i. In accord with [108], it is useful to perform a change of variables that separates an overall energy scale u from relative energies σ_i.

³ Here we are interested in scalar particles. For massless particles the 4D helicities get identified with 2D spins and the map remains a Mellin transform. In the massive case, the 4D spin associated with an irrep of the SO(3) little group must be decomposed into 2D SO(2) spins, while the scalar bulk-to-boundary propagator gets generalized to its spinning version (see [107] for details).

⁴ We preferred to state the general celestial dictionary since later a recursion formula based on (7.4) will be presented and lower-point celestial amplitudes with one massive external leg will be needed.
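The Mellin transform underlying the massless dictionary is easy to experiment with symbolically; in this small sketch of ours (SymPy assumed), the exponential merely stands in for a generic decaying dependence on the light-front energy.

    from sympy import symbols, exp, mellin_transform

    w, s = symbols('omega s', positive=True)

    # Mellin transform: integral over omega of omega**(s-1) * f(omega).
    F, strip, cond = mellin_transform(exp(-w), w, s)
    print(F)      # gamma(s)
    print(strip)  # (0, oo): the fundamental strip in Re(s)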
where in the last integral ∆ denotes the total scaling dimension, a fixed linear combination of the ∆_i determined by the change of variables. Note that for massive particles the kinematics does not factorize as in (3.7). Let us look at the expression (3.8) in more detail. For general n-point correlators there are n integrals over the parameters σ_i, which are subject to 5 conditions from the delta functions. This means that we can fix σ_1, . . . , σ_5 and integrate over the remaining σ_6, . . . , σ_n, together with the u-integral. For future reference, and to simplify the notation, we define the measure [dσ, ∆] of the σ-integrations; for fixed external states (and signs ε_i), we will refer to A_n as a celestial amplitude.

Homogeneous amplitude

If the amplitude A_n is a homogeneous function of the momenta with degree of homogeneity 2d,

    A_n(ε_1 uσ_1 q(z_1, z̄_1), . . . , ε_n uσ_n q(z_n, z̄_n)) = u^{2d} A_n(ε_1 σ_1 q(z_1, z̄_1), . . . , ε_n σ_n q(z_n, z̄_n)) ,    (3.12)

then after plugging into (3.8) we can perform the u-integral explicitly. The integral does not converge in the usual sense, but it can be understood in the sense of distributions as a generalized Mellin transform of the power distribution in u, leading to the identification of the u-integral with a delta function of complex argument (e.g. (2.20) in [108]), where ∆ is allowed to be complex. As a result we get a celestial amplitude proportional to δ(∆ + d) times the σ-integral of A_n(ε_1 σ_1 q(z_1, z̄_1), . . . , ε_n σ_n q(z_n, z̄_n)) (3.13), where only the integration over the σ_i is left. Note that the 4D bulk dilatation operator acts on the k-th external leg as −i(∆_k − 1) = λ_k [109]. Thus the amplitude is classically scale invariant if the delta function in the scaling dimensions reduces to δ(Σ_{k=1}^n λ_k); this happens precisely when the degree of homogeneity satisfies 2d = 4 − n. The amplitudes in the exceptional EFTs are homogeneous, and the simplification does apply to them (cf. section 2). But it is worth emphasizing that the celestial correlators (3.13) for these exceptional EFTs are only defined in the distributional sense, as indicated by the presence of the delta function δ(∆ + d). This is a general feature of any homogeneous amplitude: in order to perform the Mellin transformation in the usual sense, we need an amplitude in a UV-completed theory, as discussed below.

Non-homogeneous amplitudes and UV completion

In the general case of a non-homogeneous amplitude, such a simplification of the u-integration is not possible and the integral is entangled with the σ_i-integration. In that case we are left with the general formula (3.8). For renormalizable theories, we have further constraints on the function A_n(u) ≡ A_n(uσ_1, . . . , uσ_n) from Weinberg's theorem: for u → ∞ the amplitude generally behaves as A_n(u) = O(u^{2−n/2}), which determines the upper bound on Re ∆ for which the integral converges in the UV, Re ∆ < n/2 − 2. On the other hand, the limit u → 0 yields the leading term of the low-energy expansion of the amplitude A_n; it can be calculated directly using the effective low-energy theory obtained by integrating out all but the massless degrees of freedom. Provided A_n = O(u^α) in this limit, the u-integral converges in the IR for Re ∆ > −α. Therefore, for the real part of ∆ restricted to the fundamental strip −α < Re ∆ < n/2 − 2, the Mellin transform is a holomorphic function of ∆ (see Fig. 1). For ∆ in the fundamental strip we can calculate the u Mellin integral using the complex Hankel contour; more details can be found in Appendix A. As a result, the contour integral is evaluated as a sum over residues in u. These residues correspond to massive factorization channels F of the tree-level amplitude A_n, and they localize u to the value u_F set by the location of the corresponding massive pole. We schematically draw the factorization diagram in Fig. 2.
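SymPy also reports the fundamental strip, which makes the UV/IR bounds above concrete; in this invented toy example the integrand is O(u^0) in the IR (so the strip starts at 0) and O(1/u) in the UV (so it ends at 1).

    from sympy import symbols, mellin_transform

    u, s = symbols('u s', positive=True)

    # Toy "amplitude" 1/(1+u): O(u**0) as u -> 0 and O(1/u) as u -> oo.
    F, strip, cond = mellin_transform(1/(1 + u), u, s)
    print(F)      # pi/sin(pi*s)
    print(strip)  # (0, 1): a nonempty fundamental strip, holomorphic there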
Plugging into (3.8), we obtain a partial expression for the celestial amplitude after the u-integration, (3.19). Note that this formula involves the calculation of the residues of A_n(u) at the poles u_F. Even if all external particles are massless, the pole corresponds to the exchange of a massive particle. Hence the residue depends on amplitudes of particles with both massive and massless external legs.

General four-point and five-point amplitudes

The cases n = 4 and n = 5 are special, since the integration over the measure [dσ, ∆] is saturated by the δ-functions. The number of delta functions is five, so for n = 5 we exactly saturate all σ-integrations. The values of σ_i are reduced to the solutions of a set of linear equations of the form (3.20), where A_n is a (5 × n) matrix and σ = (σ_1, . . . , σ_n), while b is a 5-vector.

Five-point amplitudes

For n = 5, A_5 is a square matrix whose determinant is nonzero for generic q_k. Explicitly, it can be written in terms of z_ij ≡ z_i − z_j, the difference of two positions on the celestial sphere. Here we introduced the conformally invariant cross-ratio (3.23). The solution σ*_i of (3.20) is thus unique and given by (3.24). The product of delta functions can then be rewritten accordingly. The integration over σ then localizes σ_i = σ*_i, and from (3.8) we get the representation (3.26), where χ_(0,1) is the characteristic function of the interval (0, 1). Hence everything reduces to the u-integration. As discussed earlier, for homogeneous amplitudes this integration can be carried out explicitly (3.13), and for non-homogeneous amplitudes within renormalizable theories we can use the partial results (3.19). For example, for homogeneous amplitudes with degree of homogeneity 2d, after performing the u-integral we get an expression in which the σ*_j are given by (3.24).

Four-point amplitudes

In the four-point case the situation is different, since the rank of the matrix A_4 is at most four. The necessary condition for the existence of a solution of (3.20) is the rank condition (3.28), or explicitly, using (3.22), the constraint (3.29). Hence we have to be a bit careful when solving for σ*_i and performing the σ-integrals. The constraints from momentum conservation can be written in a form that can be solved for three of the σ_i in terms of one free σ_j with j fixed, where k, l ≠ i, j label the complementary two particles. Note that we again recover (3.29), in accord with momentum conservation. The corresponding four-dimensional δ-functions can then be rewritten using the fifth constraint (3.33), from which we obtain the solution (3.34) for the fixed σ*_j. Note also that on the support of the momentum conservation δ-function (3.33) further simplifications hold. Inserting (3.34) in (3.31) and using (3.32), the final solutions for the remaining σ*_i have the same form as (3.34) (up to the appropriate permutation of indices), and for any j = 1, . . . , 4 can be written again in a form which is now true for all j = 1, . . . , 5. As a result we get (3.37), which principally differs from the five-point case by the presence of the delta function δ(−i(z_1234 − z̄_1234)). Formulae for a generic massless celestial 4pt amplitude were derived e.g. in [110,111] (see also the reviews [106,112,113]). In fact, we see that the delta function inherited from bulk translation invariance forces the cross-ratio to be real (i.e. the celestial operators lie on a circle on the celestial sphere, as derived in [110]), while the characteristic function χ requires an in-out-in-out configuration of celestial insertions, as proven in [111] (where the reader can find a detailed analysis of the implications of bulk kinematics for the geometry of celestial n-pt amplitudes).
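To make the localization concrete, the following sketch solves the linear system (3.20) numerically for five generic insertions. The in/out assignment, the right-hand side b = (0, 0, 0, 0, 1), and the reading of the fifth (scaling) condition as a normalization Σ_i σ_i = 1 are illustrative assumptions, not necessarily the paper's exact conventions:

```python
import numpy as np

def q(z):
    """Null momentum q(z, zbar) = 1/2 (1 + z zbar, z + zbar, i(zbar - z), z zbar - 1)."""
    zb = np.conjugate(z)
    return 0.5 * np.array([1 + z*zb, z + zb, 1j*(zb - z), z*zb - 1])

eps = np.array([1, 1, -1, -1, -1])           # hypothetical outgoing/incoming signs
zs  = np.array([0.3+0.1j, 1.2-0.4j, -0.7+0.9j, 2.1+0.2j, -1.5-0.8j])

A = np.zeros((5, 5), dtype=complex)
for i in range(5):
    A[:4, i] = eps[i] * q(zs[i])             # four rows: momentum conservation
A[4, :] = 1.0                                 # assumed fifth (scaling) row
b = np.array([0, 0, 0, 0, 1], dtype=complex)

sigma_star = np.linalg.solve(A, b)            # unique for det A_5 != 0
print("det A_5 =", np.linalg.det(A))
print("sigma*  =", sigma_star.real)           # entries come out real, momenta being real
```

For generic z_i the determinant is nonzero, consistent with (3.22)-(3.24); degenerate configurations with coincident insertions make it vanish.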
Thus the 4pt CCFT correlator is a distribution, which is not a usual property encountered in 2D CFT and complicates the application of standard CFT techniques, e.g. the conformal block expansion. For this reason many authors have tried to relax this constraint, either by performing a shadow/light transform on some of the CCFT operators or by coupling the bulk theory to external sources breaking translational invariance. See [114][115][116][117][118][119][120][121] for a sample of works in this direction. To summarize, for 4pt and 5pt amplitudes the Mellin transform reduces to the calculation of a single u-integral, and generic massless celestial 4pt and 5pt amplitudes can be computed according to (3.37) and (3.26).

Conformal structure of celestial amplitudes

The structure of the celestial amplitudes can be better understood by taking into account their transformation properties with respect to the global conformal group. For simplicity, we will assume that all particles have zero helicity.

Five-point amplitudes

By an appropriate Möbius transformation, we can fix the gauge for the 5pt amplitude, where we want to take the limit ε → 0 in the end. For all scalar particles we get the following representation, in which the parameters of the transformation satisfy the corresponding constraints. The points t_{4,5} are then given in terms of the conformal cross-ratios z_ijkl, defined as usual (cf. (3.23)). Solving for the parameters a, b, c, d and taking the limit ε → 0, we obtain a representation of the celestial amplitude whose second factor depends only on the invariants t_{4,5} and is therefore conformally invariant by itself. Therefore, only the factor C_5({∆_i, z_i}) is responsible for the proper conformal covariance of the full amplitude. From the representation (3.26), and using the explicit solution for σ*_i (see Appendix B.1), we observe that for the configuration (1, 0, ε^{−1}, t_4^{−1}, t_5^{−1}) the limit in (4.8) exists and is finite and conformally invariant. Note that the limit in (4.8) can be performed explicitly using the solutions for σ*_i. Therefore, the celestial amplitude can be written in the form (4.9), where the first two factors C_5 and f_5 are universal and given by (4.5) and (4.8), while the last term reflects a particular theory.

Four-point amplitude

Similar considerations can be applied to the 4pt amplitude. In this case the reference configuration is fixed analogously, where again t_4 = (1 − ε)z_1234 + ε, and where now (4.12) holds. For the reference configuration we obtain explicit solutions, and again, in this formula (see Appendix B.2), we have σ*_3 = O(ε²) and σ*_i = O(1) for i ≠ 3. Therefore we can write (4.15), where C_4 and f_4 are universal factors. The latter is given by (4.16), and the theory-specific term is defined in (4.17). Both limits on the right-hand sides of (4.16) and (4.17) are finite and conformally invariant. The calculation of the non-universal factor F_4(∆, t_4) can be further simplified. The amplitude A_4({p_i}) for scalar particles is a function of the Mandelstam variables (4.18). Only two of them are independent, since s + t + u = 0; let us therefore write A_4 ≡ A_4(s, t; u), and for the theory-specific function F_4 we get (4.20). For particles 1 and 2 incoming and 3 and 4 outgoing, and for ε → 0, we obtain σ*_4 = 1/2, the universal factor f_4 takes a simple form⁶, and for the function F_4 we get (4.22), where we introduced the notation (4.23). Finally, the 4pt massless celestial amplitude of scalars takes the standard form (up to the delta function in the conformal cross-ratio) dictated by conformal covariance, (4.24), where we used that in the ε → 0 limit t_4 = z_1234 and t_4 − 1 = z_1234 z_1423.
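The same setup gives a quick numerical illustration of the 4pt rank condition (3.28): with four insertions the system is overdetermined, and a solution exists only for special configurations, here points on the real axis, where the cross-ratio is automatically real, in line with the delta function δ(−i(z_1234 − z̄_1234)). The conventions (q parametrization, b-vector, normalization row) are the same illustrative assumptions as in the previous sketch:

```python
import numpy as np

def q(z):
    zb = np.conjugate(z)
    return 0.5 * np.array([1 + z*zb, z + zb, 1j*(zb - z), z*zb - 1])

def lstsq_residual(zs, eps=(1, 1, -1, -1)):
    """Residual of the overdetermined 5x4 system (momentum conservation + scaling row)."""
    A = np.zeros((5, 4), dtype=complex)
    for i in range(4):
        A[:4, i] = eps[i] * q(zs[i])
    A[4, :] = 1.0
    b = np.array([0, 0, 0, 0, 1], dtype=complex)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ sol - b)

print(lstsq_residual(np.array([0.3+0.2j, 1.1-0.5j, -0.8+0.4j, 2.0+0.7j])))  # generic: > 0
print(lstsq_residual(np.array([0.3, 1.1, -0.8, 2.0])))                      # real axis: ~ 0
```

Physical support additionally requires the in-out-in-out ordering encoded in the characteristic function χ; the linear solve above does not enforce σ_i > 0.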
Explicit examples of exceptional EFT amplitudes

Let us now look at amplitudes in the special class of theories reviewed in section 2, which have special soft limit behavior: the SU(N) NLSM, DBI and the special Galileon. At 4pt these amplitudes take a simple form; in fact, they are the unique kinematical invariants with O(p²), O(p⁴) and O(p⁶) power counting. For A_4 = O(p²) the amplitude is cyclically rather than permutationally invariant, since s + t + u = 0. We can now calculate the celestial 4pt amplitude for all three cases, at least formally⁷. In fact, using the results of (4.24), it is enough to calculate only the function G_4(∆, t_4) using (4.23). Note that all three amplitudes are homogeneous and the integration over s trivializes.

We can also consider the 5pt amplitude in a (general) Galileon theory, where s_ij = (p_i + p_j)². This is again a homogeneous amplitude and we can calculate the corresponding celestial amplitude, where the σ*_i are listed in Appendix B.1. The explicit result is not particularly illuminating; let us note only that for t̄_4 → t_4 it behaves as in (4.33), which mimics the O(t²) behavior of the (general) Galileon amplitudes in the soft limit.

Conformal soft limit for the universal factors

Up to this point we have obtained expressions for 4pt and 5pt massless celestial amplitudes of scalars. Since our main aim is to study the conformal soft theorems, we make a short digression and, as a next step, consider the limit ∆_5 → −k of the universal factors f_5 and C_5 of the 5pt amplitude. The latter limit corresponds to the case when the fifth particle becomes conformally soft, which in momentum space reflects the O(ω_5^k) behavior in the limit ω_5 → 0. For this purpose it is useful to consider the following formal equality, where we used the identity (see (2.8)). As a consequence of (3.24) and (3.22), we have the corresponding scaling behavior; note also the behavior for f(σ) = O(σ^n) when σ → 0. At the 5pt reference point it can also be shown (cf. Appendix B) that for i = 1, . . . , 4 the σ*_i reduce to the corresponding solutions for the 4pt kinematics at the 4pt reference point. Therefore (4.38) holds; similarly, from (4.5) and (4.12), we obtain (4.39). Thus, in the special case k = 0, the universal conformal factor of the 4pt amplitude can be obtained formally as the residue of the universal conformal factor of the 5pt amplitude for ∆_5 → 0. This relation is obviously not true in general for the full celestial amplitude including the theory-specific part F_5.

General n-pt amplitude

The considerations performed above for the 4pt and 5pt amplitudes can be generalized to an arbitrary n-pt amplitude of massless particles. We again fix the gauge according to the reference configuration; t_k is then the conformally invariant cross-ratio in the limit ε → 0, (4.40). The analog of the conformal factor (4.5) now reads (4.42), and the celestial amplitude can be represented in the form (4.43). The conformally invariant Mellin integral H_n follows, where we evaluate the z_i at the reference configuration (4.40). The first five σ_i can be solved for in terms of the remaining ones; the corresponding linear equations have the form of (3.20), with the same matrix A_5. A nontrivial solution exists provided not all the ε_k have the same sign. As above, we can write the delta functions in localized form. Note, however, that unlike the 5pt case the right-hand side does not factorize, since the σ*_i depend on the integration variables σ_j, j = 6, . . . , n. Let us note that in analogy to (4.39) we have, for j = 6, . . . , n, and clearly, for i = 1, . . . , 5 and j = 6, . . .
, n, where the superscripts refer to the n-point and (n − 1)-point kinematics corresponding to the omission of the j-th particle, respectively.

Soft limits of celestial amplitudes

In this section we discuss the soft limits of the momentum space amplitudes A_n and their relation to the conformal soft limits of the celestial amplitudes A_n. Momentum space soft theorems are written as a power series in the energy of the soft particle i. A soft theorem of order k in energy gets translated via the Mellin transform into a conformal soft theorem for celestial amplitudes, encoded as a residue at the pole ∆_i = −k. We then have the following general picture⁹.

⁹ This result was established and checked in many papers; for a selection see [39,40,[42][43][44][122][123][124]].

Soft theorem for the 5pt amplitude

The case of the 5pt amplitude is special, since it allows one to formulate the conformal soft theorem differently, solely in terms of the theory-specific conformally invariant functions F_k({∆_i, z_i}) introduced in section 4. As an illustration, let us take the limit ∆_5 → 0 of a general 5pt celestial amplitude A_5 of massless scalars, assuming that it indeed probes the leading soft behavior, which implies for the momentum space soft theorem an expansion in energy (of the soft external leg) starting with a constant term. As we will see in what follows, in particular theories such a constant term coincides with a linear combination of the lower-point (i.e. 4pt) amplitudes in the same theory, i.e. (5.1). Recall that the 4pt and 5pt amplitudes can be written in the forms (4.15), (4.9), where the C_k are the universal conformally covariant factors, the f_k are universal conformal invariants, and the F_k are functions specific to the given theory. As we have shown in section 4.4, the limits (5.4) hold, and therefore the residue of A_5({∆_i}, z_1, . . . , z_5) at the ∆_5 = 0 pole can be written in a factorized form. Notice that on the right-hand side the total scaling dimension ∆ is summed over the scaling dimensions of four particles, as appropriate for a 4pt function. For the theory-dependent part F_5 we get (5.6), and since σ*_5 → 0 for t̄_4 → t_4, the limit t̄_4 → t_4 probes the soft behavior of the fifth particle, and the result does not depend on t_5. Also, the t̄_4 → t_4 limit of {σ*_j}, j = 1, . . . , 4, coincides with the corresponding solution of the 4pt kinematics. Therefore, purely kinematically, lim_{t̄_4→t_4} F_5(∆, t_4, t_5) resembles the function F_4(∆, t_4) for some 4pt amplitudes. On the other hand, assuming the momentum space soft theorem in the form (5.1), we get the t̄_4 → t_4 limit of the integrand of (5.6) just in terms of linear combinations of 4pt amplitudes, and thus the conformal soft theorem reads (5.7). Therefore, in such a case, the conformal soft theorem for a 5pt function can be formulated solely in terms of the functions F_5(∆, t_4, t_5) and F_4(∆, t_4). Note that the O(1) term need not be the leading term in the soft expansion. For example, if the amplitude behaved as A_5 ∼ O(1/t), our calculation would correspond to the subleading soft theorem, while the leading soft behavior would be captured by the residue of A_5 at the ∆_i = 1 pole. As a more explicit example, we can consider the 5pt amplitude in the Galileon theory. The O(t²) behavior of A_5(. . . , tp_i, . . .) for t → 0 indicates that both the ∆_i = 0 and ∆_i = −1 residues vanish. This is of course the case. Using (5.4) and the behavior of F_5 given by (4.33), we immediately obtain the vanishing of the ∆_i = 0 residue. Similarly, using (4.38) and (4.39) for k = 1, we get (cf.
footnote 8) the result (5.10). As shown in Appendix B.1, we have σ*_5 = O(t̄_4 − t_4), which together with (4.33) finally gives the vanishing of the corresponding limit, and therefore of the ∆_i = −1 residue as well. Note that in all cases, any enhanced soft behavior of the momentum amplitude A_n, i.e. vanishing in the soft limit like O(t^k), is directly translated into the vanishing of the residues of the celestial amplitudes at ∆_i = −j for j = 0, . . ., k − 1. This is a rather trivial consequence of the definition of F_n, which is directly proportional to A_n. Hence, we do not expect any non-trivial information about the soft behavior that can be translated from the momentum space amplitude A_n to its celestial counterpart.

Celestial amplitudes of UV completed models

The celestial amplitude in a generic quantum field theory is not well defined. As discussed earlier, the Mellin transformation requires sufficiently good (and correlated) behavior of the scattering amplitude at low and at high energies, such that the u-integration in (3.8) can be performed. The Mellin transformation is well defined and holomorphic as a function of ∆ for ∆ within a fundamental strip. The size of the fundamental strip is given by the behavior of the amplitude at both low and high energies. For an n-pt amplitude, we determine the fundamental strip directly from the u-integral in (3.8). Concretely, we rescale each external momentum in A_n by √u as in (3.7). To fix the left border of the fundamental strip we calculate the leading term in A_n for u → 0; the right border is analogously fixed by the u → ∞ leading term, i.e. if A_n(u) scales as u^b for u → ∞, the right border is Re ∆ = −b. In particular, if the theory is renormalizable, b = 2 − n/2. The fundamental strip can also be empty, as indicated by the presence of the delta function δ(∆ + d) with d = 1, 2, 3 in the explicit examples discussed earlier, and the Mellin transformation can then only be understood in the distributional sense. This reflects the fact that the corresponding theories are effective ones that cannot be taken seriously above some intrinsic scale, while the Mellin transform takes into account the whole energy range without any constraint. Therefore, to obtain a well-defined celestial amplitude for the scattering of Goldstone bosons, it is desirable to start with a (partial) UV completion of the corresponding effective theory. In what follows, we look at further particular models for scattering amplitudes. First, we discuss two linear sigma models, where the UV completion is provided by a massive particle. Next, we explore the U(1) fibered CP^{N−1} model studied in [63] as an interesting example of a theory of Goldstone bosons without an Adler zero. This model is not UV complete, so we provide a simple UV completion which leads to well-defined celestial amplitudes. We compare it with the result in the effective field theory (the original model without a UV completion), which yields celestial amplitudes with delta functions in ∆. Then we study a pion-dilaton model and the corresponding conformal soft theorems. Finally, we explore the celestial amplitudes in the NLSM and a certain UV completion of this theory called Z-theory, studied in the context of color-kinematics duality [125,126]. This provides us with the most transparent difference between celestial amplitudes in a UV completed theory and in an effective field theory.
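The convergence conditions above can be packaged into a compact rule (a restatement, assuming the u-exponent in (3.8) is ∆ − 1 as in a standard Mellin transform):

\[
A_n(u) = O(u^{\alpha})\ (u\to 0), \quad A_n(u) = O(u^{b})\ (u\to\infty)
\;\Longrightarrow\;
\int_0^\infty du\, u^{\Delta-1} A_n(u)\ \text{converges for } -\alpha < \operatorname{Re}\Delta < -b .
\]

The strips quoted below are consistent with this rule: the U(1) model strip (−2, 0) corresponds to α = 2, b = 0, and the O(N) model strip (−1, 0) to α = 1, b = 0 (n = 4, so b = 2 − n/2 = 0 for a renormalizable theory). The mechanics can also be checked numerically on a toy amplitude with a single Euclidean-type resonance pole; the value of µ², the shape of the toy amplitude, and the use of mpmath below are illustrative choices, not the paper's conventions. The closed form π µ^{2∆}/sin(π∆) is what the Hankel-contour/residue evaluation of Appendix A produces for this integrand, with simple poles at integer ∆ whose residues encode the IR/UV expansion coefficients:

```python
import mpmath as mp

mu2 = mp.mpf(2)                                  # hypothetical resonance mass^2
amp = lambda s: mu2 / (s + mu2)                  # toy amplitude: O(1) in IR, O(1/s) in UV

mellin   = lambda d: mp.quad(lambda s: s**(d - 1) * amp(s), [0, 1, mp.inf])
analytic = lambda d: mp.pi * mu2**d / mp.sin(mp.pi * d)   # residue-theorem result

d = mp.mpf('0.37')                               # inside the fundamental strip (0, 1)
print(mellin(d))                                 # numerical Mellin transform
print(analytic(d))                               # agrees with the closed form

# The continuation has poles at integer d; e.g. the residue at d = 0
# reproduces the leading IR coefficient amp(0) = 1:
print(mp.limit(lambda e: e * analytic(e), 0))
```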
U(1) linear sigma model

The simplest example of a theory with interacting Goldstone bosons is the U(1) linear sigma model with the Lagrangian given below. Expanding around the classical vacuum φ = µ/√λ and using the standard parametrization, we identify the field π as the Goldstone boson of the spontaneously broken U(1) symmetry and σ as the massive Higgs particle with mass µ. Explicitly, we obtain the interaction Lagrangian, and the tree-level four Goldstone boson amplitude π(p_1) + π(p_2) → π(p_3) + π(p_4) then follows, where the Mandelstam variables are defined as usual and the parameter a is related to the CMS scattering angle and the conformal cross-ratio (cf. (4.4)). The asymptotic expansions of the amplitude for s → 0 and s → ∞ at fixed a read (6.8) and (6.9) (here we use an obvious shorthand notation). The non-universal part G_4(∆, t_4) of the celestial amplitude can be expressed according to the general formulae (4.22), (4.23). The definition of the conformal cross-ratio t_4 was recalled above, and for ∆ see (3.9). By inspection of (6.8) and (6.9), the fundamental strip for this Mellin transform is (−2, 0). For ∆ inside this strip, we easily find the result using the residue theorem (for a general formula see (3.19)). The result is a meromorphic function with simple poles for ∆ = k ∈ Z, k ≠ −1, and corresponding residues (see Fig. 3). These correspond, for k ≤ −2, to the coefficients of the asymptotic expansion (6.8) and, for k ≥ 0, to the coefficients of the asymptotic expansion (6.9). Note, however, that the pole at ∆ = 0 sits to the right of the fundamental strip, and it therefore reflects the leading order term of the asymptotics at s → ∞. This is in apparent contradiction with the usual expectation that a pole at ∆ = 0 encodes the soft behavior of the amplitude.

Let us now comment on the limit µ → ∞, i.e. the limit of an infinitely heavy σ resonance. In the case of spontaneously broken symmetry, this limit is non-decoupling: the Feynman diagrams with σ exchange do not decouple, and their remnant cancels the contribution of the contact term. This can also be understood at the Lagrangian level when we integrate out the σ resonance at the leading order in the derivative expansion. Indeed, changing the parametrization of the original Lagrangian appropriately, at the leading order of the low-energy expansion the field Σ freezes at its vacuum expectation value Σ = 2µ/√λ, and the leading order effective Lagrangian for the Goldstone boson is just the free theory. For the celestial amplitude, however, this effective description cannot be reproduced, since G_4(∆, t_4) is sensitive to the full energy region. Indeed, using the formula (6.14), we obtain the corresponding limit. Contrary to the usual intuition, the µ → ∞ limit is connected with the residue of the celestial amplitude at ∆ = 0, which, as we have discussed above, reflects the leading order UV behavior of the amplitude A_4(s, t; u). This is another demonstration of a general phenomenon: we cannot easily extract the celestial amplitude of an effective theory (here for µ → ∞) from the celestial amplitude of the complete theory.

O(N) sigma models

A straightforward generalization of the previous example is the O(N) linear sigma model with a real scalar N-plet Φ and the corresponding Lagrangian. Expanding around the classical vacuum, we get the Lagrangian for the π and σ fields. The theory contains N − 1 massless Goldstone bosons π^a, a = 1, . . . , N − 1, corresponding to the spontaneous breaking of the O(N) symmetry down to O(N − 1), and one massive resonance σ.
The four Goldstone boson amplitude π^a π^b → π^c π^d now follows. The fundamental strip for the Mellin transform is now (−1, 0), as evident from the high and low energy limits of the amplitude. The non-universal function G^{abcd}_4(∆, t_4) can be easily obtained. The result can be analytically continued to a meromorphic function with simple poles for ∆ = k ∈ Z; the corresponding residues are given in (6.20). Again, the pole k = 0 lies to the right of the fundamental strip and therefore reflects the UV limit of the original amplitude. It is the pole k = −1 (which was absent in the previous example) which reflects the leading IR asymptotics, i.e. the O(s) behavior of the amplitude for s → 0, (6.21). Note that also in this case the limit of a heavy σ particle leads to a vanishing amplitude, while the same limit yields a nonzero celestial amplitude. We again conclude that the µ → ∞ limit of the celestial amplitude probes the UV region, since it is connected with the residue at ∆ = 0. This can be compared with the amplitude of the effective low energy theory describing the dynamics of the Goldstone bosons only, namely the O(N) nonlinear sigma model. The latter can be obtained by integrating out the σ resonance. The corresponding momentum space amplitude coincides with the leading O(p²) term of the asymptotic expansion (6.21). It definitely differs from the µ → ∞ limit of G^{abcd}_4(∆, t_4), as above. Note that in both cases of effective field theories the celestial amplitudes include delta functions and can only be interpreted in the generalized sense of distributions.

UV completion of the U(1) fibered CP^{N−1} model

The U(1) fibered CP^{N−1} model represents a non-trivial example of a theory of Goldstone bosons with Adler zero violation and with an interesting form of soft theorems. It describes the dynamics of (N − 1) charged and one neutral Goldstone bosons corresponding to spontaneous symmetry breaking according to the pattern U(N) → U(N − 1). In the appropriate parametrization (see [63] for more details), the Lagrangian is (6.26), where Φ is an (N − 1)-plet of complex scalars and φ is a real scalar field. The Lagrangian is invariant with respect to the non-linearly realized U(N) symmetry, the U(N − 1) subgroup of which is realized linearly, while the remaining broken generators give rise to infinitesimal generalized shift symmetry transformations with a real parameter a and a complex (N − 1)-plet of parameters a. These symmetries are responsible for the soft theorems of the amplitudes. Let us present them here for the case N = 2 for simplicity¹⁰. The soft neutral Goldstone bosons φ(k) obey the usual Adler zero, (6.31), while the soft charged Goldstones φ^±(p) (built from the two components of the field Φ) violate the Adler zero. In this case, the charged Goldstone boson soft theorem acquires a nontrivial right-hand side, which is a linear combination of lower point amplitudes with appropriately changed flavor in accord with charge conservation (in the following formula, the hat means the omission of the respective particle from the list):

A(p^±_2, . . . , p^±_n, q^∓_1, . . . , q̂^∓_k, . . . , q^∓_n, q_k, k_1, . . . , k_m). (6.32)

Remarkably, this model can be partially UV completed (in the sense of having softer UV amplitudes, but still not leading to an asymptotically free theory) without violating the above soft theorems for the purely Goldstone boson amplitudes.
The minimal variant of such a UV completion contains, on top of the Goldstone bosons, two additional massive scalar resonances and one massive vector resonance, and can be described as follows. Let us assume two linear U(N) multiplets, namely one complex scalar N-plet Φ and an additional complex singlet scalar ψ, which transform under U ∈ U(N) in the standard way. The last ingredient is the U(1) gauge field A_µ associated with the U(N) generator T. The renormalizable Lagrangian describing the UV completion of the above sigma model is then (6.35), where the covariant derivatives introduce two gauge couplings g_{1,2}. The classical vacua can be chosen appropriately; the broken generators are then linear combinations of the U(N) generator T = 1_N and the SU(N) generators, where e_i are the column vectors with components e^j_i = δ^j_i. An appropriate parametrization of the quantum fluctuations around the classical ground state is (6.37). Here the fields ϕ, χ and Φ play the role of the Goldstone bosons corresponding to the broken generators (ϕ, χ are related to the linear combination of the generators T and Z, while Φ and Φ⁺ are associated with the broken generators Z^±_i = X_i ± iY_i). According to the Higgs mechanism, since the gauged generator is broken, one linear combination of the Goldstone bosons is eaten to generate the mass of the gauge field A_µ, so we end up with an (N − 1)-plet of charged Goldstone bosons Φ and with one neutral Goldstone boson φ, which represents a linear combination of the above fields ϕ and χ with mixing angle ϑ,

φ = χ cos ϑ − ϕ sin ϑ. (6.38)

The orthogonal combination

η = ϕ cos ϑ + χ sin ϑ (6.39)

is eaten by the Higgs mechanism. The fields σ and h form two massive Higgs scalars with masses m_{1,2}, corresponding to linear combinations of σ and h with mixing angle θ. The last ingredient of the particle spectrum is the massive vector boson A_µ with mass M. The original parameters of the Lagrangian (6.35) can be expressed in terms of the masses and mixing angles. Inserting the parametrization (6.37) into the Lagrangian (6.35) and passing to the unitary gauge η = 0, we can integrate out the massive particles to the second order in the derivative expansion. At tree level, this means using the classical equations of motion for the heavy fields and expanding the solutions to the desired order in derivatives. In practice, this means setting σ, η → 0 and substituting for the field A_µ the leading order term of its equation of motion. As a result, we obtain as the leading order effective theory just the U(1) fibered CP^{N−1} sigma model with the Lagrangian (6.26), with the appropriate identification of parameters.

The above UV completion of the CP¹ sigma model provides us with a nontrivial example of a renormalizable theory with Goldstone bosons violating the Adler zero and with well defined celestial amplitudes. Here the soft theorems have the general form of (5.1). For instance, take N = 2 and let us discuss the amplitude A_5(1, 2^−, 3^−, 4^+, 5^+). Here we use condensed notation identifying p_i ≡ i and, as above, the superscript denotes the charge. According to (6.32), we get e.g. (6.44), and observe that the leading order term of the momentum space soft theorem on the right hand side is constant in the energy of the soft (fifth) particle; thus the limit ∆_5 → 0 probes the leading term of the conformal soft theorem. According to the general discussion, we expect the conformal soft theorem in the form (5.7)¹¹. All the functions F_k have a nonempty fundamental strip; e.g., the function F^{(0−−++)}_5(∆, t_4, t_5)
has a fundamental strip (−1, 0), which can be compared with the same function within the effective theory, where the fundamental strip is empty¹². The conformal soft theorem (6.45) is supposed to hold in the intersection of the fundamental strips of all the F_k. In Appendix C we present explicit results for the 4pt and 5pt amplitudes and explicitly verify both of the above soft theorems (6.44) and (6.45).

Conformal soft theorem as a Ward identity

Usually, soft theorems in momentum space associated with the 4D bulk are encoded in the boundary 2D CCFT in terms of the OPE¹³ of the soft particle operator, interpreted as a current on the celestial sphere, with the other primary (hard) fields. However, this is not the general case, and the interpretation of the soft Goldstone particle as a local current need not be possible. We will illustrate this phenomenon using the U(1) fibered model.

¹¹ The superscript denotes the charges of the particles involved.
¹² Note that F(∆, t_4)_eff differs by the presence of the delta function δ(∆ + 1), which makes the latter well-defined only in the distributional sense.
¹³ The OPE, i.e. the limit of two approaching points on the celestial sphere, probes collinear limits in momentum space. However, when a momentum approaches zero (soft limit) it simultaneously becomes collinear with any other momentum.

Let us start with the general representation of the celestial amplitude in the form (4.43), i.e.¹⁴ the Mellin integral of A_n(√u ε_1σ_1 q(1, 1), . . . , √u ε_nσ_n q(t_n^{−1}, t̄_n^{−1})) (6.49), and formally take the limit ∆_n → 0. Using (4.47) and (4.35), we see that the residue of the celestial correlator for ∆_n → 0 probes the momentum space amplitude in the integrand in the limit when the n-th particle becomes soft. Taking into account (4.48) and the momentum space soft theorem, we can relate the integrand in the above formula to (a linear combination of) the analogous integrands for (n − 1)-point amplitudes. Namely, using (6.32), we obtain the conformal soft theorem in the UV completion of the U(1) fibered CP¹ model in the form (6.51)¹⁵, where the primary field corresponding to a Goldstone boson with charge Q_i is inserted at the point (z_i, z̄_i) on the celestial sphere, X stands symbolically for an additional string of primary operators corresponding to the massive non-Goldstone particles, and we introduced a shorthand notation for the charge-flipped particles on the right-hand side of the soft theorem¹⁶ (6.52).

¹⁴ Here and in what follows, we tacitly assume the integrands to be calculated at the reference point.
¹⁵ Here the second superscript denotes the charge of the corresponding field. The combination ε± then corresponds to a particle with charge ±, provided ε = 1, and to its antiparticle with charge ∓ when ε = −1. The double superscript ε0 corresponds to a neutral particle for both values of ε.
¹⁶ In words, neutral Goldstones get flipped to positive/negative charge depending on whether they are incoming/outgoing, while charged Goldstones get flipped to neutral ones (and their causality is of course always preserved). The soft symmetry does not act on the other massive fields X.

Note that the right hand side of the above identity (6.51) does not depend on (z_n, z̄_n) at all. This suggests that the single insertion formally behaves as a global operator.
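The z-independence just noted dovetails with the shadow interpretation given below. As a reminder (a standard 2D CFT formula; normalization conventions vary), the shadow of a scalar primary of dimension ∆ has dimension 2 − ∆:

\[
\widetilde{O}_{2-\Delta}(z,\bar z) \;\propto\; \int d^2w\,\frac{O_\Delta(w,\bar w)}{|z-w|^{2(2-\Delta)}}\,,
\]

so for ∆ = 2 the kernel degenerates to a constant and the shadow reduces to the integrated operator ∫d²w O_2(w, w̄), which carries no dependence on the insertion point. This is consistent with the suggestion below that Q^{ε±} is the shadow of a local primary with ∆ = 2.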
Let us recall that in the general case, global Ward identities corresponding to a global symmetry transformation acting on operators O_i(z, z̄) as in (6.54) have the form (6.55), where A(z, z̄) stands for a local operator insertion expressing the explicit or anomalous violation of the symmetry (6.54). Comparing this with the soft theorem (6.51), it seems reasonable [127][128][129] to suppose that the operator Q^{ε±} is in fact a shadow transformation of some local primary operator A^{ε±}_∆(z, z̄) with scaling dimension ∆ = 2, i.e. (6.56). The conformal soft theorem (6.51) might then be interpreted as the global Ward identity corresponding to the explicitly or anomalously broken symmetry which acts on the primaries as in (6.57), with δ^{ε±}X = 0. Here X stands for the conformal primaries corresponding to the massive non-Goldstone particles.

The pion-dilaton model

In this section, we introduce a simple renormalizable model with spontaneous symmetry breaking which exhibits another feature of Goldstone bosons on the celestial sphere, namely conformal soft theorems of an unusually complex form. The Lagrangian is given below. The classical ground state satisfies a constraint, and we choose the broken phase with vacuum expectation values for which both of the above symmetries are broken. An appropriate parametrization of the fluctuations around the classical ground state is then introduced. The new fields transform with respect to the rotations and with respect to the scale transformations in the standard way. Thus the "pion" π is a Goldstone boson of the SO(2) breaking, while the "dilaton" a corresponds to the breaking of scale invariance. In these variables, the Lagrangian can be rewritten, where we expressed the vacuum expectation value v in terms of the mass m² = λv² of the σ particle. The tree-level amplitudes are subject to the following soft theorems, with non-trivial right-hand sides given by linear combinations of lower point amplitudes (in the same theory):¹⁷

1. The soft pion theorem, which reflects the spontaneous breaking of the SO(2) symmetry. The leading soft term on the right hand side is constant in the energy of the soft pion, and thus the leading conformal soft theorem will be probed by the limit ∆^(π) → 0.

2. The soft dilaton theorem, which is a consequence of the nonlinearly realized dilatation symmetry. Here the superscript f_i = π, a, σ denotes the flavor of the corresponding external particles. The leading soft term on the first line is proportional to the inverse energy of the soft dilaton, while the subleading one on the last two lines is constant. Via the Mellin transform, this corresponds to ∆^(a) → 1 for the leading conformal soft theorem and ∆^(a) → 0 for the subleading one.

Taking into account the homogeneity of the amplitudes as functions of the momenta and the mass parameter m, we can rewrite the right-hand side using the Euler theorem. Here the partial derivative with respect to m takes into account only the explicit dependence of the vertices and propagators (i.e. not the implicit dependence stemming from the massive momenta). Let us demonstrate the validity of these theorems on the following simple example of a tree-level 4pt amplitude corresponding to the scattering ππ → aσ. There are just two tree-level Feynman diagrams, leading to (6.69). The 3pt amplitudes entering the right hand side of the leading soft pion theorem (6.65) take a simple form, and as a consequence of the 3pt kinematics, lim_{p_1→0} 2(q · k) = −m².
Thus the soft pion theorem is manifest. Similarly, from (6.69), and since we have the exact identity (6.72), the validity of the soft dilaton theorem is verified. Let us now discuss the conformal soft theorems for the celestial amplitudes.

Conformal soft theorems

The conformal soft pion theorem is similar to the one discussed in section 6.3. It can be obtained by means of a transformation of both sides of (6.65) to the celestial sphere and taking the residue in one of the conformal weights of the pions at ∆^(π) = 0. As a result, we obtain (6.73), where f_i = π, a, σ stands for the flavor of the particles and we introduced the corresponding notation. The form of the soft theorem can be interpreted as a global Ward identity (6.55), with the soft insertion defined as an integrated operator of an exactly marginal shadow dual local field. The leading conformal soft dilaton theorem corresponds to the residue in the dilaton conformal weight at ∆^(a) = 1. A direct but rather lengthy calculation shows that this conformal soft theorem takes a rather complicated form, namely (6.75). All the details are discussed in Appendix D, including special limits. The form of (6.75) suggests that the dilaton soft theorem cannot easily be interpreted as a Ward identity. In fact, the right hand side of (6.75) is not just a sum of celestial correlators as in (6.73), but is further integrated with a certain integration kernel. The subleading conformal soft dilaton theorem corresponds to the limit ∆^(a) → 0; more details can again be found in Appendix D.

Z-theory: UV completion of the non-linear sigma model

Quite recently, an interesting stringy UV completion of the nonlinear sigma model relevant for the dynamics of mesons was found, namely the so-called Abelian Z-theory [125,126]. It is related to the Z-theory disc integrals, which originally depend on two orderings σ, ρ ∈ S_n,

Z_ρ(σ(1), . . . , σ(n)) = α′^{n−3} ∫_{D(ρ)} . . . ,

where the integration domain is D(ρ) = {(z_1, z_2, . . . , z_n) ∈ R^n, −∞ < z_ρ(1) < z_ρ(2) < . . . < z_ρ(n) < ∞}. In [125] it was claimed that the field theory limit of the amplitudes Z_× corresponds to the stripped amplitudes of the nonlinear sigma model, including higher order corrections with specific values of the higher order low energy constants. Here we will concentrate on the four-point amplitude, which is known explicitly and reads (6.81) (in what follows, we use units in which α′ = 1 for simplicity), where B(a, b) is the Euler beta function and s, t and u are the usual Mandelstam variables¹⁸ (4.18). The transformation of this amplitude to the celestial sphere is, according to (4.23) and (4.24), determined by the non-universal conformally invariant function G_4(∆, t_4), where t_4 = z_1234 is the cross-ratio and ∆ is given by (3.9). Explicitly, it is given by (6.83), where, according to our general prescription, the poles on the real axis should be approached from the upper complex half-plane (see Fig. 5). Let us first discuss the general properties of G_4(∆, t_4). It is given by the above Mellin transform, and the corresponding fundamental strip is determined by the asymptotics of Z_4(s, t_4) for s → 0 and s → ∞. The low energy behavior is fixed by the nonlinear sigma model amplitude (see section 4.3 for the discussion of NLSM amplitudes), while the high energy asymptotics (6.85) is valid for¹⁹ |arg(z)| < π − δ, for any 0 < δ < π. Let us recall that the kinematic constraints give t_4 > 1. Suppose first that s is complex, s → z = te^{iε}, and let t → ∞.
Then for 0 < δ < |ε| < π − δ and for t large enough, the arguments of −z, 1 + z/t_4 and 1 − t_4^{−1}z satisfy the applicability conditions of the above asymptotic relation (6.85). We then obtain the asymptotic behavior, where

η(z) = sign(Im(z)). (6.88)

¹⁸ The α′ dependence can be easily restored by the substitution s → α′s, t → α′t, u → α′u.
¹⁹ Here we take arg(z) ∈ (−π, π).

For the calculation of the Mellin transform, we define the amplitude for real s to be the boundary value of the function Z_4(s, t_4) which is analytic in the upper complex half-plane. This defines a prescription for avoiding the poles of the Mellin integrand on the real axis: they are bypassed by deforming the integration contour into the upper half-plane.

[Fig. 4. Left: the integrand of (6.83). It has poles both on the positive and negative real axis, thus infinitesimal wedges (shaded gray) where the asymptotics cannot be controlled have to be excised. On the green arcs at infinity the integrand is exponentially suppressed, while on the red ones it is large. The Mellin transform is defined by an integral along the contour C_ε, such that it lies outside the excised wedge and asymptotes to a region where the behavior of the integrand can be controlled. Both the contour C_ε and the excised wedge are pushed onto the real axis by a limiting process, but their relative position is fixed as depicted. Right: Cauchy contour argument for the limit in (6.89).]

The Mellin integral can then be defined as (6.89). Applying the Cauchy theorem to the closed contour depicted in the right panel of Fig. 4, and taking into account the exponential suppression of the integrand along the arc z = Re^{iφ}, φ ∈ (ε, ε′), for R → ∞, we find that the integral in fact does not depend on ε even before taking the limit ε → 0⁺, and the limit therefore need not be taken. Using the above asymptotics of Z_4(te^{iε}, t_4), we can conclude that the fundamental strip of the Mellin transform of the function Z_4(te^{iε}, t_4) is (−1, ∞). Unfortunately, we are not able to calculate the Mellin transform explicitly, but we can still derive some of its properties. In particular, inside the fundamental strip G_4(∆, t_4) is holomorphic, and in the rest of the complex plane it is meromorphic with simple poles at ∆ = −1, −2, . . ., whose residues are in one-to-one correspondence with the coefficients of the low energy expansion of the amplitude Z_4(te^{iε}, t_4). Namely, as a consequence of (6.84), the singular expansion²⁰ of G_4(∆, t_4) reads (6.90). This is a formal sum, as it is not convergent. The simple poles of (6.90) are the only singularities of the function G_4(∆, t_4). This can be compared with the case of the leading order nonlinear sigma model amplitude, where the celestial amplitude has a δ-function singularity. Interestingly, this result can be derived formally directly from the stringy function G_4(∆, t_4) as follows (cf. [110]). Note that the field theory limit of the amplitude Z_4(s, t_4) can be obtained as the α′ expansion for α′ → 0. Restoring the α′ dependence by setting s → α′s, we get the corresponding expansion and thus the field theory limit. On the other hand, the α′ dependence of the Mellin transform G_4 is simple; therefore, formally interchanging the limit and the integration and using the formula (6.14) together with the singular expansion (6.90), we recover the NLSM result. Note again that the celestial amplitude for the Z-theory is a well-defined function, while its counterpart in the effective theory is only defined in the distributional sense, as evident from the presence of the delta function δ(∆ + 1) in G^{NLSM}_4(∆, t_4).
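The "simple α′ dependence" invoked above is just the scaling property of the Mellin transform; a one-line sketch (with the u-exponent ∆ − 1 assumed as before):

\[
\int_0^\infty ds\; s^{\Delta-1} f(\alpha' s) \;=\; \alpha'^{-\Delta} \int_0^\infty dx\; x^{\Delta-1} f(x)\,, \qquad x = \alpha' s ,
\]

so the whole α′ dependence of G_4 is carried by an overall factor α′^{−∆}, and the formal α′ → 0 limit can be traded for a statement about the ∆-dependence of G_4, which is how the δ(∆ + 1) of the NLSM answer can emerge from the stringy function.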
In Appendix E, we comment on a truncated version of the Z-theory 4pt amplitude for which the Mellin transform can be performed explicitly.

Towards a celestial soft bootstrap

In this section we present several ideas on how to use momentum space methods to calculate celestial amplitudes. We first discuss recursion relations and their soft extensions, leading to recursive formulas that include products of celestial amplitudes with one on-shell massive leg.

Towards celestial recursion relations

As we have seen in (3.8), the Mellin transform of the massless amplitude is proportional to the Mellin integral over A_n(√u ε_1σ_1 q(z_1, z̄_1), . . . , √u ε_nσ_n q(z_n, z̄_n)), (7.1), which can be calculated by the residue theorem using the Hankel contour as described in Appendix A (cf. formula (3.17)). Note that the configuration of the null momenta is constrained by the δ-function in (3.8) to satisfy momentum conservation, and the rescaling thus represents a valid all-line BCFW-like shift. The residues at the poles in formula (A.21) then correspond to the massive factorization channels of the amplitude, and are therefore determined by one-particle unitarity, namely (7.4) (see Fig. 2), where I stands for the collection of the quantum numbers of the one-particle intermediate states with momentum √u_F Q_F and mass M_F, and F^c is the complementary set of momenta with respect to F. According to the sign of Q²_F, √u_F is either real or purely imaginary; therefore √u_F Q_F is always time-like and on shell (recall (7.3)). The amplitudes A_F and A_{F^c} on the right hand side of (7.4) are then in general on-shell amplitudes analytically continued to complex momenta. The formula (7.4) suggests a possibility to calculate the Mellin transform (3.3) recursively²¹. The computation splits into two distinct branches, according to the causality of Q_F, which have to be dealt with separately. A detailed derivation can be found in Appendix F; here we present the final result (7.5), where we abbreviated the celestial amplitudes with just one massive external state I as in (7.6), and similarly for the generalized celestial amplitude (the full definition can be found in Appendix F).

²¹ Celestial BCFW recursion was treated e.g. in [43,108,130].

In the formula (7.5), the integrals are over the parameters of the internal massive leg. Importantly, the individual terms in the sum are products of (generalized) celestial amplitudes where one of the on-shell legs is massive. Therefore, we cannot continue the recursion beyond the first step unless we consider a more general framework with both massive and massless legs for the celestial amplitudes. This is of course possible to do, but it goes beyond the scope of this paper.

Celestial six-point functions in soft-reconstructible theories

We can apply the same logic to soft-reconstructible theories. These are theories which fail the BCFW criterion of vanishing at infinity, but whose deficiency is compensated by improved soft behavior; see section 2 for a review. We will concentrate on amplitudes in these theories at 6pt. Let us recall that the general form of the 6pt function of massless scalar particles is given by the formula (4.42), with the Mellin integrand A_6(√u ε_1σ_1 q(1, 1), . . . , √u ε_6σ_6 q(t_6^{−1}, t̄_6^{−1})). If the amplitude A_6(p_1, . . . , p_6) is reconstructible, i.e. σ ≥ ρ ≥ σ − 1, the function A_6(σ_6) can be reconstructed from its poles and zeroes (the elementary identity behind this reconstruction is sketched below).
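For orientation, the reconstruction rests on the following elementary fact (a sketch; the actual derivation below applies it to a function built from A_6(σ_6) and keeps track of the kinematical measure): if f(z) is rational, vanishes as z → ∞, and has only simple poles z_F, then

\[
f(z) \;=\; \sum_F \frac{\operatorname{Res}\,[f, z_F]}{z - z_F}\,,
\]

which follows from applying the residue theorem to f(w)/(w − z) over a circle of radius R → ∞. In the text, the absence of a residue at σ_6 → ∞ plays the role of the vanishing condition, and the poles are the factorization channels (7.16).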
Namely, applying the residue theorem to this function, and taking into account that there is neither a residue at σ_6 → ∞ nor at σ*_i(σ_6) → 0, we get (7.18). The residue res[A_6(z), z_F] can be calculated using the factorization at the pole (7.16), where A^F_{L,R}(z_F) are the lower-point amplitudes with momenta {ε_i σ*_i(z_F) q_i}, i ∈ F, F^c. Finally, using (7.18), we can write²² the celestial amplitude in a form where we introduce a theory-independent universal function, depending only on the 6pt kinematics (the last line of its defining formula is written in a more symmetric form). It would be interesting to extend this procedure to a general n-pt amplitude.

²² Here we tacitly assume q_3 = n as above.

Soft bootstrap on the celestial sphere?

It is natural to explore the possibility of a bootstrap approach on the celestial sphere. In this approach, we would like to expand the celestial amplitude A_n in terms of some well-defined and carefully chosen building blocks B_k with arbitrary coefficients c_k, impose a set of suitable conditions, and fix the coefficients c_k uniquely. These conditions should include the celestial avatars of factorization and soft limit properties (and any others, like double soft limits and collinear limits). In a trivial sense, this could be done by writing the B_k using the Mellin transforms of momentum space building blocks B_k(p_j) and imposing the conditions directly on the B_k(p_j). This just reduces the problem back to the momentum space problem, and we have not really learned anything. Ideally, we would like to impose the constraints directly on the B_k. While the constraints of locality and unitarity in this basis are not well understood, the celestial basis conjecturally makes manifest properties of amplitudes not visible in the momentum space representation. However, our current understanding of celestial amplitudes is not deep enough to formulate and efficiently run such bootstrap methods. We leave that for future work.

Summary

In this paper we continue the effort (initiated e.g. in [93,127,128,131]) of analyzing celestial amplitudes of Goldstone bosons and their conformal soft theorems. Our intention was to focus on the form of soft theorems for effective theories of Goldstone bosons in the celestial basis. In particular, we have collected a representative selection of soft behaviors of scalar particles and transformed it to the celestial basis, thus obtaining a database of conformally soft theorems for scalar Goldstone bosons. One takeaway message is that for celestial amplitudes of massless particles to be well defined, the fundamental strip associated with the Mellin transform has to be non-empty. It is governed by the low and high energy properties of the amplitude, and there is a tension between them. An amplitude with enhanced soft behavior but bad high energy behavior can have a well defined celestial dual. The opposite is also true: a very soft UV behavior (as in string theories) does not require particularly nice IR properties for the celestial amplitude to be well defined. In this work we emphasize the IR properties, considering the soft behavior as fixed and described by a given effective theory of Goldstone bosons (a non-linear sigma model). Then, in order to obtain well defined celestial amplitudes, we first have to construct partial UV completions of these effective theories, namely renormalizable theories known as linear sigma models.
In particular, our examples encompass two linear sigma models (the so-called U(1) and O(N) sigma models), including massive particles in addition to the Goldstone bosons. The next entry on the list of theories is the U(1) fibered CP^{N−1} non-linear sigma model. It has non-zero odd amplitudes and thus allows for soft limits with non-trivial right hand sides. In accord with the strategy outlined above, we then constructed its partial UV completion and computed celestial amplitudes within this linear sigma model. The final example of this kind, i.e. a pair consisting of a non-linear sigma model and its linearized version, is the pion-dilaton model. It is presented last since it features the most complex soft behavior. In particular, its soft theorems in momentum space depend on the momenta of the massive particles. This has far-reaching consequences for the associated conformally soft theorems and causes difficulties in their interpretation as Ward identities. For this reason it requires further study, especially along the lines of [129,132]. The closing pair of theories we discuss is of a slightly different nature, in that the UV completion is not just a partial one. As the effective theory we consider the SU(N) nonlinear sigma model relevant for low energy QCD. Its conjecturally full UV completion of stringy nature (therefore UV finite) is known as Z-theory [125]. Despite the fact that we did not manage to compute the 4pt celestial amplitude in closed form, we were still able to comment on its properties and derive a truncation that approximates it to some degree. In all the examples listed above we compared the celestial amplitudes for the (partial) UV completions with those in the low energy effective theories and commented on various decoupling limits. We closed the paper by investigating a BCFW-like celestial recursion. At this point it does not have direct applicability, as it requires celestial amplitudes with one massive leg as input, and those are technically hard to compute.

Acknowledgments

We thank Paolo di Vecchia for turning our attention to the pion-dilaton model. This work is supported by GAČR 21-26574S, DOE grant No. SC0009999 and the funds of the University of California.

A Notation and conventions

We use the mostly minus metric. The spinors corresponding to a light-like momentum p, p² = 0, p⁰ > 0, are given as follows, where |p⟩ is right-handed and |p] left-handed. Then, defining σ^µ = (1, σ^i) and σ̄^µ = (1, −σ^i), we have the standard relations²³, from which the spinor products and the scalar products can be read off. Under Lorentz transformations Λ the spinors transform according to the SL(2, C) matrices U_R(Λ) and U_L(Λ), with θ(Λ, p) = 2 arg(cz + d); the transformed spinors then take the corresponding form. The invariant measure on the forward light cone is denoted d̃p. For massive particles with mass m, an appropriate parametrization of the on-shell momenta is used, where n = (1, 0, 0, 1)/2. The invariant measure on the two-sheeted mass hyperboloid p̂² = 1 then takes a form in which the upper sheet is covered by y > 0, while the lower one by y < 0.

²³ Note that in the embedding formalism q(z, z̄) corresponds to the q⁻ = 1 section of the Minkowski light-cone. Most authors in the celestial literature use either the q⁺ = 2 or q⁺ = 1 section.

Some useful integrals

Here we give a survey of the integrals I_n for n = 2, 3 which are used in the main text. These are a special case of contact (Euclidean) AdS Witten diagrams, and a much more complete analysis of their structure can be found e.g.
in [133][134][135][136][137][138]. Using the Schwinger parametrization, we obtain a representation in which the standard relations hold. Due to the Lorentz invariance of I_n({∆_i, z_i}), we can use the frame where Q = (Q_0, 0) with Q_0 = √(Q²), and then the integral simplifies. The integration over w is elementary, with a result in which we denoted t = yQ_0. For n = 2 we have Q² = α_1α_2 |z_1 − z_2|². Let us integrate over α_1 first and then over t; we then obtain the final result for n = 2. For n = 3 we can change variables, with the corresponding inverse and Jacobian; the integral (A.4) then factorizes, and as a result of the above integration one obtains the n = 3 result.

Hankel contour

The Mellin integral from section 3.3 can be calculated using the complex Hankel contour C_H, consisting of the two straight lines C_± infinitesimally above and below the positive real axis and an infinite-radius circle C_R centered at the origin (see Fig. 5).

[Fig. 5. Left: the physical amplitude is defined by approaching the real axis from above, which results in the prescription that poles are avoided on upper semi-circles. Right: by contour deformation, the Mellin integral can be evaluated by employing the Hankel contour. Poles with both u_F > 0 (blue) and u_F < 0 (red) contribute. The value of the integral on C_+ and C_− differs by a phase, while C_R and C_ε can be dropped.]

For tree-level amplitudes and for ∆ from the above region, the only singularities of u^{∆−1}A_n(u) are the branch point at u = 0 and the simple poles u_F corresponding to the massive factorization channels F (see Fig. 2), which are determined by the on-shell conditions. This leads to the location of the simple pole corresponding to a given factorization channel F. Due to the asymptotics (3.14), the integral I_R around the infinite circle vanishes, while the integrals on the half-lines above and below the positive real axis give (A.17), respectively (the second terms on the r.h.s. come from the infinitesimal semicircles around the blue poles in Fig. 5, while the overall phase in I_{C_−} comes from looping around the u = 0 branch point on C_ε); the remaining integral is understood as the corresponding principal value. On the other hand, according to the residue theorem, the integral over the Hankel contour is given by the sum of the residues inside C_H (the red poles in Fig. 5). Supposing the usual Feynman iε prescription for the propagators (i.e. the u Mellin integral is defined by the contour in the left panel of Fig. 5), we get²⁴ expressions which can be combined into the final formula.

B Explicit formulas for σ*_i

Here we summarize the explicit solutions (3.24) of the delta function constraints (3.20) (four from momentum conservation and one from scaling) needed to construct the lower point (4, 5, 6-pt) celestial amplitudes.

B.3 The 6pt amplitude

Here we assume the process 1 + 2 + 3 → 4 + 5 + 6. At the 6pt reference point (z_1, . . . , z_6), we have the general formula in which the σ^{(5)*}_i are the solutions for the 5pt kinematics listed above, with A given by equation (B.1).

C Amplitudes and soft theorems for the U(1) fibered model

Let us discuss in detail the 5pt amplitudes in the model of section 6.3. We consider the amplitude A_5(1, 2^−, 3^−, 4^+, 5^+) in the UV completion of the U(1) fibered CP¹ sigma model. Here we use condensed notation identifying p_i ≡ i and, as above, the superscript denotes the charge. According to (6.32), we obtain the soft theorem and observe that the leading order term of the momentum space soft theorem on the right hand side is constant in the energy of the soft (fifth) particle; thus the limit ∆_5 → 0 probes the leading term of the conformal soft theorem.
We therefore expect the conformal soft theorem in the form (6.45), which is supposed to hold in the intersection of the fundamental strips. This can be confirmed explicitly. The tree-level momentum space 5pt amplitude is (C.3), where, as usual, s_ij = (p_i + p_j)² and all momenta are treated as outgoing. Clearly, the soft limit for the neutral Goldstone boson, p_1 → 0, means s_1j → 0, and we get the Adler zero in accordance with (6.31). On the other hand, the soft limit for the charged one, p_5 → 0, is given in (C.4). It is then straightforward to verify explicitly the validity of the (momentum) soft theorem (6.44) within the UV completed linear model. Alternatively, we can verify the validity of the (momentum) soft theorem directly in the effective theory, i.e. within the U(1) fibered CP¹ nonlinear sigma model. Expanding the above amplitudes in s_ij and keeping only the leading order terms, we can easily obtain the effective amplitudes (cf. (6.43) and ref. [63]). By inspection we immediately conclude that the (momentum) soft theorem (6.44) holds manifestly within the effective theory. In order to check the conformal soft theorem in the general form (6.45), we have to calculate explicitly the function F^{(0−−++)}_5(∆, t_4, t_5). Let us first check the validity of the conformal soft theorem (6.45) for the effective theory amplitudes. In this case, the fundamental strips are empty and we have to treat the Mellin transform in the sense of distributions. Explicitly, we get (C.8), where we denoted the kinematical invariants Σ^{(4)}_ij; on the right-hand side we insert the reference points (z_1, . . .) for the Σ^{(4)}_ij, and the limit ε → 0 is tacitly assumed. Since (cf. Appendix B) the corresponding fundamental strips can be identified as (−1, 1) and (−1, 0), respectively, the intersection of the fundamental strips for the left-hand side and the right-hand side of the conformal soft theorem (6.45) is nonempty. Though the Mellin transform (5.6) of the amplitude (C.3) is less trivial, it can nevertheless be easily obtained using the general formula (3.19). As a result, we get (C.13). It can be shown by explicit calculation that for ∆ in the intersection of the fundamental strips of the 5pt and 4pt amplitudes, i.e. for ∆ ∈ (−1, 0), the t̄_4 → t_4 limit of the individual members of the first sum on the right-hand side of (C.13) either vanishes (for (i, j) = (2,5) or (3,5)) or is finite and can be simply obtained by the replacement of the Σ^{(5)}_ij with their 4pt counterparts. Similarly, the individual members of the second sum on the right-hand side of (C.13) vanish in the limit t̄_4 → t_4 for (i, j) = (2,4) or (3,4), while for (i, j) = (2,5) or (3,5) the limit is finite, though the result is less transparent than above. Finally, we obtain the expected result.

D Conformal soft dilaton theorems

When transformed to the celestial sphere, the following integral I_∆(z, z̄), involving the amplitude A(. . . , m ε p̂^{(σ)}, . . .), appears on the right-hand side of the soft theorem (6.65) as one of the terms in the sum over all massive σ particles (D.2). Since we are dealing with a massive particle, the celestial transform (3.1) is implemented by the bulk-to-boundary propagator G_∆(p̂, q(z, z̄)), where p̂ is the unit momentum of the massive σ particle whose CCFT dual operator O^{ε(σ)}_∆(z, z̄) is inserted at the point (z, z̄) on the celestial sphere. Finally, ωq(w, w̄) is the momentum of the soft dilaton (ω was stripped off, as it represents the energy expansion parameter denoted by t in the second line of (6.65), and the factor of 2 is included for convenience, in order to interpret (2 p̂ · q(w, w̄))^{−1} as a bulk-to-boundary propagator with scaling dimension ∆ = 1).
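For reference, the scalar H_3 bulk-to-boundary propagator invoked here is usually written, up to a ∆-dependent normalization, as a power of the invariant p̂ · q (this form is standard in the celestial literature, though normalization conventions vary):

\[
G_\Delta\big(\hat p;\, q(z,\bar z)\big) \;\propto\; \frac{1}{\big(\hat p \cdot q(z,\bar z)\big)^{\Delta}}\,,
\]

which makes the role of (2 p̂ · q(w, w̄))^{−1} as a ∆ = 1 propagator in (D.2) transparent.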
Let us use the completeness relation (F.12) for the bulk-to-boundary propagators to rewrite I_∆(z, z̄) in the form …. The latter integral can be formally related to a tree-level contact 3pt Witten diagram in EAdS_3, dual to a 3pt correlator in a CFT_2 living on the boundary of EAdS_3. The result of the integration is standard [137] (for completeness, the computation is summarized in Appendix A): …. Putting these ingredients together, we get the leading-order conformal soft dilaton theorem in the form …. Let us also calculate the limit w → z_i, when the momentum of the soft dilaton becomes collinear with that of the i-th σ particle. Starting with (D.2), we get immediately … (D.11). Note that this coincidence limit is regular and that the scaling dimension of the i-th sigma particle got shifted by one unit in the first term on the right-hand side. This result can also be obtained directly from (D.10). Indeed, in the limit w → z_i, we get in the i-th term on the right-hand side …, where we have used (cf. (6.14)) …. The x-integration can be recognized as the shadow transform of the operator O …. This transformation satisfies the identity …, which is valid for the conformal primaries corresponding to the massive scalar particles. Therefore …. Subleading conformal soft dilaton Using the usual parameterization of the massless and massive momenta …, where n = (1/2)(1, 0, 0, 1) (D.18), we can rewrite the subleading soft theorem stated in the last row of (6.65) in the form (see the last section of this appendix for details) …, where we denoted by D_i^{(f)} the following differential operators acting on the amplitude A_n: D_i^{(a,π)} …. The left-hand side of the subleading soft theorem can be identified as the residue at ∆ = 0 of the Mellin-transformed amplitude, i.e. symbolically … (D.23). Similarly, on the right-hand side of the theorem, we can either use the formula (D.25) or insert the completeness relation for the bulk-to-boundary propagators and write A_n(ε_1 p_1, …) …. The latter representation of A_n simplifies the action of the operator D … (D.28), and therefore the transformation to the celestial sphere needs integrals of the type (see Appendix A) … and …, where the coefficient C(∆_1, ∆_2, ∆_3) is defined by (A.13). The conformal version of the subleading soft dilaton theorem then has the form …. This can be further simplified using properties of the shadow transform interpreted in terms of projectors [140], … ∫ d^2x dν µ(ν) …, and thus the subleading conformal soft dilaton theorem finally reads …. Note that, again, the limit z → z_i is regular. Derivation of the reparameterized subleading dilaton soft theorem In this last section of the appendix, we will derive (D.19). Let us parameterize the massive momenta as p = (m/y)(q(w) + y^2 n) (D.34), where q(w) = (1/2)(1 + |w|^2, 2w, |w|^2 − 1), n = (1/2)(1, 0, 0, 1) (D.35), and w is treated as a two-dimensional real vector with components w^a. It then follows …, where ε_a(w) = ∂q(w)/∂w^a (D.37) is the polarization vector corresponding to the null momentum q(w). Note that the vectors q(w), ε_a(w) and n form a basis. For their scalar products we have q(w) · ε_a(w) = 0, ε_a(w) · ε_b(w) = −δ_ab, q(w) · n = 1/2, n · ε_a(w) = 0 (D.38). The on-shell amplitude is then a function of y_i, w_i and the mass m. The latter dependence is both explicit (from vertices and propagators) and implicit (from the dependence of the massive momenta). In the subleading soft theorem (cf.
(6.65)), we need the operators p · ∂/∂p and q(z) · ∂/∂p, where q(z) refers to the soft dilaton and p is the massive momentum. Using the above basis, we can write q(z) = 2(q(z) · n) q(w) + 2(q(z) · q(w)) n − (q(z) · ε_a(w)) ε_a(w), where we have used (D.38) and …. We also find … (D.41), where we denoted by (∂/∂m)_P the derivative acting only on the implicit dependence of the amplitude on m through its dependence on the massive momentum p. Using these formulas, we get …. On the right-hand side of the subleading soft theorem, we have the operator …. Using the above formulas, we get …. Similarly, we need the operator …, where now p_i is a massless momentum. Using the usual parametrization …. Finally, the subleading soft theorem can be written as²⁵ …, where the amplitude is assumed to be a function of ω_i, z_i for massless particles and y_j, w_j for massive ones, and the operators D^{(f_i)} are given by (D.45) and (D.48). E Truncated amplitude of Z-theory Our master formula (3.19) for computing the Mellin transform was derived using a Hankel contour (see Fig. 5). A key step in the argument was the ability to drop the arc at infinity. However, this assumption is not valid for this model, as can be seen from the asymptotic behavior of the integrand (6.83) depicted in the left panel of Fig. 4. Since we cannot apply the master formula to compute the Mellin transform G_4(∆, t_4), let us specialize to a truncated amplitude for which the assumptions about its asymptotic behavior will be fulfilled. The truncation is inspired by the product representation of the beta function … and …, where … is the Pochhammer symbol. Let us note that, unlike B(a, b), which has an infinite number of poles and an essential singularity at infinity, B_n(a, b) is meromorphic and regular at infinity, with a finite number of simple poles. It reproduces the same poles as B(a, b) up to and including a, b = −n + 1. (Footnote 25: Here we assume the momentum-conservation δ-functions to be included in the amplitudes A_n.) For simplicity, we concentrate on the example of the Mellin transform of just one of the terms in (6.81), namely the third one: …. Note that this term is relevant also in a different context, since it is proportional to the "string form factor" F_I(s, u) for the type I open superstring four-gluon MHV amplitude: …. Therefore, by calculating …, we have at the same time also calculated the relevant Mellin transform for the amplitude …. Let us insert, instead of the full beta function, its truncated approximation B_n. Physically, we have truncated the original amplitude to include only a finite number of resonance poles and modified the corresponding residues (cf. (E.13) and (E.14) below). In accord with (E.1), this truncation satisfies …. Z_{4,n}(s, t_4) is therefore holomorphic in the fundamental strip (1, n + 1). The truncated amplitude Z_{4,n}(s, t_4) has simple poles at s = k, k = 0, 1, . . . , n − 1, and at …, k = 1, 2, . . . , n − 1 (E.11). (Footnote 26: Note that in the full amplitude Z_4 the pole at s = 0 is absent due to the presence of two additional terms, while in the form factor F_I it is present and corresponds to the one-gluon exchange.) (Footnote 27: Here we again suppose the poles to be bypassed according to the prescription s → s + i0.) Therefore, according to the general formula (3.19), the Mellin transform of Z_{4,n} reads … (E.15); however, not all of the residues are nonzero. This can be seen as follows.
Observe that the right-hand side of (E.15) can be rewritten in terms of the residue of the rational (since m ∈ Z) function s^{m−1} Z_{4,n}: …. For m > 1, the function s^{m−1} Z_{4,n} is regular for s → 0, and for m < n + 1 this function has a vanishing residue at infinity. Therefore, for 1 < m < n + 1, the right-hand side of (E.16) is minus the sum of the residues at all the poles of the function s^{m−1} Z_{4,n}, which is zero as a consequence of the residue theorem. Thus res(G_{4,n}, m) = 0 for 1 < m < n + 1 (E.17), which means that G_{4,n} is holomorphic in the fundamental strip (1, n + 1), as expected. However, for m ≤ 1 there is one pole more, namely s = 0, which is not included in the sum (E.16); therefore, according to the residue theorem, res(G_{4,n}, m) ≠ 0 and equals res(G_{4,n}, m) = res(s^{m−1} Z_{4,n}, s = 0). As a result, the celestial dual of the truncated (part of the) amplitude Z_{4,n} shares the poles for Re ∆ ≤ 1 with the celestial dual of the original one; however, the truncation of higher resonances results in additional poles in the region Re ∆ ≥ n + 1. However, we cannot claim that the true function G_4 can be obtained as a limit of the truncation G_{4,n} for n → ∞, since the right-hand side of (E.12) does not converge. F Details on the celestial recursion Here we include more details on the BCFW-like recursion presented in section 7. Q_F time-like and u_F positive: Provided Q_F^2 > 0, corresponding to u_F > 0 (blue poles in Fig. 5), we can insert into the right-hand side of (7.4) unity in the form … and rewrite the momentum-conservation δ-function in (3.8) as …. The above factorization of momentum-conservation delta functions allows us to transform the integrand (in σ-variables) of (3.8) into a recursive form … × A_{F^c}({√u ε_j σ_j q(z_j, z̄_j)}_{j∈F^c}, Q, I) (F.4). Inserting this into the right-hand side of (3.8) and substituting back u = s^2, σ_i = ω_i/s, we get for the contribution of a given factorization channel F: (π/sin π∆) e^{πi∆} ∫[dσ, ∆] δ^{(4)}(…) …, where, as usual, …, and similarly for A_{F^c}. Note that these two amplitudes have just one massive particle (the one exchanged in the factorization channel F with momentum Q; see Fig. 2). Our goal is to factorize the Q-integration and turn it into a celestial transform of the massive leg for each of the lower-point amplitudes (notice that all the remaining massless legs are already Mellin transformed). The logic is very similar to the derivation of the celestial optical theorem [142]. It will be done in two steps: (i) rewriting the Q-integration as an integral over the upper sheet of the unit mass-shell hyperboloid; (ii) treating the Q-momenta of the amplitudes A_F, A_{F^c} as independent while inserting a delta function (forcing momentum conservation) in a factorized form based on a completeness relation for the bulk-to-boundary propagator. Turning to step (i), we can write for a general function F(Q) …, where … is a Lorentz-invariant measure on the unit mass hyperboloid Q^2 = 1. The latter consists of two disconnected sheets H_± corresponding to sign Q_0 = ±1. Note that for a general … (F.10), and thus …. Step (ii) consists of exploiting the completeness relation [103, 143] for the bulk-to-boundary propagator (3.1), ∫ dν µ(ν) ∫ d^2z (1/(2 p · q(z, z̄))) …. We now apply steps (i) and (ii), explained above, to the relevant part of (F.5): … (F.14). In the final expression, the integrals over Q and Q′ enclosed within the square brackets are the massive celestial transformations (3.1) of the corresponding on-shell particle to the celestial-sphere primary O_{1−iν}^{ε,I}(z, z̄), and similarly for A_{F^c}.
Thus we managed to completely transform the lower-point amplitudes A_F and A_{F^c} to the celestial sphere. Collecting partial results, the contribution to the full celestial correlator (3.8) from a bulk factorization channel F with internal time-like momentum takes the form (π/sin π∆) e^{πi∆} ∫[dσ] δ^{(4)}(…) …. Let us remark that for a generic factorization channel F both terms in the last sum contribute, as the Q_0 component of the time-like exchanged momentum Q can change sign when the massless states attached to the channel F are Mellin transformed. This situation is illustrated in Fig. 6. For special factorization channels it can, however, happen that Q is either future- or past-oriented (for the whole Mellin integration domain of the massless particles), and thus one of the two terms in the sum vanishes. [Fig. 6 caption (fragment): Right: Mellin transforming massless states attached to the channel F means "summing" (weighted) amplitudes over null momenta on half light-rays (with fixed directions). Two special (kinematically admissible) configurations belonging to the Mellin integration domain are depicted. For the left one, the exchanged momentum Q is future time-like, while for the right one it is directed to the past.] Q_F space-like and u_F negative: The case Q_F^2 < 0, corresponding to u_F < 0 (red poles in Fig. 5), can be treated similarly. The residue at u_F can be expressed in factorized form as … × A_{F^c}({i√|u_F| ε_j σ_j q(z_j, z̄_j)}_{j∈F^c}, i√|u_F| Q_F, I), where now Q_F^2 < 0. Applying similar manipulations as before, now with u_F = −|u_F|, we get (cf. (F.4)) … and (cf. (F.5)) …. The latter formula can be understood as an analytic continuation of (F.5) to purely imaginary momenta. Note that it can be rewritten in the form …, where … is the Lorentz-invariant measure concentrated on the hyperboloid H_dS defined by the constraint Q^2 = −1 in momentum space. Suppose that we can write (as claimed in [144]) the following completeness relation ∫ dν µ(ν) ∫ d^2z (1/(2 p · q(z, z̄))) … for p, p′ ∈ H_dS; then we can proceed further. Namely, introducing for the amplitude A({ε_i p_i}_{i=1}^n, εQ), with n massless particles with momenta p_i^2 = 0 and one massive particle with momentum Q^2 = M^2, the following generalized celestial transform …, we can then rewrite (F.22) formally in the form analogous to (F.18), namely …, which expresses a massless celestial n-pt amplitude in terms of lower-point celestial amplitudes with one massive external leg corresponding to a massive factorization channel.
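As a numerical aside to Appendix E, the pole structure claimed for the truncation B_n can be checked directly. Since the defining equation (E.1) is not reproduced above, the sketch below assumes the natural truncation obtained by cutting the classical product representation of the Euler beta function at k = n − 1; the function name beta_truncated and the sample arguments are illustrative, not from the paper.

```python
from scipy.special import beta  # full Euler beta function, for comparison

def beta_truncated(a, b, n):
    """Truncation B_n(a, b) of the Euler beta function (assumed form).

    Cuts the classical product representation
        B(a, b) = (a + b)/(a b) * prod_{k>=1} k (a + b + k) / ((a + k)(b + k))
    at k = n - 1.  The result is a rational function, regular at infinity,
    whose only singularities are simple poles at a, b = 0, -1, ..., -n + 1,
    matching the properties stated for B_n in Appendix E.
    """
    value = (a + b) / (a * b)
    for k in range(1, n):
        value *= k * (a + b + k) / ((a + k) * (b + k))
    return value

# Away from the poles, B_n approaches B as n grows:
for n in (10, 100, 1000):
    print(n, beta_truncated(2.3, 1.7, n), beta(2.3, 1.7))

# B_10 keeps the pole at a = -2 but, unlike B, has no pole at a = -11:
eps = 1e-9
print(beta_truncated(-2 + eps, 1.7, 10))                          # huge: pole retained
print(beta_truncated(-11 + eps, 1.7, 10), beta(-11 + eps, 1.7))   # finite vs huge
```

With this assumed form, B_n is rational and regular at infinity while retaining exactly the poles up to a, b = −n + 1, consistent with the statement that the truncation of higher resonances shifts the remaining singularities of G_{4,n} beyond the truncation order.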
23,955.4
2023-03-26T00:00:00.000
[ "Physics" ]
Rough Set Theory-Based Multi-Class Decision Making Framework for Cost Effective Treatment of COVID-19 Suspected Patients : Rough set theory approximates a concept by three regions, namely the positive, negative and boundary regions. The three regions enable us to derive three types of decisions, namely acceptance, rejection and deferment. The deferment decision gives us the flexibility to further examine suspicious objects and reduce misclassification. The main objective of this paper is to provide a cost-effective treatment of a patient suspected of being COVID-19 positive by using multi-class three-way decision making with the help of rough set theory. The cost-based analysis of three-way decisions brings the theory closer to real-world applications, where costs play an indispensable role. In our approach, we extend the three-way decision to a three-way multi-class decision, offering a new framework of multiple classes. Different types of misclassification errors are treated separately based on the notion of a loss function from Bayesian decision theory. In our cost-sensitive classification approach, the costs caused by different kinds of errors are not assumed to be equal. Finally, a numerical example for the cost-effective treatment of a patient with COVID-19 disease is considered to demonstrate the practicability and efficacy of the developed idea in real-life applications. Introduction The theory of three-way decision has received much attention in recent years [41,42,43]. The theory of three-way decision concerns thinking in threes, working with threes, and processing through threes. It explores the effective uses of triads of three things, for example, three elements, three parts, three perspectives, and so on. Starting from a new semantic interpretation of the positive, boundary and negative regions, Yao [33] first introduced and studied the notion of three-way decisions, consisting of positive, boundary and negative rules. The notion of three-way decisions represents a close relationship between rough set analysis, Bayesian decision analysis, and hypothesis testing in statistics [5,9,30,31]. In the paper "The geometry of three-way decision," Yao shows, in a nutshell, that the three-way decision is about thinking, problem solving, and information processing based on a triad of three things. The three-way decision rules come closer to the philosophy of rough set theory, namely, representing a concept using three regions instead of two [19]. By considering three-way decision rules, one may appreciate the true and unique contributions of rough set theory to machine learning and rule induction. With the introduction of the notion of three-way decision, there is a new avenue of research for rough set theory. The cost-based analysis of three-way decisions brings the theory closer to real-world applications where costs play an indispensable role. In this paper, we extend the cost-based analysis of three-way decisions to three-way multi-class decisions to provide a cost-effective treatment of a patient suspected of being COVID-19 positive. Before going to the main theme of our paper, we take a look at the present scenario of COVID-19. In late 2019, the world observed the outbreak of the novel coronavirus at Wuhan, China [44], and since then it has spread rapidly all over the world from the beginning of the following year. COVID-19 is a highly infectious disease. After overcoming the first wave, we entered the second wave.
The number of available vaccine doses is small in comparison to the requirement. As the virus mutates continuously, the available vaccines do not work fully against some newly emerging strains. The second wave of COVID-19 is affecting most of the world. The scenario is very grim in India, where the daily count on April 15, 2021, itself is double the first peak. India leads the world in the daily average number of new infections reported, accounting for one in every two infections reported worldwide each day. Alarm bells ring as COVID-19 shifts base to villages. A large number of patients in the smaller centers could overwhelm the healthcare facilities, leading to an increase in deaths. The graphical representations depicted in Fig. 1 and Fig. 2 show that the present situation is horrible. Researchers are continuously working to find possible solutions to this epidemic problem, to come out of this unprecedented situation. They are trying to dominate the virus in many ways, like inventing effective treatments, destroying the virus, or protecting against it. Majumder et al. [15] describe a decision-making technique for identifying the infected population of COVID-19. Si et al. [29] proposed a decision-making method for selecting a preferable medicine for the appropriate treatment of COVID-19 patients in a picture fuzzy environment through a hybrid approach of grey relational analysis and Dempster-Shafer theory. Mishra et al. [14] proposed an extended fuzzy decision-making framework using hesitant fuzzy sets for drug selection to treat the mild symptoms of COVID-19. One of the major confusions is that some of the symptoms of flu and COVID-19 are similar; it may be hard to tell the difference between them based on symptoms alone. Both COVID-19 and flu can have varying degrees of signs and symptoms, ranging from no symptoms (asymptomatic) to severe symptoms. Testing may be needed to help confirm a diagnosis. But diagnostic firms testing for coronavirus are nearing breaking point in metro cities like New Delhi, Kolkata, Bangalore, Chennai and Mumbai as India battles its biggest surge in COVID-19, which may worsen the crisis, as many sick people cannot get tested fast enough to isolate themselves. Many complaints show that the test report is negative, but a few days later the patient's condition becomes serious due to COVID-19. But we believe that our awareness and proper treatment at the right time can save lives in this pandemic. In the present scenario, our approach provides a cost-sensitive solution to multi-class decision making for the treatment of COVID-19 suspected people. The new interpretation of a three-way decision is not so critical in the classical rough set model, since both positive and negative rules do not involve any uncertainty. It is essential in the probabilistic rough set models, where acceptance and rejection decisions are made with certain levels of tolerance for errors. In the probabilistic rough set, which is a generalization of Pawlak's rough set, the same pair of thresholds determines the three regions, namely, the positive, negative and boundary regions. Different pairs of thresholds have been discussed in several kinds of literature. However, it is difficult to relate these thresholds to practical notions of real-world applications, such as cost, risk, benefit, etc. The required parameters of probabilistic rough set approximations can be systematically determined based on the costs of various decisions.
The incurred costs of decision rules can be analyzed. Moreover, the Bayesian decision procedure can be used to develop a decision-theoretic rough set model [32,36,37,39,40]. Based on the decision-theoretic rough set framework, Yao and Zhao [38] suggest changing an m-class classification problem into m two-class classification problems. Liu et al. [12] further discussed some practical issues in applying Yao's idea to classify new objects. Their work assumes that the loss incurred for misclassification of an object into any class is the same. This assumption doesn't always hold in real-world applications. For example, misclassifying a COVID-19-positive patient as having pneumonia costs more than misclassifying this patient as having a general flu. Instead of making an immediate acceptance or rejection, in a three-way decision a third option is deferment for each class. The deferment decision gives users flexibility for further examination of suspicious objects and reduces misclassification, which is cost effective. For example, if the doctor cannot distinguish a few different types of flu based on a patient's symptoms, a series of medical tests can be performed to gather more information to help the doctor make the decision. Moreover, the losses incurred for misclassifying an object into any of the classes are treated differently, and the losses incurred for making deferment and rejection decisions for different classes are also considered. In this paper, our main objective is to provide a cost-sensitive solution to multi-class three-way decision making, which may be difficult to achieve in binary classifications, because a forced definite decision may come with a higher cost. The rest of the paper is organized as follows: The basic concepts of rough sets and related properties are discussed in Section 2. The motivation for three-way decision making is discussed in Section 3. The Bayesian decision procedure is described in detail in Section 4. The proposed classification technique based on a decision-theoretic rough set is presented in Section 5. In Section 6, an illustrative example of the treatment of a patient suspected of being COVID-19 positive is provided to show the effectiveness of the proposed method. Section 7 provides the concluding remarks with future research directions of this study. Background, motivation and research issues In this section, we review the related knowledge about the classical rough set, the probabilistic rough set, three-way decision based on the probabilistic rough set, and the decision-theoretic rough set model based on the Bayesian decision-making process [33,34,35,36]. Classical rough set theory The theory of rough sets was introduced by Pawlak [18,19]. In this theory, an approximation space (U, R) is developed, given a datum in terms of an attribute-value table. U is the universal set of objects and R is the equivalence relation generated out of the data table. The relation R, obviously, partitions U into equivalence classes [1,2,3]. The equivalence classes are also called blocks, and if R is understood from the context they may be written simply as [•]. Given this perspective, for any subset X of U, two other subsets of U are defined by R_*(X) = {x ∈ U : [x] ⊆ X} and R*(X) = {x ∈ U : [x] ∩ X ≠ ∅}, and are called the lower and upper approximations of X, respectively. The set R*(X) ∖ R_*(X) is called the boundary of X. Elements of R_*(X) and R*(X) may be interpreted respectively as the objects that are 'definitely' included in X and 'possibly' included in X.
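These approximations are directly computable from a partition of the universe. A minimal sketch in Python (the universe, blocks, and target set below are illustrative, not taken from the paper):

```python
def approximations(blocks, X):
    """Pawlak lower and upper approximations of a target set X.

    blocks: the partition of the universe U induced by the
    indiscernibility relation R; X: the target concept, a subset of U.
    """
    lower = set()   # union of blocks entirely contained in X ('definitely')
    upper = set()   # union of blocks that intersect X ('possibly')
    for block in blocks:
        if block <= X:
            lower |= block
        if block & X:
            upper |= block
    return lower, upper

# Illustrative universe and partition (hypothetical data, for demonstration).
blocks = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}
low, up = approximations(blocks, X)
print(low)        # {1, 2}          definitely in X
print(up)         # {1, 2, 3, 4}    possibly in X
print(up - low)   # {3, 4}          the boundary of X
```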
Based on the rough set approximations of X, one can divide the universe into three pairwise disjoint regions: the positive region POS(X) = R_*(X) is the union of all the equivalence classes that are included in X; the negative region NEG(X) = U − R*(X) is the union of all equivalence classes that have an empty intersection with X; and the boundary region BND(X) = R*(X) − R_*(X) is the difference between the upper and lower approximations. Hence, if x ∈ POS(X), then x surely belongs to the concept X. If x ∈ NEG(X), then x certainly does not belong to the target set X. If an object x ∈ BND(X), then it may or may not belong to X. Afterwards, in 1994, Skowron and Pawlak [20,21] introduced the notion of a rough membership function. Given an approximation space (U, R), for any subset X ⊆ U, a rough membership function is defined by µ_X(x) = |[x] ∩ X| / |[x]|, with |·| being the cardinality of a subset of U. Although in [20] the universe was taken to be finite, the above definition extends to the case when all the equivalence classes are finite. The following consequences of the above definition are immediate. (i) There can be two sets X, Y with …. (ii) Any element of a block [•] receives the same value under each rough membership function µ_X. The value that every element of a block receives is called the block-value and denoted by µ_X[•]. Probabilistic rough set The positive region, negative region and boundary region defined by Pawlak (1982) are perfect. But the main drawback is that they cannot make decisions for most of the objects. With this knowledge, the probabilistic rough set model [10,11,20,36,37,46] was proposed. The main intuition of the probabilistic rough set model is to expand the decision regions, i.e., to expand the positive and negative regions, using two parameters α and β. Let (U, R) be the approximation space; then (U, R, P) is a probabilistic approximation space (Yao 2008), where P is a probability measure defined on subsets of the universe U. For any X ⊆ U containing instances of a concept, P(X|[x]) denotes the conditional probability of an object being in X given that the object is in [x]: P(X|[x]) = |[x] ∩ X| / |[x]|, where |·| denotes the cardinality. According to the above definitions, the three regions defined in Eq. (2) can be equivalently defined by: POS(X) = {x ∈ U : P(X|[x]) = 1}, NEG(X) = {x ∈ U : P(X|[x]) = 0}, BND(X) = {x ∈ U : 0 < P(X|[x]) < 1}. Decision-theoretic rough set model The decision-theoretic rough set (DTRS) model was first introduced by Yao et al. [32,36,37]. It is a more general probabilistic model, in which a threshold pair (α, β) with 0 ≤ β < α ≤ 1 is used to define three probabilistic regions. Now, for 0 ≤ β < α ≤ 1, the upper and lower approximations of X ⊆ U are given by R_*^{(α,β)}(X) = {x ∈ U : P(X|[x]) ≥ α} and R*^{(α,β)}(X) = {x ∈ U : P(X|[x]) > β}. Using the above two approximations, the three probabilistic regions are defined by POS_{(α,β)}(X) = {x ∈ U : P(X|[x]) ≥ α}, BND_{(α,β)}(X) = {x ∈ U : β < P(X|[x]) < α}, NEG_{(α,β)}(X) = {x ∈ U : P(X|[x]) ≤ β}. The 0.5-probabilistic rough set model is a special case of the decision-theoretic rough set model, which is formulated based on a particular choice of α and β values, namely, α = β = 0.5. Probabilistic three regions may be interpreted in terms of the costs of different types of classification decisions [33,34]. One can obtain larger positive and negative regions by introducing classification errors in trade for a smaller boundary region, so that the total classification cost is minimal [35]. Considering the errors introduced, the three regions are semantically interpreted as the following three-way decisions [33,34,35]. We accept an object x to be a member of X if the conditional probability is greater than or equal to α, with an understanding that this comes with a (1 − α)-level acceptance error and associated cost. We reject x as a member of X if the conditional probability is less than or equal to β, with an understanding that this comes with a β-level rejection error and associated cost. We neither accept nor reject x as a member of X if the conditional probability lies between β and α; instead, we make a decision of deferment. The boundary region does not involve acceptance and rejection errors, but it is associated with the cost of deferment. The three probabilistic regions are obtained by considering a trade-off between various classification costs. (A minimal sketch of the resulting three-way split follows.)
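The sketch below uses the rough membership P(X|[x]) = |[x] ∩ X|/|[x]| as the conditional probability; the partition, concept, and thresholds are illustrative:

```python
def three_way_regions(blocks, X, alpha, beta):
    """Probabilistic positive/boundary/negative regions, 0 <= beta < alpha <= 1."""
    pos, bnd, neg = set(), set(), set()
    for block in blocks:
        p = len(block & X) / len(block)   # rough membership P(X | [x])
        if p >= alpha:
            pos |= block                  # accept
        elif p <= beta:
            neg |= block                  # reject
        else:
            bnd |= block                  # defer for further examination
    return pos, bnd, neg

# Hypothetical partition and concept; thresholds alpha = 0.8, beta = 0.3.
blocks = [{1, 2, 3}, {4, 5}, {6, 7, 8, 9}]
X = {1, 2, 3, 4, 9}
print(three_way_regions(blocks, X, alpha=0.8, beta=0.3))
# the blocks have P(X|[x]) = 1.0, 0.5, 0.25 -> positive, boundary, negative
```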
Motivation of three-way decision Binary classification gives two decisions, namely yes or no, acceptance or rejection. In many real-world tasks, it is not easy to make definite decisions. For instance, when a patient's symptoms are not sufficient to support a particular disease, the doctor may not be able to make a correct positive diagnosis right away. Then, instead of making a decision that could give wrong results, a better way is to perform diagnostic tests to collect more evidence. This kind of option we term a "deferment" decision. Many real-world decision-making problems become more efficient and easier by adding this third option. Another issue is to minimize the number of misclassified examples [16,22,23]. Different types of misclassification errors are treated separately based on the notion of a loss function from Bayesian decision theory. In the cost-sensitive classification approach, the costs caused by different kinds of errors are not assumed to be equal. Cost-sensitive learning is one of the challenging issues in real-life problems. As an example, in medical diagnosis, not treating a blood cancer patient could cause death or injury. On the other hand, unnecessarily treating a patient who doesn't have blood cancer will waste resources and harm the patient. Three-way decision making has been studied and applied in many fields [13,25,26,30], but our main objective in this study is to provide a cost-sensitive solution to multi-class three-way decision making, which may be difficult to achieve in binary classifications, because a forced definite decision may come with a higher cost. The following example explains the main advantages of adding the third option, i.e., the deferment decision. Example 1. Consider a story given in the book by Savage [24]. "Your wife has just broken five good eggs into a bowl when you come in and volunteer to finish making the omelette. A sixth egg, which for some reason must either be used for the omelette or wasted altogether, lies unbroken beside the bowl. You must decide what to do with this unbroken egg." Consider first a binary-decision scenario, namely, to break it into the bowl containing the other five or to throw it away without inspection. Depending on the state of the egg, each of these two actions will have some consequences, and they are given by the following 2 × 2 payoff matrix:

                  Good                                           Rotten
Break into bowl   Six-egg omelette                               No omelette, and five good eggs destroyed
Throw away        Five-egg omelette, and one good egg destroyed  Five-egg omelette

If the sixth egg is good and you made the right decision by breaking it into the bowl, your wife will be happy to see a finished six-egg omelette. If the sixth egg is bad and you made the right decision of throwing it away, your wife will still be happy to see a finished five-egg omelette. Wrong decisions cost much more. In one case, if the sixth egg is good but you throw it away, one good egg will be wasted. In the worst case, the sixth egg is bad but you break it into the bowl: five good eggs will be destroyed, and you had better think about how to explain your decision to your wife.
In binary decision making, each action comes with the cost of ruining some good eggs. Is there a better way of doing this? A better choice would be not to break the sixth egg into the bowl or throw it away, but to break it into a saucer for inspection. By adding this third action, the consequences change to the following 3 × 2 payoff matrix:

                   Good                                           Rotten
Break into bowl    Six-egg omelette                               No omelette, and five good eggs destroyed
Throw away         Five-egg omelette, and one good egg destroyed  Five-egg omelette
Break into saucer  Six-egg omelette, and a saucer to wash         Five-egg omelette, and a saucer to wash

By further examining the state of the sixth egg, the cost is reduced from ruining good eggs to washing a saucer. A similar three-way decision-making process is used in both of the above examples, where the doctor needed to decide whether to treat the patient, not treat the patient, or perform diagnostic tests to collect more information [17]. The Bayesian decision procedure In the decision-theoretic rough set model, a pair of thresholds (α, β) determines the three probabilistic regions. The selection of the thresholds is a major issue. Based on the Bayesian decision procedure, the values of α and β are calculated [6,7,8,27]. For a given object x, let Des(x) be the description of the object, S = {s_1, s_2, s_3, . . . , s_m} be a finite set of m possible states that x may be in, and A = {a_1, a_2, . . . , a_n} be a finite set of n possible actions. Let P(s_j|x) be the conditional probability of x being in state s_j, and let the loss function λ(a_i|s_j) denote the loss (or cost) for taking the action a_i when the state is s_j. Hence, we can construct an n × m matrix which represents all possible loss-function values. Table 1 expresses all the values of the loss function, where the columns denote the set of states and the rows the set of actions. Each cell denotes the cost λ(a_i|s_j) of taking the action a_i in the state s_j. For simplicity, the cost λ(a_i|s_j) can be written as λ_ij:

       s_1    s_2    ...   s_m
a_1    λ_11   λ_12   ...   λ_1m
a_2    λ_21   λ_22   ...   λ_2m
...    ...    ...    ...   ...
a_n    λ_n1   λ_n2   ...   λ_nm

In general, the selection of the values of the loss function can be represented as λ_ij (i = 1, 2, . . . , n; j = 1, 2, . . . , m), which denotes the loss incurred for deciding class i when the true class is j. There is no cost for making a correct decision, i.e., when i = j. For an object with description x, suppose action a_i is taken. The expected cost associated with action a_i is given by R(a_i|x) = Σ_{j=1}^{m} λ(a_i|s_j) P(s_j|x). (5) The n × m matrix in Table 1 has two important applications. First, given the loss functions and the probabilities, one can compute the expected cost of a certain action. Furthermore, by comparing the expected costs of all the actions, one can decide on a particular action with the minimum cost. Second, according to the loss functions, one can determine the condition, or probability, for taking a particular action. Example 2. The idea of the Bayesian decision procedure can be demonstrated by the following example. Suppose there are two states: s_1 indicates that a meeting will be over in less than or equal to 4 hours, and s_2 indicates that the meeting will be over in more than 4 hours. The two states are complementary to each other. Suppose the probability of having the state s_1 is 0.70; then the probability of having the state s_2 is 0.30, i.e., P(s_1|x) = 0.70 and P(s_2|x) = 1 − 0.70 = 0.30. There are two actions: a_1 means to park the car at a meter, and a_2 means to park the car in the parking lot. The loss functions for taking the different actions in the different states can be expressed as the following matrix: …. That is, P(s_1|x) ≥ 0.50. Thus, if the probability that the meeting is over within 4 hours is greater than or equal to 0.50, then it is more profitable to park the car at the meter; otherwise, park in the parking lot. (A numerical sketch of this computation follows.)
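The loss matrix of Example 2 is not preserved in the text, so the sketch below uses an assumed matrix chosen to reproduce the quoted 0.50 threshold: the meter is cheap when the meeting ends in time but incurs a ticket otherwise, while the lot charges a flat rate.

```python
def expected_costs(loss, probs):
    """R(a_i | x) = sum_j loss[i][j] * P(s_j | x), as in equation (5)."""
    return [sum(l * p for l, p in zip(row, probs)) for row in loss]

# Assumed loss matrix (hypothetical values, not from the paper):
# rows = actions (a1: park at meter, a2: park in lot),
# columns = states (s1: meeting <= 4 hours, s2: meeting > 4 hours).
# Meter: 1 unit if back in time, a 9-unit ticket otherwise; lot: flat 5 units.
loss = [[1.0, 9.0],
        [5.0, 5.0]]
probs = [0.70, 0.30]                 # P(s1|x) = 0.70, P(s2|x) = 0.30
print(expected_costs(loss, probs))   # [3.4, 5.0] -> park at the meter
# With these costs, the meter wins exactly when P(s1|x) >= 0.50.
```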
From equation (5), the classifier is said to assign an object to the class C_i if the expected cost of taking action a_i is less than the cost of taking any other action a_j, i.e. R(a_i|x) ≤ R(a_j|x) for all j. One has the option of assigning the object to one of the classes. In other words, each class is associated with an action of accepting the object as a member of that class. However, an immediate decision has to be made to either accept or reject the object as a member of one of the classes. Yao and Zhao [38] suggested a change in this m-class classification problem and made a three-way decision for each class. For the finite set of classes C = {C_1, C_2, C_3, . . . , C_m}, the C_i's form a family of pairwise disjoint subsets of U, namely, C_i ∩ C_j = ∅ for i ≠ j, and ∪_i C_i = U. For each C_i, a two-class classification {C_i, C_i^c} can be defined. The loss function for each C_i is given by the following table: …. The pairs of thresholds can be systematically calculated based on the given loss functions. The results from DTRS can be immediately applied. However, the losses incurred for making any wrong decisions are considered to be the same, that is: λ(a_i^P|C_j) = λ(a_i^P|C_k) for all j, k ≠ i, and similarly for the other actions. This assumption does not always hold in many real-world scenarios, which makes it impractical for real applications. Class classification based on decision-theoretic rough sets Here we propose a new formulation of the decision-theoretic rough set model. Let C = {C_1, C_2, C_3, . . . , C_m} be a given set of classes. Each class C_i is associated with three actions A_i = {a_i^P, a_i^B, a_i^N}, where a_i^P is the action of deciding that an object belongs to the positive region of the i-th class. Similarly, a_i^B and a_i^N are the actions of deciding that an object belongs to the boundary and negative region of the i-th class, respectively. Now, similarly to Table 2, here we also get a loss-function table. We define λ(a_i^P|C_j) as the loss incurred for accepting an object as a member of C_i when its real class is C_j, λ(a_i^N|C_j) as the loss incurred for rejecting an object as a member of C_i when its real class is C_j, and λ(a_i^B|C_j) as the loss incurred for neither accepting nor rejecting an object as a member of C_i when its real class is C_j. The main distinction between Yao's approach and our approach is that Yao assumed the losses incurred for making any wrong decisions to be the same. This assumption is not true for a lot of real-world situations. Consider an example where we only have three types of disease, where d_1 = COVID-19, d_2 = cancer, and … cancer. These two types of loss functions are not equal and should be treated differently. One can easily relate these loss functions to the actual cost of making a different diagnosis. Hence, in our approach, the losses incurred for making different wrong decisions are considered differently. The expected losses for taking the different actions, namely acceptance, deferment and rejection, for an object x ∈ [x] can be expressed by equation (5) as: R(a_i^P|[x]) = Σ_j λ(a_i^P|C_j) P(C_j|[x]), R(a_i^B|[x]) = Σ_j λ(a_i^B|C_j) P(C_j|[x]), R(a_i^N|[x]) = Σ_j λ(a_i^N|C_j) P(C_j|[x]). The Bayesian decision procedure leads to the following minimum-risk decision rules: (P) if R(a_i^P|[x]) ≤ R(a_i^B|[x]) and R(a_i^P|[x]) ≤ R(a_i^N|[x]), decide that x is in the positive region of C_i; (B) if R(a_i^B|[x]) ≤ R(a_i^P|[x]) and R(a_i^B|[x]) ≤ R(a_i^N|[x]), decide that x is in the boundary region of C_i; (N) if R(a_i^N|[x]) ≤ R(a_i^P|[x]) and R(a_i^N|[x]) ≤ R(a_i^B|[x]), decide that x is in the negative region of C_i. Now consider a special kind of loss function for which λ(a_i^P|C_i) ≤ λ(a_i^B|C_i) < λ(a_i^N|C_i) and λ(a_i^N|C_j) ≤ λ(a_i^B|C_j) < λ(a_i^P|C_j) for j ≠ i. That is, the loss of classifying an object belonging to C_i into the positive region POS(C_i) is less than or equal to the loss of classifying it into the boundary region BND(C_i), and both of these losses are strictly less than the loss of classifying it into the negative region NEG(C_i). The reverse order of losses is used for classifying an object not in C_i. (A direct implementation of these minimum-risk rules, before their simplification into threshold form, is sketched below.)
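The minimum-risk rules (P), (B) and (N) can be read directly as code. The sketch below treats one class C_i against its complement; the λ values are hypothetical, chosen only to satisfy the ordering conditions above:

```python
def three_way_decision(p, lam):
    """Minimum-risk action for one class C_i.

    p:   conditional probability P(C_i | [x]).
    lam: losses per action; lam[a] = (cost if x is in C_i,
         cost if x is not in C_i), for a in 'P' (accept),
         'B' (defer), 'N' (reject).
    """
    risk = {a: lam[a][0] * p + lam[a][1] * (1 - p) for a in ('P', 'B', 'N')}
    return min(risk, key=risk.get)

# Hypothetical losses obeying lam_PP <= lam_BP < lam_NP and
# lam_NN <= lam_BN < lam_PN:
lam = {'P': (0.0, 8.0),   # accepting: free if right, expensive if wrong
       'B': (2.0, 2.0),   # deferring: moderate cost of further tests
       'N': (6.0, 0.0)}   # rejecting: expensive if x was in C_i after all
for p in (0.9, 0.5, 0.1):
    print(p, three_way_decision(p, lam))   # P, B, N respectively
```

With such an ordering, high probabilities lead to acceptance, low ones to rejection, and intermediate ones to the deferment option whose cost (further tests) is lower than the expected cost of a forced wrong decision.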
Under this special kind of loss function, we can simplify the decision rules (P), (B) and (N) as follows: …. Similarly, for rule (B), …. Now, introducing the parameters α_i, β_i and γ_i as …, and imposing …, it follows from (9) and (11) that α_i > γ_i > β_i, and we get the simplified rules as follows: (P) if P(C_i|[x]) ≥ α_i, decide that x is in POS(C_i); (B) if β_i < P(C_i|[x]) < α_i, decide that x is in BND(C_i); (N) if P(C_i|[x]) ≤ β_i, decide that x is in NEG(C_i). With these simplified (P)-(N) rules, the three probabilistic regions for any class C_i can be derived from equation (3): POS_{(α_i,β_i)}(C_i) = {x ∈ U : P(C_i|[x]) ≥ α_i}, BND_{(α_i,β_i)}(C_i) = {x ∈ U : β_i < P(C_i|[x]) < α_i}, NEG_{(α_i,β_i)}(C_i) = {x ∈ U : P(C_i|[x]) ≤ β_i}, where the class C_i represents an arbitrary class from the set of decision classes C. It is necessary to study further the probabilistic three regions of a classification, as well as the associated rules. In general, one has to consider the problem of rule-conflict resolution in order to make effective acceptance, rejection, and abstaining decisions. Furthermore, for an object x ∈ U, it is desirable that the decision made with more attributes should stay the same as the decision made with fewer attributes. The decision-monotonicity properties for DTRS are discussed by Yao and Zhao [38]. Slezak and Ziarko [28] and Zhang et al. [45] also introduced criteria of decision-monotonicity for their generalized rough set models. Numerical Illustration Here, we incorporate cost into the classification process and vary the costs for different types of misclassification errors. The following medical diagnostic example illustrates how our approach works and shows that the classification results based on the same training set could differ depending on the loss functions provided. Let us consider four types of diseases D = {d_1, d_2, d_3, d_4}. Three actions A_i = {a_i^P, a_i^B, a_i^N} are associated with each disease d_i, where a_i^P indicates that the doctor decides to treat the patient for the i-th disease, a_i^N indicates that the doctor decides not to treat the patient for the i-th disease, and a_i^B indicates that the doctor neither treats nor declines to treat the patient: further tests are needed. The loss functions of the two medical centres are given in Tables 3 and 4, respectively. As the two tables provide the loss functions of two different medical centres, they are significantly different. Also, it is clear that the costs for medical centre 1 are higher than for medical centre 2, both for neglecting a sick patient (i.e., for delayed treatment) and for unnecessary treatment (i.e., misdiagnosing a healthy patient). Our approach gives the flexibility of tailoring the classification results to meet individual requirements in terms of the minimum overall cost. A part of the training set, acquired from the statistical data of a medical journal, which shows the relationships between patients' symptoms and certain diseases, is represented by Table 5, where the rows represent the patients {p_1, p_2, p_3, p_4, . . .} and the columns represent the symptoms {s_1, s_2, s_3, s_4, . . .}. We use binary values for the sake of simplicity, where "1" indicates that a symptom is present and "0" denotes that a symptom is not present. Now, for a new patient x, let the probabilities of having each disease, calculated from the training set using equation (3), be P(d_1|[x]) = 0.38, P(d_2|[x]) = 0.22, P(d_3|[x]) = 0.14 and P(d_4|[x]) = 0.26. Comparison of threshold values, for medical centre 1: P(d_1|[x]) = 0.38 > α_1 = 0.32, so class d_1 is in the positive region. The resulting diagnosis is that the new patient has disease d_1. P(d_2|[x]) = 0.22 < β_2 = 0.66, so class d_2 is in the negative region. The resulting diagnosis is that the new patient does not have disease d_2. P(d_3|[x]) = 0.14 < β_3 = 0.52, so class d_3 is in the negative region. The resulting diagnosis is that the new patient does not have disease d_3. For class d_4, β_4 < P(d_4|[x]) = 0.26 < α_4, so class d_4 is in the boundary region. The resulting diagnosis is that the new patient needs further tests regarding the disease d_4. (These comparisons are packaged as a small routine below.)
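A short routine reproducing the centre-1 comparisons. The probabilities and the thresholds α_1, β_2 and β_3 are those quoted in the text; the remaining thresholds are not given and are assumed for illustration:

```python
def classify(prob, alpha, beta):
    """Three-way region of one class from its (alpha_i, beta_i) thresholds."""
    if prob >= alpha:
        return 'positive'    # treat the disease
    if prob <= beta:
        return 'negative'    # rule the disease out
    return 'boundary'        # defer: order further tests

# Medical centre 1.  Probabilities as quoted above; alpha_1, beta_2, beta_3
# are quoted, the remaining thresholds are assumed for illustration only.
probs  = {'d1': 0.38, 'd2': 0.22, 'd3': 0.14, 'd4': 0.26}
alphas = {'d1': 0.32, 'd2': 0.80, 'd3': 0.75, 'd4': 0.55}   # alpha_2..4 assumed
betas  = {'d1': 0.10, 'd2': 0.66, 'd3': 0.52, 'd4': 0.20}   # beta_1, beta_4 assumed
for d in probs:
    print(d, classify(probs[d], alphas[d], betas[d]))
# d1 positive, d2 negative, d3 negative, d4 boundary -- matching the text.
```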
For medical centre 2: P(d_1|[x]) = 0.38 > α_1 = 0.34, hence class d_1 is in the positive region. The resulting diagnosis is that the new patient has disease d_1. P(d_2|[x]) = 0.22 < β_2 = 0.65, so class d_2 is in the negative region. The resulting diagnosis is that the new patient does not have disease d_2. P(d_3|[x]) = 0.14 < β_3 = 0.68, so class d_3 is in the negative region. The resulting diagnosis is that the new patient does not have disease d_3. P(d_4|[x]) = 0.26 > α_4 = 0.22, hence class d_4 is in the positive region. The resulting diagnosis is that the new patient has disease d_4. As for medical centre 2, classes d_1 and d_4 are both in the positive region. The new patient could have both diseases d_1 and d_4, or either one of them. Rule-conflict resolution should be added to further distinguish which disease the patient is more likely to have. From the above example, we can see that different cost settings can lead to different decisions. Therefore, instead of classifying objects based on the misclassification error rate, in our approach classification decisions are made based on the minimum overall cost. Concluding Remarks: There are many techniques for medical or clinical decision making under uncertainty. We mainly consider a rough set based decision-making technique, which typically involves evaluation criteria to construct decision regions. This paper aims to provide a cost-sensitive solution for a COVID-19 suspected person via multi-class decision making. Instead of a pair of lower and upper approximations, in this paper we use three pairwise disjoint positive, negative and boundary regions. This approach can be considered a straightforward generalization of the three-way classification introduced in decision-theoretic rough set models. One may further study three-way decision rules generated from different classes and the associated rule-conflict resolutions for real classification applications. Compliance with ethical standards Conflict of interest: Both authors of this research paper declare that there is no conflict of interest. Ethical approval: This article does not contain any studies with human participants or animals performed by any of the authors.
7,154
2021-11-23T00:00:00.000
[ "Computer Science" ]
Social Interaction, Emotion, and Economic Forecasting From the perspective of traditional philosophies of science, economic forecasts may be perceived as the results of purely rational reasoning, applying scientific theories, and econometric modeling. Yet, a sociological view on economic forecasting shows that economic forecasts mobilize more than these conventional epistemic resources. First, economic forecasters are embedded in a huge interaction network including different kinds of economists, policy makers, and representatives of the economy. In the epistemic process of economic forecasting, this network actively helps improve the forecasts in (at least) three ways: it helps forecasters to produce new imaginaries of the economic future and to discover emerging developments, it increases the forecasts' social legitimacy, and it produces a common view on the economic future that helps to decrease uncertainty in markets. Second, economic forecasters mobilize emotions that help them to overcome the shortcomings of quantitative data, statistics, and econometric modeling: they develop a feeling for numbers – and numbers support them in developing a feeling for the economy – they have to control their emotions to keep cool when the economy or politics confronts them with increasing dynamics, and they are impassioned about their work. Drawing on data gathered in numerous economic forecasting institutes in Germany, Austria, and Switzerland, I argue that the main resources in producing credible and accurate economic forecasts consist of various forms of social interaction and the mobilization of emotion. Introduction Modern capitalistic economies are future-oriented. To be successful in such an economy, economic actors manufacture knowledge about possible futures of the economy, and they aim at bringing their plans, strategies, and actions in line with this knowledge. The main challenge in this endeavor lies in the future in general (including the economic future) being open. Thus, producing scientific knowledge about the future is a radically uncertain process. This chapter asks how one specific kind of actor – the economic forecaster – produces such knowledge. The Field: Economic Forecasters in German-Speaking Countries Before I present my results, I have to clarify which empirical field I am talking about. Nowadays, numerous organizations publish economic forecasts: banks, financial institutes, rating agencies, academic research units, etc. The institutes examined in my research share at least four common characteristics. First, they earn their money exclusively by producing economic expertise (for example, forecasts) and do not use forecasts to sell something else. Thus, as an example, banks are excluded because they use their forecasts to sell other services or use them as part of their customer relationship management. Second, the institutes are called "semi-official": their work is partly financed by the government, and it is institutionalized within the policy-making process (Reichmann 2009). Third, they are "independent" in a specific way: they do not belong to any political movement, to a company, an interest group, or a political party, and have neither commercial nor political aims. And fourth, the forecasting institutes' members consider themselves to be part of academia: they have an identity as academic scholars and do things only scholars do (for example, giving courses at universities, earning their Habilitation, and so on), and their practices stick to the rules of economics' methodology (Evans 1997, 408).
However, despite their academic identity, the vast majority of the forecasting institutes analyzed in this paper are organized outside universities. Another important clarification is that, in German-speaking countries, the growth rate of the Gross Domestic Product (GDP) stands at the center of every economic forecast, and, especially in public discussions, it is what economic forecasts are often reduced to. The forecasts under research are very different. Most of them contain about one hundred pages. Others are part of reports of about 700 pages. Forecasts are summarized in short press releases showing the main economic indicators and a few points outlining the main messages. The institutes publish economic forecasts two to four times a year, and most of them present their forecasts to the public at press conferences.¹ The article is divided into two parts. The first one shows how different kinds of social interaction enable forecasters to produce knowledge about the economic future. In the second one, I analyze how they mobilize emotion as an epistemic resource. It starts by presenting two theoretical concepts that help in understanding how actors use interaction to produce expectations and assumptions about the future. Then I describe and analyze the social conditions of the epistemic process in the field of economic forecasting and examine the two dimensions of "epistemic participation" in detail. In a final section, I take a closer look at the role emotion plays in producing economic forecasts. Interaction and the Future In his classic definition, Erving Goffman states that "[s]ocial interaction can be identified narrowly as that which uniquely transpires in social situations, that is, environments in which two or more individuals are physically in one another's response presence" (Goffman 1983, 2). In the 21st century, Goffman's "body to body starting point" (Goffman 1983, 2) of interaction must be reformulated, because new technologies enable humans to interact and form social situations without being bodily co-present. Nevertheless, Goffman's main point remains useful: interaction is a reciprocal social action of two or more individuals. Each interaction partner orients his or her actions towards the past, present, or future actions of the other partner(s). In Goffman's understanding, interaction does not have to be reduced to oral speech; although speaking is a common element of interaction, it is not a prerequisite. However, human interaction does include a consensus on a common immediate goal of action, a common definition and understanding of the situation, and it is embedded in a complex interaction order. It also plays a significant role in the process of producing expectations about possible futures. ¹ The data used in this paper were collected starting in 2004 and (at the time of writing this paper) consist of 42 qualitative interviews (30-100 minutes) with economists directly engaged in producing the forecasts, which are used by national, regional, and local governments, special interest groups, and labor unions. In addition, I spent some time at different forecasting institutes taking notes and have collected a large volume of documents from all forecasting institutes in the German-speaking countries. The interviews were conducted in German and translated by the author. Quotes from the interviews are marked "Interview," followed by the number of the interview and a time stamp.
In the following sections, this paper briefly introduces two theoretical concepts – "mental time traveling" and "foretalk" – that stem from different scientific fields but come to a common result. These concepts help us to understand how actors produce assumptions about the future by emphasizing the underlying interactional element of forecasting. Mental Time Traveling and Foretalk Thomas Suddendorf's work on the development of mental capacities in young children and animals provides an interesting view on how humans interact to imagine the future. Initially, his approach may seem slightly a-sociological, but, on closer inspection, it acquires an interactional element. Suddendorf focuses on the question "What makes humans unique?". In his book The Gap (Suddendorf 2013), he identifies eight main differences between humans and animals; one of them is that humans are able to do what he calls "mental time traveling," that is, mentally form expectations and stories about the future. It is one of the fundamental human capabilities to imagine the future; no other being in the world is able to "recall past episodes and imagine future events, including entirely fictional scenarios (such as the invention of an actual time machine)" (Suddendorf 2013, 89).² Suddendorf argues that "mental time travel into the past and mental time travel into the future are two aspects of the same faculty" (Suddendorf 2013, 90). He refers to brain imaging studies that "have found that when participants are asked to recall past events and imagine future situations, the same areas of the brain […] are involved" (Suddendorf 2013, 94). In a second step, he argues that the human imaginative capacity, no matter whether about past or future events, is divided into three systems: a memory for how to do things (procedural memory), a memory for facts (semantic memory), and a memory for events (episodic memory). Episodic memory is not just responsible for us remembering past experiences; it also produces and imagines futures (Suddendorf 2013, 91). Humans use episodic memory in several ways to produce imaginaries. Of course, they use experiences from the past to construct futures. However, they are also able to imagine situations they have never experienced before. There is almost no limit to the possible situations humans can imagine, and, interestingly enough, humans can even evaluate these fictional situations (Suddendorf 2013, 95). The problem is that episodic memory is well known to be error-prone, no matter whether we use it oriented towards the past or the future (Suddendorf 2013, 98 ff.). But – and this is the more sociological aspect of Suddendorf's argument – humans have developed a unique technique to increase the quality of their episodic memory and their "mental time travels," namely interaction. As Suddendorf states, "we have radically improved our chances of getting it right through a wonderfully effective trick: we share our plans and predictions with others [and] we have an extraordinarily effective way of exchanging our mind travels through language [...]." Suddendorf argues that, by "exchanging our experiences, plans, and advice, we have vastly increased our capacity for accurate prediction" (Suddendorf 2013, 99). Suddendorf is an evolutionary psychologist. As such, he argues that both the ability to mind-travel and the ability to share real and fictitious stories about the past and the future with others interactionally increase the chance of survival.
For him, it is an advantage in evolutionary competition to be able to create mental images of possible futures and thereby control the future better (Suddendorf 2013, 101-3). David Gibson (2011b, 2012) also emphasizes the interactional element of imagining the future, and, by asking how this interaction is shaped in microsociological and conversational detail, he comes to two conclusions that enrich Suddendorf's argumentation. He refers to interaction about possible futures using the term "foretalk" – a combination of forecasting and talk (Gibson 2012). He focuses on conversation and decision-making under extreme circumstances; in other words, on "talk at the brink." As an example, he analyses the process of decision-making during the Cuban Missile Crisis in 1962, when President Kennedy and his top advisers had to decide within a couple of hours how to react to the Soviet Union's installation of nuclear missiles on the island of Cuba (Gibson 2011a). In such extreme situations, people create possible future scenarios together by "foretalking" (Gibson 2011a). This group "foretalk" shapes decisions through two mechanisms. First, "foretalk" brings to light possible futures that might not otherwise have been imagined. Thus, "foretalk" is an epistemic resource that enables us to produce new imaginaries of the future. Second, decision-makers anticipate the need to legitimate their decisions afterwards. The "foretalk" helps to justify decisions and improves their legitimacy. Both Suddendorf and Gibson emphasize the interactional basis of producing knowledge about the future. They show that the production of possible futures, for example, about economic development, does not take place in a social vacuum – it is not a purely mind-centered skill. It follows that concepts such as fantasy, creativity, mathematics, or cognition alone are not enough to provide an understanding of how fictional expectations are constructed. There are social and interactional aspects of producing economic futures that go beyond the "reserve stock of knowledge" (Schutz 1967, 77) that individual people have accumulated and can access. Economic forecasts are based on an interactional process. Interaction and Economic Forecasting The ways in which economic forecasters generate a common view by constantly negotiating their views with each other and with external groups – how they "foretalk" and how they exchange ideas from their "mental time travels" – can be elucidated empirically. Economic forecasters produce their forecasts using several channels of interaction as part of their epistemic process. To avoid misunderstandings, it is important to emphasize that this paper focuses on forecasting institutes in German-speaking countries, which operate quite differently from, for example, forecasting institutes in the United States or in the UK. There are national differences between forecasting systems and the political uses of the forecasts, especially between the United States and Europe (Campbell and Pedersen 2014). In general, one could say that American forecasters are more commercially oriented, whereas European forecasters operate closer to the state (Friedman 2009, 2014). Interaction and Econometrics Textbooks show different ways of producing economic forecasts (e.g., Döhrn 2014; Tichy 1994).
They differ mainly in terms of whether forecasters have more trust in numbers, quantitative data, mathematics, and econometric models or whether they rely more on qualitative data gathered from representatives of the economy (Evans 1997; McNees 1990). In practice, forecasters never rely solely on calculation. Econometric models are used merely as a starting point. And these models are increasingly taking a back seat in the process of manufacturing a forecast. In fact, econometric models play a fairly minor role in producing economic forecasts, and the interviewees for this study agreed with Evans's claim that "macroeconomic models support forecasting activity, but do not actually produce forecasts" (Evans 1997, 426). Instead of econometrics, the more important parts of the forecasting process consist of various forms of interaction with various interaction partners. Interaction can be either informal or more institutionalized (see also Reichmann 2013, 861-67), and the interaction includes both internal partners (such as colleagues from the same institute) and external ones (such as academic economists and representatives of "the economy"). Forecasters have developed numerous formal and informal interaction channels and a permanent communication flow enabling them to contact those who represent, in one way or another, "the economy." They build formal and informal platforms where they meet these representatives to gather data and information and thus jointly produce an image of the economic future. Economic forecasters supplement the human capacity for "mental time traveling" to imagine possible futures using the "trick" (Suddendorf 2013) of sharing their predictions with others to obtain information about their respective views of and alternative perspectives on the future. Furthermore, forecasters "foretalk" (Gibson 2011b) with selected interaction partners in several ways, thereby ensuring that economic forecasting does not take place in a social vacuum. This paper emphasizes three reasons why forecasters engage in foretalking with various representatives of economics and the economy: novelty, legitimacy, and stability. First, foretalking enables forecasters to entertain possible futures and spot emerging developments they would have missed without the foretalk. They use interaction as a resource for novel imaginaries. Second, foretalk increases the social legitimacy of the forecasts in the sense that they are more likely to be believed. As Holmes (2013) shows, central bankers also develop strategies to increase their legitimacy by intensive communication with the public and the economy. Holmes' argument is parallel to the way in which forecasters increase the legitimacy of their forecasts by involving those who use forecasts in the process of producing them. Users become co-producers of forecasts and thereby have less reason to reject them. Third, foretalk improves the stability of the view of the future. Foretalk helps to bridge the gap between the knowable and unknowable elements of economic futures by providing (highly) unstandardized data, including judgments that econometric models could not process. This comprehensive interaction process may not make economic forecasts more accurate in a numerical sense. Nevertheless, it increases the range of knowledge about the intentions and assumptions of economic and political actors and therefore builds a more reliable basis for creating forecasts.
Patterns of External Interaction The forecasters are embedded in a network that includes several groups of interaction partners, such as other economists from universities, entrepreneurs, policymakers, and members of the government and the state administrations. This interaction network is a constitutive part of the epistemic process of economic forecasting. The members of this network are transformed from ordinary interaction partners into co-producers of the economic forecasts. This network is called here an "epistemic network" because it is an active part of the forecasters' epistemic processes. The forecasters do not just interview, survey, or observe the others in the network; they want them to actively co-produce the forecasts. In this sense, forecasters give them the opportunity to participate in the epistemic process of forecasting - this is why I call it "epistemic participation" (Reichmann 2013). This epistemic network includes lively interaction between economic forecasters from different institutions. The forecasting institutes may follow conflicting scientific paradigms and they compete for funding; nevertheless, they frequently interact and cooperate, both formally and informally. On the more formal side, the institutes' members attend meetings and workshops to discuss economic topics; they talk in advance about their views on current economic development; they meet at conferences, political hearings, and public discussions. On the more informal side, the forecasters know each other from a variety of activities and relationships developed outside their formal work, whether from their time together as university students or from previous collaborations, co-authoring articles, or spending leisure time together. Within the forecaster community, all forecasters have individually formed networks of "foretalkers" (Gibson 2011b) and personal sources of information. Furthermore, economic forecasters are part of a network of scholars working at academic institutions: they give lectures and seminars at universities, they work on common research projects, and they co-author papers and books with researchers from universities. These close ties to universities not only sustain the forecasters' identities as scientists (Evans 1997, 408) but also give them the chance to exchange ideas, share new insights and discuss problems, or, in Gibson's (2011b) words, to "foretalk" with academic economists. As Evans (2007, 691) argues, these "professional networks" are the source of certain types of expertise that help overcome the uncertainties of econometric models and allow judgment between models. Exchanging ideas with colleagues is something familiar to most scientists. But the forecasters' epistemic networks include not just other economists who have more or less similar knowledge to bring into the "foretalk." In particular, their external networks include policymakers and business representatives. The policymakers with whom they interact - for example, members of government units, federal banks, interest groups, lobby organizations, labor unions, and social partners - provide a different stock of knowledge and a fresh view of "the economy." This part of the external interaction network enables forecasters to interact with "the economy" to gather information about "the economy's" plans. In practice, forecasters are able to interact with only a limited number of representatives of "the economy." 
Still, for the forecasters, their interaction partners are like intermediaries for "the economy." When forecasters talk about their network, they rhetorically reify "the economy" and utter sentences such as: "It is really important to speak to the economy." Of course, they are aware that they cannot really speak to "the economy" as such, but they interpret their intermediaries as windows on it. Forecasters describe this part of their network as the most important one. Indeed, they say it is more important than econometric models or academic conferences. It is a place where those who forecast economic developments meet to "foretalk" with those who create economic policy, shape the economic policy frame, and actually make economic decisions. And it is a place where two quite different groups of "mental time travelers" exchange their imagined futures. The business representatives in their networks (such as CEOs, businessmen, and industrial lobbyists) consider forecasters to be scientific consultants conducting studies to answer their questions. But forecasters also give informal advice that helps the business representatives get an idea of what others think about recent economic developments and of the expectations in other economic sectors. Forecasters allow them to leave the "fog of uncertainty" (Interview 10, 00:36:45) and get a "bird's eye view" (Gilbert and Jaszi 1944) of the economy. For that purpose, several economic forecasting institutes conduct regular panel studies. To obtain information about business representatives' views of the economic future, they gather data from certain groups - for example, financial experts, CEOs, purchasing managers, port executives, and so on - at specific time intervals using standardized questionnaires. This process can also be conceptualized as one part of an ongoing (standardized) interaction between various groups of "mental time travelers." The integration of this external group works in many ways. During the forecasting process, the forecasting institutes first autonomously produce a forecast, which is called a "draft forecast" (field term). This first step is dominated by applying econometric models, which are analyzed in detail by Evans (1997, 1999). After that, the continuous formal and informal discussions with the groups start. With an eye to recent problems on the political agenda, forecasters contact specialized policymakers to discuss the draft forecast, exchange views regarding ongoing economic developments, and explore the perceptions of the members of the policymaker network. This process is generally not standardized, and it is permanently ongoing. As one member of a special interest group puts it: "There are consultations; there are even continuous consultations between us and these forecasting institutes. Of course, we do not influence the results; they are their own. But within this process of consultation, actually we are not the only ones participating in this process: the collective bargaining partners and the most important ministries are involved. In most cases, this is an ongoing process, but one that practically comes to a head when the forecasts are actually produced. In fact, they ask us to give input, to make them more true. Actually, our insights, those of the economic chambers, and those of the Treasury, Federal Reserve Bank, perhaps the Ministry of Economic Affairs are extremely highly valued by the forecasters. 
Not to say that the insight of the others is less valued, labor unions and so on, but we do indeed have our own data, and we are very liberal with this information, and we give it to the forecasters, and when they see that our insights are contrary to their forecast or their capital-investment tests, they have to think of a response. Well, this is how it works. It is an ongoing process that obviously comes together four times a year. But I think that the real value lies in the ongoing consultations. In the official meeting, to be honest, they tell us the forecast, and those of us who already know it and were somehow consulted during the preparations nod and the others watch, that's it." (Interview 17, 00:27:50) Before the forecasts are presented to the public, several meetings take place. They are formal in comparison to the more informal talks previously described in this section. At these meetings, the final draft forecasts are discussed with a group of policymakers. Normally, those who participate in these meetings are also involved in the prior talks. A forecast takes about two to three weeks to prepare completely, but the interaction and the "foretalk" take place continuously. The "mental time travelers" keep in permanent contact and ensure that information on economic policy plans, on the political climate, and even on shifts in the economic paradigm is exchanged continuously. We should not misinterpret this dense epistemic interaction network of forecasters and policymakers as purely a question of political power. Although the interests of particular groups and organizations may influence forecasts in the process of epistemic participation, there is no evidence that ideologically suitable forecasts can simply be ordered by policymakers. What is more important for the question of how forecasts for the uncertain (and non-ergodic) parts of the economy are made is that it is really the economic forecasters who benefit most from being in a process of epistemic networking with policymakers. The impact of these contacts with political actors on the epistemic process of economic forecasting cannot be overstated: they bring to light new imaginaries about the future, they socially legitimate the forecasts, and they help to base the forecasts on better information and more diverse perspectives. Patterns of Internal Interaction Another part of the epistemic process is much more closed and takes place inside the forecasting institutes. This process of internal interaction enables different forecasters to harmonize and stabilize their "mental time travels" and involves another type of "foretalk." There are five discrete internal roles the forecasters have to play. 3 Each role is responsible for a specific part of what they call "the economy." One examines public finance and the government's budget; another focuses on the labor market; a third looks at fiscal policy and inflation; and a fourth studies foreign trade. The fifth role is to integrate the data, the arguments, and the information collected by the other economists: the economist concerned is the one responsible for the national economy and is the "single person" also found in a group of econometric modelers - the one who "integrate[s] the disparate inputs and make[s] judgments about the wide range of factors that have impacts on the national and international economy" (Evans 2007, 688). 
At the outset, each of the five economists playing those roles individually produces a forecast on their respective topics using both quantitative models and additional information gathered during the external interaction described in the previous section (Patterns of External Interaction). Each of them produces calculations, creates interpretations, and thinks about the assumptions underlying these results. In this part of the forecasting process, each forecaster tries to "get a sense for what the present development may cause at the end of the year" (Interview 23, 00:18:25, my emphasis). This brings to light that "mental time traveling" is not just a cognitive but also an emotional activity. After the phase of working alone on the first forecasts, a further interaction process starts. The five types of internal forecasters meet to discuss their individual results, exchange data, discuss their aggregate-related forecasts, and describe and justify their assumptions. They interact and "foretalk" with each other and try to align their forecasts and harmonize their "mental time travels." Their aim is to create a forecast with no internal contradictions. One of the forecasters describes this step in detail: "And if someone sees 'Okay, this doesn't fit here and there', we just start again and take information from the others and go back to our offices and we begin to recalculate - we cut off the corners to make the calculations fit - we call it Rundrechnung." (Interview 25, 00:35:39; my emphasis) The notion of Rundrechnung is an interesting one, as it shows the iterative character of the interaction process. It is barely translatable, but a literal translation may be "round-calculation" or "circle-calculation." It summarizes the process of several re-adjustments of the common forecast until it is a smooth, rounded, and theoretically consistent forecast. This notion describes accurately how economic forecasters adjust, re-adjust, and re-re-adjust their results until they have created a "rounded image" of the future. To them, this notion means that the components of the forecast fit together, that the forecast appears theoretically harmonious, and that there are no internal contradictions, no inconsistent corners, in the image it provides. For about two to three weeks, the forecasters continue to work individually on their special topics. They then meet again with the others to produce a new forecast that is in line with the views of the other four types of internal forecasts. The process of Rundrechnung is based mainly on social interaction and can be understood as a repeated "foretalk" of "mental time travelers," each with a different angle on the economy. Every economist is a specialist in one part of "the economy" and experiences it from a specific perspective. They come together to produce interactionally a common view that could not be produced individually. This clearly shows that the forecasters are not passive observers of the economy but active participants in constituting the "knowledge" they create. Emotion and Scientific Reasoning After analyzing the huge interaction network economic forecasters are embedded in and how forecasters use this network to produce scientific knowledge, I now turn to a second epistemic resource forecasters use that also goes beyond numbers and econometric models: emotion. Typically, science and emotion are juxtaposed. 
Traditionally, philosophies of science such as positivism argue that emotion has no place in scientific research; it contaminates the methodological process of pure science and distorts and disturbs the knowledge produced. Traditionally, science is characterized as a "cool, logical, dispassionate" (Parker and Hackett 2014, 549) activity. In contrast, newer methodological approaches argue that emotion in general helps us to understand and interpret the world (e.g., Damasio 1994) - and this is also true for the economic world. These newer approaches criticize and challenge the "myth of dispassionate investigation" (Jaggar 1989, 161). Within the sociology of science, emotion is regarded as an indispensable part of scientific research and scientific knowledge. There, the dichotomy between reason and emotion, a sacred cow in the classic philosophy of science, is strongly challenged. Yet, though the sociology of science and the sociology of emotion grew at the same time, there is no synthesized, homogeneous, and integrated theory of emotion in science. So far, empirical work on the topic has analyzed the connections between emotion and scientific research in fine detail without joining the dots. In general, that research identifies two levels where emotion plays a role in science: First, there is emotion on the epistemic level, i.e., emotion is part of the process of producing scientific knowledge. And second, emotion plays a role on the institutional level, i.e., emotion forms and stabilizes institutions, e.g., through motivation, solidarity, etc. Even in classic sociological writings, we find close relations between emotion and scientific reasoning. The most prominent example was delivered in the 1930s by Ludwik Fleck (1979 [1935]), who created the famous idea of "thought styles": cognitive frameworks that form the perception of the outer world. Thought styles are characterized by common research questions, by methodological standards, and by a common way of thinking and speaking about both. Scientists with common thought styles form "thought collectives" ("Denkkollektive"), and these groups are harmonized by common emotions. As Fleck argues, these emotions are not the opposite of rational reasoning but a necessary part of the epistemic framework in which every scientist works. Fleck argues that the "concept of emotionless thinking is meaningless. There is no emotionless state or pure rationality as such" (Fleck 1979 [1935], 49). For him, scientific research and scientific perception are deeply social and emotional activities, and emotion is an inevitable resource for analyzing the world. Fleck's insights were widely neglected until Thomas S. Kuhn re-discovered them in the 1970s. Since then, sociologies of science have frequently analyzed the role of emotion in science and (as Fleck did in the 1930s) dismantled the dichotomy between reason and emotion. Newer research analyzes, for example, how highly influential scientists describe the emotional aspects of their work, finding variations between disciplines (Koppman, Cain, and Leahey 2015), and investigates socio-emotional aspects of scientific collaboration such as trust (e.g., Knorr Cetina 1999; Shapin 1994), solidarity (Collins 1998), job satisfaction (Hermanowicz 2003, 2005), or emotions such as shame, despair, pride, and joy in peer review panels, job meetings, and priority disputes (e.g., Bloch 2012; Lamont 2009). 
Emotions in Economic Forecasting Drawing on this line of sociological research, I now turn to scientific economic forecasting again to analyze in empirical detail how forecasters mobilize emotions to produce economic forecasts. The interviews show that institutional and social aspects of emotion play only a minor role. Rather, they suggest that scientific economic forecasters emphasize and value emotion as an epistemic resource. Let me start with an example from an interview with an experienced forecaster who worked for more than 40 years at the heart of German economic forecasting. I asked him if there were any special skills or abilities one needed to be able to forecast the economy. Later in the interview, the forecaster clarifies that, with the notion of "feeling," he does not mean an "empathy for numbers," such as humans can have for other humans, but rather a feeling for what it is possible to read from the statistics at hand and "how they are to be interpreted." The necessity of a feeling for numbers is a dominant phenomenon in the interviews. 4 Another interviewee, a young forecaster, describes it as follows: "[…] even when we do not have to produce a forecast right now to go public, we must, of course, have an idea of what impulse something has now for economic growth, if the government has now decided this or that. Our colleague who focuses on financial policy deals a lot with the numbers we are provided with and asks: What was actually done? And makes a summary of the hard data and facts, which is, so to speak, the fiscal impulse, and then we start different quantitative programs and try to get a sense of ('Gespür'), so to speak, what can this year still cause [...]" (Interview 23, 00:18:25, my emphasis) These interviews show that, for forecasters, the line between emotion and numbers seems to blur. It is an epistemic two-way relationship: It is not only necessary to get a feeling for the numbers, to know how far and in what directions they can be interpreted, and to identify possible errors. The numbers also support the forecasters in developing a feeling for the economy, for possible economic developments, and for what is going on and what the current main problems in the economy are. This is the main two-way epistemic resource of emotion in economic forecasting: feelings for numbers and numbers as a support for feeling the economy. How do forecasters develop such feelings for statistics and such quantitatively informed feelings for economic developments? This question is especially important as forecasters learn their business mainly on the job; normally, economic forecasting is not taught at universities (Reichmann 2010, 67-73). The interviews show that the ability to develop such emotion is based on experience: "You simply gather experience by joining in. [...] Then, of course, you also start reading the literature; you read the literature about models; you may develop a model yourself and then begin to forecast. At the beginning, one believes very strongly in the results delivered by econometric models. If one realizes after a quarter of a year that they were maybe not right, because something happened that was perhaps outside the model world, then you start to also bring in your experiences - and that is exactly this experience I'm talking about. That one knows how to estimate the results correctly against the background of many years of experience and many years of observing cycles." 
(Interview 39, 00:03:50, my emphasis) This (very experienced) forecaster creates an opposition between econometric models on the one side and experience on the other. Where the models fail, forecasters bring in other epistemic resources, such as emotion, interaction, or experience. This is exactly the point at which forecasters bridge the gap produced by the radically uncertain conditions that frame their epistemic world. The need for experience has another consequence: Normally, young economists who are "rookies" ("Frischlinge," Interview 37, 00:12:00) have to gain experience to develop the right emotions. This takes time, and it is difficult and complex for the more experienced ones to pass on such knowledge. There is also a different side to mobilizing emotion as an epistemic resource: the control of emotion. One forecaster answers the question about special skills or abilities differently - but he also refers to emotions: "Yes, you have to, once, you have to stay cool and not let yourself be thrown off track by every little movement of any time series. So, for example, you've made a forecast and now, the stocks have fallen. Now you have to be very calm, let's wait and see […] So you have to keep calm, stay firm." (Interview 36, 00:15:00, my emphasis) In this case, the forecaster argues that, to make good forecasts, emotions have to be controlled. Forecasters are often confronted with highly dynamic situations, e.g., in financial markets or politics. Such dynamics should not upset forecasters, as they have to keep an overview. The control of emotion and "keeping calm" is a further epistemic resource for economic forecasters. The last case I want to present here is an economist who was deeply involved in producing the System of National Accounts (SNA) in Germany throughout her academic life. She told me in great detail about her contributions to the SNA in Germany, about the technical developments, and about conflicts she had with others. After this relatively long part of the interview, she summarizes: "Let me say, it was actually a fulfilling program, I had there." I: "Yes, fulfilling in the sense of time consuming …?" "Yes, but also, I had fun. [laughs] It was really, well, with this 'account of the flow of income' ('Einkommenskreislaufrechnung') you can […] calculate balances that are not available elsewhere. And I was so crazy that I always found the new results exciting. [laughs] I was really, I really had, I really had fun, and I have to add that, I think that's the way it must be; otherwise you can't do that." (Interview 35, 00:31:45) The same forecaster said a little later in the interview: "[…] for me it was like a crossword puzzle. Every time I was curious about the balances, for example, changes in inventories, or profits. Well, let's say it this way: it was really, like I invested some of my blood, sweat, and tears ('Herzblut') in the whole thing." (Interview 35, 00:32:30) This interview shows another dimension of the role of emotion in economic forecasting. The work on the data and the development of a "feeling for numbers" mentioned above is "fun," and it needs more than a superficial glance at the data. In the above case, the forecaster even strongly identifies with the statistics she produced. And she had great "fun" when working on them. Emotion as Epistemic Resource To sum up, the empirical data from the interviews show that emotion is mobilized in economic forecasting to produce knowledge about the economic future and to ensure the quality of economic forecasts. 
Producing scientific knowledge under radical uncertainty requires investing in emotion that helps to bridge the gap left by the shortcomings of pure reasoning, of economic theory, and of econometric models. Economic forecasts would be worse 5 without the feeling for numbers, without the quantitatively informed feeling for the economy, without a coolness towards the manifold dynamics of the economy and politics, and without a kind of fun and "joy" ("Freude") when working with statistics and numbers. The interviews furthermore show that the traditional juxtaposition of emotion and reason is untenable. Economic forecasters argue that the future is open and that pure reasoning, statistics, and econometric models cannot fill the uncertain world they work in. Forecasters have to "add" something, and that is feelings, coolness, and passion. In doing so, they make it possible to produce legitimate scientific knowledge under radically uncertain conditions and to ensure the quality of their forecasts. Conclusion Producing scientific economic forecasts involves not just econometric modeling, economic theories, and huge amounts of statistics - it is also full of social interaction and emotion. Forecasts are neither the result of simply feeding econometric models nor the result of pure reasoning in social isolation. Rather, economic forecasting is also based on various forms of social interaction and on mobilizing emotion. The interactional processes enrich and sharpen the expectations and imaginations of the economic future by increasing the forecasts' responsiveness to novelty, their social legitimacy, and their stability. Emotion helps to overcome the shortcomings of data, statistics, and econometric models. Social interaction is, first of all, a resource for economic forecasters to discover novel imaginaries of the future they would otherwise have missed. It also increases the social legitimacy of their forecasts because they integrate many political and economic actors into their epistemic process. Forecasters are confronted with the problem that the open character of the economic future increases the need to legitimate the knowledge they produce about it. By including as many relevant actors as possible, the interaction process helps to justify forecasts and the political decisions deduced from this knowledge, even if they turn out to be "wrong" afterwards, and therefore improves the stability of the common view of the future. The interaction network provides access to the beliefs of many economic and political actors and enables the forecasters to pick up emerging trends entertained by actors who have a significant chance of performing the future. Economic forecasters mobilize emotion because they are aware of the risks and shortcomings of statistics and econometric models. They argue that they have to "add" something to the models, something that the models alone cannot provide. Based on experiences from past forecasting processes, they develop a feeling for numbers, one that helps them to analyze, process, and interpret what econometrics cannot depict. Both interaction and emotion are indispensable epistemic resources in forecasting the non-ergodic part of the economy.
Design and strength analysis of C-hook for load using the finite element method. A special type of C-hook is investigated in this paper. The C-hook is designed to carry a special load where it is not possible to use classical hooks or chain slings. The designed hook consists of two arms that ensure the stability of the load being carried. A finite element analysis is performed to check the stress and deformation state in the whole hook, and a fatigue analysis is performed to check the lifetime of the C-hook. Introduction Nowadays, hooks are standardized machine components of cranes [1,2]. Hooks are usually made from high-strength steel, but other materials are sometimes used. Lifting hooks can be classified into single hooks, C-type hooks, double hooks, etc. However, in special cases, classic hooks cannot be used. For this reason, engineers design and investigate special types of hooks [3][4][5][6][7]. In this paper, a special design of a C-hook for industry is investigated. The new type of C-hook is designed and used to carry a lifting load because standardized hooks and chain slings cannot be used. The stress analysis of the C-hook is performed using the finite element method. Design of C-hook The C-hook is designed and dimensioned for lifting and transporting cut rings with a maximum weight of 6,500 kg (Fig. 1). The ring has the shape of a cut circle, and a beam of complex shape is located inside it. The ring is located above the ground and is cut off from the device, so standard lifting equipment cannot be used. For this reason, the special shape of the C-hook was created. The designed hook (Fig. 2) is made from steel and consists of several parts: arms, stiffeners, and a pin. The arms and stiffeners are welded together and are made from steel 1.0570. Only the pin is a mobile part; it is made from the material 1.7131 and is fixed by a washer and a nut. The basic dimensions of the C-hook are a height of 1775 mm and a width of 410 mm; the minimum length of the ring can be 900 mm. The mechanical properties of the used materials are given in Table 1. The pin is the most stressed part, so it is made from steel with a higher yield stress. Finite element analysis The stress-strain analysis of the C-hook is performed by the FEM. This method divides the structure into finite elements that are connected at nodes. A set of simultaneous algebraic equations is then solved for these elements: K·x = F (1) where K is the stiffness matrix, x is the displacement vector, and F is the load vector. Because the shape of a hook is usually too complicated for an analytical stress analysis, the finite element method (FEM) is used; it is the most widespread method for the stress-strain analysis of designed components. Finite element analysis of C-hook The FEM analysis of the C-hook is performed. The computer model of the designed hook is divided into finite elements and the boundary conditions are applied to the model. The mesh consists of approximately 37,000 volume finite elements with quadratic approximation and approximately 183,000 nodes. The C-hook is loaded by the ring weight, and the C-hook is hung on a standard hook. The welds are represented by contacts where the edges are connected to the faces of the individual parts. The computed displacement of the C-hook is shown in Fig. 3. The main deformation of the C-hook is in the -X direction, and it can be stated that the ring does not slide off the C-hook. 
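To make Eq. (1) concrete, the following minimal Python sketch assembles and solves K·x = F for a simple stand-in problem - a 1D two-element bar fixed at one end and loaded at the other - rather than for the actual C-hook model. The material and geometry values (E, A, L) are illustrative assumptions; only the 6,500 kg load is taken from the paper.

```python
# Minimal FEM sketch: assemble and solve K x = F (Eq. 1) for a 1D bar.
# NOT the C-hook model; values of E, A, L are assumed for illustration.
import numpy as np

E = 210e9        # Young's modulus of structural steel (Pa), assumed
A = 1e-3         # cross-sectional area (m^2), illustrative
L = 0.5          # element length (m), illustrative
k = E * A / L    # axial stiffness of one bar element

# Global stiffness matrix for two elements / three nodes
K = np.array([[ k,   -k,    0.0],
              [-k,  2 * k, -k ],
              [ 0.0, -k,    k ]])

F = np.array([0.0, 0.0, 6500 * 9.81])  # end load ~ ring weight (N)

# Fix node 0 (the suspension point): drop its row and column, then solve
x = np.zeros(3)
x[1:] = np.linalg.solve(K[1:, 1:], F[1:])
print("nodal displacements (m):", x)
```

A commercial FEM package performs exactly this assembly-and-solve step, only with the roughly 37,000 volume elements, 183,000 nodes, and contact conditions of the real C-hook model instead of three nodes.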
The maximal computed equivalent stress in the C-hook (Fig. 4) is 72.311 MPa and is located on the pin. All computed values are smaller than the yield strengths of the used materials (Table 1). Because the C-hook is used repeatedly, a fatigue analysis is performed after the static linear simulation. The S-N curves for the materials are defined. From the results, it can be concluded that the fatigue life of the assembly is greater than 10^6 cycles (Fig. 5). Conclusion The stress analysis of the designed C-hook was investigated in this paper. The special type of C-hook was designed because classical hooks cannot be used. The parts of the C-hook were welded; only the pin was a mobile part, with a threaded end secured by a nut. The analysis of the C-hook was performed using the finite element method. From the obtained results, it can be stated that the computed stresses are smaller than the yield strengths of the used materials and that the most stressed part of the C-hook is the pin. The safety factor is at least 5. The fatigue life of the C-hook was also investigated; as a result, the computed lifetime of the C-hook reaches the maximum evaluated value of more than 10^6 cycles. 
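The fatigue check reported above can be illustrated with a short, hedged sketch. Assuming a Basquin-type S-N relation, sigma_a = sigma_f' · (2N)^b, with illustrative coefficients (sigma_f' and b below are assumptions, not the actual S-N data of steel 1.0570 used in the paper), the computed stress of 72.311 MPa lies far below the finite-life region:

```python
# Hedged sketch: life estimate from a Basquin-type S-N curve.
# sigma_f and b are assumed illustrative coefficients, not the paper's data.
sigma_f = 900e6      # fatigue strength coefficient (Pa), assumed
b = -0.09            # fatigue strength exponent, assumed
sigma_a = 72.311e6   # max equivalent stress from the FEM model (Pa)

N = 0.5 * (sigma_a / sigma_f) ** (1.0 / b)  # cycles to failure (2N reversals)
print(f"estimated life: {N:.3e} cycles")    # >> 1e6, consistent with Fig. 5
```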
Microstructural evolution during high-temperature tensile creep at 1,500°C of a MoSiBTiC alloy Abstract Microstructural evolution in the TiC-reinforced Mo-Si-B-based alloy during tensile creep deformation at 1,500°C and 137 MPa was investigated via scanning electron microscope-electron backscatter diffraction (SEM-EBSD) observations. The creep curve of this alloy displayed no clear steady state but was dominated by the tertiary creep regime. The grain size of the Mo_ss phase increased in the primary creep regime. However, the grain size of the Mo_ss phase was found to decrease remarkably to <10 µm with increasing creep strain in the tertiary creep regime. The EBSD observations revealed that the refinement of the Mo_ss phase occurred by continuous dynamic recrystallization, including the transformation of low-angle grain boundaries into high-angle grain boundaries. Accordingly, the deformation of this alloy in the tertiary creep regime is most likely governed by grain boundary sliding and the rearrangement of Mo_ss grains, as in superplasticity. In addition, the refinement of the Mo_ss grains surrounding large plate-like T2 grains allowed these grains to rotate their surfaces parallel to the loading axis, and consequently cavitation occurred preferentially at the interphase boundaries between the ends of the rotated T2 grains and the Mo_ss grains. Introduction Suppression of greenhouse gas emissions is a serious problem worldwide. One possible solution to this problem is to enhance the conversion efficiency of heat engines by increasing their operating temperature [1,2]. The development of heat-resistant materials is a key issue in achieving enhanced conversion efficiency. To date, Ni-based superalloys have been widely used as heat-resistant alloys because they have excellent oxidation resistance and high fracture toughness as well as superior high-temperature creep properties [3][4][5][6], and extensive studies have therefore been conducted on them [4,6]. However, the inlet temperature in cutting-edge turbine systems is approaching 1,700°C [7], which is higher than the melting temperature of Ni (1,455°C). Thus, a cooling system and a thermal barrier coating would be inevitable if Ni-based superalloys continue to be used; however, these would cause a large reduction in thermal efficiency. Therefore, the development of new ultrahigh-temperature materials beyond Ni-based superalloys is strongly demanded. Mo-Si-B-based alloys are expected candidates for structural materials at ultrahigh temperatures [1,8] because of their high melting points and excellent high-temperature strength. However, these alloys have a higher density (~9.5 g/cm^3) and a lower fracture toughness (~15 MPa·m^1/2) than Ni-based superalloys (density 8.6-9.2 g/cm^3). Yoshimi et al. found that the addition of TiC to Mo-Si-B alloys (MoSiBTiC) results in a density as low as that of Ni-based superalloys and an excellent fracture toughness higher than 15 MPa·m^1/2 [9][10][11][12][13][14][15]. The microstructure of those alloys is composed of a Mo-rich solid solution (Mo_ss), tetragonal D8l-structured Mo5SiB2 (T2), NaCl-type (Ti,Mo)Cx, and MoC0.5-type orthorhombic (Mo,Ti)2C phases, and their eutectics. Recently, the solidification pathway of the complex microstructure of a MoSiBTiC alloy was investigated [15], and spectroscopic analysis revealed that the Mo_ss phase includes Si, Ti, and C and the T2 phase includes Ti and C [13]. 
Although the MoSiBTiC alloys have excellent properties over a wide range of temperatures, it is still unclear how the microstructure contributes to these excellent properties. Recently, Yamamoto-Kamata et al. performed tensile creep tests on the 65Mo-5Si-10B-10Ti-10C (at%) alloy at temperatures between 1,400 and 1,600°C and at stresses between 100 and 300 MPa in a vacuum below 10^-3 Pa [14]. They found that the MoSiBTiC alloy exhibited a rupture time of about 400 h at 1,400°C under 137 MPa, with a rupture strain of about 70%. The creep curves of the MoSiBTiC alloy displayed no clear steady state but were dominated by tertiary creep after showing a minimum creep rate. The strain rate gradually increased in the tertiary creep regime and, in its final stage, rapidly increased toward rupture. Interestingly, the creep curves displayed strain-rate oscillations in the tertiary creep regime. Furthermore, they reported that the Mo_ss phase was refined at the rupture strain. From these results, they supposed that dynamic recrystallization (DRX) takes place in the MoSiBTiC alloy during tensile creep deformation at high temperatures. Here, a question arises: DRX generally occurs in materials with a low stacking-fault energy [16][17][18][19][20], but because molybdenum with the bcc structure has a high stacking-fault energy [21,22], it should be confirmed from a microstructural perspective whether or not DRX indeed occurs in the Mo_ss phase of the MoSiBTiC alloy. Therefore, one motivation of this study is to investigate the microstructure evolution, particularly of the Mo_ss phase, during tensile creep at high temperatures and to examine the mechanism of the observed grain refinement of the Mo_ss phase. Moreover, the high-temperature creep behavior of the MoSiBTiC alloy is discussed in connection with the observed microstructural evolution. Experimental procedures We used a MoSiBTiC alloy with the composition 65Mo-5Si-10B-10Ti-10C (at%). Pure Mo rod (99.99 mass%), Si (99.9999 mass%), B (99.95 mass%), and cold-pressed TiC powder (99.95 mass%, grain size 2-5 µm) were used as starting materials. The TiC powders were compacted by applying a load of 20 kN using a hand-press machine. Button-shaped ingots of approximately 90 g weight and 45 mm diameter were fabricated by conventional arc-melting and casting (five remelting cycles) in an Ar atmosphere. The as-cast ingots were heat treated at 1,800°C for 24 h in an Ar atmosphere. The details of the microstructure of the as-cast MoSiBTiC alloy were reported previously [9,13]. High-temperature tensile creep tests were performed at 1,500°C under a constant stress of 137 MPa in a vacuum of 10^-3 Pa using a specially designed creep test machine based on an Instron 8862 equipped with a vacuum furnace. For the creep tests, dog-bone-shaped specimens with a gauge length of 5 mm, a width of 2 mm, and a thickness of 1 mm were used. The normal direction of the specimen surfaces is parallel to the cooling direction during arc-melting and casting, i.e., perpendicular to the copper mold in the furnace, and the tensile direction lies in a plane perpendicular to the cooling direction. The details of the tensile creep test were described previously [14]. Figure 1 shows the high-temperature tensile creep curve of the MoSiBTiC alloy at 1,500°C and 137 MPa. 
To observe the microstructure evolution during creep deformation, the creep tests were interrupted at strains of 3%, 12%, 32%, 53%, and 72% (rupture), followed by furnace cooling. The cooling rate was about 30°C/min between 1,500°C and 300°C. As shown in Figure 1, the 3% strain was in the primary creep regime, whereas the 12% and 32% strains were in the tertiary creep regime where the strain-rate oscillation was observed. The 53% strain was just before the rapid increase in the strain rate toward rupture. For microstructural observations, the gauge area of the crept sample was cut into a rectangular shape of approximately 2 × 2 × 1 mm^3 with a low-speed diamond cutter. These samples were mechanically polished using abrasive papers of #80-#2,000 and buffed to a mirror surface with oily diamond slurries with particle diameters of 3 µm, 1 µm, and 0.25 µm. In addition, vibration polishing was performed for 5 h with a mixture of 40-50 nm diamond slurry and lubricant on a Buehler VibroMet 2 (Buehler, Lake Bluff, IL, USA) to remove the surface strain probably introduced by the mechanical polishing. Microstructure observations and determination of the crystal orientations of the constituent phases in the crept MoSiBTiC alloy were performed by SEM-EBSD using TSL's OIM software. The SEM-EBSD observations were carried out on a FEG-SEM SU5000 (Hitachi, Tokyo, Japan) at an acceleration voltage of 25 kV, an emission current of 74 µA, a working distance of 20 mm, and a beam step size of 0.5 µm. Microstructure changes with creep deformation The microstructure of the MoSiBTiC alloy crept to different strains was examined by SEM-EBSD observations. Figure 2 shows the phase maps of the specimens crept to different creep strains. In the phase maps, the red, green, yellow, and blue colors represent the Mo_ss, T2, (Ti,Mo)Cx, and (Mo,Ti)2C phases, respectively. The tensile direction is the horizontal direction of the micrographs. As shown in Figure 2(a), the microstructure of the MoSiBTiC alloy prior to the creep test was composed of Mo_ss, Mo5SiB2 (T2), (Ti,Mo)Cx, and (Mo,Ti)2C phases and their eutectics of Mo_ss + T2 + (Ti,Mo)Cx and Mo_ss + T2 + (Mo,Ti)2C. The compositions of those phases were given by Uemura et al. [13]. Miyamoto et al. reported that the primary phase of this alloy was the (Ti,Mo)Cx phase. The morphology of the Mo_ss phase was classified into two types: (1) coarse dendrite-like single grains formed around the coarse primary dendrite-like (Ti,Mo)Cx phase and (2) fine grains in the three-phase eutectics. In addition, a small number of cavities, shown in black in Figure 2(a), were observed. At the 3% creep strain (Figure 2(b)), the (Mo,Ti)2C phase was found to have almost disappeared, which suggests that the (Mo,Ti)2C phase may be metastable at 1,500°C. The cavities appear to have increased slightly at the phase boundaries between the T2 and Mo_ss phases. The relationship between the number of cavities and creep strain is described later. There were few changes in the microstructure at 12% strain (Figure 2(c)). It is interesting to see in Figure 2(d and e) that the T2 phase tends to rotate to orient its long axis toward the tensile direction during deformation. 
It appears that the (Ti,Mo)Cx phase in the eutectic region was spheroidized and that cavities formed at the interphase boundaries between the Mo_ss and T2 phases and/or between the Mo_ss and (Ti,Mo)Cx phases. At the rupture strain of 72% (Figure 2(f)), the constituent phases appear slightly coarsened, but, as described later, the grain size of the Mo_ss phase remarkably decreased. Moreover, the number of cavities rapidly increased, and they coalesced into large sizes. Figure 3 shows grain color maps of the Mo_ss phase, in which each grain is displayed in a distinct color. The grain size of the Mo_ss phase increased with increasing strain up to 3% (Figure 3(a and b)) but conversely decreased with further increasing strain (Figure 3(c)). Surprisingly, the grain size of Mo_ss decreased remarkably at strains of 32% and above. This significant decrease in the Mo_ss grain size first occurred in the eutectic region (Figure 3(d)) and then in the coarse dendrite-like grains as well (Figure 3(e and f)). Figure 4 shows the changes in the grain size of the constituent phases with increasing creep strain. As mentioned earlier, the grain size of the Mo_ss phase increased at strains of up to 3%, probably due to coarsening of the Mo_ss phase in the eutectics. The decrease in the grain size of Mo_ss was particularly remarkable in the range of strains from 3% to 32%. The average grain sizes of the Mo_ss phase were 39.7 µm, 7.8 µm, and 5.5 µm at strains of 3%, 32%, and 72%, respectively. In contrast, there were few changes in the average grain sizes of the T2 and (Ti,Mo)Cx phases, although their grain sizes increased to a small extent up to 3% strain. In the orientation maps of Figure 5, a dispersion of crystal orientations within the Mo_ss phase was observed already at low strains (cf. Figures 3 and 4). This orientation dispersion of the Mo_ss phase became more significant with further increases in the creep strain (Figure 5(d)). At the strains of 53% and 72% (Figure 5(e and f)), differently oriented fine Mo_ss grains were observed, although the original colony structure seems to remain. To clarify the change in the orientation distribution of the Mo_ss phase during creep deformation, we examined the IPFs (Figure 6). The individual IPFs in Figure 6 were obtained from the same areas shown in Figure 5. In Figure 6(a), we can see that the orientations of the Mo_ss colonies were localized around specific directions rather than distributed randomly. Moreover, upon comparing Figure 6(a) with Figure 6(b), the orientation distributions of the Mo_ss phase seem similar to each other, although these IPFs were obtained from different specimens. Accordingly, we can assess the change in the orientation distribution of the Mo_ss phase during creep deformation from the IPFs in Figure 6. Looking through these IPFs, we can see that the sharpness of the orientation distributions in the Mo_ss phase became much weaker with increasing creep strain. It should be noted that the orientations of the Mo_ss phase gradually spread from the initial specific orientations into a random distribution with increasing strain. At 53% and 72% strains (Figure 6(e and f)), the orientation of the Mo_ss phase was almost randomly distributed but still appears to be weakly localized around the initial orientations. From the results described earlier, we found that the average grain size of the Mo_ss phase increased in the primary creep stage but conversely decreased significantly with increasing creep strain in the tertiary creep stage. 
In addition, the decrease in the grain size was accompanied by a weakening of the sharpness of the orientation distributions in the Mo_ss phase. To understand the mechanism of the observed grain refinement in the Mo_ss phase during creep deformation, we examined the change in the grain boundary character distributions. Figure 7 presents the kernel average misorientation (KAM) maps of the Mo_ss phase at different creep strains. The characters of individual grain boundaries are also displayed in the KAM maps in distinct colors. The KAM maps revealed that there was little strain overall before creep (Figure 7(a)). Low-angle grain boundaries (LAGBs) and high-angle grain boundaries (HAGBs) were observed to a small extent in the three-phase eutectic regions at creep strains of up to 3% (Figure 7(a and b)). With increasing strain, the KAM value increased, particularly in the Mo_ss phase in the eutectic structure. In addition, lines with a higher KAM value were observed in the coarse Mo_ss grains at 12% strain (Figure 7(c)). These lines probably correspond to LAGBs with misorientation angles of <2°. At the strain of 32%, the LAGBs were found to increase significantly, and consequently the Mo_ss phase was divided into many subgrains (Figure 7(d)). With further increasing creep strain, not only LAGBs but also many HAGBs were introduced at 53% strain (Figure 7(e)). These HAGBs would be formed as a result of the incorporation of lattice dislocations into the LAGBs during deformation. At the rupture strain of 72%, the number of HAGBs further increased, and the KAM value also increased considerably (Figure 7(f)). Figure 8 shows the changes in the lengths of LAGBs and HAGBs in the Mo_ss phase with creep strain. The total lengths of LAGBs and HAGBs changed little up to 3% creep strain, but the length of LAGBs increased by approximately one order of magnitude over its initial value after the sample crept to 32% strain. The total length of HAGBs, in turn, was found to increase considerably at strains of >53%, whereas the length of LAGBs conversely decreased slightly. Figure 9 presents the fraction of misorientation angles of the LAGBs, showing that the fraction of LAGBs with higher misorientation angles increased with increasing creep strain. These findings strongly support the view that the HAGBs introduced during creep deformation developed through the incorporation of lattice dislocations into the LAGBs rather than through recrystallization of new grains. Therefore, we can conclude that the refinement of the Mo_ss phase during creep deformation is attributable to dynamic recovery or continuous dynamic recrystallization (CDRX) rather than to DRX accompanied by the nucleation and growth of new grains. 
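The boundary classification and KAM analysis described above can be sketched in a few lines of Python. This is a simplified illustration under stated assumptions - orientations given as unit quaternions on a square grid, no reduction over the 24 cubic symmetry operators that a real EBSD package such as OIM applies, and conventional thresholds (2° for LAGBs, 15° for HAGBs, a 5° KAM cutoff) that the paper itself does not spell out - so it is not the actual analysis pipeline used in the study.

```python
# Simplified sketch of EBSD post-processing: misorientation, LAGB/HAGB
# classification, and a KAM-like value. Crystal symmetry is ignored here;
# real analysis must minimize over the 24 cubic symmetry operators.
import numpy as np

def misorientation_deg(q1, q2):
    """Misorientation angle (degrees) between two unit quaternions."""
    d = abs(np.dot(q1, q2))
    return np.degrees(2.0 * np.arccos(np.clip(d, -1.0, 1.0)))

def classify_boundary(theta, lagb_min=2.0, hagb_min=15.0):
    """Conventional thresholds (assumed): 2-15 deg -> LAGB, >=15 deg -> HAGB."""
    if theta < lagb_min:
        return "subgrain/noise"
    return "LAGB" if theta < hagb_min else "HAGB"

def kam(grid, i, j, cutoff=5.0):
    """KAM at pixel (i, j): mean misorientation to first neighbors,
    excluding values above the cutoff (i.e., excluding grain boundaries)."""
    thetas = []
    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        ni, nj = i + di, j + dj
        if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
            t = misorientation_deg(grid[i, j], grid[ni, nj])
            if t <= cutoff:
                thetas.append(t)
    return float(np.mean(thetas)) if thetas else 0.0
```

With these (assumed) conventional thresholds, the transformation of LAGBs into HAGBs reported in Figures 7-9 corresponds to boundary misorientations drifting from the 2-15° class into the >=15° class as dislocations are incorporated.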
Changes in microstructure in T2 and (Ti,Mo)Cx phases SEM-EBSD observation revealed that the KAM in the T2 and (Ti,Mo)Cx phases increased slightly after creep deformation, which suggests that these phases in the MoSiBTiC alloy can deform plastically at 1,500°C. As shown in Figure 2, the T2 phase tended to rotate so that its long axis became parallel to the tensile direction during deformation. Figure 10 shows the changes in the 100 and 001 pole figures of the tetragonal D8l-structured T2 phase during creep deformation. Before the creep deformation, the 100 pole of the T2 phase was sharply localized parallel to the ND direction (the normal direction of the specimen surface), the 010 and 001 poles were distributed almost on the great circle perpendicular to the ND direction, and the 001 pole in particular was localized at directions between 30° and 50° from the RD direction. Looking through these pole figures, the sharply localized 100 poles at the ND direction were found to hardly change up to the rupture strain. However, the 010 and 001 poles rotated toward the TD and RD directions, respectively, during creep deformation. This rotation of the T2 phase seems to be completed at creep strains between 12% and 32%. Because three-dimensional SEM observation combined with the focused ion beam serial sectioning technique demonstrated that the T2 phase has a thin plate shape with (001) plate surfaces and {100} side surfaces [13], the finding that the 001 poles of the T2 phase rotated toward the RD direction means that the plate surfaces of the T2 phase became oriented parallel to the tensile direction. Figure 11 shows the relationship between the number density of cavities and the creep strain. The number of cavities increased significantly beyond a creep strain of approximately 50%, and the cavities coalesced into large sizes, resulting in the fracture of the specimen. In addition, we found from the phase maps shown in Figure 2(d-f) that the creep cavities formed preferentially at the interphase boundaries between the Mo_ss and T2 phases and/or between the Mo_ss and (Ti,Mo)Cx phases. Grain refinement of the Mo_ss phase during creep deformation From the obtained results, it is considered that the grain refinement of the Mo_ss phase during tensile creep deformation contributes to the good creep elongation of the MoSiBTiC alloy. Grain refinement during high-temperature deformation is likely attributable to DRX, which can be classified into two types: discontinuous dynamic recrystallization (DDRX) and CDRX [20]. DDRX occurs through the nucleation and growth of new grains, whereas CDRX occurs through the rearrangement of dislocations introduced by deformation via dynamic recovery and the subsequent increase in the misorientation of subgrain boundaries. Kobayashi et al. reported that CDRX occurred in an Al-Li alloy with a high stacking-fault energy during high-temperature deformation, causing superplastic deformation [23]. In the MoSiBTiC alloy, we found that the grain refinement of the Mo_ss phase took place by the following process: first, LAGBs were introduced by dynamic recovery and the rearrangement of dislocations, and then an increase in the misorientation angles, probably through the incorporation of dislocations into the LAGBs during creep deformation, resulted in the formation of HAGBs. Consequently, fine-grained structures with many HAGBs developed. Therefore, we can conclude that CDRX, rather than DDRX, gives rise to the grain refinement of the Mo_ss phase in the MoSiBTiC alloy. Recently, Chaudhuri et al. studied the high-temperature deformation behavior and microstructural evolution of pure Mo [24] and a Mo-TZM alloy [25] at elevated temperatures (1,400-1,700°C) and at different strain rates (10^-3 to 10 s^-1). 
From an overall assessment of the microstructure, they concluded that dynamic recovery (CDRX) is the dominant restoration process during the deformation of pure Mo and the Mo-TZM alloy, although recrystallized grains were partially observed at the grain boundaries under specific conditions (e.g., at the higher strain rates and lower temperatures for Mo-TZM). Kamata et al. reported that the creep curves of MoSiBTiC alloys exhibited large creep strain-rate oscillations at higher stresses and lower temperatures [14]. Therefore, there is a possibility that DDRX could take place in the Mo_ss phase of MoSiBTiC alloys under specific conditions, but further investigation is required to comprehensively understand the creep mechanism of MoSiBTiC alloys. Influence of microstructural changes on creep deformation From careful observations of the microstructure changes in the MoSiBTiC alloy during creep deformation at 1,500°C under 137 MPa, we found that the microstructural changes significantly govern the creep behavior of the MoSiBTiC alloy, as illustrated in Figure 12(a). In the following sections, the creep deformation behavior observed in the MoSiBTiC alloy is discussed in connection with the evolution of the microstructures. Primary creep regime In this regime, the grain size of the Mo_ss phase increased considerably, and dislocation creep of the Mo_ss phase is considered the rate-controlling process. However, the grain size of the Mo_ss phase conversely decreased to approximately 20 µm at strains between 3% and 12%. The Mo_ss grains in the eutectic area were divided into subgrains by the introduction of LAGBs due to dynamic recovery and the rearrangement of lattice dislocations (Figure 12(b)). This refinement of the Mo_ss phase would give rise to grain boundary sliding during creep deformation. From the findings that the minimum creep rate appeared at a strain of 6% and that no significant creep damage, such as cavities, was observed between the strains of 3% and 12%, the observed minimum creep rate is likely determined by the competition between a decrease in the strain rate due to work hardening and an increase in the strain rate associated with the grain refinement. Tertiary creep regime The contributions of the changes in the (a) Mo_ss, (b) T2, and (c) (Ti,Mo)Cx phases to creep deformation are described as follows: (a) The average grain size of the Mo_ss phase further decreased to <10 µm, and the LAGBs were transformed into HAGBs with increasing creep strain through the incorporation of lattice dislocations into the LAGBs. According to previous bicrystal studies, HAGBs can slide more easily than LAGBs and coincidence site lattice (CSL) grain boundaries [26,27]. In addition, there is a common consensus that superplastic deformation can occur in metallic materials with grain sizes of <10 µm [28]. Thus, these findings suggest that a superplastic-deformation-like mechanism operates in the tertiary creep regime. If this is the case, the strain-rate oscillation observed in the creep curve shown in Figure 1 can be explained as follows: the decrease in the grain size and the increase in the misorientation angles of grain boundaries with increasing creep strain enhanced grain boundary sliding, resulting in a continuous increase in the strain rate. However, the grain boundary sliding gives rise to stress concentration at the triple junctions, and consequently the grain boundary sliding is retarded unless the stress concentration is relaxed, e.g., by diffusion. 
These successive events can promote the strain-rate oscillation in the creep curve of the MoSiBTiC alloy. (b) The plate-like T2 phase rotated such that the surfaces of the plate-like grains became parallel to the loading axis during creep deformation. Under an applied stress, a torque acts on any plate-like T2 grain that is not parallel to the loading axis. If a T2 grain were surrounded by coarse Mo_ss grains, its rotation would be difficult because of the constraint imposed by the surrounding Mo_ss grains. However, the marked refinement of the Mo_ss grains probably makes it possible to rearrange the Mo_ss grains easily, so the rotation of the plate-like T2 phase can proceed under the applied stress. In addition, the rearrangement of fine Mo_ss grains might relax the stress arising near the T2/Mo_ss interface in the Mo_ss phase. By analogy with the stress distribution in short-fiber-reinforced composites, the shear stress acting near the T2/Mo_ss interface is maximal at the ends of the T2 grains once their plate surfaces become parallel to the tensile axis [29]. Accordingly, the stress at the T2/Mo_ss interfaces perpendicular to the tensile axis may be difficult to relax, and cavitation is therefore prone to occur there. (c) Cavities also formed at the coarse (Ti,Mo)Cx/Mo_ss interfaces as well as at the T2/Mo_ss interfaces. The coarse (Ti,Mo)Cx phase with its dendrite-like shape is likely to restrict the rearrangement of the fine Mo_ss grains surrounding it. Thus, the relaxation of the incompatible stress near the (Ti,Mo)Cx/Mo_ss interfaces may not have been sufficient, which may have resulted in the formation of cavities to reduce the incompatible stress. End of the tertiary creep regime Although the creep strain rate increased moderately in the accelerated creep regime, it increased much more rapidly beyond a creep strain of approximately 60% toward rupture. This creep behavior is attributed to a significant increase in the number density of cavities and their coalescence into large sizes. Conclusions In summary, the microstructural evolution in the TiC-reinforced Mo-Si-B-based alloy (MoSiBTiC alloy) during tensile creep deformation at 1,500°C and 137 MPa is reported in this study. The main results are summarized as follows. (1) The grain size of the Mo_ss phase increased in the primary creep regime. However, near the strain showing the minimum creep rate (12% strain), the grain size of the Mo_ss phase decreased noticeably. With a further increase in the creep strain in the tertiary creep regime, the Mo_ss phase was remarkably refined to an average grain size of <10 µm. (2) The refinement of the Mo_ss phase was found to proceed through CDRX: first, the Mo_ss grains were divided into subgrains due to dynamic recovery and the rearrangement of lattice dislocations, and then the introduced LAGBs were transformed into HAGBs, probably owing to the incorporation of dislocations into the LAGBs. (3) According to the aforementioned results, the creep deformation of the MoSiBTiC alloy in the tertiary creep regime is considered to be governed by grain boundary sliding and the rearrangement of grains in the Mo_ss phase; i.e., a superplastic-deformation-like mechanism would contribute to the creep deformation. (4) The plate-like T2 grains rotated their surfaces parallel to the loading axis during creep deformation. 
The refinement of the Mo ss grains surrounding the T 2 phase would make it possible for the T 2 phase to rotate under an applied stress. (5) Cavitation was found to occur preferentially at the interfaces between the Mo ss and T 2 phases and/or between the Mo ss and (Ti,Mo)C x phases. The number of cavities hardly changed with increasing creep strain until the rotation of the T 2 phase parallel to the tensile direction was completed. Thereafter, the number of cavities increased exponentially with further increasing creep strain, and the cavities coalesced into larger ones, resulting in fracture.
6,386.8
2020-01-01T00:00:00.000
[ "Materials Science" ]
Circadian Gene cry Controls Tumorigenesis through Modulation of Myc Accumulation in Glioblastoma Cells Glioblastoma (GB) is the most frequent malignant brain tumor among adults, and currently there is no effective treatment. This aggressive tumor grows fast and spreads through the brain, causing death within about 15 months. GB cells display a high mutation rate and generate a heterogeneous population of tumoral cells that are genetically distinct. Thus, identifying the genes and signaling pathways that contribute to GB progression is of great interest. We used a Drosophila model of GB that reproduces the features of human GB and describe the upregulation of the circadian gene cry in GB patients and in a Drosophila GB model. We studied the contribution of cry to the expansion of GB cells and the neurodegeneration and premature death caused by GB, and we determined that cry is required for GB progression. Moreover, we determined that the PI3K pathway regulates cry expression in GB cells, and in turn, cry is necessary and sufficient to promote Myc accumulation in GB. These results contribute to understanding the mechanisms underlying GB malignancy and lethality, and describe a novel role of Cry in GB cells. Introduction Glioblastoma (GB) is the most common and aggressive type of glioma among all brain tumors, and it accounts for 57.3% of all gliomas [1]. It was classified in 2016 as a WHO grade IV diffuse oligodendroglial and astrocytic brain tumor, but the most recent classification (2021) includes this type of tumor in the "Gliomas, glioneuronal tumors, and neuronal tumors" group, termed Glioblastoma, IDH-wildtype [2]. Despite current treatments, the median survival of GB patients is 15 months [3], and it is estimated that only 6.8% of patients survive five years after diagnosis [1]. Understanding the genetic, molecular and cellular bases of gliomagenesis is fundamental for the development of effective therapies. In terms of histopathology and genetic expression, GB is a very heterogeneous type of tumor, even within the same patient [4]. However, there are common mutations in GB affecting different pathways that show mutual exclusion: the p53 pathway, the Rb pathway and components of the PI3K pathway [5]. Previous studies from our lab used a GB model in Drosophila, developed by Read and collaborators in 2009, that recapitulates key aspects of the disease both genetically and phenotypically [6][7][8][9][10][11][12][13]. This model is based on the expression of constitutively active forms of the epidermal growth factor receptor (EGFR λ ) and the phosphatidylinositol 3-kinase (PI3K) catalytic subunit (dp110 CAAX ) (orthologues of EGFR and the PI3K catalytic subunit in Drosophila, respectively). We used the binary expression system Gal4/UAS [14] to express EGFR λ and PI3K dp110 CAAX specifically in glial cells under the control of the repo-Gal4 driver [6]. The co-activation of the EGFR and PI3K signaling pathways in Drosophila glial cells reproduces the cascade of signaling events that occurs in GB patients [6]. In consequence, GB cells upregulate myc expression, which is essential for tumoral transformation, and the glial tumor cell numbers increase along with the expansion of the glial membrane. As a result, GB progression causes a reduction in the number of synapses in neighboring neurons and premature death [6,9,15]. Furthermore, EGFR and PI3K pathway co-activation regulates processes such as progression and entry into the cell cycle and protein synthesis [6,7]. 
c-myc is one of the oncogenes most frequently amplified in human cancer, including GB. About 60%-80% of human GB cases show elevated Myc levels [16]. Myc regulates cell proliferation, transcription, differentiation, apoptosis and cell migration. It is the point where the EGFR and PI3K pathways converge; thus, Myc is considered essential for GB transformation [6,[16][17][18]. Furthermore, in vitro and in vivo studies have shown that myc inhibition prevents glioma formation, inhibits cell proliferation and survival and even induces disease regression [16,19]. These features are conserved in Drosophila [6]. In recent years, the study of alterations in circadian rhythm genes has emerged in different types of cancer, including GB [20]. Previous reports suggested that circadian rhythm genes play essential roles in different aspects of tumor progression. The central clock organizes the oscillations and rhythmicity of physiological processes and modulates the expression of genes related to cell proliferation or differentiation, such as cell cycle components [21], proto-oncogenes and tumor suppressors [22]. In mammals, the structure responsible for coordinating circadian behavior throughout the body is the suprachiasmatic nucleus (SCN), located in the anterior region of the hypothalamus and made up of about 50,000 neurons in humans [23]. All the neurons that compose the central clock express the core circadian genes that control the oscillations that organize the cycles of the whole organism in the absence of environmental cues. Furthermore, synchronization of the internal clock with light/dark cycles relies on the cryptochrome protein (Cry), a blue-light photopigment expressed in certain subsets of clock neurons. Cry is a receptor of near-UV/blue light and a regulator of gene expression that belongs to the group of DNA photolyases. It was suggested that the last universal common ancestor (LUCA) had one or several photolyases, supporting the evolutionary conservation of cryptochrome genes [24]. However, the mammalian gene that plays the role of Drosophila cry remains unknown. Interestingly, Drosophila Cry also acts as the mammalian Cry when expressed in peripheral clocks [25]. Besides, cry1 expression is androgen-responsive, and Cry1 regulates DNA repair and the G2/M transition and is associated with poor outcomes in prostate cancer and colorectal cancer. Regarding GB, studies in patients with primary gliomas show an association between a specific per1 variant and overall glioma risk. Several circadian genes, including cry1, exhibited differential expression in GB samples compared to control brains, as described in the literature [26,27] and in human cancer gene expression databases (https://www.proteinatlas.org, accessed on 1 February 2022; https://cancer.sanger.ac.uk/, accessed on 10 January 2022). Besides, the expression of the circadian gene clk is significantly enhanced in high-grade gliomas and correlates with tumor progression [28]. Moreover, per1 and per2 expression also increases the efficacy of radiotherapy in GB cells [29]. Furthermore, high levels of cry1 inversely correlate with median survival in GB patients, acting as a signal of poor prognosis (http://gepia.cancer-pku.cn/detail.php?gene=CRY1, accessed on 1 February 2022). Still, the functional mechanism of Cry in cancer susceptibility and carcinogenesis remains unsolved. Different studies show a relationship between Cry and Myc [30]; c-Myc levels decrease in cry1/cry2 null mutant mice [31]. 
Besides, cry1 expression is induced by Myc in GB cells in culture [32]. Taking into account the deregulation of circadian gene expression in tumor tissues and the pre-established relationship between Cry and Myc, which is a key player in GB, here we show that cry is regulated by the PI3K pathway, that cry expression enhances Myc accumulation in GB cells, and that it is necessary for GB progression. Cry Expression in Glioblastoma To determine if cry expression was affected in glioma samples, we extracted RNA from the heads of 7-day-old adult control and glioma flies. Quantitative RT-PCR results (see Table 1 in Materials and Methods) indicate that cry mRNA levels are 50 times higher in glioma samples as compared to controls ( Figure 1A). This result is in line with data retrieved from the TCGA-GBM dataset (at http://gliovis.bioinfo.cnio.es/, accessed on 1 February 2022) that indicate a significant increase of cry1 mRNA levels (RNA-seq) in GB samples as compared to non-tumor tissue. Next, to determine if cry upregulation occurs in GB cells, we used a specific reporter line that generates a green fluorescent protein (GFP)-tagged form of Cry (GFP-Cry) and visualized adult brains by confocal microscopy. The images show the GFP signal (Cry) and the glial membrane marked in red with myristoylated red fluorescent protein (mRFP) ( Figure 1B-E,B'-E'). The quantified GFP-Cry/mRFP co-localization is higher in glioma samples than in controls ( Figure 1B,C,F), suggesting an accumulation of Cry in glioma cells. This signal is restored to control levels upon cry knockdown by means of RNAi expression in glial or glioma cells ( Figure 1D-F). Next, we analyzed human mRNA expression databases for Glioblastoma multiforme (http://gliovis.bioinfo.cnio.es/, accessed on 10 January 2022). The results indicate that cry1 is transcriptionally upregulated in primary tumors of GB patients ( Figure 1G) and that cry1 upregulation correlates with worse prognosis ( Figure 1H). Moreover, cry1 is also upregulated in secondary GB ( Figure 1I) and correlates with poor prognosis in secondary GB patients ( Figure 1J). Altogether, these results indicate that cry is transcriptionally upregulated in GB cells in Drosophila and in patients, and suggest a role in GB malignancy and aggressiveness. Cry Mediates GB Progression and Neurodegeneration To determine the contribution of cry to GB progression, we used a previously validated protocol to quantify tumor progression and the associated neurodegeneration in Drosophila [7,9,11]. We stained adult control brains and compared them with GB, GB + cryRNAi and wt brains expressing cryRNAi in glial cells. We used a specific antibody against Repo to visualize the nuclei of all glial cells and quantified the number of glial cells in the confocal images (Figure 2A-E). The results indicate that GB samples have a significant increase in the number of glial cells compared to control samples, but this increase depends on cry expression (Figure 2A-C,E). Besides, knockdown of cry in normal glia does not alter the number of glial cells ( Figure 2D,E). In addition, we quantified the volume of the glial membrane. We used Imaris software to measure the volume of the red signal that corresponds to a myristoylated form of RFP (mRFP) expressed in glial cells under the control of repo-Gal4. The quantification of the volume shows a significant expansion of the glial membrane in GB compared to control samples, but this increase depends on cry expression ( Figure 2A'-C',F). 
Again, knockdown of cry in normal glia does not alter the volume of the glial membrane ( Figure 2D',F). These results suggest that cry expression is required for GB progression, but not for normal glia development. Next, we studied the impact of GB progression and cry expression on neighboring neurons. We counted the number of synapses in motor neurons of the adult neuromuscular junction (NMJ), a standardized tissue to study neurodegeneration [9,11,33]. To visualize synapses, we used an anti-Brp antibody (nc82) to detect active zones in the neurons, and counted the number of synapses in control samples, GB, GB + cryRNAi and normal glia + cryRNAi ( Figure 2G-J). The quantification of synapse number ( Figure 2K) shows that GB induction provokes a significant reduction in the number of synapses as compared to control samples, compatible with a neurodegenerative process. This effect was previously described [7,9,11] as a consequence of GB progression. Moreover, cry knockdown in GB prevents the reduction in the number of synapses, and cryRNAi expression in normal glial cells does not cause any detectable change in the number of synapses. Finally, we aimed to determine the systemic effect of cry. We expressed cryRNAi in glia or GB cells, and we analyzed the life span of adult flies. The results show that GB causes a significant reduction of life span and a premature death, which is prevented by cryRNAi expression in GB cells. Moreover, cryRNAi expression in normal glial cells does not reduce lifespan; rather, it causes a significant increase in the average lifespan ( Figure 2L). Signaling Pathway to Control Cry Upregulation To decipher the specific signaling pathway responsible for cry transcriptional activation in GB cells, we analyzed the contribution of the two main pathways activated in this model of GB, EGFR and PI3K. Both pathways converge on the expression of the gene myc (see Figure 3A for detailed genetic epistasis in GB). Thus, we analyzed the contribution of PI3K, EGFR and myc to cry upregulation. We measured the fluorescent signal of the GFP-cry reporter in control adult brains ( Figure 3B-B") and compared it with adult brains upon expression of the constitutively active forms of PI3K ( Figure 3C-C") or EGFR ( Figure 3D-D") in glial cells (under the control of repo-Gal4). In addition, we analyzed the GFP-cry signal in glial cells upon myc upregulation ( Figure 3E-E"). In the confocal images, we quantified the GFP signal that co-localizes with glial membranes (mRFP) ( Figure 3F). The results indicate that PI3K expression, but not EGFR or myc overexpression, is sufficient to increase the GFP-cry signal. These results suggest that PI3K upregulation induces cry transcription, whereas EGFR or myc expression does not induce cry expression in glial cells. Cry Regulates Myc Expression in Glial Cells Next, to determine the epistatic relation between cry and myc, we analyzed Myc protein accumulation in glial cells upon cry expression. 
First, to analyze if Cry is sufficient to cause an increase in Myc protein levels, we used a specific antibody against Myc and analyzed Myc signal levels upon cry overexpression, myc overexpression or cry + myc overexpression in glial cells ( Figure 4A-D'). The quantification of the Myc surface signal that coincides with glial cells (anti-Repo) showed that cry expression in glia is sufficient to increase the Myc protein signal in glial cells, comparable to myc upregulation. In addition, cry + myc upregulation shows an additive effect on the increase of Myc protein levels ( Figure 4E). To determine whether cry is required for myc expression in GB, we quantified the glial Myc signal in the control, cryRNAi, GB, GB + cryRNAi and cry upregulation conditions ( Figure 4F-J'). The quantifications indicate that cryRNAi in glial cells does not reduce the amount of Myc in glial cells ( Figure 4K). In addition, the GB condition increases the number of Myc-positive glial cells, as does cry upregulation in glial cells ( Figure 4K). Finally, cryRNAi expression in GB cells prevents the accumulation of Myc in GB cells. Taking all these results together, we conclude that cry is sufficient to trigger Myc accumulation in glial cells, and cry expression is necessary for Myc accumulation in the GB condition. Cry Contribution to GB Progression To investigate the contribution of Cry to glioma progression, we determined the number of glial cells and the volume of the glial membrane network in control adult brains, and upon GB (PI3K + EGFR), PI3K + cry, EGFR + cry or myc + cry expression in glial cells ( Figure 5A-E'). The quantification showed that all these genetic combinations cause an increase in the number of glial cells as compared to control brains ( Figure 5F). However, only the GB condition provoked an expansion of the glial membrane volume; the combinations of PI3K + cry, EGFR + cry or myc + cry showed a volume of glial membrane comparable to control brains ( Figure 5G). To further determine the contribution of cry to GB expansion, we analyzed the effect of single gene upregulation in glial cells for cry or myc, and of combined cry + myc expression ( Figure 5H-K'). The quantification of glial cell number showed that cry or myc expression alone, or in combination, is sufficient to increase the number of glial cells with respect to control samples ( Figure 5L). 
Nevertheless, none of these genetic modifications is sufficient to expand the glial membrane volume ( Figure 5M). These results suggest that cry or myc are sufficient to trigger a glial cell number increase in adult brains, but not to expand the volume of the glial membrane network. Cry Upregulation in Glial Cells Causes Synapse Loss and Premature Death It was previously described that GB progression induces synapse loss, an early symptom of neurodegeneration. To determine the contribution of cry to synapse loss, we counted the number of active zones in motor neurons of the adult neuromuscular junction in the control, GB (PI3K + EGFR), PI3K + cry, EGFR + cry or myc + cry samples ( Figure 5N-R). The quantification of the number of active zones showed that the expression in glial cells of GB (PI3K + EGFR), PI3K + cry, EGFR + cry or myc + cry is sufficient to reduce the number of synapses in NMJ neurons ( Figure 5S). Finally, to evaluate the systemic effect of GB and of glial expression of PI3K + cry, EGFR + cry or myc + cry, we analyzed the lifespan of adult individuals. The results show that GB causes a premature death, as previously described in Drosophila and mouse xenografts [8,9,11]; glial upregulation of EGFR + cry or myc + cry causes a significant reduction of lifespan, although less severe than that caused by GB, while PI3K + cry upregulation in glial cells does not reduce lifespan ( Figure 5T). Discussion Different studies have established a relation between alterations in circadian rhythm genes and cancer [32,34]. Specifically, one of the genes associated with different types of cancer is cry [35][36][37]. Thus, this study aims to investigate the role of cry in a Drosophila GB model. The previous work of Luo et al. 2012 [38] describes a reduction in the number of glial cells positive for cry1/2 expression in glioma tissue compared to normal tissue. 
However, the authors found that the glioma cells that are positive for Cry1/2 show an increased amount of Cry1/2 with respect to non-tumoral tissue. Moreover, both Madden et al. 2014 [26] (with a sample 10 times larger than that of Luo et al. 2012) and Wang et al. 2021 [27] (using data from three different databases) analyzed the expression of circadian genes in glioma tissue compared to healthy tissue and concluded that cry1 is overexpressed in glioma tissue. We also found this result in other databases such as https://www.proteinatlas.org/ and http://gliovis.bioinfo.cnio.es/, accessed on 10 January 2022 [39], which in turn is compatible with the observations in the Drosophila model of GB. Nonetheless, Fan et al. [40] investigated the role of Cry2 in rat glioma cells and observed that cry2 mRNA and protein levels showed an aberrant rhythmic periodicity of 8 h, compared to 24 h in normal tissue. Thus, future studies on the contribution of circadian rhythm genes should take such variations in expression into consideration. In contrast, Dong et al. [41] state that glioblastoma stem cells (GSCs) display robust circadian rhythms dependent on core clock transcription factors. The use of Cry1/2 agonists induced anti-tumor effects, suggesting that GSCs are sensitive to Cry1/2 activity. Taking the different conclusions into consideration, most of the literature and our data suggest that cry is upregulated in glioma cells and promotes glioma progression; however, the role of cry expression in Drosophila, or cry1/2 expression in mammals, may differ according to the glioma subtype, the specific mutations in glioma cells, the cell population studied within the glioma, and the hour of the day. We described an increase in cry1 mRNA levels in human GB samples and in a well-studied Drosophila model of GB. However, we cannot determine whether GB cells show an increase in cry transcription or an enhancement of cry mRNA stability. The Drosophila GB model is based on the activation of the two most frequently mutated pathways in GB, PI3K and EGFR, which converge on Myc as a coincidence point. These pathways are of great relevance in promoting GB cell expansion, GB progression and, in consequence, the deterioration of neighboring neurons and a premature death. The results indicate that cry upregulation in Drosophila GB cells depends on PI3K expression, and that it is required for the GB cell number increase and synapse loss ( Figure 6). In addition, cry expression in glial cells is sufficient to increase the number of glial cells. However, cry expression is dispensable for normal wild-type glial growth during development. Given that cry is upregulated in GB cells and promotes a glial cell number increase, yet we did not observe any contribution to normal glia development, Cry is a potential target for GB treatment. Besides, we show that Cry is necessary and sufficient to induce myc expression in GB cells. This agrees with in vitro studies that revealed an increase in Myc levels as a result of cry upregulation [32]. Therefore, we propose that cry is part of the PI3K-Myc signaling pathway in GB, where cry upregulation would be associated with the glial cell number increase. However, PI3K is a highly promiscuous enzyme that participates in numerous signaling pathways, and the results suggest that the Cry contribution is restricted to the malignant features of GB dependent on myc, such as the GB cell number increase and neurodegeneration. 
However, cry expression does not contribute to the glial membrane expansion characteristic of GB progression. Besides, cry expression in glial cells partially reduces lifespan, although its effect is less severe than that of GB. These results suggest that Cry plays a central role in GB and is required for GB formation, and that cry mutations might be responsible for several features of GB. The human gene expression databases indicate that cry1 expression levels correlate negatively with survival and are associated with a poor prognosis. In conclusion, these results suggest that further studies on the contribution of Cry1 to human GB progression could lead to novel strategies to treat GB patients. Recent publications describe the communication between GB cells and neurons in human GB cells and mouse xenografts based on the establishment of electrical and chemical synapses, which are essential for tumor progression [42,43]. A possible explanation for GB prevention by cry downregulation arises from the non-circadian function of Cry as a regulator of synapse number through its genetic and physical association with the key presynaptic protein Bruchpilot (Brp) [44,45]. In Drosophila, cry mutants show reduced brp expression levels. Indeed, Cry interacts physically with Brp to modulate its stability, and triggers its degradation upon activation by light. Therefore, it is possible that cry overexpression in glial cells promotes the establishment of synapses with neurons. Moreover, the absence of light input impairs Brp degradation in glial cells, thus promoting tumoral progression. In conclusion, further experiments are required to unveil the molecular interactions of the Cry and Brp proteins, including the putative formation of abnormal synapses between glial cells and neurons under the GB condition. Studies from other groups describe the beneficial effects of haloperidol on cry1 expression in GB cells, but these results obtained in cell culture suggest that the doses required to treat patients might be toxic; in consequence, specific delivery strategies combined with haloperidol are worth studying. In addition, we observed significant effects of cry knockdown in normal glial cells, in line with Bolukbasi et al., who recently described the extension of lifespan by foxo upregulation in glial cells [46]. We observed an effect of cry upregulation on the number of glial cells ( Figure 5L). Given that cry and foxo respond to the PI3K pathway, it is tempting to speculate that cry expression is relevant for the lifespan extension mediated by the PI3K pathway, and for associated interventions such as dietary restriction. 
The classical definition of Cry as a regulator of circadian rhythms can now be expanded to the biology of glial cells, GB progression and the extension of lifespan in Drosophila. This plethora of different phenotypes associated with one gene is now a common feature, previously described for Troponin I [47][48][49], Caspases [50][51][52] or even other circadian genes such as per1 [53,54], and contributes to explaining the multiple phenotypes observed in patients. This study describes the epistatic relationship between PI3K, cry and myc and its relevance for GB progression. The strengths of this study lie in the importance of understanding the mechanisms underlying the progression of a fatal tumor such as GB, and in the reliability of Drosophila as an animal model for studying human disease. However, it is important to take into consideration the limitations of the results to put them in perspective. We used a model based on the activation of the PI3K and EGFR pathways that reproduces the key features of human disease progression, but the contribution of additional mutations such as those in IDH or TP53 to GB requires further study; thus, new models in flies or other animal models will contribute to validating and narrowing down our findings. The stock containing UAS-cryRNAi was previously generated and validated [57]; this construct produces a double-stranded RNA that corresponds to the 300-799 region of the cry-RA mRNA. The glioma-inducing line contains the UAS-dEGFR λ and UAS-dp110 CAAX transgenes, which encode constitutively active forms of the Drosophila orthologues of human EGFR and PI3K, respectively [6]. The repo-Gal4 line drives Gal4 expression in glial cells and their precursors [59,60]; combined with the UAS-dEGFR λ and UAS-dp110 CAAX line, it allows us to generate a glioma by means of the Gal4 system [14]. To visualize glial or GB cell membranes, we induced the expression of a myristoylated form of red fluorescent protein (UAS-mRFP, described in [9]) under the control of the specific glial promoter repo-Gal4. Gal80 TS is a repressor of Gal4 activity at 18 °C, whereas at 29 °C it is inactivated [61]. The tub-Gal80 TS construct was used in all the crosses to avoid the lethality caused by glioma development during the larval stage. The crosses were kept at 17 °C until the adult flies emerged. To inactivate the Gal80 TS protein and activate the Gal4/UAS system, allowing the expression of our genes of interest, the adult flies were maintained at 29 °C for 7 days, except in the survival assay (flies were kept at 29 °C until death). Immunostaining and Image Acquisition All tissues were processed simultaneously for each experiment. Adult brains were dissected and fixed with 4% formaldehyde in phosphate-buffered saline for 20 min, whereas adult NMJs were fixed for 10 min; in both cases, samples were washed 3 × 15 min with PBS + 0.4% Triton, blocked for 1 h with PBS + 0.4% Triton + 5% BSA, incubated overnight with primary antibodies, washed 3 × 15 min, incubated with secondary antibodies for 2 h and, in the case of the brains, mounted in Vectashield mounting medium with DAPI. 
The primary antibodies used were anti-Repo mouse (1:200; DSHB, Iowa City, IA, USA) to recognize glial nuclei, anti-Bruchpilot mouse (nc82; 1:50; DSHB, Iowa City, IA, USA) to recognize the presynaptic protein Bruchpilot, anti-HRP rabbit (1:400; Cell Signaling, Danvers, MA, USA) to recognize neuronal membranes, anti-GFP rabbit (1:500; DSHB, Iowa City, IA, USA) and anti-Myc guinea pig (1:100; DSHB, Iowa City, IA, USA) to recognize the nuclear protein Myc. The secondary antibodies used were anti-mouse, -rabbit or -guinea pig Alexa 488 or 647 (1:500; Life Technologies, Carlsbad, CA, USA). Images were taken with a Leica SP5 confocal microscope, applying the same conditions for each experiment. qRT-PCR The mRNA for all samples was extracted from adult brains and processed in parallel. For this, 1- to 4-day-old male adult flies were maintained at 29 °C for 7 days and collected on dry ice at ZT6. Total RNA was extracted in triplicate from 30 heads with TRIzol and phenol-chloroform. cDNA was synthesized from 1 µg of RNA, and cDNA samples at 1:5 dilutions were used for real-time PCR reactions. Transcription levels were determined in a 14 µL volume in duplicate using SYBR Green (Applied Biosystems, Waltham, MA, USA) and a 7500 qPCR system (Thermo Fisher Scientific, Waltham, MA, USA). We analyzed the transcription levels of cry using Rp49 as a housekeeping gene reference. Sequences of primers were as follows. Cycling conditions were 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s and 55 °C for 1 min; after completing each real-time PCR run, outlier data were analyzed using the 7500 software (Applied Biosystems, Waltham, MA, USA). Ct values (duplicates from three biological replicates) were analyzed by calculating 2^(-ΔΔCt). Survival Assays Lifespan was determined under 12:12 h LD cycles at 29 °C. Three replicates of 30 1- to 4-day-old male adults were collected in vials containing standard Drosophila media and transferred every 2-3 days to fresh Drosophila media. Quantification Relative cry fluorescent reporter signals within brains were determined from images taken at the same confocal settings, avoiding saturation. For the analysis of co-localization rates, the "co-localization" tool of the LAS AF Lite software (Leica, Wetzlar, Germany) was used; the co-localization rates between the green signal (in both cases) and the signal coming from glial tissue were taken for statistics from three slices per brain at similar positions along the z-axis. The glial network was marked by a UAS-myristoylated-RFP reporter (mRFP) specifically expressed under the control of repo-Gal4. The total volume was quantified using the Imaris surface tool (Imaris 6.3.1 software, Oxford Instruments, Abingdon, UK). Glial nuclei were marked by staining with anti-Repo (DSHB). The numbers of Repo+ cells and of synapses (anti-nc82; DSHB) were quantified using the spots tool in Imaris 6.3.1 software (Oxford Instruments, Abingdon, UK). We selected a minimum size and threshold for the spots in the control samples of each experiment: 0.5 µm for active zones and 2 µm for glial cell nuclei. The Myc glial signal was quantified using the Imaris surface tool (Imaris 6.3.1 software, Oxford Instruments, Abingdon, UK), creating a mask for the glial nuclei signal and exclusively selecting the Myc signal corresponding to glial nuclei. We then applied the same conditions to the analysis of the corresponding experimental sample. 
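To make the 2^(-ΔΔCt) relative-quantification arithmetic used in the qRT-PCR section above concrete, the following is a minimal Python sketch of the method; the Ct values here are invented for illustration only and are not the paper's data.

    import numpy as np

    def fold_change_ddct(ct_target_sample, ct_ref_sample,
                         ct_target_control, ct_ref_control):
        """Relative expression by the 2^(-ddCt) method.

        dCt normalizes the target gene (here cry) to the housekeeping
        reference (here Rp49); ddCt compares sample against control.
        """
        dct_sample = np.mean(ct_target_sample) - np.mean(ct_ref_sample)
        dct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
        return 2.0 ** (-(dct_sample - dct_control))

    # Hypothetical Ct triplicates (illustration only):
    fold = fold_change_ddct(
        ct_target_sample=[22.1, 22.3, 22.0],   # cry, glioma heads
        ct_ref_sample=[18.0, 18.1, 17.9],      # Rp49, glioma heads
        ct_target_control=[27.9, 28.0, 28.1],  # cry, control heads
        ct_ref_control=[18.1, 18.0, 18.2],     # Rp49, control heads
    )
    print(f"fold change = {fold:.1f}")  # ~54 with these made-up values

A lower Ct means more template, so the large Ct gap between glioma and control heads for cry, with a stable Rp49 reference, translates into the roughly 50-fold upregulation reported above.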
Statistics The results were analyzed using the GraphPad Prism 5 software. Quantitative parameters were divided into parametric and nonparametric using the D'Agostino and Pearson omnibus normality test, and the variances were analyzed with the F test. The t-test and the ANOVA test with Bonferroni's post hoc correction were used for parametric parameters, applying Welch's correction when necessary. The survival assays were analyzed with the Mantel-Cox test. The limit p-value for rejecting the null hypothesis and considering the differences between cases statistically significant was p < 0.05 (*). Other p-values are indicated as ** when p < 0.01 and *** when p < 0.001. Human GB Databases We used a public open-access database (http://gliovis.bioinfo.cnio.es/, accessed on 1 February 2022) to analyze the expression of the human CRY1 gene in GB samples. We used the "Adult" samples in the CGGA dataset and included the data from primary and secondary tumor types. The data shown in Figure 1 correspond to the "expression" and "survival" tabs. Please note that the nomenclature corresponds to the 2016 classification. GBM: Glioblastoma multiforme.
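As a rough sketch of the statistical decision flow described in the Statistics section above, the snippet below uses SciPy and lifelines as open-source stand-ins for GraphPad Prism 5 (an assumption, since the study used Prism); the arrays are randomly generated placeholders, not the study's measurements.

    import numpy as np
    from scipy import stats
    from lifelines.statistics import logrank_test  # Mantel-Cox log-rank test

    rng = np.random.default_rng(0)
    control = rng.normal(100, 10, 30)  # placeholder measurements
    gb = rng.normal(140, 12, 30)

    # D'Agostino-Pearson omnibus normality test decides parametric vs. not
    _, p_norm = stats.normaltest(control)

    if p_norm > 0.05:
        # F test for equality of variances; Welch's correction if unequal
        f = np.var(control, ddof=1) / np.var(gb, ddof=1)
        p_f = 2 * min(stats.f.cdf(f, 29, 29), stats.f.sf(f, 29, 29))
        _, p = stats.ttest_ind(control, gb, equal_var=(p_f > 0.05))
        # For >2 groups: stats.f_oneway(...), then Bonferroni-adjust
        # the pairwise p-values.
    else:
        _, p = stats.mannwhitneyu(control, gb)  # nonparametric alternative

    # Survival comparison (days to death) with the Mantel-Cox test
    result = logrank_test(rng.exponential(40, 90), rng.exponential(25, 90))
    print(f"group comparison p = {p:.3g}, survival p = {result.p_value:.3g}")
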
7,960.4
2022-02-01T00:00:00.000
[ "Biology" ]
Hopkins’s confessional notes and desire: a reconsideration Since their publication in 1989, the confessional notes Gerard Manley Hopkins (1844-1889) kept as an undergraduate have been a major influence in shaping criticism of his work. The sexual indiscretions and longings the confessional notes record have been central to recent studies of eroticism in Hopkins’s poetry, corroborating the suggestion that his poems allowed for the homoerotic expression his religious vocation denied him. While not questioning the seriousness of Hopkins’s attraction to other men, this article seeks to establish the broader moral scrupulousness the confessional notes evidence. As well as recording lapses in sexual propriety, the notes reveal the startling range of what Hopkins considered to be failings in need of repentance. They are the product of a curious moral fastidiousness, which recorded the killing of insects or indulgence in eating biscuits with apparently the same concern as when registering sexual excitement at the crucifixion scene. One prominent aspect of the confessional notes is the frequency with which sinful behaviour is initiated by reading or writing. In this the notes provide early indications of Hopkins’s doubts over the legitimacy of writing poetry, which grew more explicit once he had joined the Jesuits. Any interpretations of his work which see Hopkins as displacing sexual desire into his writing should also recognise the very real qualms the confessional notes show him entertaining about poetry itself. Hopkins's confessional notes and desire: a reconsideration Martin Dubois (University of York) "Is a pen a metaphorical penis? Gerard Manley Hopkins seems to have thought so" (3). Thus opens Sandra M. Gilbert and Susan Gubar's major study, The Madwoman in the Attic. Gilbert and Gubar are referring to Hopkins's conception of creativity as an essentially masculine attribute, but the association of pens and penises is also apt in an altogether different sense. For Hopkins seems to have been both attracted and repelled by male bodies and by the act of writing in similar measures. The fear that poems "wd. interfere with my state and vocation" caused Hopkins to abstain from poetry for seven years on joining the Jesuits, and the notion that it was incompatible with his religious calling remained with him for the rest of his life, even when he was composing exultant sonnets celebrating God's presence in the world (Letters to Bridges 24). Physical beauty was similarly double-edged: at once able to show forth the divine element in creation, it was also "dangerous", as it often "does set danc-/Ing blood" ("To what serves Mortal Beauty?" 1-2). The two desires - to write and for male bodies - were often linked. In 1864, for example, Hopkins drew up a list of related words in his notebook: "Spuere, spit, spuma, spume, spoom, spawn, spittle, spatter, spot, sputter" (Journals 16). David Alderson has observed that the missing word from this record of bodily and natural secretions (and perhaps the word which set off this particular line of enquiry) is "sperm" (141-142). Speculating about the derivation of words would seem a roundabout method of exploring one's sexual desires, akin to looking up sexual terms in dictionaries, but curiosity about the provenance of words is entirely typical of Hopkins's agonized approaches to sexual longings. Indeed Hopkins made an unhappy record of several occasions on which he experienced "Evil thoughts in dictionary" (Manuscripts 157). 
Sexual and etymological, the erotic and the act of writing, were intimately bound together in his work and in his life. As a result, a common approach has been to interpret Hopkins's poetry as evidencing a displacement of frustrated sexual desires. According to this view, Hopkins's poetic celebration of male bodies, and the highly sensual nature of his writing, provided an outlet for exploring his attraction to other men. This could only be countenanced if such writing was devotional and the admiration of men showed forth God's creation; if not, such a poetic was liable to spill over into the self-accusatory. The debt to psychoanalytic theory in such readings is evident. Indeed, the most comprehensive study of Hopkins's sexuality, Julia F. Saville's A Queer Chivalry: The Homoerotic Asceticism of Gerard Manley Hopkins, applies Lacanian notions of split-subjectivity to explain the combination of bold homoerotic expression and harsh self-criticism in Hopkins's writing. Saville describes split-subjectivity as enjoining a "treatment of desire as an impulse that cannot simply be resolved or eliminated but requires ongoing management" (19), so that "his often successful sublimatory activities of poetry writing and religious devotion are offset by an excess of libidinal demand that remains frustrated and manifests itself in symptoms of erotogenic nature" (25). Michel Foucault's The History of Sexuality can also be seen to inform such approaches to Hopkins's work. Resisting the familiar notion of a historic repression of sexuality, Foucault instead posits the alternative of a "great process of transforming sex into discourse", a process which reaches its zenith in the nineteenth century (22). Foucault states: From the singular imperialism that compels everyone to transform their sexuality into a perpetual discourse, to the manifold mechanisms which, in the areas of economy, pedagogy, medicine, and justice, incite, extract, distribute, and institutionalize the sexual discourse, an immense verbosity is what our civilization has required and organized. (33) Foucault's categories - "economy, pedagogy, medicine, and justice" - exclude literature, yet critics such as Saville have not hesitated to infer that poetry could perform a similar displacement for Hopkins. 
Yet while Hopkins would have held "economy, pedagogy, medicine, and justice" to be indubitably respectable discourses, he had an altogether more tortured relationship with poetry. Saville underplays the extent to which Hopkins denigrated his art, preferring to see him as a "craftsman poet" in the pattern of "a monastic artisan" (88), relatively untroubled by qualms about the worth of writing poetry. The compunctions Hopkins had about the writing of poetry itself - never mind its subject - receive little attention in her exhaustive analysis of eroticism in his work. Nor, for that matter, does she sufficiently acknowledge the breadth of Hopkins's many scruples, which extended far beyond his sexual longings, and which are catalogued in just as much detail as the sexual indiscretions. This article attempts to recover a broader sense of how both the desire to write and the desire for other men were troubling for Hopkins, with reference to his early notebooks. A focus on the notebooks is justified by the integral role they have played in generating interest in Hopkins's sexuality over the last two decades. I argue that they evidence not only sexual longings but also an eclectic assortment of what Hopkins considered to be forbidden desires, temptations that needed to be repented for and confessed. Prominent among these are misgivings about writing, and a principal contention of this article is that the notebooks provide striking early signs of Hopkins's deepening conviction that poetic aspirations were illegitimate. Earwigs and biscuits: multifarious desires in the notebooks The publication in facsimile of Hopkins's early notebooks in 1989, including for the first time the records of sexual indiscretions excluded from the original (and until then only) edition of the notebooks, kicked up a storm of controversy among his critics. The notebooks revealed that the "intense homosociality" (Higgins 33) of Hopkins's life in Oxford was coupled with a keen awareness of the sinfulness of desire for other men. Yet perhaps the most remarkable element of the notes is the lack of any real differentiation between sins. On one day in April 1865, for instance, Hopkins entered "Evil thoughts partly abt. Urquhart" and "Intemperance in food at Addis' desert"; on the next, he chided himself for "Eating two biscuits at the Master's" (Manuscripts 155-6). Overindulgence in desserts or biscuits is a regular refrain of the notes, but there are also entries that sound even more innocuous, such as this one from August 1865: "Killing earwig" (182). There is, in the notes at least, no attempt to set out which sins are more serious than others; on frequency alone, there may be more to do with overeating than with lapses in sexual propriety. And so, though it is unlikely that Hopkins considered killing an earwig and sexual fantasising to be equally serious, any attempt to draw inferences about the extent of his sexual desires from the notes must also acknowledge that they catalogue with remarkable meticulousness a whole range of what even his confessors would have considered very minor failings. That is not necessarily to cast doubt on the seriousness or depth of Hopkins's desires for other men, as found in the notes; it is rather to underline the wider extent of his moral fastidiousness. A sense of the records of sexual sin as within a broad context of painstaking daily examinations of conscience is perhaps what is missing from studies such as those of Saville or Bernard Martin. 
A notable aspect of the confessional notes is the frequency with which sinful behaviour is initiated by reading or writing. Alderson and Saville have both drawn attention to Hopkins's admissions of sexual excitement at the crucifixion scene, and it is significant that this could be occasioned by the act of writing, as on Good Friday, 1865: "The evil thought in writing on our Lord's passion" (157). It is impossible to say with certainty what was being written on this occasion, but Hopkins had been composing "Easter Communion" at around this time and the sonnet has been proposed by Norman MacKenzie as a likely candidate for the entry that came a day later, on Easter Saturday: "In looking over the above poem an evil thought seemed to rise from the line before" (157). Erotic language seems to shadow its devotional 'other' here: it should of course be Christ who "rises" from the grave at Easter and not an "evil thought". Several lines from "Easter Communion" could have stimulated Hopkins's imagination in a poem whose diction seems to work against its purported aim, that of the encouragement of self-denial and penitence in the life of the believer. Its treatment of ascetic practice is anything but temperate: the alliterative sensuousness of "Lenten lips" (2) is not so much contained by its partner image of being "striped in secret with breath-taking whips" (3) as inflamed by it. Even "the ever-fretting shirt of punishment" (11) is an image which "looks as if it revelled in the discipline" (Griffiths 273). No wonder that when he came to draw up a list of self-denials to be made during Lent the following year, Hopkins included "No verses in Passion Week or on Fridays" (Journals 53). (That the same list included "No pudding on Sundays" and "Not to sit in armchair except can work in no other way" again highlights the range and variety of his moral scruples.) Yet on Easter Sunday, having the previous day entertained "evil thoughts" resulting from reading his sonnet, Hopkins regretted that there had been "No reading done" (Manuscripts 158). Of course this is a reference to his studies; the notebooks contain many entries repenting of days spent idling instead of working, which this instance naturally fits with. Anxieties about not keeping up with reading were common enough for Oxford undergraduates: an earlier Balliol member, Arthur Hugh Clough, had included many such reproaches in his own Oxford diaries. Reading one's poetry and reading in preparation for the term ahead were clearly very different activities. Even so, the lack of any association between the activity of one day (when reading led to sin) and the next (when not reading is sin) is surprising. A possible explanation is that the distinction between the two kinds of activities was indicated by subtle changes in expression. "Reading" and "writing" tend to be of the commendable kind and appear in the notebooks mostly as activities not undertaken: "Dangerous thought about Dolben, no reading whatsoever" (158). As with the "looking over" of "Easter Communion", alternative (and perhaps more unexpected) verbs are used to indicate sinful behaviour: "An evil thought rose while I was making some poetry in fields" (158). Or: "Looking fully at a sentence in a newspaper with terrible associations" (191). Two entries concerned with the poem Beyond the Cloister (which probably survives as the fragment A Voice from the World) also observe this lexical rule: Hopkins notes "Dangerous scrupulosity abt. finishing a stanza" and "Repeating to myself bits of Beyond the Cloister" 
(197). Verse is "finished" or "repeated", not written or read. However, when the fault is more to do with wasting time than indecency, the more usual terms are invoked: "Folly in writing with hanging scruple some verses ab[out] geese and peas" (193). It should be owned that this distinction is not universally replicated throughout the notebooks and there are instances when expressions are employed opposite to the sense suggested here. Nevertheless, shifts in language offer likely indications of what Hopkins considered acceptable or unacceptable desires. It is also worth observing that the records Hopkins kept seem to have held a purpose beyond that of preparation for confession. Robert Bernard Martin observes that: Most of the diary is in a clear and easily legible hand of fair size, but the entries in preparation for confession are in pencil and so minute that they are at best difficult to decipher. At some later date, perhaps at his conversion, more probably immediately after confessing his sins, Hopkins drew a pencil line through these entries but without obscuring them; he seems to have wanted to be able to review them after he had been forgiven for them, and even after becoming a Roman Catholic he kept them. (100) Why Hopkins wished to review sins of earlier years is unclear, but there is a possibility it may not only have been to gauge how far he had come in controlling his desires or as a deterrence from further indiscretions. Foucault, in The History of Sexuality, has laid great emphasis on confession as a "reinforcement of heterogeneous sexualities", a particularly prevalent form of converting sex into discourse (61). In the approach Hopkins took, however, it seems plausible that confessional notes could also be an incitement to commit further sexual transgressions. And it is clear that the acts of reading and writing, or perhaps more correctly, "looking" and "making" (as the notebooks have it), continued to cause much anxiety for Hopkins beyond the end of the confessional notes in January 1866. Indeed, that he at once did and did not obscure the diary entries is remarkably like the approach he took to his poems on entering the Jesuit noviciate, burning his own copies of his verses in an event he called the "Slaughter of the innocents" (Journals 165), but telling Bridges, "I kept however corrected copies of some things which you have and will send them that what you have got you may have in its last edition" (Letters to Bridges 24). Associating the burning of his poems with the Herodic slaughter is curious: if the poems were "innocents", there would be no need to sacrifice them. Similarly paradoxical is his relation to Walt Whitman, which Hopkins wrote of many years later, in 1882: I always knew in my heart Walt Whitman's mind to be more like my own than any other man's living. As he is a very great scoundrel this is not a pleasant confession. And this also makes me more desirous to read him and the more determined I will not. (Letters to Bridges 155) "Confessing" his closeness to Whitman, Hopkins is also concerned to put him at a distance - by not acting on a desire to read him. But of course to know "Whitman's mind to be more like my own than any other man living" is to have read him already. Such contradictions and evasions may be characteristic of Hopkins's approach to male beauty, but, as the confessional notes show, they are also typical of his attitude to writing; and the two desires are often, though not always, concurrent. 
Eve Kosofsky Sedgwick, in Between Men: English Literature and Male Homosocial Desire, has suggested such evasions are typical of Victorian men of Hopkins's class: the sexual histories of English gentlemen, unlike those of men above and below them socially, are so marked by a resourceful, makeshift, sui generis quality, in their denials, their rationalizations, their fears and guilts, their sublimations, and their quite various genital outlets alike. (173) This is true of Hopkins's sexuality, but it is also pertinent to his attitude to poetry: unable to fully countenance his aspiration to write, he engaged in unconvincing "denials" (such as the burning of the poems) and "rationalizations" (permitting himself to compose his great ode, The Wreck of the Deutschland, because of "the chance suggestion of my superior, but that being done it is a question whether I did well to write anything else" [Letters to Dixon 88]). Hopkins frequently expressed distaste for his poems in the strongest terms. To Bridges, he wrote of his work that "the oddness may make them repulsive at first": Indeed when, on somebody returning me the Eurydice, I opened and read some lines, reading, as one commonly reads whether prose or verse, with the eyes, so to say, only, it struck me aghast with a kind of raw nakedness and unmitigated violence I was unprepared for. (79) The same expressions crop up when Hopkins describes other poems. He worried that Bridges found his method "repulsive" (137); felt his work to be akin to Whitman's "savagery" (Letters to Bridges 157); also found "copying one's verses out" to be "repulsive" (Letters to Bridges 304); and so on. Yet he was also, despite feigning modesty about the value of his efforts, eager to send his work around his small circle of readers. And he could be equivocal about the possibility of being published: opposed to the efforts of friends to get his poems into print, Hopkins admitted to Bridges that he retained copies so that "if anyone shd. like, they might be published after my death" (304). Bridges followed this instruction accordingly. It has been suggested that Hopkins did not follow up his initial ambition to be an artist at least in part because he worried about "the necessity of drawing from the nude" (Phillips 82). He told his friend Baillie that "the higher and more attractive parts of the art put a strain upon the passions which I shd. think it unsafe to encounter" (Further Letters 231). Such an inference seems true enough judging from the number of times drawings or reproductions of paintings occasioned sinful thoughts, as recorded in the notebooks. But it seems likely that poetry also posed similarly dangerous distractions: as Hopkins told Bridges on another occasion, in terms which call to mind Sedgwick's inference about gentlemanly evasions: poets and men of art are, I am sorry to say, by no means necessarily or commonly gentlemen. For gentlemen do not pander to lust or other basenesses nor, as you say, give themselves airs and affectations nor do other things to be found in modern works. (176) There is every reason to believe that Hopkins worried about such "basenesses" occurring in his own poems, as the "de-Whitmaniser" (Letters to Bridges 158) letter demonstrates. 
It was not only the potential for "basenesses" that led to Hopkins rebuking himself for attempting to write poetry. As has already been suggested, it held associations with idleness and the squandering of valuable time. The feeling which provoked Hopkins's admission in the confessional notes that he had committed a "Folly in writing with hanging scruple some verses ab[out] geese and peas" is mirrored by later thoughts on the illusoriness of pursuing poetic ambitions. As he told Bridges in 1879: I cannot in conscience spend time on poetry, neither have I the inducements and inspirations that make others compose. Feeling, love in particular, is the great moving power and spring of verse and the only person that I am in love with seldom, especially now, stirs my heart sensibly and when he does I cannot always 'make capital' of it, it would be sacrilege to do so. Then again I have of myself made verse so laborious. (64) The congruence between this passage and the confessional notes is striking. Both are run through with the kind of rationalisations and denials that Sedgwick describes in Between Men. Hopkins did not believe he had "the inducements and inspirations" for writing poetry: the former may be true - he was discouraged, for example, by the refusal of The Month to publish The Wreck of the Deutschland - but the latter is surely disingenuous. Only two years earlier, in a sonnet dedicated "to Christ our Lord", Hopkins had described how his "heart in hiding / Stirred for a bird" ("The Windhover" 7-8); the echo of that image here ("stirs my heart") renders unconvincing the notion that poetic "capital" could not be made of devotional fervour. Hopkins was, in fact, only too ready to commit "sacrilege": 1879 saw him write nine complete poems and embark on several others. What is perhaps more sincere is the feeling that the spending of "time" was what could not be justified; Hopkins was heavily overworked at the time, having taken up additional burdens as his superior recovered from an injury. Again, this is analogous to the anxieties first recorded in the confessional notes, where Hopkins repeatedly regretted that not enough reading had been undertaken that day, or that he had been idling hours away with poetry. Such anxieties developed into a conscientiousness about daily labour (as opposed to "laborious" verse-composition) whose corollary was the deprecation of other, less obviously useful and productive desires. Poetry was, for almost all of Hopkins's mature years, to be regarded as "unprofessional" (Letters to Bridges 197). "Morals and scansion" The previous letter Hopkins had sent Bridges (in January 1879) opened: "Morals and scansion not being in one keeping, we will treat them in separate letters and this one shall be given to the first named subject..." 
(62). It has been the central contention of this article that the primacy of "Morals" over poetry, and indeed their incompatibility, is as prominent in the confessional notes of 1865-1866 as the aspect which has most drawn the attention of critics: the sexual desires recorded there. The desire to write verse, Hopkins felt, was incommensurate with the pious austerity he felt called to live. But though Hopkins may have wished to keep "Morals and scansion" separate, by the end of the letter to Bridges, he is enclosing "some lines I wrote years ago" and promising to send further poems in the near future (65). Such a contradiction is emblematic of Hopkins's attitude to poetry, for he was constantly drawn to what he had renounced, needing only "chance suggestion[s]" (like those of his Rector [Letters to Dixon 88]) to begin anew on his verses. Harsh self-censure often followed, attended by the feeling that "I have never wavered in my vocation, but I have not lived up to it" (Letters to Dixon 88). Hopkins's misgivings about verse-writing are in danger of being eclipsed by the status of the poems as conduits for homoerotic expression. Richard Dellamora's assertion that "Christ's beauty authorized priest and poet's continuing devotion to an embodied selfhood and to the poetic celebration of desire for other men" is something of a critical commonplace (47). Such declarations need to be moderated by an acknowledgment of the doubts Hopkins held about the value of poetry itself and what it might lead to. The reliance of critics on the confessional notes to make the case for a displacement of sexual desire on to the poems has partially obscured the very real qualms Hopkins held about his poetic aspirations themselves. Sexual feelings and guilt about writing poetry were, of course, often coupled together, and an element of Hopkins's censuring of his poetic aspirations was to do with a fear that they might become mired in sinful imaginings. But nor is it the case that they were one and the same. The confessional notes demonstrate this very poignantly, recording with exacting meticulousness aberrations from "the curfew sent", the self-denying practice "Which only makes you eloquent" ("The Habit of Perfection" 6; 8). What they also reveal is a more general scrupulosity about behaviour, with all kinds of transgressions repented of, many of which appear to modern eyes to be of a very minor sort, as they may have done to Hopkins's confessors too. Scrupulosity is a term that often attaches to Hopkins and it is well to remember that while he did admit to erotically charged fantasies, he was just as prompt in reporting "Intemperance at desert" (157). Critics have, quite rightly, submitted the entries dealing with Hopkins's sexual excitement over the crucifixion scene or while drawing a "crucified arm" to thorough analysis (167). What needs to be made clearer, however, is how the attempt to subjugate these desires was part of a wider renunciatory approach that recorded the killing of earwigs with the same care as it did homoerotic daydreams. This is a point which has been best expressed by Hopkins's closest friend, Robert Bridges. In May 1882, on a visit to Manresa House in Roehampton, where Hopkins was undertaking his Tertianship, they walked around the grounds together. Bridges wanted to

Such a combination proved difficult to balance and necessitated confessions to the leaders of the High Church movement in Oxford, Liddon and Pusey, which the notebooks reveal Hopkins to have been meticulous in preparing for. What polarised
Hopkins's critics was the catalogue of wet dreams (usually characterised as "E.S.", probably indicating "Emissio seminis"), admiration of boys and young men, and frequent returns upon what Hopkins called the "evil thought", the manuscripts revealed. Some critics downplayed the revelations; others leapt upon them as crucial evidence, notably Robert Bernard Martin, who, in his 1991 biography, placed great emphasis on Hopkins's supposed infatuation with Digby Dolben, supported by details garnered from the confessional notes. They have also been key to recent studies by Richard Dellamora and David Alderson, as well as to Saville's A Queer Chivalry.
5,860.4
2008-06-05T00:00:00.000
[ "Linguistics" ]
SyBLaRS: A web service for laying out, rendering and mining biological maps in SBGN, SBML and more Visualization is a key recurring requirement for effective analysis of relational data. Biology is no exception. It is imperative to annotate and render biological models in standard, widely accepted formats. Finding graph-theoretical properties of pathways and identifying certain paths or subgraphs of interest in a pathway are also essential for effective analysis of pathway data. Given the size of available biological pathway data nowadays, automatic layout is crucial in understanding the graphical representations of such data. Even though there are many available software tools that support graphical display of biological pathways in various formats, none is available as a service for on-demand or batch processing of biological pathways for automatic layout, customized rendering and mining of paths or subgraphs of interest. In addition, many tools with fine rendering capabilities lack decent automatic layout support. To fill this void, we developed a web service named SyBLaRS (Systems Biology Layout and Rendering Service) for automatic layout of biological data in various standard formats as well as construction of customized images of these maps in both raster and scalable vector formats. Some of the supported standards are more generic, such as GraphML and JSON, whereas others are specialized to biology, such as SBGNML (The Systems Biology Graphical Notation Markup Language) and SBML (The Systems Biology Markup Language). In addition, SyBLaRS supports calculation and highlighting of a number of well-known graph-theoretical properties as well as some novel graph algorithms that turn a specified set of objects of interest into a minimal pathway of interest. We demonstrate that SyBLaRS can be used both as an offline layout and rendering service to construct customized and annotated pictures of pathway models and as an online service to provide layout and rendering capabilities for systems biology software tools. SyBLaRS is open source and publicly available on GitHub and freely distributed under the MIT license. In addition, a sample deployment is available here for public consumption. Introduction With help from widely accepted standard file formats such as SBGNML [1] and SBML [2] and numerous pathway databases such as Pathway Commons [3], WikiPathways [4], and Reactome [5], a vast amount of biological process data is now available to the scientific community in a format suitable for computation. Visualization of relational data via maps/pathways is compelling in all domains, including biology, as it is believed to bring out patterns, broad relationships and emerging trends in data for an insightful analysis [6]. In visualizing relational information, a good layout of objects and their relations is vital, since a poor layout will confuse the user, and an average user is expected to spend up to 25 percent of their time on manual layout adjustments [7]. Many web-based [8] and standalone tools [9], [10], [11] have been built for visual construction and analysis of biological pathways. Some of these tools have strong automatic layout capabilities while others rely on the user manually adjusting the map layout, in need of an automatic layout service.
In addition, while many prominent pathway databases such as Reactome [5] and BioModels [12] make their content available in widely used standard file formats, a visual display of such pathway data is either not available or done laboriously in a manual fashion, creating constant work for the team as new pathways are curated. Furthermore, currently available tools lack support for calculation and proper highlighting of graph-theoretical properties of pathways as well as operations to mine large pathways to find and show parts of pathways that are of particular interest to a user. A similar past effort providing such a layout service [13] works only for input BioPAX models [14], lacking support for input in newer standards such as SBGNML and SBML, with little room for configuration of both the layout operation and the image construction, and without a mining facility. With this paper, we introduce a web service named Systems Biology Layout & Rendering Service (SyBLaRS) with the aim to fill the void for programmatically generating graphical representations of pathways while optionally highlighting paths or sub-pathways of interest and automatically laying them out (Fig 1). This also facilitates usage as a layout service by third-party visual pathway analysis software. SyBLaRS was built on the Cytoscape.js [15] graph visualization library with the following main use cases in mind:
• Create an image of provided biological maps (already with layout information) in popular standard formats,
• Lay out the provided map in a specified layout style chosen from many available ones and return the map with layout information,
• Both lay out the provided map in a specified layout style and create an image of it in raster or scalable vector image formats, and
• Optionally highlight a path or a sub-pathway (nodes, edges, or paths of interest) based on the results of a graph-based query.
Materials and methods SyBLaRS accommodates a number of novel methods as well as widely known and used ones on automatic layout of pathways [16,17], calculating graph-theoretic properties [18] in pathways and mining pathways for subgraphs of interest [19]. The service makes use of Cytoscape.js [20] for some operations in addition to using it as its base rendering facility for constructing images, as detailed in the next section. Automatic layout algorithms Even though SyBLaRS supports a wide range of automatic layout algorithms written as Cytoscape.js extensions, of particular interest is fCoSE [16], which supports compound structures. Such structures are typically depicted as nested drawings in biological pathways representing compartments or molecular complexes (Fig 2). The fCoSE layout algorithm builds on a previous compound spring embedder layout algorithm, making use of the spectral graph drawing technique for producing a quick draft layout. It then applies heuristics where constraints are enforced and compound structures are properly shown. Meanwhile the layout is polished with respect to commonly accepted graph layout criteria. All available layout algorithms are customizable for one's specific needs (e.g., ideal edge length and whether or not any available coordinates should be taken into account as opposed to starting from scratch) through the associated API. Similarly, all are suitable for interactive use as they complete in at most a few seconds for pathways of up to several hundred biological entities and a similar number of interaction edges.
Please refer to the associated GitHub repository for a complete set of options. Graph properties and mining algorithms One can predict the behavior of biological networks using measured graph-theoretical or structural properties of those systems as well as the local rules governing individual objects of the networks [21]. Hence, a researcher might like to analyze a particular pathway making use of its graph-theoretical properties such as betweenness centrality. SyBLaRS enables this through its query API, allowing the calculation and display of the following properties:
• Degree centrality,
• Closeness centrality,
• Betweenness centrality, and
• PageRank.
Oftentimes one is interested in discovering and analyzing certain types of paths or sub-pathways in a given biological pathway model, especially in large ones. In order to facilitate such queries, the algorithms in [19] were adapted to work with pathways in our supported formats. Shortest paths. SyBLaRS exposes the shortest paths algorithm of Dijkstra as provided by the Cytoscape.js library, where the user specifies the source and target nodes and a single path of shortest length between these nodes is highlighted with the specified color. The remaining graph mining queries were originally designed in [19] with the aim to extract paths or sub-pathways of interest from a given set of nodes of interest in a pathway database using a non-standard, proprietary notation. We adapted these algorithms to work with the current standard formats. Even though these algorithms seem like straightforward implementations of well-known graph algorithms such as Dijkstra's shortest paths [22], they differ from those in the following ways:
• They support compound structures through modified DFS and BFS traversals, where upon reaching a particular compound node or a node within a compound node, we also visit its children or parent/sibling nodes, so as to seamlessly continue the traversal (a sketch of such a compound-aware traversal is given below). The user has the option to choose the traversal direction (downstream to follow the edge directions in the normal manner, upstream to follow them in the reverse direction, and both streams to go in both directions).
• Dijkstra's algorithm finds one of many potentially available shortest paths from a single dedicated node to another dedicated one, whereas algorithms such as Paths-between and Paths-from-to find all such paths between a group of source and target nodes. Besides, some of these algorithms allow one to define an additional distance. For instance, if the actual shortest path between one source and one target node is 3 and the additional distance is 2, then these algorithms will find all paths of length 3, 4 or 5.
Neighborhood. This is a simple neighborhood query where one can specify a group of nodes, from all of which a parallel compound BFS is started. Any node discovered during these traversals gets added to the result (see Fig 1 for an example). Common stream. This query aims to find those nodes that are common to the up (common regulator), down (common target) or both streams of a specified group of entities. One often wants to know if there is a gene in the upstream of a set of genes in a pathway, which can provide a causal explanation for the co-regulation (eventually a way to control the associated module) [19]. Or two pathways affecting the same mechanism in an organism might be interesting as it suggests that a specific phenotype can have multiple molecular causes.
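To make the compound-aware traversal and the common-stream idea concrete, below is a minimal Python sketch. It assumes the graph is given as adjacency lists plus parent/children maps encoding the compound (nesting) structure; this is an illustration of the approach, not the actual cytoscape-graph-algos implementation, and the upstream variant is obtained simply by passing reversed adjacency lists.

```python
from collections import deque

def compound_bfs(adj, children, parent, start):
    """BFS that flows across compound boundaries: besides ordinary
    edge neighbours, a node's children and its parent are also
    visited, mirroring the modified traversals described above."""
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        neighbours = list(adj.get(node, []))   # ordinary edge neighbours
        neighbours += children.get(node, [])   # step into a compound
        if node in parent:
            neighbours.append(parent[node])    # step out of a compound
        for nxt in neighbours:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return visited - {start}

def common_stream(adj, children, parent, seeds):
    """Nodes reachable from every seed (common targets). For common
    regulators, call with the reversed adjacency lists instead."""
    streams = [compound_bfs(adj, children, parent, s) for s in seeds]
    return set.intersection(*streams) if streams else set()
```

For example, common_stream(adj, children, parent, ["geneA", "geneB"]) would return the candidate common targets of the two (hypothetical) seed genes; widening the neighbour set inside the BFS, rather than preprocessing the graph, is what lets the traversal cross compartment and complex boundaries seamlessly.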
Finding common targets of signaling proteins might help one develop alternative treatment strategies [23]. With the latest high-throughput sequencing technologies, one can scan for alterations in large quantities of samples [24]. A problem that appears regularly in high-throughput studies is the selection of genes/proteins. One convenient way to determine new genes for sequencing is to search the vicinity of the genes already implicated in that particular complex disease. Genes that connect two or more of these "usual suspects" within a signaling path are expected to be of more significance for the disease [19]. The following queries address use cases like this. Paths-between. The paths-between query finds a "maximal" pathway comprising all the nodes of interest complemented by the "missing links" among these nodes. Its parameter k defines the maximum length of these links. Fig 5 shows an example paths-between query. Paths-from-to. Finding shortest paths between a single pair or all pairs of vertices in a graph is a well-known and commonly solved problem [22]. This query is a more general one, where the goal is to find all shortest paths from a source node set S to a target node set T. The query can be constrained with a maximum length k for such paths. Furthermore, another parameter d is provided for relaxing the shortest-path requirement. Even though the traversal-based queries discussed above might be quite useful in mining sub-pathways of interest, they should be used with caution in large pathways due to their exponential nature in the number of such paths of interest. Image construction Optionally, an image of the provided map is created in the specified format (JPG, PNG or SVG). Construction can be tailored in certain ways such as the image dimensions and the color scheme to be used. With some file formats, a single color is specified and SyBLaRS uses different shades of that color for rendering graph elements of varying types, whereas in some others, there are specific color schemes (e.g., red-blue) to choose from (Fig 6). The user can also specify whether or not the map should have a background of a specified color. Furthermore, when the result of a query is shown, the user may choose to include only the resulting paths or sub-pathways in the image as opposed to the whole pathway. Please refer to the associated GitHub repository for a complete set of options. Design and implementation SyBLaRS uses a package named cytoscape-sbgn-stylesheet [25] that provides a stylesheet for SBGN maps. For other formats, we have designed and implemented our own stylesheets. Certain graph algorithms used to calculate graph-theoretic properties of the pathways come from the Cytoscape.js core, whereas others specialized in finding subgraphs of interest are from the cytoscape-graph-algos extension. SyBLaRS's architecture is sketched in Fig 7. SyBLaRS mainly depends on the Cytoscape.js library to draw graphs and manage graph operations. It uses the sbgnml-to-cytoscape [26] and libsbmljs [27] packages and the cytoscape-graphml extension to convert SBGNML, SBML and GraphML files, respectively, into the JSON format accepted by Cytoscape.js. Cytosnap, a package to render graphs on the server side with Cytoscape.js, is used to apply layout and/or generate images of pathway graphs. Fig 8 shows the data flow and activity sequencing in SyBLaRS. The user is expected to form a query consisting of the file content and one or more of the layout, image and graph query options and to send it to the server.
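As an illustration of what such a query might look like, the sketch below assembles the options part of a request body in Python. Every field name here (layoutOptions, imageOptions, queryOptions and their members) is an assumption made for illustration only; the authoritative option names and body format are documented in the GitHub repository README.

```python
# Hypothetical options for a SyBLaRS-style query: layout, image and
# graph-query settings to accompany the map content. Field names are
# illustrative assumptions, not the documented schema.
query_options = {
    "layoutOptions": {
        "name": "fcose",          # layout style to apply
        "randomize": False,       # refine existing coordinates, if any
    },
    "imageOptions": {
        "format": "svg",          # JPG, PNG or SVG
        "background": "#ffffff",
        "color": "bluescale",     # single-color scheme, shaded by type
    },
    "queryOptions": {
        "query": "pathsFromTo",   # one of the graph queries above
        "sources": ["nodeA"],     # source set S (hypothetical ids)
        "targets": ["nodeB"],     # target set T
        "k": 3,                   # maximum path length
        "d": 1,                   # slack over the shortest length
    },
}
```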
SyBLaRS first gets the query and parses it into its components. Then the corresponding Cytoscape.js graph is constructed from the file content. If a graph query is included, it gets executed using the provided graph query options. SyBLaRS then transfers the graph, the result of the graph query, and the layout and image options to Cytosnap, which first applies the specified layout, then highlights any query result, finally generating the corresponding image. SyBLaRS constructs a response message comprising the layout and image data, which are the outputs from Cytosnap, and sends it back to the user. Upon getting this response, the user is expected to parse it into its components and obtain the desired layout and image data. Results Once a service is set up, requests can be made through a web page, like the one used by our sample public deployment, or programmatically in the background without a user interface, say to produce an image of a newly curated pathway as it becomes available. Supported input file formats are SBGNML [1], SBML [2], GraphML [28], and JSON. Below is an example request to our sample deployment via curl: curl -X POST -H "Content-Type: text/plain" --data "request_body" syblars.cs.bilkent.edu.tr/file_format where file_format is one of sbgnml, sbml, graphml and json, and the request_body includes the map content and any layout, image and query options as detailed in the GitHub repository README. All three operations, performing a layout on the provided map, running a query on it and creating an image of it, are optional. If preferred, the service will return the layout information in JSON format. The optional image output is in JPG, PNG or SVG format. SyBLaRS has multiple use cases as a service including, but not limited to, the following. For batch processing One can use SyBLaRS to quickly generate images for pathway models, where an optional automatic layout can also be performed. In fact, this could be automated in such a way that, in pathway databases where new models emerge periodically, a tool using a SyBLaRS service could construct such images on demand. We tested this use case for a hundred publicly available BioModels pathways [12]. In a matter of minutes, SyBLaRS was able to construct images of the BioModels pathways with proper layout. These models and the corresponding images generated by SyBLaRS can be found on GitHub. See Fig 9 for an example of such an image as produced by SyBLaRS. As a layout service SyBLaRS may also be used as a layout service in pathway analysis applications, where the tool has rendering capabilities of its own but not a proper automatic layout facility. This use case has been tested with Newt [8], a web-based pathway editor with advanced layout facilities, by simply replacing the existing layout operations with remote ones on a SyBLaRS service. Sample runs proved acceptable in terms of execution time for interactive use. Individual use The publicly available deployment of SyBLaRS may also be used by individuals who would like to highlight certain paths of interest in their pathways or simply construct images of their pathways for inclusion in a document such as a scientific article. With SyBLaRS's user-friendly and simple graphical user interface, one can easily generate such images without having to install software or learn sophisticated pathway visualization tools.
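For programmatic use along the lines of the curl example above, a minimal Python client might look as follows. The URL pattern, HTTP method and Content-Type header come from the documented example; how the map content and options are combined in the body, and the shape of the JSON response, are assumptions to be checked against the GitHub repository README.

```python
import json
import requests  # third-party HTTP client: pip install requests

SYBLARS_URL = "http://syblars.cs.bilkent.edu.tr"  # sample deployment

def lay_out_and_render(file_format, map_content, options):
    """POST a map to a SyBLaRS deployment; file_format is one of
    sbgnml, sbml, graphml or json, as documented above."""
    # Combining map content with a JSON options object in one text body
    # is an assumed format; only the URL and header are documented.
    body = map_content + "\n" + json.dumps(options)
    resp = requests.post(f"{SYBLARS_URL}/{file_format}", data=body,
                         headers={"Content-Type": "text/plain"})
    resp.raise_for_status()
    result = resp.json()  # expected to carry layout and/or image data
    return result.get("layout"), result.get("image")

# Hypothetical usage: lay out an SBGNML map with fCoSE, get an SVG back.
# with open("map.sbgnml") as f:
#     layout, image = lay_out_and_render(
#         "sbgnml", f.read(),
#         {"layoutOptions": {"name": "fcose"},
#          "imageOptions": {"format": "svg"}})
```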
Availability and future directions SyBLaRS can be used both as an offline layout and rendering service to construct customized and annotated pictures of pathway models and as an online service to provide layout and rendering capabilities for systems biology software tools. In addition, it may be used for merely constructing automatically laid out static images of your pathway models, with optional support to highlight paths or sub-pathways of your interest. SyBLaRS is open source and publicly available on GitHub and freely distributed under the MIT license. In addition, a sample deployment is available here for public consumption. The service in this deployment may also be used programmatically from your own application. As future work, we would like to embed newly calculated layout information in the provided file using the same format as the input, as opposed to returning it separately as JSON. We would also like to extend our query algorithms and our layout algorithms (especially styles that support Bezier curves for routing relations, to produce aesthetically more pleasing layouts).
4,039
2022-11-01T00:00:00.000
[ "Computer Science" ]
Consistency and Adequacy of Public and Commercial Health Insurance for US Children, 2016 to 2021 Key Points Question How do the rates and child and family characteristics associated with inadequate and inconsistent health insurance coverage compare for publicly vs commercially insured children in the US? Findings This cross-sectional study of 203 691 children found that publicly insured children experienced higher rates of inconsistent coverage, whereas commercially insured children faced higher rates of inadequate coverage. Public insurance consistency and commercial insurance adequacy improved substantially during the COVID-19 public health emergency. Meaning The findings of this cross-sectional study suggest that policies are needed to address the unique issues faced by each population of insured children to improve the consistency and quality of children's health coverage in the postpandemic context. Introduction Consistent and adequate health insurance is critical to ensure children have access to affordable and high-quality health care.2,3 Among insured children, having adequate insurance (coverage that offers affordable access to needed services and health care professionals) is associated with lower unmet health care needs and higher quality care.4,5,8 During the past 2 decades, the uninsured rate among children has steadily declined in the US. In 2021, 61.9% of US children were commercially insured, 36.4% were publicly insured, and only 5% were uninsured.9 This progress has been driven by policy reforms including expansions of Medicaid and the Children's Health Insurance Program (CHIP); commercial insurance regulations on cost sharing and coverage for preventive services; and the establishment of the Affordable Care Act (ACA) Health Insurance Marketplace and subsidies, as well as other efforts to enhance outreach and streamline enrollment.10 However, alongside declines in uninsurance, the proportion of US children who had inconsistent or inadequate insurance increased from 30.6% in 2016 to 34.0% in 2019.4 Prior research has focused on documenting rates and trends in insurance consistency and adequacy at a national level for children covered by all insurance types;4,5,11 however, the insurance-related challenges faced by publicly and commercially insured children are likely to differ due to differences in eligibility criteria, application processes, health care networks, cost-sharing requirements, as well as the accessibility, affordability, and quality of available care. The characteristics of children and families at greater risk of inconsistent or inadequate insurance may also vary by insurance type. Furthermore, policy responses during the COVID-19 public health emergency (PHE), such as continuous eligibility requirements for Medicaid and bolstered ACA Marketplace subsidies, may have substantially affected children's insurance. Although a national study found no change in US children's insurance consistency and adequacy from 2019 to 2020 (the first year of the COVID-19 PHE),11 aggregate estimates may mask changes between publicly insured compared with commercially insured children, who were differentially affected by COVID-19 PHE policies. Using nationally representative survey data from 2016 to 2021, the objective of this study was to evaluate the consistency and adequacy of health insurance for children insured by public compared with commercial insurance. We also compared changes during the COVID-19 PHE and identified sociodemographic and clinical characteristics associated with inconsistent and inadequate coverage within each insurance type. Methods This study was deemed exempt by the institutional review board of the University of Michigan, and informed consent was waived because only deidentified data were used. We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.
Study Sample We conducted a secondary analysis of the 2016 to 2021 National Survey of Children's Health (NSCH), a nationally representative survey of US children from birth to 17 years old living in noninstitutional settings. Our analysis used the NSCH topical survey, which includes more than 100 survey items focused on 1 child in the sampled household. Nearly all NSCH respondents (93.8%) were biological, foster, and adoptive parents or stepparents of the selected child; 5.8% were other relatives (eg, grandparents) and 0.3% were nonrelatives. Detailed information on the NSCH sampling and data collection procedures is available from the US Census Bureau.12 Exposures The primary exposure was child insurance type at the time of the NSCH survey. Public insurance included coverage from any form of government assistance including Medicaid and CHIP. Commercial insurance included coverage through a family member's current or former employer or union, the ACA Marketplace, direct purchase from an insurance company, and TRICARE (the health care program of the US Department of Defense Military Health System) or other military health care. We excluded children who were covered by both public and commercial insurance (3.8% of the total sample) and those uninsured at the time of the survey (4.6%), including those insured only through the US Indian Health Service or through a religious health share. Outcomes The 2 primary outcomes were (1) inconsistent insurance, defined as having an insurance gap in the past 12 months, and (2) inadequate insurance, defined as coverage failing to meet the following criteria: (i) benefits were usually or always sufficient to meet the child's needs; (ii) coverage usually or always allowed the child to see needed health care practitioners; and either (iii) no annual out-of-pocket (OOP) payments for the child's health care or (iv) OOP costs were usually or always reasonable. Secondary outcomes were the 4 individual criteria that comprised insurance adequacy. These outcome definitions are the National Performance Measures for the US Department of Health and Human Services Title V Maternal and Child Health Services Block Grant program and have been applied in prior research on health insurance using the NSCH.4,11,13,14 Covariates Sociodemographic characteristics of the child were reported by the adult NSCH respondent and included age, sex, race and/or ethnicity (non-Hispanic Black, Hispanic, non-Hispanic White, and other [including American Indian, Alaska Native, Asian Indian, Chinese, Filipino, Guamanian, Japanese, Korean, Native Hawaiian, Samoan, Vietnamese, other Pacific Islander, other Asian, some other race, and multiracial]), family structure (2 married parents, 2 unmarried parents, single parent, other), family income (as a percentage of the federal poverty level [FPL]), having at least 1 primary caregiver born in the US, and primary household language (English, Spanish, other). Clinical characteristics included whether the child had none, 1, or 2 or more of a list of 24 chronic physical or mental health conditions or disabilities, or any special health care needs (CSHCN).12 The NSCH CSHCN screener identifies children who require more than average use of health care services, counseling, or medications, or who experience a functional limitation due to a condition for a duration of 12 months or longer.12
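The adequacy measure above combines its four criteria with a specific logical structure: both of the first two criteria must hold, together with at least one of the two cost criteria. A minimal sketch (with our own variable names, not NSCH item names) makes this explicit:

```python
def is_inadequate(benefits_sufficient: bool,
                  sees_needed_practitioners: bool,
                  no_oop_costs: bool,
                  oop_costs_reasonable: bool) -> bool:
    """Inadequate insurance per the composite definition above:
    adequate requires (i) and (ii), plus either (iii) or (iv)."""
    adequate = (benefits_sufficient
                and sees_needed_practitioners
                and (no_oop_costs or oop_costs_reasonable))
    return not adequate
```

For example, coverage with sufficient benefits and practitioner access, but with nonzero OOP payments that are rarely reasonable, is classified as inadequate under this definition.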
Statistical Analysis We calculated rates of inconsistent and inadequate insurance overall and by sociodemographic and clinical characteristics stratified by insurance type. Because state Medicaid policies could drive heterogeneity in the outcomes, we also calculated rates of the outcomes for publicly insured children by state. To assess changes in the outcomes over time, we calculated rates by year and for the pooled prepandemic (2016-2019) and COVID-19 PHE (2020-2021) periods. We conducted unadjusted and adjusted logistic regressions to (1) compare outcome differences by insurance type, (2) estimate changes during the PHE stratified by insurance type, and (3) identify child characteristics associated with the outcomes stratified by insurance type. For outcome differences by insurance type, we were primarily interested in unadjusted differences, which reflect both the unique populations served and the features of public compared with commercial insurance. All analyses applied survey weights to produce nationally representative estimates that account for the NSCH sampling structure and nonresponse. The NSCH public-use files include imputed data for several variables.12 Child sex and race and/or ethnicity had low levels of missingness (<1%) and were imputed by the Census Bureau using hot-deck imputation. Household income had a high level of missingness (18%) and was imputed by the Census Bureau using sequential regression imputation. We used the Stata multiple imputation command to appropriately estimate means and variances based on the 6 imputed income values in the public-use data set. For all other variables with missing values not imputed by the Census Bureau, we included missing as a category in the analysis. Statistical tests were 2-tailed and P < .05 was considered statistically significant. Data analyses were performed from March to August 2023. Results Changes During the COVID-19 PHE The Figure shows inconsistent and inadequate coverage rates by year and insurance type. Among publicly insured children, inconsistent insurance decreased from 4.8% before the PHE to 2.9% during the PHE (adjusted difference, −2.0 pp; 95% CI, −2.8 to −1.2 pp; 42% decline from baseline). Among commercially insured children, inadequate insurance decreased from 33.6% to 31.7% during the PHE (adjusted difference, −2.0 pp; 95% CI, −2.9 to −1.0 pp; 5.9% decline from baseline), primarily due to improvements in reasonable OOP costs (eTable 2 in Supplement 1). No PHE-related changes were identified for public insurance adequacy or commercial insurance consistency. Child Characteristics Associated With Inconsistent and Inadequate Coverage In adjusted models among publicly insured children, inconsistent coverage was significantly higher among Hispanic children and those with household incomes from 200% to 399% FPL (Table 3). Although differences by age category were not statistically significant, when comparing inconsistent insurance rates by year of age, we found that publicly insured children had markedly higher rates (7.3%) of inconsistent coverage at 1 year of age (eFigure 1 in Supplement 1). Inadequate coverage increased with child age and was significantly higher among publicly insured children with household incomes of 200% to 299% FPL and those with multiple chronic conditions and disabilities. Inconsistent coverage among publicly insured children varied from 0.6% in Vermont to 7.7% in Georgia, while inadequate coverage varied from 7.6% in Maine to 19.7% in Illinois (eTable 4, eFigures 2 and 3 in Supplement 1). In adjusted models among commercially insured children, inconsistent coverage was significantly higher for non-Hispanic Black children, children of other race or ethnicity, as well as children in households with unmarried parents, single parents, lower incomes, and those with multiple chronic conditions and disabilities. In contrast to publicly insured children, differences in inadequate coverage were identified for most commercially insured child characteristics including age, sex, race and/or ethnicity, family structure, income, and health needs. Inadequate coverage was particularly high for commercially insured children younger than 1 year (adjusted predicted probability, 40.0%), children with household income less than 100% FPL (38.0%), and those with 2 or more chronic conditions and disabilities (42.2%). eTable 3 in Supplement 1 provides the unadjusted differences by child characteristics and insurance type.
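As a rough illustration of the estimation step only (the paper's analyses were done in Stata with the full survey design and multiply imputed income), the sketch below fits a weighted logistic regression in Python and computes an adjusted difference in percentage points as a weighted average of predicted probabilities. All column names are hypothetical, and freq_weights here reproduces weighted point estimates but not design-based standard errors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical columns: inconsistent (0/1), phe (1 for 2020-2021),
# age, fpl_category, weight (NSCH survey weight).
df = pd.read_csv("nsch_public_children.csv")

model = smf.glm(
    "inconsistent ~ phe + age + C(fpl_category)",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=df["weight"],  # weights the likelihood: point estimates
).fit()                         # only; proper SEs need the survey design
                                # and the imputed income values

# Adjusted difference in percentage points: survey-weighted average of
# predicted probabilities with phe set to 1 vs 0 for every child.
p1 = np.average(model.predict(df.assign(phe=1)), weights=df["weight"])
p0 = np.average(model.predict(df.assign(phe=0)), weights=df["weight"])
print(f"adjusted difference: {100 * (p1 - p0):+.1f} pp")
```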
Discussion Using nationally representative data, we found that inconsistent coverage is 3 times higher among publicly insured compared with commercially insured children. However, inadequate insurance is more prevalent overall, affecting nearly 1 in 5 children (16.5 million annually) in the US, with particularly high rates among the commercially insured. We also identified substantial improvements in public insurance consistency and commercial insurance adequacy during the COVID-19 PHE. Furthermore, we found that the child and family characteristics associated with higher rates of inconsistent and inadequate coverage differed by insurance type. Consistent with prior research showing that insurance gaps are a particular issue for publicly insured children,3,15,16 we found that 4.2% of publicly insured children had a gap in the past year compared with only 1.4% of commercially insured children. Although some gaps are due to household income changes, a substantial share are for procedural reasons, and nearly half of children who lose Medicaid-CHIP re-enroll within 12 months.15 As evidence of this, we found a notable spike in insurance gaps among publicly insured children at 1 year of age (7.3%), which reflects the first point of eligibility determination for most publicly insured children, given that being born to a mother with Medicaid automatically covers the child until their first birthday. Compared with 2016 to 2019, we found that inconsistent public insurance declined by 42% during the PHE, when continuous Medicaid eligibility requirements were in place. The unwinding of these protections in 2023 is projected to leave 5.3 million children without Medicaid-CHIP coverage, potentially resulting in delays and forgone care.17 Among disenrolled children, 74% are projected to be disenrolled despite being eligible for Medicaid.17 The remaining 26% of Medicaid-ineligible children will need to enroll in commercial coverage, which our findings indicate offers less adequate coverage, particularly for low- and middle-income families. States have promising policy options to bridge insurance gaps for publicly insured children in the post-PHE context. Since 1997, the US Centers for Medicare & Medicaid Services (CMS) has allowed states to grant 12-month continuous Medicaid-CHIP eligibility for children, which has been associated with reduced insurance gaps.18 However, as of January 2023, only 23 states have implemented this policy.19 In 2022, CMS approved Oregon's 1115 waiver that allows children to continuously maintain Medicaid-CHIP from birth until age 6 years, with 2-year continuous eligibility from age 6 to 17 years. Although multiyear continuous eligibility rules are the most durable approaches to improve coverage consistency, states can also address procedural disenrollment through automatic renewal, increased funding for consumer assistance, and working with managed care plans to maintain updated beneficiary contact information.20 Recent proposed rulemaking from CMS would make it easier for states to implement these streamlined enrollment and renewal initiatives.21
Our findings also point to a particular need for state Medicaid programs to conduct targeted outreach and provide linguistically and culturally competent navigation assistance for immigrant families. We found that 1 in 20 publicly insured Hispanic children had inconsistent coverage in the past year, with similar rates among children without a US-born caregiver (4.6%). These findings could partly reflect reticence among immigrant parents to enroll eligible children in Medicaid-CHIP for fear of immigration-related consequences. Although the Biden Administration reversed the Trump administration's public charge rule, such fears may persist among immigrant families.
Figure. Inconsistent and Inadequate Health Insurance Coverage for US Children, by Year and Insurance Type, 2016 to 2021.
Table 1. Sample Characteristics by Child's Current Health Insurance Type, 2016 to 2021. Unweighted sample sizes and survey-weighted prevalence estimates; variables without a missing category had no missing values. a Other race or ethnicity included Alaska Native, American Indian, Asian Indian, Chinese, Filipino, Guamanian, Japanese, Korean, Native Hawaiian, Samoan, Vietnamese, other Pacific Islander, other Asian, any other race, and multiracial.
Table 2. Prevalence of Health Insurance Consistency and Adequacy Among US Children, by Insurance Type, 2016 to 2021.
Table 3. Adjusted Predicted Probabilities of Inconsistent and Inadequate Health Coverage for Publicly Compared With Commercially Insured Children, 2016 to 2021.
eTable 1. Insurance Consistency and Adequacy by Year and Pre (2016-19) vs Post-Pandemic (2020-21) Differences for Publicly Insured Children.
eTable 2. Insurance Consistency and Adequacy by Year and Pre (2016-19) vs Post-Pandemic (2020-21) Differences for Commercially Insured Children.
eFigure 1. Inconsistent and Inadequate Insurance by Child Age and Insurance Type, US 2016-2021.
eTable 3. Unadjusted Predicted Probabilities of Inconsistent and Inadequate Coverage for Publicly and Commercially Insured Children, US 2016-2021.
eTable 4. Inconsistent and Inadequate Insurance for Publicly Insured Children by State, 2016-2021.
eFigure 2. Inconsistent Insurance for Publicly Insured Children by State, 2016-2021.
eFigure 3. Inadequate Insurance for Publicly Insured Children by State, 2016-2021.
3,409
2023-11-01T00:00:00.000
[ "Medicine", "Economics" ]
Fgf and Esrrb integrate epigenetic and transcriptional networks that regulate self-renewal of trophoblast stem cells Esrrb (oestrogen-related receptor beta) is a transcription factor implicated in embryonic stem (ES) cell self-renewal, yet its knockout causes intrauterine lethality due to defects in trophoblast development. Here we show that in trophoblast stem (TS) cells, Esrrb is a downstream target of fibroblast growth factor (Fgf) signalling and is critical to drive TS cell self-renewal. In contrast to its occupancy of pluripotency-associated loci in ES cells, Esrrb sustains the stemness of TS cells by direct binding and regulation of TS cell-specific transcription factors including Elf5 and Eomes. To elucidate the mechanisms whereby Esrrb controls the expression of its targets, we characterized its TS cell-specific interactome using mass spectrometry. Unlike in ES cells, Esrrb interacts in TS cells with the histone demethylase Lsd1 and with the RNA Polymerase II-associated Integrator complex. Our findings provide new insights into both the general and context-dependent wiring of transcription factor networks in stem cells by master transcription factors. The placenta is an essential organ that ensures the exchange of nutrients, oxygen, hormones, metabolic by-products and other molecules between the maternal and fetal bloodstreams 1. Essential insights into the molecular pathways controlling placental development have been gained by using trophoblast stem (TS) cells that can self-renew and differentiate into the various placental trophoblast cell types in vitro 2,3. Mouse TS cells are derived from the trophectoderm of blastocysts and represent the developmental counterpart to embryonic stem (ES) cells derived from the preimplantation epiblast. Unlike ES cells, TS cells can also be derived from the extraembryonic ectoderm of early post-implantation conceptuses 2,4. Derivation and maintenance of TS cells depends on fibroblast growth factor (Fgf) and Nodal/Activin signalling 2,5-7. Consequently, the withdrawal of both components leads to the differentiation of TS cells into various trophoblast cell types of the chorioallantoic placenta including spongiotrophoblast, syncytiotrophoblast and giant cells 2. In TS cells, Fgf signalling predominantly stimulates the Mek/Erk pathway leading to the expression of essential TS cell-specific transcription factors (TFs) such as Cdx2 (refs 2,8,9). In addition to Cdx2, other key TFs that are critical to maintain the stem cell state of TS cells include Eomes, Esrrb, Elf5, Sox2 and Tfap2c (refs 10-15). Interestingly, some of these, such as Eomes, Elf5 and Tfap2c, have seemingly TS cell-specific functions during this developmental window, whereas others, notably Sox2 and Esrrb, have pivotal roles also in regulating pluripotency of ES cells 11-17. Recent findings suggest that the requirement for Fgf (Fgf4) signalling in TS cells cannot be replaced by the ectopic expression of a single one of these TFs (that is, Elf5, Eomes, Cdx2, Tfap2c, Sox2 or Esrrb). However, the combined ectopic expression of Sox2 and Esrrb has been shown to be capable of sustaining TS cell self-renewal in the absence of Fgf4 (ref. 18). While Sox2 functions by interacting with Tfap2c, which in turn recruits Sox2 to Fgf-regulated genes, the critical interactors of Esrrb in TS cells remain unknown 18.
Esrrb (oestrogen-related receptor beta) plays a key role in trophoblast development, as embryos deficient for Esrrb die before E10.5 because of severely impaired placental formation, characterized by an abnormal chorion layer and an overabundance of giant cells 12. In line with a pivotal role in trophoblast development, TS cells cannot be derived from Esrrb mutants 19. Tetraploid aggregation experiments proved that the embryonic lethality can be rescued by wild-type (wt) trophoblast cells, thus demonstrating that the essential function of Esrrb during early development resides in the trophoblast compartment. Although Esrrb is dispensable for development of the embryo proper, it is required for self-renewal of mouse ES cells in ground-state conditions 16,20,21. In this context, Esrrb cooperates with a range of TFs (e.g., Oct4, Sall4 and Ncoa3), chromatin-remodelling complexes and components of the transcriptional machinery including the Mediator complex and RNA Polymerase II (RNAPII) to regulate self-renewal 20,22,23. Thus, similar to Sox2, Esrrb is a key TF in both ES and TS cells, raising questions about its specificity in different developmental contexts and whether it acts as a more general determinant of stemness irrespective of stem cell type. Here we address the function of Esrrb in TS cells. We show that the regulation and target gene network differ profoundly between ES and TS cells. Unlike in ES cells, Esrrb is in TS cells the most prominent early-response gene to inhibition of Mek, the main downstream effector of Fgf signalling in the trophoblast compartment. We demonstrate that Esrrb depletion results in downregulation of the key TS cell-specific TFs, consequently causing TS cell differentiation. This function of Esrrb is exerted by directly binding, and activating, a core set of TS cell-specific target genes including Elf5, Eomes, Bmp4 and Sox2, with little overlap to its chromatin occupancy in ES cells. Finally, by characterizing the Esrrb protein interactome we discovered a number of novel, TS cell-specific interactions. Unlike in ES cells, Esrrb interacts in TS cells with the histone demethylase Lsd1 and with the RNAPII-associated Integrator complex. Taken together, our data reveal that Esrrb regulates highly stem cell-type-specific networks due to distinct interaction partners that are essential to maintain the self-renewal state of TS cells. Results Esrrb is an early target of Fgf/Erk signalling in TS cells. Derivation and maintenance of TS cells depend on the presence of Fgf signalling 2,24. Numerous gene knockout experiments identified the mitogen-activated kinase Mek/Erk branch of the Fgf signalling pathway as predominantly active in both TS cells and extraembryonic ectoderm 18,25-28. Therefore, we first tested changes in expression of key TS cell TFs on Mek/Erk inhibition using the Mek inhibitor PD0325901 ('PD03'; Fig. 1a). Among the candidate TFs we examined after 3-48 h of treatment, Esrrb was the fastest and most profoundly downregulated gene, followed closely by Sox2, in line with a recent report 18 (Fig. 1b). Some TFs implicated in TS cell maintenance including Eomes, Elf5 and Cdx2 were also downregulated on Mek inhibition albeit at a slower pace, whereas the expression of others such as Ets2 or Tfap2c remained unchanged. These data were confirmed by immunostaining for some of the most prominent TS cell TFs, namely Cdx2, Elf5, Eomes and Tfap2c (Fig. 1c; Supplementary Fig. 1a).
To further refine this analysis and to obtain an unbiased genome-wide coverage of the immediate-early-response genes of Mek inhibition in TS cells, we performed RNA sequencing (RNA-seq) analysis after 3 and 24 h of PD03 treatment. This global expression analysis identified in total 399 genes that were deregulated by Fgf signalling after 3 and 24 h (Fig. 1d; Supplementary Data 1). The majority of these genes were induced by Erk activation, as 240 of them were downregulated on Mek inhibition, while only 159 genes were upregulated using stringent confidence parameters (Fig. 1d,e; Supplementary Data 1). Functional gene annotation analysis using MouseMine confirmed that affected genes were specifically enriched for extraembryonic (trophoblast) tissue development, as well as for embryonic lethality and transcriptional control, in particular for the downregulated genes (Supplementary Fig. 1b,c). Of particular note were the dynamics of downregulation on Mek inhibition; thus, we identified 38 early responders that were downregulated, but only 10 that were upregulated (Fig. 1d). Notably, of the known TS cell TFs, this analysis confirmed Esrrb as the earliest, most rapidly silenced gene on PD03 treatment (Fig. 1e). These results provided a comprehensive overview of Fgf-regulated genes in TS cells and identified many potential candidates with a role in trophoblast development. The finding that Esrrb was the most rapidly downregulated gene after 3 h of PD03 exposure suggested that it may be a direct target of Mek/Erk signalling. Next, we asked whether, in addition to Fgf, either Nodal/Activin or Bmp4 signalling can also regulate Esrrb expression in standard TS cell culture conditions. Because levels of Esrrb were not affected by either SB431542 (a Nodal/Activin signalling inhibitor) or LDN (a Bmp signalling inhibitor) treatment, we concluded that, unlike Fgf/Mek signalling, Nodal/Activin and Bmp4 signalling did not directly regulate Esrrb expression in TS cells (Supplementary Fig. 1d). Notably, the Esrrb sensitivity to Fgf pathway inhibition is TS cell-specific, as PD03 treatment of ES cells does not affect Esrrb levels 16. Instead, in ES cells Esrrb expression is strongly induced by the Gsk3-beta inhibitor and Wnt agonist CHIR99021 (CH) 16. To examine whether Gsk3-beta and Wnt signalling are involved in regulation of Esrrb in TS cells, we treated them with either CH or the canonical Wnt inhibitor IWR-1. After 72 h of treatment, we found that Esrrb levels were unaffected by either of these compounds (Supplementary Fig. 1e). Hence, the regulation of Esrrb diverges profoundly in ES and TS cells, as it is mediated by Gsk3-beta and Erk1/2 signalling, respectively. Taken together, these insights prompted us to investigate the specific function of Esrrb in TS cells in greater detail. Esrrb is pivotal to maintain the TS cell state. To gain first insights into which genes may be primary targets of Esrrb, we inhibited Esrrb with diethylstilbestrol (DES) and performed RNA-seq over a time course of treatment; this revealed downregulation of key TS cell TFs, most prominently Eomes and Elf5 (Fig. 2a). We confirmed these findings by reverse transcriptase-quantitative polymerase chain reaction (RT-QPCR) and at the protein level by immunostaining for Eomes and Elf5 (Fig. 2b,c). Interestingly, when specifically examining the trajectories between control and 24 h DES treatment, other prominent TS cell regulators such as Cdx2 were less influenced during this immediate-response window (Fig. 2a). To further examine Esrrb as a primary mediator of TF induction by Fgf signalling in TS cells, we analysed the overlap of affected genes between the DES and PD03 RNA-seq data sets (Fig. 2d,e).
Strikingly, we found that both DES and PD03 treatments had an impact on the same set of prominent stem cell genes: Nr0b1, Zic3, Sox2, Id2, Cdx2, Eomes and Elf5 (Fig. 2d,e). Taken together, these data indicated that Fgf-Mek signalling regulates, via Esrrb, essential TFs such as Sox2, Cdx2, Eomes and Elf5 that sustain TS cell self-renewal. To account for possible off-target effects of DES treatment, for example, on Esrra and Esrrg, we also performed knockdown (KD) experiments using three short-hairpin RNAs (shRNAs) directed against Esrrb (KD-1, KD-2 and KD-3) and two scrambled shRNAs as controls (scr-1 and scr-2). Esrrb transcript levels were reduced in the KD-1, KD-2 and KD-3 lines by up to 90% compared with control lines, and these results were also confirmed on the protein level (Fig. 2f,g). We found that depletion of Esrrb triggered differentiation despite the presence of Fgf, as indicated by the morphological appearance of trophoblast giant cells and loss of proliferative capacity (Fig. 2h). Expression analysis revealed the rapid loss of stem cell markers including Cdx2, Eomes, Elf5, Nr0b1 and Bmp4, and concomitant upregulation of genes associated with trophoblast differentiation including Syna, Gcm1, Cdkn1c, Prl2c2 (also known as Proliferin = Plf) and Prl3d1 (placental lactogen 1 = Pl1; Fig. 2g). We confirmed these results at the protein level by using western blot analysis (Fig. 2f). Moreover, this effect was specific to Esrrb depletion, as cotransfecting the KD-1 shRNA targeted against the 3′-untranslated region with an Esrrb-coding region expression construct fully rescued the KD phenotype (Supplementary Fig. 2a,b). These data demonstrate that Esrrb is required for TS cell gene expression and self-renewal. To gain further insights into the cohort of genes regulated by Esrrb, we performed an RNA-seq analysis on Esrrb KD-1 and KD-2 TS cells 5 days after transfection. Global expression analysis identified 59 genes that were affected by Esrrb KD in TS cells (Supplementary Fig. 2c; Supplementary Data 3). Gene ontology (GO) term analysis revealed overrepresentation of processes related to placental development and trophoblast morphology among genes affected by the Esrrb KD (Supplementary Fig. 2d,e). In addition, on the global level, downregulated genes contained known TS cell markers including Eomes, Cdx2, Nr0b1, Id2 and Sox2, whereas upregulated genes were highly enriched for factors associated with trophoblast differentiation. These results confirmed that Esrrb presides over a network of genes involved in extraembryonic development and specifically in maintenance of the stem cell state within the trophoblast niche. Esrrb forms stem cell-type-specific transcriptional networks. To explore whether Esrrb directly regulates the key TS cell genes, we performed chromatin immunoprecipitation (ChIP) followed by QPCR and found extensive binding on putative transcriptional regulatory regions of Elf5, Eomes, Esrrb, Sox2, Bmp4, Cdx2 and Tfap2c (Fig. 3a). To obtain a comprehensive global overview of the binding sites of Esrrb in TS cells, we carried out ChIP followed by high-throughput sequencing (ChIP-seq) and compared these data to the binding profile of Esrrb in ES cells, where it plays a well-appreciated role in maintaining pluripotency 29. We identified 14,507 Esrrb-binding sites in TS cells (Fig. 3b; Supplementary Data 4). Globally, these sites were predominantly found at intronic and intergenic regions (Fig. 3c), similar in feature distribution to that observed in ES cells.
However, their precise location exhibited only a partial (3,027) overlap with those in ES cells (Fig. 3b; Supplementary Data 3). The markedly different Esrrb-binding profile between ES and TS cells was exemplified by a significant enrichment of genes involved in trophectodermal differentiation and placental development among the TS cell-specific peaks compared with the ES cell-specific peaks (Fig. 3d; Supplementary Fig. 3a). These results suggest that context-dependent binding of Esrrb is linked to specific developmental processes. Notably, we identified Esrrb binding at principally all known core TS cell genes, including itself, implying that Esrrb has a self-reinforcing function similar to that ascribed to many pluripotency genes in ES cells (Fig. 3e; Supplementary Fig. 3b). We tested the functionality of the Esrrb-binding sites at Eomes and Elf5, that is, two of the important TS cell genes we had identified as primary targets of Esrrb by ChIP-QPCR and ChIP-seq, in luciferase assays. Selected regions of both genes stimulated reporter activity (Fig. 3f), and this effect was abolished by either mutating Esrrb-binding sites or by DES treatment (Supplementary Fig. 3c). These results further confirmed that Esrrb directly binds to and regulates Eomes and Elf5 in TS cells. On a more global level, the majority of genes deregulated either on Esrrb KD or 24 h DES treatment were directly bound by Esrrb (Fig. 3g; Supplementary Fig. 3d). To gain better insights into the context-dependent Esrrb binding, we performed de novo motif analysis using MEME/DREME followed by the Tomtom suite 30,31. In TS cells, similar to ES cells, Esrrb peaks (defined here as ±200 bp around the peak summit) were highly enriched in the canonical Esrrb/Esrra-binding motifs, suggesting that the context-dependent binding specificity may rely on other TFs (Fig. 3h). Central motif enrichment analysis 32 showed a centred and symmetrical Esrrb/Esrra motif distribution (Fig. 3i). Spaced motif analysis (SpaMo) identified, among others, Cdx2 as a secondary motif enriched in a number of Esrrb peaks (Fig. 3h). These findings raised the question of whether Cdx2 could potentially recruit Esrrb to TS cell-specific sites and thereby mediate the context-dependent activity of Esrrb in TS versus ES cells. To examine the functional overlap of genes regulated by Cdx2 and Esrrb, we depleted Cdx2 in TS cells by shRNA-mediated KD. Expression analysis showed that, similar to the Esrrb KD, key TS cell markers were downregulated (Esrrb, Eomes and Elf5), whereas differentiation markers were upregulated (Supplementary Fig. 4a). However, when we compared ChIP-seq data sets of Esrrb (this study) and Cdx2 (published by Chuong et al. 33), we identified only a small (4.1%) subset of Esrrb peaks that were co-bound by Cdx2 when using the previously published list of 11,462 Cdx2-specific peaks (Supplementary Fig. 4b; Supplementary Data 5) and even fewer (<1%) when applying the identical analysis criteria used in our study on the Cdx2 ChIP-seq data set for peak calling (Supplementary Fig. 4c). This small subset of co-bound loci did not contain any prominent known TS cell genes. To further examine the potential cooperation between Esrrb and Cdx2, we performed co-immunoprecipitation experiments followed by either western blot or mass spectrometry analysis. While we identified a number of Cdx2 interactors including Tead4, Eomes and Tfcp2, we were unable to detect Esrrb (Supplementary Figs 5 and 6a,b).
5a; Supplementary Fig. 6a,b). Next, we purified Esrrb-bound proteins under mild conditions, identical to those employed in ES cells 23. Using an unbiased protein identification approach based on liquid chromatography-tandem mass spectrometry (LC-MS/MS), we found Esrrb (29 and 30 unique peptides, protein annotated as 'ERR2') in addition to numerous high-confidence interaction partners in several independent experiments (Table 1; Supplementary Data 6). Among these, we detected a number of epigenetic complexes that were previously identified as parts of the Esrrb interactome in ES cells, including multiple subunits of NuRD, p400/Trrap and Mll/Trx (Fig. 4a) 23. Interestingly, we never detected any component of the SWI/SNF complex, another prominent interactor in ES cells that is essential for early embryogenesis 23,34. Instead, the TS cell-specific Esrrb protein network included components of the lysine-specific demethylase 1 (Lsd1, also known as Kdm1a) complex (Table 1; Supplementary Data 6). Lsd1 is a histone demethylase that selectively removes mono- and dimethyl groups from either lysine 4 of histone H3 (H3K4) or H3K9 (ref. 35). Intriguingly, recent evidence points to an important function of Lsd1 in maintaining the TS cell state by preventing early onset of differentiation 36. We confirmed the presence of Lsd1 in Esrrb immunoprecipitates by immunoblotting (Fig. 4b); we also performed a reciprocal identification of Lsd1 interactors by rapid immunoprecipitation mass spectrometry of endogenous proteins (RIME) 37. The LC-MS/MS analysis identified Esrrb as one of the Lsd1 protein interactors in TS cells, in addition to other Lsd1-specific interacting TFs (for example, Sfmbt2 or Ap2c (= Tfap2c)) and chromatin-modifying complexes (for example, a subunit of the FACT complex; Supplementary Table 1; Supplementary Data 7). Taken together, these results suggested that Esrrb operates in distinct protein complexes that exert specific functions in TS cells and more general functions shared with ES cells. We then sought to investigate in more detail the cooperative function between Lsd1 and Esrrb. For this purpose, we performed Lsd1 ChIP-QPCR and ChIP-seq analyses in TS cells and compared these with the Esrrb occupancy profiles. Importantly, Lsd1 bound to the core set of Esrrb targets including Elf5, Eomes, Bmp4 and Sox2 (Fig. 4c); globally, 60% of Esrrb peaks were co-occupied by Lsd1 (Fig. 4d-f; Supplementary Data 8), and co-bound loci were associated with a significant proportion of genes deregulated on Esrrb inhibition or KD (Supplementary Fig. 7a). However, when we specifically inhibited Lsd1, genes involved in the onset of differentiation were upregulated (including Ovol2 and Zic3), but expression of the key TFs controlling TS cell self-renewal was not, or only mildly, affected (Fig. 4g). This result is in line with previous reports suggesting a role of Lsd1 primarily in regulating differentiation genes 36, as also supported by Lsd1's broad expression pattern within the entire trophoblast compartment (Supplementary Fig. 7b). Transcriptional protein interactome of Esrrb in TS cells. Besides interactors involved in epigenetic regulation of transcription, we also identified TFs and cofactor complexes that directly interact with RNAPII (Table 1; Fig. 4a).
Similar to some shared epigenetic complexes, we found that the TFs Nr0b1, Esrra, Tfcp2l1, Zfp462 and others overlapped with the Esrrb interactome in ES cells, thereby further validating our immunoprecipitation (IP) LC-MS/MS analysis (Fig. 4a). Since Nr0b1 has been found to have an important role in ES cell self-renewal, we confirmed by co-immunoprecipitation that it also interacts with Esrrb in TS cells (Fig. 5a). ChIP-seq analysis for Nr0b1 in TS cells showed binding overlap with Esrrb on a subset of essential TS cell-specific (for example, Cdx2 and Tfap2c) and general developmental loci (Lin28a and Cdh1; Fig. 5b-d; Supplementary Fig. 8a; Supplementary Data 8). As with Esrrb before, we observed that Nr0b1 binding in TS and ES cells showed a small overlap, with only 52 Esrrb/Nr0b1 co-bound regions shared between ES and TS cells (Supplementary Fig. 8b; Supplementary Data 8). These detailed novel data on the context-specific wiring of transcriptional networks are also supported by the limited overlap of Tfcp2l1 binding, another TF that complexes with Esrrb in both TS and ES cells, with Esrrb TS cell peaks (Supplementary Fig. 8c; Supplementary Data 8). Intriguingly, in contrast to the Esrrb interactome in ES cells 23, we never detected components of the prominent RNAPII-associated complex Mediator as Esrrb interactors in TS cells. This finding prompted us to search for alternative explanations of Esrrb-mediated RNAPII recruitment and activation at its target genes; indeed, among the Esrrb interactors we identified multiple components of the Integrator complex (Supplementary Fig. 9). We also confirmed the interaction of Integrator components with Esrrb by co-immunoprecipitation (Fig. 5e,f). Until recently, the Integrator complex was implicated in small nuclear RNA transcription, but a recent study found that it also functions in Egf-mediated transcriptional activation of immediate-early-response genes 38,39. This important finding may explain how Esrrb attracts the transcriptional machinery in the absence of the interaction with Mediator in TS cells (Fig. 5g). In summary, our results provide comprehensive insights into the stem cell-type-specific regulation and function of Esrrb, suggesting an exciting mechanism for how Fgf, via Esrrb, can rapidly and specifically impact the transcription of key genes controlling self-renewal of TS cells (Fig. 5h). Discussion Esrrb is known to play a central role in maintaining pluripotency of ES cells by acting in concert with various other key pluripotency genes. Despite this, mouse mutants deficient for Esrrb die of a trophoblast defect that can be rescued by tetraploid aggregation experiments, thus definitively ruling out a contributing defect intrinsic to the embryo proper 12. Although it has been demonstrated that Esrrb is required for early trophoblast development, the function of Esrrb in TS cells has not yet been elucidated. Here we show that Esrrb establishes highly stem cell-type-specific functional networks both at the level of chromatin occupancy and at the level of protein-protein interactions. While some overlap in binding partners and target gene profiles is observed between ES and TS cells, which may confer more generic 'stemness' functions, we show here that Esrrb exerts lineage-specific pivotal roles in the TS cell compartment. Our data demonstrate that, in striking contrast to the situation in ES cells, Esrrb is an immediate target of Fgf/Mek signalling in TS cells and in turn directly activates key TS cell genes.
To decipher the mechanism whereby Esrrb exerts TS cell-specific transcriptional regulation, we identified the Esrrb protein interactome, the first of its kind in TS cells to date. Several lines of evidence suggest that Esrrb is the main mediator of Fgf-driven Erk signalling in TS cells. First, Esrrb is rapidly downregulated on Mek inhibition, identifying it as a direct target of the immediate-early Mek/Erk response. Second, the overlap of genes that are misregulated on short-term inhibition of Fgf-Erk signalling (PD03) and of Esrrb (DES) includes key TS cell regulators such as Sox2, Eomes, Cdx2 and Elf5. Third, a large proportion of the genes deregulated by either Esrrb KD or DES treatment are bound by Esrrb, strongly supporting their direct regulation. Indeed, we confirmed such a direct transcriptional control function of Esrrb at the Eomes and Elf5 loci, where mutagenesis of Esrrb-binding sites in putative enhancer regions abolished luciferase reporter activity. This effect was apparent despite the presence of Fgf signalling, demonstrating that Esrrb binding is vital for activation of Elf5 and Eomes. Thus, Esrrb is an essential mediator of Fgf-Erk signalling that induces Elf5 and Eomes expression. Taken together, our data show that Fgf-Erk and Esrrb constitute the major axis controlling critical TS cell genes. If Esrrb has diverse functions in different developmental contexts, we would expect it to bind to and regulate different genes in these settings. Indeed, we found that there is only a partial overlap of sites bound by Esrrb in ES and TS cells, suggesting that some functions of Esrrb might be conserved (for example, driving self-renewal and proliferation) while others might be divergent. This insight led to the crucial question of which protein interaction partners mediate the general and the specific functions of Esrrb in TS versus ES cells. To date, we have lacked protein interactomes in TS cells that would clarify whether the same general factors and mechanisms drive self-renewal in embryonic and extraembryonic stem cells. In this study we provide a comprehensive analysis of the Esrrb, Lsd1 and Cdx2 binding partners in TS cells as a key resource to elucidate their mechanistic roles in stemness and trophoblast development. Similar to Esrrb-interacting proteins in ES cells, we identified two separate classes of Esrrb interactors in TS cells: (i) epigenetic regulators that remodel and modify chromatin and (ii) regulators that can interact directly with the transcriptional machinery. Importantly, we found that, while some of these interactors in TS cells overlap with those in ES cells, others do not, further suggesting both general and specific mechanisms of Esrrb action in distinct stem cell types. One of the proteins identified as a TS cell-specific Esrrb interactor was the lysine-specific demethylase Lsd1. In ES cells, Lsd1 occupies enhancers of active genes critical for pluripotency. On differentiation, Lsd1 decommissions these enhancers, ensuring the shutdown of the pluripotency programme 40. In contrast, in TS cells it has been shown that, on induction of differentiation, the transcription of the stem cell marker genes Cdx2 and Eomes is reduced considerably faster in the absence of Lsd1 than in controls, in line with the observation that Lsd1-depleted TS cells exhibit a lowered threshold for differentiation onset 36. Thus, although depletion 36 or inhibition of Lsd1 has no clear-cut effect on TS cell marker silencing in stem cell conditions (Fig. 4g),
it appears that Esrrb and Lsd1 cooperatively promote the 'naive' TS cell state to maintain a fine-tuned balance of gene transcription at joint TS cell target genes. Besides epigenetic regulators, we identified numerous TFs that interact with Esrrb in TS cells. One of these factors is Nr0b1 (= Dax1), which associates with Esrrb also in ES cells 23,41,42. Nr0b1 is part of the ES cell self-renewal network, where it interacts with Oct4 and is recruited to Oct4/Sox2-binding sites 23,42. However, we discovered that, similar to Esrrb, Nr0b1 does not show an extensive binding overlap between TS and ES cells, again underpinning the finding that, although both TFs are shared between ES and TS cells, they exert largely divergent functions depending on stem cell type. This raises the question of how the context-dependent recruitment of Esrrb and Nr0b1 to distinct sites is achieved in different stem cells. Regarding Esrrb, Cdx2, as a key TS cell regulator, is an obvious candidate for this role. This notion is further supported by our findings that similar genes are downregulated on Esrrb and Cdx2 depletion. However, we could not detect an extensive overlap between published Cdx2 (ref. 33) and our Esrrb ChIP-seq binding profiles, and neither did we observe a direct interaction between these two factors at the protein level. We did, however, identify other prominent Cdx2 interactors including Tead4 and Eomes, thus strongly validating our approach. Although Esrrb and Cdx2 ultimately co-regulate, directly or indirectly, a similar set of target genes, it is therefore likely that both TFs function in parallel pathways to regulate the stem cell state of TS cells. Taken together, these findings provide new and comprehensive insights into the TF interaction network that governs TS cell self-renewal and identity. It will be important to elucidate in the future how this network exerts specificity in TS cells with partially shared components present also in ES cells. In fact, our comprehensive identification of interaction partners may provide first leads into how this context-dependent wiring of transcriptional networks is achieved, by revealing association with distinct components of the core transcriptional machinery depending on stem cell type. In ES cells, Esrrb was identified as being uniquely associated with the RNAPII complex and numerous subunits of the Mediator complex 23, indicating a critical role for Esrrb in transcriptional activation. The Mediator complex is a multifunctional RNAPII-associated scaffold that is required for mRNA transcription at different stages of the process. Its interaction with TFs is crucial for recruitment and specificity in response to signalling 43. In TS cells we did not detect an interaction between Esrrb and the Mediator complex, raising the question of an alternative way to stimulate transcription. Instead, we identified numerous subunits of the Integrator complex interacting with Esrrb. Although the Integrator complex has been implicated mostly in the transcription of small nuclear RNAs 38, a recent study demonstrated its involvement in both initiation and release from pausing of RNAPII during mRNA transcription 39. Intriguingly, this mechanism was demonstrated for early-response genes that are activated by Egf.
Since Fgf also has a very rapid impact on the transcription of some key genes, notably Esrrb, in TS cells, this raises the exciting possibility that Esrrb activates transcription by association with the Integrator complex and release of RNAPII from pausing (Fig. 5g). This would suggest that not only specific signals and TFs shape self-renewal and identity of different stem cell types, but that general mechanisms of transcriptional control also contribute to conferring stem cell specificity. Taken together, we demonstrate here an essential TS cell-specific role of Esrrb and provide key insights into mechanisms of Fgf-Erk-mediated self-renewal in TS cells. ChIP-seq analysis. The raw reads were trimmed to remove adapter sequences (minimum required overlap of 3 bp) and bad-quality bases at the end of each read using Trim Galore (http://www.bioinformatics.babraham.ac.uk/projects/trim_galore). All reads were aligned to the mouse genome (GRCm38) with the Burrows-Wheeler Aligner (BWA) 46 using default options. Peak calling was performed with MACS2 (ref. 47) using only uniquely mapping and non-duplicated reads from each ChIP sample and a single pooled IgG control. To combine the results from the five replicates without pooling, we used the irreproducible discovery rate (IDR) approach developed by the ENCODE project 48. Data for Esrrb ChIP-seq binding in ES cells from ref. 29 (SRX000542 and SRX000543) were downloaded from the European Nucleotide Archive. The publicly available data consisted of four replicate experiments. Fastq files were used as provided without base-trimming, while alignment, peak calling and IDR analysis were performed in the same way as for the in-house samples. Annotation of binding sites according to genomic features was performed by overlapping the sites with Ensembl v77 annotations. Binding sites within −2,500 and +500 bp of transcription start sites were classified as overlapping 'promoters'. Binding sites falling between the transcription end site and +2,500 bp downstream of the end site were classified as overlapping 'downstream' sites. Binding sites falling outside gene, promoter and downstream sites were classified as 'intergenic'. Since some peaks overlap multiple annotations, these were disambiguated with the following priority: (1) promoters, (2) exons, (3) introns, (4) downstream, (5) intergenic. Motif analysis was performed with MEME-ChIP 49 using the JASPAR_CORE_2014_vertebrate database, searching for zero or one occurrence of the motif per peak and with a maximum of 12 motifs discovered by MEME. Functional annotation of genes associated with peak regions was performed using GREAT 50 with the whole mouse genome as background.
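The annotation-priority rule just described is simple enough to pin down in code. The following is a minimal sketch of that disambiguation logic in Python, assuming each peak has already been intersected with the Ensembl feature classes it overlaps; all names are illustrative and are not taken from the original pipeline.

```python
# Minimal sketch of the annotation-priority rule described above.
# Assumes each peak carries the set of genomic feature classes it
# overlaps; names are illustrative, not from the original pipeline.

PRIORITY = ("promoter", "exon", "intron", "downstream", "intergenic")

def annotate_peak(overlapping_classes):
    """Resolve a peak overlapping several feature classes to a single
    annotation: promoters > exons > introns > downstream > intergenic."""
    for feature in PRIORITY:
        if feature in overlapping_classes:
            return feature
    return "intergenic"  # a peak overlapping no annotated feature

if __name__ == "__main__":
    print(annotate_peak({"intron", "promoter"}))  # -> promoter
    print(annotate_peak({"exon", "downstream"}))  # -> exon
```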
RT-QPCR. RNA was isolated using the RNeasy kit (Qiagen) and DNase I-treated with the TURBO DNA-free kit (Life Technologies AM1907) according to the manufacturer's instructions. cDNA was synthesized using 3.5 µg RNA primed with random hexamers according to the RevertAid H Minus M-MuLV Reverse Transcriptase protocol (Thermo Scientific EP0451). DNA was diluted and QPCR performed using SYBR Green Jump Start Taq Ready Mix (Sigma S4438) on a Bio-Rad CFX96 thermocycler. Primer pairs are provided in Supplementary Table 2. RNA KD. RNA KD experiments were performed using the pSuper-neo system. Oligos (see Supplementary Table 3 for shRNA sequences) were cloned into the BglII/XhoI sites. TS cells were transfected with 4.5 µg of plasmid and selected after 24 h with 600 µg ml⁻¹ G418. Immunofluorescence. For immunofluorescence staining of mouse conceptuses, E6.5 implantation sites of wt (C57BL/6Babr × CBA) F1 intercrosses were dissected, counting the day of the vaginal plug as E0.5, and processed for routine paraffin histology. All animal experiments were conducted in full compliance with UK Home Office regulations and with approval of the local animal welfare committee at The Babraham Institute, and with the relevant project and personal licences in place. Sections (7 µm) were deparaffinized, boiled for 30 min in 10 mM sodium citrate pH 6.0 or 1 mM EDTA pH 7.5, 0.05% Tween-20, and blocked with PBT-BSA. Primary antibodies and dilutions used were as follows: mouse anti-Esrrb 1:200 (R&D Systems H6707), rabbit anti-Nr0b1/Dax1 1:200 (Santa Cruz sc-841), rabbit anti-Lsd1 1:100 (Abcam ab17721) and goat anti-Sox2 1:100 (R&D Systems AF2018). Primary antibodies were detected with appropriate secondary AlexaFluor 488, 568 or 647 antibodies, counterstained with DAPI and observed using an Olympus BX41 or BX61 epifluorescence microscope. All antibodies used are listed in Supplementary Table 5. Co-immunoprecipitation. The Esrrb-coding sequence (PiggyBac-Esrrb-ires-Neo, a kind gift from Austin Smith, CSCR, Cambridge, UK) was cloned to yield the PiggyBac-CAG-Avi-Esrrb-3xFlag-ires-Neo construct. TS EGFP cells were transfected with the construct, or with the empty vector control, using Lipofectamine 2000 (Invitrogen), selected with G418 and expanded in ten 15-cm dishes. Co-immunoprecipitation was performed as described before 23. Cells were washed in PBS, harvested, resuspended in Buffer A (10 mM Hepes pH 7.6, 1.5 mM MgCl2 and 10 mM KCl) and disrupted by 10 strokes in a Dounce homogenizer. Extracts were spun down and the pellet resuspended in Buffer C (20 mM Hepes pH 7.6, 25% glycerol, 420 mM NaCl, 1.5 mM MgCl2 and 0.2 mM EDTA), passed through a 19-G needle and dialysed into Buffer D (20 mM Hepes pH 7.6, 20% glycerol, 100 mM KCl, 1.5 mM MgCl2 and 0.2 mM EDTA) using dialysis cassettes (Fisher Scientific). Anti-FLAG M2 agarose beads (120 µl; Sigma) equilibrated in Buffer D were added to 1.5 ml of nuclear extract in No Stick microcentrifuge tubes (Alpha Laboratories) and incubated for 3 h at 4 °C in the presence of Benzonase (Novagen). Beads were washed five times for 5 min with Buffer D containing 0.02% NP-40 (C-100*) and bound proteins were eluted four times for 15 min at 4 °C with buffer C-100* containing 0.2 mg ml⁻¹ FLAG tripeptide (Sigma). Eluates were pooled and analysed by mass spectrometry or western blot. Mass spectrometry. Immunoprecipitated proteins from two biological replicates each of Esrrb- and vector-transfected TS cells were run a short distance (~5 mm) into an SDS-PAGE gel, which was then stained with colloidal Coomassie stain (Imperial Blue, Invitrogen). The entire stained gel pieces were excised, destained, reduced, carbamidomethylated and digested overnight with trypsin (Promega sequencing grade, 10 ng µl⁻¹ in 25 mM ammonium bicarbonate) as previously described 51. The resulting tryptic digests were analysed by LC-MS/MS on a system comprising a nanoLC (Proxeon) coupled to an LTQ Orbitrap Velos Pro mass spectrometer (Thermo Scientific). LC separation was achieved on a reversed-phase column (Reprosil C18AQ, 0.075 × 150 mm, 3 µm particle size) with an acetonitrile gradient (0-35% over 180 min, containing 0.1% formic acid, at a flow rate of 300 nl min⁻¹).
The mass spectrometer was operated in a data-dependent acquisition mode, with an acquisition cycle consisting of a high-resolution precursor ion spectrum over the m/z range 350-1,500, followed by up to 20 CID spectra (with a 30-s dynamic exclusion of former target ions). Mass spectrometric data were processed using Proteome Discoverer v1.4 (Thermo Scientific) and searched against the mouse entries in Uniprot 2013.09, and against a database of common contaminants, using Mascot v2.3 (Matrix Science). Quantitative values were calculated with Proteome Discoverer for each identified protein as the average of the three highest peptide ion peak areas. The search results and quantitative values were imported into Scaffold v3.6 (Proteome Software Inc.), which reported a total of 1,249 proteins across the four samples, with a calculated protein false discovery rate of 0.2%. After applying further filters (a minimum of two unique peptides per protein with at least one in both biological replicates, and a ratio of quantitative values >2 for both Esrrb/vector pairs), 90 proteins remained, as shown in Supplementary Data 6.
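These protein-level filters can be stated compactly in code. The sketch below assumes one record per protein holding unique-peptide counts and the quantitative values of the two Esrrb/vector replicate pairs; the field names are illustrative, not from the original analysis.

```python
# Sketch of the interactor filters described above: at least two unique
# peptides per protein (with at least one in each biological replicate)
# and an Esrrb/vector quantitative ratio > 2 in both replicate pairs.
# Field names are illustrative.

def passes_filters(protein):
    peptides_ok = (
        protein["unique_peptides_total"] >= 2
        and protein["unique_peptides_rep1"] >= 1
        and protein["unique_peptides_rep2"] >= 1
    )
    eps = 1e-9  # avoid division by zero when the vector control is empty
    ratios_ok = all(
        protein["esrrb_rep%d" % i] / (protein["vector_rep%d" % i] + eps) > 2
        for i in (1, 2)
    )
    return peptides_ok and ratios_ok

candidates = [
    {"unique_peptides_total": 5, "unique_peptides_rep1": 3,
     "unique_peptides_rep2": 2, "esrrb_rep1": 9.0, "vector_rep1": 1.0,
     "esrrb_rep2": 8.0, "vector_rep2": 2.0},
]
interactors = [p for p in candidates if passes_filters(p)]
print(len(interactors))  # -> 1
```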
RIME. RIME was carried out as described 37. Briefly, cells were crosslinked in media containing 1% formaldehyde (EM grade; tebu-bio) for 8 min. Crosslinking was quenched by adding glycine to a final concentration of 0.2 M. The cells were washed with and harvested in ice-cold PBS. The pellet was resuspended in 10 ml of LB1 buffer (50 mM HEPES-KOH (pH 7.5), 140 mM NaCl, 1 mM EDTA, 10% glycerol, 0.5% NP-40 or Igepal CA-630 and 0.25% Triton X-100) for 10 min at 4 °C. Cells were pelleted, resuspended in 10 ml of LB2 buffer (10 mM Tris-HCl (pH 8.0), 200 mM NaCl, 1 mM EDTA and 0.5 mM EGTA), and mixed at 4 °C for 5 min. Cells were pelleted and resuspended in 300 µl of LB3 buffer (10 mM Tris-HCl (pH 8), 100 mM NaCl, 1 mM EDTA, 0.5 mM EGTA, 0.1% Na-deoxycholate and 0.5% N-lauroylsarcosine) and sonicated in a water bath sonicator (Diagenode Bioruptor). A total of 30 µl of 10% Triton X-100 was added, and the lysate was centrifuged for 10 min at 20,000 r.c.f. The supernatant was then incubated with 100 µl of magnetic beads (Dynal) prebound with 20 µg of either anti-Lsd1 (ab17721, Abcam) or anti-IgG (sc-2027, Santa Cruz) antibody, and IP was conducted overnight at 4 °C. The beads were washed 10 times in 1 ml of RIPA buffer and twice in 100 mM ammonium hydrogen carbonate solution. Detailed results including peptide sequences, peptide scores, ion scores, expect values and Mascot scores are included in Supplementary Data 7. RNA-seq. Total RNA was prepared using the RNeasy kit (Qiagen 74104) followed by DNase treatment with the TURBO DNA-free kit (Life Technologies AM1907) according to the manufacturers' instructions. mRNA was isolated using the Dynabeads mRNA purification kit (Life Technologies 61006) and prepared into an indexed library using the ScriptSeq v2 RNA-Seq Library Preparation Kit (Epicentre SSV21106) according to the manufacturer's instructions. Libraries were quantified and assessed using both the KAPA Library Quantification Kit (KAPA Biosystems KK4824) and a Bioanalyzer 2100 system (Agilent). Indexed libraries were pooled and sequenced with a 100-bp single-end protocol. The raw reads were trimmed to remove adapter sequences (minimum required overlap of 3 bp) and bad-quality bases at the end of each read using Trim Galore (http://www.bioinformatics.babraham.ac.uk/projects/trim_galore). All reads were aligned to version 75 of the Ensembl mouse reference cDNA and ncRNA sequences using Bowtie 1 (ref. 52), allowing for multimapping between reads and transcripts. The MMSEQ gene expression analysis software 53 was used to estimate gene expression levels. The marginal posterior mean and s.d. of the log expression parameter corresponding to each gene were used as the outcome in a Bayesian model selection algorithm implemented in the MMDIFF software 54. In the Esrrb KD analysis, differential expression between biological replicates of the Esrrb KD (2 × KD-1, 2 × KD-2) and control (2 × scr-1, 1 × scr-2) was determined by comparing a baseline model with a single mean log expression parameter to an alternative model in which the two conditions have different means. We specified a prior probability of 0.1 for the alternative model and declared as differentially expressed those genes for which the posterior probability of the alternative model exceeded 0.6. In the PD03 RNA-seq analysis, two biological replicates were used for each condition (2 × ctrl 3 h, 2 × ctrl 24 h, 2 × PD03 3 h and 2 × PD03 24 h). In this analysis, we compared a baseline model specifying a single mean log expression parameter for all samples with an alternative model specifying one fold-change parameter representing the difference in means between the four control samples and the four PD03 samples, and a second fold-change parameter representing the difference in means between the two PD03 3 h samples and the two PD03 24 h samples. In the design matrices used to compare the baseline and alternative models, the first four rows correspond to control samples, the next two rows to PD03 3 h samples and the last two rows to PD03 24 h samples (a sketch reconstructing these matrices is given at the end of the Methods). In this analysis, a high-confidence set of genes misregulated by the PD03 treatment was established by selecting genes for which the posterior probability of the more complex model exceeded 0.95. Within this set, genes for which the absolute estimated 3 h fold change was greater than 1 were labelled 'early responders', while the others were labelled 'gradual responders'. A similar model comparison was used to analyse the DES data, comprising three control samples, four samples taken after 24 h of DES treatment and three samples taken 4 days after DES treatment. In both analyses, the prior distributions for the intercepts and regression coefficients were set as described previously 53. For enrichment analysis of differentially expressed genes, we used MouseMine (www.mousemine.org), setting a Holm-Bonferroni-corrected P value threshold of 0.05. Luciferase assays. Putative wt or mutated Eomes and Elf5 enhancers were cloned into the BamHI site of the pGL3-promoter vector (Promega) and co-transfected with a Renilla plasmid into the TS EGFP line. Site-directed mutagenesis was performed using the QuikChange Site-Directed Mutagenesis Kit (Agilent Technologies) following the manufacturer's instructions. Control lines were generated by co-transfection of Renilla with either the pGL3-promoter or pGL3-basic vector (Promega). Cells were harvested 48 h after transfection and luciferase activity was measured with the Dual-Luciferase Reporter Assay kit (Promega E1910) following the manufacturer's instructions using a Promega GloMax 96-well luminometer running GloMax software. Firefly activity was normalized to Renilla luciferase activity values, which are represented with their s.d.
Primer sequences are provided in Supplementary Table 4.
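The design matrices described in the PD03 RNA-seq analysis can be written out explicitly. The sketch below is a reconstruction from the textual description alone (four control rows, then two PD03 3 h rows, then two PD03 24 h rows); the particular dummy coding of the two fold-change parameters is one plausible choice, not necessarily the authors' exact matrices.

```python
import numpy as np

# Baseline model: a single mean log-expression parameter for all
# eight samples.
baseline = np.ones((8, 1))

# Alternative model: an intercept, a fold-change parameter for the
# control-vs-PD03 contrast, and a second fold-change parameter for the
# PD03 3 h vs 24 h contrast. Rows follow the order stated in the text.
alternative = np.array([
    [1, 0, 0],  # control
    [1, 0, 0],  # control
    [1, 0, 0],  # control
    [1, 0, 0],  # control
    [1, 1, 0],  # PD03, 3 h
    [1, 1, 0],  # PD03, 3 h
    [1, 1, 1],  # PD03, 24 h
    [1, 1, 1],  # PD03, 24 h
])

print(baseline.shape, alternative.shape)  # (8, 1) (8, 3)
```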
9,709
2015-07-24T00:00:00.000
[ "Biology", "Medicine" ]
A Probabilistic Patient Scheduling Model with Time Variable Slots One of the current challenges faced by health centers is to reduce the number of patients who do not attend their appointments. The existence of these patients causes the underutilization of the center's services, which reduces its income and extends patients' access times. In order to reduce these negative effects, several appointment scheduling systems have been developed. With the recent availability of electronic health records, patient scheduling systems that incorporate the patient's no-show prediction are being developed. However, the benefits of including a personalized, individual, variable time slot for each patient in those probabilistic systems have not yet been analyzed. In this article, we propose a scheduling system based on patients' no-show probabilities with variable time slots and a dynamic priority allocation scheme. The system is based on the solution of a mixed-integer programming model that aims at maximizing the expected profits of the clinic, accounting for first and follow-up visits. We validate our findings by performing an extensive simulation study based on real data and specific scheduling requirements provided by a Spanish hospital. The results suggest potential benefits from the implementation of the proposed allocation system with variable slot times. In particular, the proposed model increases the annual cumulated profit by more than 50% while decreasing the waiting list and waiting times by 30% and 50%, respectively, with respect to the actual appointment scheduling system. Introduction One of the current problems faced by health centers is the existence of patients who do not attend their appointments. These patients, commonly known as no-shows, cause damage that includes a drastic reduction in the health center's income and longer patient access times to medical care. No-show rates ranging from 4% to 79.2% [1] and losses reaching 150 million dollars in the United States alone [2] have been reported. In order to reduce these numbers, health centers use reminders and sanctions. However, several studies have shown that these strategies achieve only a slight or moderate reduction in no-show rates [3,4]. Moreover, it has been pointed out that sanctions may limit access to medical centers for patients with limited income [5], and automatic reminder systems might have an important economic impact on the health centers [6]. An alternative to these active strategies is the use of appointment scheduling systems. These systems aim at obtaining an allocation of patients that optimizes given performance measures, such as patient waiting time or doctor idle time. The development of these patient scheduling systems is a very active field; literature review articles covering seminal and more recent approaches include [7][8][9]. This vast productivity is a consequence of the large variety of peculiarities of health centers, which cause the systems to be practically tailor-made. In general terms, these systems can be differentiated according to (1) online vs. offline scheduling; (2) single vs. multiple servers; (3) appointment rules; (4) performance measures; (5) inclusion of environmental factors; and (6) modeling approach. Regarding the first classification, systems can consider online (sequential) or offline (simultaneous) scheduling.
Primary care centers usually assign the appointment at the time the patient requests it (online scheduling), while specialties assign it days later, after checking the doctors' availability and analyzing the required resources (offline scheduling) [10]. Online systems are more common in practice; however, they are more difficult to model. On the other hand, offline systems are becoming increasingly important as requests can be made automatically, with patients notified later. With respect to the second category, queuing theory allows the classification of these systems into single-server or multiple-server according to the number of providers being modelled at a time. The single-server assumption is usually associated with the fact that each doctor has their own set of patients. However, it is known that models based on multiple servers are more efficient, and in some cases, such as laboratories or x-ray tests, they are also optimal [7]. In terms of appointment rules, two parameters have to be considered. The first is the number of patients scheduled in each time slot (block-size), where the number of patients to be seen in the first slot (initial-block) is usually studied separately. The second is the time length of each slot (appointment interval). Each combination of these parameters describes an appointment rule; for example, individual-block/variable-interval describes the rule that assigns only one patient to each slot, with variable time length. Regarding performance measures, these describe the objective pursued when creating a particular scheduling system. Most of the models are built from an optimization problem seeking to achieve the best allocation of patients, although in some cases heuristic rules are created that are then validated by simulation. These measures are usually associated with the cost of patient waiting time or the doctor's idle time, the revenues of attending a patient, or a combination of them. The inclusion of environmental factors into scheduling systems is still underdeveloped, and a large consensus exists on it being one of the most promising lines to explore in future research. Cayirli and Veral [7] point out the importance of including no-shows, walk-ins, urgent patients, emergencies, and second consultations in appointment scheduling systems. Although these factors can be addressed separately, through the center's policies, taking them into account can lead to better results. Gupta and Denton [8] add to these late cancellations, which are often classified along with no-shows, and patient preferences. More recently, Ahmadi et al. [9] add further environmental factors such as patient lack of punctuality, physician lateness, service interruptions, random service times, and other patient appointment requirements. As for the mathematical model used for the solution, most scheduling systems make use of stochastic optimization or dynamic stochastic programming, because these are more robust to random arrivals and random service times. However, the recent availability of Electronic Health Records (EHR) and advances in data science have made it possible to obtain more accurate predictions of these two factors, random arrivals and service times, making the use of deterministic planning systems possible. Most of the deterministic models are formulated as integer or mixed-integer linear programming models aiming at optimizing some performance measure of the schedule.
This type of model is widely used in specialties, where deterministic service times and near-zero no-show probabilities are assumed [9]. In this paper, we propose a deterministic integer linear programming (ILP) model for offline scheduling of patients in the presence of heterogeneous no-shows and variable service times in a specialty service of a public health center. To the best of our knowledge, this is the first offline scheduling model that considers both heterogeneous patient no-shows and variable-length appointment intervals. The system aims to maximize the expected revenues of the clinic considering the different show rates of each patient during a whole week. The model is designed as single-server, given that each doctor is assumed to have their own list of patients. The appointment rule used is individual-block/variable-interval, with no initial-block, so that only one patient is assigned to each variable-length time slot. In order to validate the model, experiments are carried out to reproduce the routine of the psychiatric department of the Fundación Jiménez Díaz Hospital in Madrid, Spain, for an entire year. In this sense, patient show rates are estimated, and three different appointment intervals based on information provided by the center are incorporated into the proposed model. We also take into account other environmental factors such as large waiting lists, larger revenues from scheduling new patients, and the dynamic priority assignment scheme. The performance of the model is compared with other scheduling systems, including the one proposed by Ruiz-Hernández et al. [11], which is currently implemented at the psychiatric department of this hospital, showing a considerable improvement. The rest of the article is structured as follows. In Section 2, we present a literature review of scheduling systems in health centers with heterogeneous no-show probabilities. In Section 3, the probabilistic patient scheduling problem with time-variable slots is introduced. In Section 4, the specific characteristics of scheduling in the health center on which this study is based are described. In Section 5, the numerical experiments carried out to evaluate the model are presented and discussed. Finally, the article ends in Section 6 with the conclusions drawn from the results obtained. Literature Review In this section, we review appointment scheduling systems that take into account variable appointment intervals and heterogeneous no-shows. First, we discuss models that consider different appointment intervals. Then, we move on to models with heterogeneous no-show probabilities. Finally, we present the contributions of our proposal. A summary of the most recent works compared with the proposed method is presented in Table 1. It is important to point out the difference between the appointment interval and the service time. The former is the scheduled length of an appointment, while the latter is the actual time the patient spends at the appointment. In works considering different appointment intervals, it is usually assumed that the service time is deterministic but unknown, so it can be estimated. Cayirli et al. [12] simulate different sequencing and appointment rules under a variety of environmental factors, such as different service times for new and follow-up patients and the presence of homogeneous no-shows.
Huang and Verduzco [13] reclassify patients into different types of visits and determine appointment length by incorporating performance measures such as patient waiting time and physician downtime, in order to converge on the optimal appointment length for each class. Bentayeb et al. [14] developed a new appointment scheduler based on a time-of-service prediction model built using data mining methods. They use classification and regression trees to predict service times with 84% accuracy. They then simulate different scheduling rules to obtain a better sequencing of patients. To the best of our knowledge, there are no research articles addressing variable appointment intervals through an optimization approach for optimal patient assignment. On the other hand, systems that take into account heterogeneous no-show probabilities usually follow a stochastic programming approach in terms of the randomness of arrivals and service times. This means that, regardless of the appointment interval (fixed or variable), the service time is assumed to follow a certain probability distribution. These models are computationally intensive, which means that, instead of using the probabilities directly, patients are normally grouped into classes according to their no-show probabilities. For example, Ratcliffe et al. [15] build a dynamic stochastic scheduler that maximizes profits by controlling two classes of patients with different show rates. They develop analytical bounds and approximations that lead to partially optimal scheduling rules. Muthuraman and Lawley [16] create a sequential scheduling model with exponential service times and multiple patient no-show probabilities, yet the appointment interval is constant. Zacarias et al. [10] study the analytical properties of accounting for different class probabilities and different appointment intervals in the scheduling of a full day. For example, they conclude that, in the presence of homogeneous probabilities and variable appointment times, patients should be scheduled according to the shortest-processing-time-first (SPT) rule. Yan et al. [17] develop a model for scheduling sequential appointments considering patient choice and service fairness simultaneously. They use stochastic programming with distinct groups of patients grouped by no-show probabilities and homogeneous appointment intervals. Samorani and Harris [18] determine the impact of the probabilistic classifier on scheduling appointments with no-shows. They try several classifiers to obtain N classes of patients in terms of their no-show probabilities. They then use a stochastic mixed-integer scheduler with random arrivals and service times and a deterministic, constant appointment interval. An alternative to stochastic optimization is to predict the no-show rates and assume deterministic arrivals and service times. The recent availability of Electronic Health Records (EHR) and advances in data science have made it possible to improve this wide variety of scheduling systems. This is because modern predictive techniques applied to EHRs are capable of estimating the probability of patient no-show, which can be used to improve the scheduling system [4]. Regarding deterministic systems, Savelsbergh and Smilowitz [19] were the first to define no-show probabilities for six different categories of patients depending on their preferences (strong or weak) for three different time windows (AM, noon, or PM).
These environmental conditions were integrated into an online integer linear program to optimize patient allocation. Later on, Ruiz-Hernández et al. [11] proposed a deterministic mixed-integer program. The model is probabilistic in the sense that it incorporates the expected income of the center weighted by the no-show probability predicted for each patient. This was the first model to incorporate individual no-show rates, rather than using an N-class approach to obtain different classes of patients in terms of no-shows. The present paper proposes an offline scheduling system that extends Ruiz-Hernández's work by including the variable appointment interval required for each patient. As will be seen in the experiments, the inclusion of this information considerably improves the performance of the system. The contributions of this paper are the following: (1) the inclusion of richer heterogeneous probabilities that consider information about the patient, the day and time of the appointment, the month, and the indirect waiting time (lead time); (2) the inclusion of variable appointment intervals in a binary linear deterministic problem for an offline scheduling system; (3) the development of a model for a weekly scheduling system with dynamic prioritization and differentiation for new patients that maximizes the expected revenue of the center and indirectly minimizes the doctor's idle time; (4) the application of the model to a health center in Madrid that potentially mitigates the effects of patient no-shows. The Probabilistic Patient Scheduling Problem with Time Variable Slots In this section, the mathematical formulation of the proposed patient scheduling model is presented. As discussed above, it takes into account the patients' no-show probabilities and the consultation times required by each patient. The goal of the model is to maximize the center's expected revenue through the reduction of no-shows. The model distinguishes between two types of patients (first visits and follow-up visits). In addition, it takes into account the time the patient has been waiting for an appointment to assign a priority parameter that is updated every week. It also takes into account some policy requirements that set the minimum proportion of first visits that have to be scheduled each week. Before describing the model, the notation that will be used is presented. Sets: I: days of the week; T: time slots in any given day; K: set of patients to be scheduled for an appointment during the reference week. Parameters: q: proportion of the available slots that must be allocated to first visits; d_k: binary parameter indicating whether patient k ∈ K has high (d_k = 0) or low (d_k = 1) priority; Z_k: binary parameter indicating whether patient k ∈ K is a first visit (Z_k = 1) or a follow-up (Z_k = 0); P_itk: probability that patient k ∈ K will show up to an appointment in {i, t}, for all i ∈ I and t ∈ T; w_z: revenue obtained from either a first visit (z = 1) or a follow-up (z = 0); t_k: number of time slots required by patient k for their consultation; h_1: slack parameter for the minimum number of slots that can be allocated in one day; h_2: slack parameter for the maximum number of slots that can be allocated in one day.
Variables: x_itk: binary variable that takes the value 1 if patient k ∈ K is assigned to slot {i, t}, for all i ∈ I and t ∈ T; x^T_k: binary variable that takes the value 1 if patient k ∈ K is referred back to the waiting list. With this notation, and denoting by ⌈·⌉ the ceiling function (the minimum integer not below its argument), the model is formulated with binary decision variables: x_itk, x^T_k ∈ {0, 1}, ∀i ∈ I, t ∈ T, k ∈ K (9). The objective function maximizes the clinic's expected revenue. Note that when w_0 = w_1 = w, the objective function maximizes the expected show-up rate; that is, it maximizes the weighted show-up rate. The set of constraints (2) ensures that only one patient is seen at a time. Constraints (3) guarantee that if a patient is not scheduled in the reference week, they are sent back to the waiting list; as the variables are binary, they also ensure that each patient is not scheduled more than once in a week. Constraint (4) ensures that at least the minimum time is used for new patients (first visits). Constraint (5) guarantees that low-priority patients are not scheduled until all high-priority patients have been scheduled. Constraints (6) force the next slots of the day to be empty if one slot is, ensuring that either all slots are filled continuously or the rest of the day's slots remain empty. Finally, constraints (7) force the time spent in a day to be within acceptable limits. It should be noted that this model is an extension of the probabilistic model developed in [11]. That study proposed a model to maximize the expected revenues based on no-show probabilities. It considered the distinction between new (first-visit) and old patients and imposed priority for patients with long waiting times. However, that model did not take into account scheduling different appointment times, which can help to attend more patients in a week. Mathematically, this difference is seen in a change from one appointment per unit of time to 5-minute slots. Moreover, the meaning of assigning a slot to patient k ∈ K changes: in our case, it means that the patient is programmed to enter the appointment in that time slot, while the subsequent t_k − 1 slots must all be equal to zero (see equation (2)). Constraints (4) and (5), which are weighted by the time the patient spends in an appointment, also change. Another contribution of our model is that it forces the assigner not to leave empty slots, through the set of constraints (6). Finally, as in our proposal the doctor's working time is not restricted to a fixed number of appointments, constraints (7) are added to ensure that the time limits are not exceeded.
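Since the displayed equations of the formulation are only summarised verbally above, the following is a minimal sketch of the model in Python using the open-source PuLP library, with toy data; it reconstructs the objective and constraints (2)-(5) from their descriptions and omits (6)-(7) for brevity, so it should be read as an interpretation rather than the authors' exact formulation.

```python
# Minimal sketch of the scheduling model, reconstructed from the verbal
# description of the objective and constraints (2)-(5); constraints
# (6)-(7) are omitted for brevity. Toy data; PuLP's default CBC solver.
from math import ceil
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

I, T = range(2), range(8)              # toy instance: 2 days, 8 slots/day
K = range(4)                           # patients in the buffer
Z = {0: 1, 1: 0, 2: 1, 3: 0}           # 1 = first visit, 0 = follow-up
d = {0: 0, 1: 0, 2: 1, 3: 1}           # 0 = high priority, 1 = low
t_req = {0: 4, 1: 2, 2: 4, 3: 2}       # slots t_k needed per consultation
w = {1: 2.0, 0: 1.0}                   # revenue per visit type
P = {(i, t, k): 0.8 for i in I for t in T for k in K}  # show probabilities
q = 0.5                                # first-visit slot proportion

m = LpProblem("weekly_schedule", LpMaximize)
x = LpVariable.dicts("x", (I, T, K), cat=LpBinary)
xT = LpVariable.dicts("xT", K, cat=LpBinary)  # sent back to waiting list

# Objective (1): expected revenue weighted by show probabilities.
m += lpSum(w[Z[k]] * P[i, t, k] * x[i][t][k]
           for i in I for t in T for k in K)

# (3): every patient is either scheduled exactly once or referred back.
for k in K:
    m += lpSum(x[i][t][k] for i in I for t in T) + xT[k] == 1

# (2): one patient at a time; a patient starting at slot s occupies the
# following t_k - 1 slots as well, and must fit within the day.
for i in I:
    for t in T:
        m += lpSum(x[i][s][k] for k in K
                   for s in range(max(0, t - t_req[k] + 1), t + 1)) <= 1
for i in I:
    for k in K:
        for t in range(len(T) - t_req[k] + 1, len(T)):
            m += x[i][t][k] == 0

# (4): minimum slot time reserved for first visits.
m += lpSum(t_req[k] * x[i][t][k] for i in I for t in T
           for k in K if Z[k] == 1) >= ceil(q * len(I) * len(T))

# (5): low-priority patients only once every high-priority patient
# has been scheduled (scheduled(kh) = 1 - xT[kh] by constraint (3)).
for k in K:
    if d[k] == 1:
        for kh in K:
            if d[kh] == 0:
                m += lpSum(x[i][t][k] for i in I for t in T) <= 1 - xT[kh]

m.solve()
for k in K:
    for i in I:
        for t in T:
            if x[i][t][k].value() == 1:
                print("patient", k, "-> day", i, "slot", t)
```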
The Scheduling Process The scheduling process in the reference health facility used in this study, and for which the model is proposed, works as follows: (1) A waiting list is available with the patient's information for the appointment, including the number of weeks on the waiting list (sojourn), whether the patient is a first (new) visit or not, and the patient's consultation time. New patients are added to the list at the time the appointment is requested and their sojourn is initialized at one. (2) The list of patients (referred to as a buffer) to be passed to the scheduler each week is constructed as follows: (a) The system first selects the patients with the longest waiting time (sojourn) and assigns them high priority (d_k = 0). This group contains both first visits (Z_k = 1) and follow-up visits (Z_k = 0). (b) If the legal minimum number of slots dedicated to follow-up patients has not been filled (⌈(1 − q)|I||T|⌉), the system sequentially adds patients in decreasing order of sojourn until this condition is met (or the waiting list is empty). In all but the last iteration, patients are assigned high priority (d_k = 0). This group contains both first visits (Z_k = 1) and follow-up visits (Z_k = 0). (c) Finally, if the number of first visits in the buffer is below the legal requirement (⌈q|I||T|⌉), the system sequentially adds first visits in decreasing order of sojourn until the legal requirement is met or there are no first visits left to include. These patients have low priority (d_k = 1) and are first visits (Z_k = 1). (3) After the buffer is selected, the system passes the list of candidates to be scheduled to the probabilistic model with variable time slots (1). Once the appointment schedule is obtained, the patients who did not receive an appointment are sent back to the waiting list with their original sojourn values.
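The buffer construction in step (2) can be sketched directly from this description. In the code below, the size of the initial selection in step (a) is not fully specified in the text, so the group sharing the maximal sojourn is taken; all names are illustrative.

```python
from math import ceil

def build_buffer(waiting_list, q, n_days, slots_per_day):
    """Sketch of the weekly buffer construction (steps a-c above).
    waiting_list is sorted by decreasing sojourn; each entry is a dict
    with 'sojourn' (weeks waited), 'first_visit' (bool) and 'slots'
    (the t_k of the consultation). d = 0 marks high priority."""
    total_slots = n_days * slots_per_day
    buffer, pool = [], list(waiting_list)

    # (a) the longest-waiting patients get high priority; we take the
    # group sharing the maximal sojourn (an assumption, see above)
    if pool:
        top_sojourn = pool[0]["sojourn"]
        while pool and pool[0]["sojourn"] == top_sojourn:
            patient = pool.pop(0)
            patient["d"] = 0
            buffer.append(patient)

    # (b) add patients by decreasing sojourn until the follow-up slot
    # minimum ceil((1 - q)|I||T|) is met
    min_followup_slots = ceil((1 - q) * total_slots)

    def followup_slots():
        return sum(p["slots"] for p in buffer if not p["first_visit"])

    last_added = None
    while pool and followup_slots() < min_followup_slots:
        last_added = pool.pop(0)
        last_added["d"] = 0
        buffer.append(last_added)
    if last_added is not None:
        last_added["d"] = 1  # the text's 'all but the last iteration' rule

    # (c) top up first visits, with low priority, to the legal quota
    # ceil(q|I||T|) or until none are left
    min_first = ceil(q * total_slots)
    firsts = [p for p in pool if p["first_visit"]]
    while firsts and sum(p["first_visit"] for p in buffer) < min_first:
        patient = firsts.pop(0)
        pool.remove(patient)
        patient["d"] = 1
        buffer.append(patient)

    return buffer, pool  # pool is what remains of the waiting list
```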
Numerical Experiments In order to evaluate the performance of our model, an experiment has been conducted that reproduces the routine of the psychiatric department of the Fundación Jiménez Díaz hospital in Madrid. Probabilities Estimation. To estimate the no-show probability for each patient, we used a database with 76,658 appointments belonging to 5,261 patients. The average no-show rate in the dataset is 14.05%. Each appointment was described by 97 predictors commonly used to predict no-shows [1]. This set of predictors contained demographic variables, a set of variables that characterize the patient's previous attendance behavior, variables about the patient's condition, and variables related to the appointment. A logistic regression model with L1 regularization was used to obtain the no-show probabilities. This model, commonly known as lasso regression, has been previously used to predict no-shows because of its ability to automatically select variables and because of its interpretability [20]. The variables included in the model contain the day of the week, time, and lead time, which allowed us to obtain specific and differentiated probabilities for each patient, day of the week, month, time, and sojourn value. Experimental Setup. We now describe the procedure used to reproduce the scheduling process during a week in the reference center (presented in Section 4). For an illustration of this process, see Figure 1: flow chart of the experimental setup. The experiment simulates 48 weeks of a doctor attending patients for six hours from Monday to Friday. These hours are equivalent to 72 slots of 5 minutes each day, for a total of 360 slots to be covered throughout the week. Therefore, it is assumed that the doctor does not use extra slots (parameter h_2 = 0). Each appointment lasts between 20 and 30 minutes, which means that each patient requires between 4 and 6 slots per consultation. Consequently, each week the doctor attends between 60 and 90 patients, with an average of 75, if the proposed probabilistic model with variable time is adopted. The experiment assumes that the center has a list of patients where those who have been waiting the longest have waited 8 weeks. This list is generated at random. The simulation is performed as follows: (i) At the beginning of each week, a set of patients (℘) who ask for an appointment is generated. This is done by generating a random number according to a discrete uniform distribution on [62, 66]. This number is used to randomly select patients from the database so that the proportion of first visits is respected. The selected patients are added to the end of the waiting list with a sojourn value of one. These patients are common to all the scheduling approaches. In the case of the model with variable time slots, we assume that consultation times follow a discrete uniform distribution on [4, 6]. (ii) As described above, the buffer of patients to be passed to the scheduler is obtained from the waiting list. The remaining parameter values are shown in Table 2; it is important to point out that these values have been provided by the health center on which this study is based. In this way, the simulation experiments reproduce the expected performance of the center if the proposed system were implemented during an entire year. Results. We now present the results obtained in the experiment. The performance of the proposed model is compared with the following: (i) the system currently implemented in the health center, which assigns the patient to the first available slot with a fixed duration of 30 minutes (FIFO constant); (ii) the system which would assign each patient to the first available slot but would use the estimate of the number of slots they would need (FIFO variable); and (iii) the model proposed by Ruiz-Hernández et al. [11], which assigns patients based on their probabilities using a constant appointment time per patient (time constant). Our model will be referred to as time variable. All models are coded with the CPLEX solver for mathematical optimization in MATLAB R2020a, and the experiments have been conducted on a PC with an Intel Core i9 processor (2.6-4.5 GHz) and 32 GB RAM. Table 3 shows the average results of the simulation over the 48 weeks, including computing times of the different methods. As can be seen, the proposed model achieves better results in terms of the number of patients in the queue and indirect waiting times. Similarly, the center's profit and the doctor's idle time are increased and reduced, respectively, with this approach. These results are achieved while keeping no-show rates at acceptable levels, just behind those of the time constant scheduling model. With respect to the computing time, our model exceeds the time required by existing methods. Nonetheless, the total computing time for allocating all the patients in a week (~32 seconds) is perfectly acceptable for an offline scheduling system. For a more graphical assessment of the results, the cumulated performance measures over the course of each week are presented. Figure 2(a) shows the number of people on the waiting list for each week. In constant-time models, since no more than 60 appointments can be assigned per week, the number of people on the waiting list increases over time. On the other hand, if we compare the variable-time models, we can see that the probabilistic model presents better results, by making an assignment that minimizes the effects of the no-shows. It is important to note that the proposed model does not manage to keep the waiting list steady. This is due to the fact that the number of arriving patients is too small in relation to the number that can be assigned. This could be solved either by increasing the number of patients arriving each week or by reducing the center's operating times. Figure 2(b) shows the sojourn value, i.e., the average number of weeks a patient has to wait before being scheduled for an appointment.
As in the previous graph, in constant-time systems the average waiting time for patients increases over the weeks. This is a consequence of not having enough capacity to assign all patients. In contrast, the variable FIFO model remains stable over time but fails to reduce the sojourn value throughout the simulation. Finally, the probabilistic model with variable time not only decreases the waiting time but takes it to its minimum values. Figure 2(c) shows the cumulated profits. The probabilistic models obtain higher profits than the rest. Of these, the model with variable time obtains better revenues, since under this approach a greater number of patients can be seen and their attendance probability is maximized. Figure 2(d) shows the cumulated doctor's inactivity time. It should be noted that the constraints added in the probabilistic model ensure that no slots can be left empty throughout the day unless subsequent appointments are not scheduled; see (6). The same applies to the FIFO model. This means that the inactive time of the doctors is highly associated with the number of no-shows. The other factor with a direct impact on the doctor's inactive time is the difference between the deterministic service time and the scheduled time, which directly affects constant-time models. As in the previous graph, variable-time systems offer better results, as they take into account the real patient service times. Among them, the proposed model has the lowest cumulated doctor inactive time. The same pattern can be seen in Figure 2(e), which shows the number of cumulated no-shows. The models with variable time present fewer no-shows, and within them, the probabilistic model presents fewer cumulated no-shows. Figure 2(f) presents the complement of the previous graph, that is, the number of assigned patients who showed up. Again, models with variable time present better indicators over time since they can allocate a larger number of patients. Conclusions In this article, we have addressed the problem of no-shows in health centers. This problem causes significant damage to the centers, ranging from increased waiting times for patients to severe financial losses. To address this problem, we have proposed a scheduling system based on a probabilistic model with variable time slots together with a dynamic priority allocation scheme. The system is based on the solution of a mixed-integer programming model that maximizes the expected profits of the clinic, differentiating between first and follow-up visits. The model minimizes the impact of no-shows on the expected revenues based on the patients' show probabilities and their appointment times. The model is based on individual estimates of patients' show probabilities. These probabilities have been estimated using a logistic regression model with L1 regularization (lasso), because of its ability to select variables automatically. In addition, the model can handle different patient appointment times. These values have been simulated based on information provided by the health center from which the data were extracted. The experiments show that, while both the waiting list and the waiting times increase in the models with constant time, the proposed model is able to reduce the waiting list by 30% and the waiting times by 50% with respect to their values at the beginning of the simulation.
The proposed model is also capable of increasing the cumulative earnings by more than 50%, while reducing the cumulative doctor's idle time by more than 40%, with respect to the current system used at the health center. There are several opportunities for future research. The first is to extend the probabilistic variable-time model to allow overbooking. Similarly, the model could be extended to account for additional factors affecting scheduling, such as walk-ins, early cancelations, and patient preferences. Finally, appointment times could be estimated, just like the attendance probabilities, in order to obtain more realistic results. To conclude, the proposed model schedules patients in a way that minimizes the probability of a missed appointment while allowing more patients to be seen. It has proven to dramatically outperform the models with constant time, as well as the variable time extension of the current hospital system. Data Availability The data used to support the findings of this study were supplied by the Jimenez Díaz Hospital under license and so cannot be made freely available. Requests for access to these data should be made to Enrique Baca-García.
7,173.6
2020-09-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Transverse single-spin asymmetries in proton-proton collisions at the AFTER@LHC experiment in a TMD factorisation scheme The inclusive large-$p_T$ production of a single pion, jet or direct photon, and Drell-Yan processes, are considered for proton-proton collisions in the kinematical range expected for the fixed-target experiment AFTER, proposed at LHC. For all these processes, predictions are given for the transverse single-spin asymmetry, $A_N$, computed according to a Generalised Parton Model previously discussed in the literature and based on TMD factorisation. Comparisons with the results of a collinear twist-3 approach, recently presented, are made and discussed. I. INTRODUCTION AND FORMALISM Transverse Single-Spin Asymmetries (TSSAs) have been abundantly observed in inclusive proton-proton experiments for a long time; at large enough energies and $p_T$ values, understanding them in terms of basic quark-gluon QCD interactions is a difficult and fascinating task, which has always been one of the major challenges for QCD. Since the 1990s two different, although somewhat related, approaches have attempted to tackle the problem. One is based on the collinear QCD factorisation scheme and involves, as the basic quantities that can generate single-spin dependences, higher-twist quark-gluon-quark correlations in the nucleon. The second approach is based on a physical, although unproven, generalisation of the parton model, with the inclusion, in the factorisation scheme, of transverse momentum dependent partonic distribution and fragmentation functions (TMDs), which can also generate single-spin dependences. The twist-3 correlations are related to moments of some TMDs. We refer to Refs. [1][2][3][4], and references therein, for a more detailed account of the two approaches, and possible variations, with all relevant citations. Following Ref. [3], we denote the first approach by CT-3, while the second one is, as usual, denoted by GPM. In this paper we consider TSSAs at the proposed AFTER@LHC experiment, in which high energy protons extracted from the LHC beam would collide on a (polarised) fixed target of protons, with high luminosity. For a description of the physics potential of this experiment see Ref. [5], and for the latest technical details and its importance for TMD studies see, for example, Ref. [6]. Due to its features, AFTER@LHC is an ideal experiment to study and understand the origin of SSAs and, in general, the role of QCD interactions in high energy hadronic collisions. We recall our formalism by considering the Transverse Single-Spin Asymmetry $A_N$, measured in $p\,p^{\uparrow} \to h\,X$ inclusive reactions and defined as $$A_N = \frac{d\sigma^{\uparrow} - d\sigma^{\downarrow}}{d\sigma^{\uparrow} + d\sigma^{\downarrow}}\,, \quad (1)$$ where $\uparrow$, $\downarrow$ are opposite spin orientations perpendicular to the $x$-$z$ scattering plane, in the $p\,p^{\uparrow}$ c.m. frame. We define the $\uparrow$ direction as the $+\hat{y}$-axis, with the unpolarised proton moving along the $+\hat{z}$-direction.
In such a process the only large scale is the transverse momentum $p_T = |(\boldsymbol{p}_h)_x|$ of the final hadron. In the GPM, $A_N$ originates mainly from two spin and transverse momentum effects, one introduced by Sivers in the partonic distributions [7,8], and one by Collins in the parton fragmentation process [9]. According to the Sivers effect the number density of unpolarised quarks $q$ (or gluons) with intrinsic transverse momentum $\boldsymbol{k}_\perp$ inside a transversely polarised proton $p^{\uparrow}$, with three-momentum $\boldsymbol{P}$ and spin polarisation vector $\boldsymbol{S}$, can be written as $$\hat f_{q/p^{\uparrow}}(x, \boldsymbol{k}_\perp) = f_{q/p}(x, k_\perp) + \frac{1}{2}\,\Delta^N f_{q/p^{\uparrow}}(x, k_\perp)\; \boldsymbol{S} \cdot (\hat{\boldsymbol{P}} \times \hat{\boldsymbol{k}}_\perp)\,, \quad (2)$$ where $x$ is the proton light-cone momentum fraction carried by the quark, $f_{q/p}(x, k_\perp)$ is the unpolarised TMD ($k_\perp = |\boldsymbol{k}_\perp|$) and $\Delta^N f_{q/p^{\uparrow}}(x, k_\perp)$ is the Sivers function; $\hat{\boldsymbol{P}} = \boldsymbol{P}/|\boldsymbol{P}|$ and $\hat{\boldsymbol{k}}_\perp = \boldsymbol{k}_\perp/k_\perp$ are unit vectors. Notice that the Sivers function is most often denoted as $f_{1T}^{\perp q}(x, k_\perp)$ [10]; this notation is related to ours by [11] $$\Delta^N f_{q/p^{\uparrow}}(x, k_\perp) = -\frac{2 k_\perp}{m_p}\, f_{1T}^{\perp q}(x, k_\perp)\,, \quad (3)$$ where $m_p$ is the proton mass. Similarly, according to the Collins effect the number density of unpolarised hadrons $h$ with transverse momentum $\boldsymbol{p}_\perp$ resulting from the fragmentation of a transversely polarised quark $q^{\uparrow}$, with three-momentum $\boldsymbol{q}$ and spin polarisation vector $\boldsymbol{S}_q$, can be written as $$\hat D_{h/q^{\uparrow}}(z, \boldsymbol{p}_\perp) = D_{h/q}(z, p_\perp) + \frac{1}{2}\,\Delta^N D_{h/q^{\uparrow}}(z, p_\perp)\; \boldsymbol{S}_q \cdot (\hat{\boldsymbol{q}} \times \hat{\boldsymbol{p}}_\perp)\,, \quad (4)$$ where $z$ is the parton light-cone momentum fraction carried by the hadron, $D_{h/q}(z, p_\perp)$ is the unpolarised TMD ($p_\perp = |\boldsymbol{p}_\perp|$) and $\Delta^N D_{h/q^{\uparrow}}(z, p_\perp)$ is the Collins function; $\hat{\boldsymbol{q}} = \boldsymbol{q}/|\boldsymbol{q}|$ and $\hat{\boldsymbol{p}}_\perp = \boldsymbol{p}_\perp/p_\perp$ are unit vectors. Notice that the Collins function is most often denoted as $H_1^{\perp q}(z, p_\perp)$ [10]; this notation is related to ours by [11] $$\Delta^N D_{h/q^{\uparrow}}(z, p_\perp) = \frac{2 p_\perp}{z\, M_h}\, H_1^{\perp q}(z, p_\perp)\,, \quad (5)$$ where $M_h$ is the hadron mass. According to the GPM formalism [1,2,12], $A_N$ can then be written as the sum of the Collins and Sivers contributions, Eq. (6). These two contributions were recently studied, respectively, in Refs. [1] and [2], and are given by Eqs. (8) and (7). For details and a full explanation of the notation in those equations we refer to Ref. [12] (where $p_\perp$ is denoted as $k_{\perp C}$). It suffices to notice here that $J(p_\perp)$ is a kinematical factor, which at $O(p_\perp/E_h)$ equals 1. The phase factor $\cos(\phi_a)$ in Eq. (7) originates directly from the $\boldsymbol{k}_\perp$ dependence of the Sivers distribution [$\boldsymbol{S} \cdot (\hat{\boldsymbol{P}} \times \hat{\boldsymbol{k}}_\perp)$, Eq. (2)]. The (suppressing) phase factor $\cos(\phi_a + \varphi_1 - \varphi_2 + \phi_\pi^H)$ in Eq. (8) originates from the $\boldsymbol{k}_\perp$ dependence of the unintegrated transversity distribution $\Delta_T q$, the polarised elementary interaction and the spin-$p_\perp$ correlation in the Collins function. The explicit expressions of $\varphi_1$, $\varphi_2$ and $\phi_\pi^H$ in terms of the integration variables can be found via Eqs. (60)-(63) in [12] and Eqs. (35)-(42) in [13]. The $\hat M_i^0$'s are the three independent hard scattering helicity amplitudes describing the lowest order QCD interactions; the sum of their moduli squared is proportional to the elementary unpolarised cross section $d\hat\sigma^{ab \to cd}$, Eq. (9). The explicit expressions of the combinations of $\hat M_i^0$'s which give the QCD dynamics in Eqs. (7) and (8) can be found, for all possible elementary interactions, in Ref. [12] (see also Ref. [1] for a correction to one of the products of amplitudes). The QCD scale is chosen as $Q = p_T$. The denominator of Eq. (1) or (6) is twice the unpolarised cross section and is given in our TMD factorisation by the same expression as in Eq. (7), where one simply replaces the factor $\Delta^N f_{a/p^{\uparrow}} \cos(\phi_a)$ with $2 f_{a/p}$. II. $A_N$ FOR SINGLE PION, JET AND DIRECT PHOTON PRODUCTION We present here our results for $A_N$, Eq. (1), based on our GPM scheme, Eqs. (6), (7) and (8).
The TMDs which enter these equations are those extracted from the analysis of Semi-Inclusive Deep Inelastic Scattering (SIDIS) and $e^+e^-$ data [14][15][16][17], adopting simple factorised forms, which we recall here. For the unpolarised TMD partonic distributions and fragmentation functions we have, respectively, $$f_{q/p}(x, k_\perp) = f_{q/p}(x)\, \frac{e^{-k_\perp^2/\langle k_\perp^2\rangle}}{\pi \langle k_\perp^2\rangle} \quad (10)$$ and $$D_{h/q}(z, p_\perp) = D_{h/q}(z)\, \frac{e^{-p_\perp^2/\langle p_\perp^2\rangle}}{\pi \langle p_\perp^2\rangle}\,. \quad (11)$$ The Sivers function is parametrised as $$\Delta^N f_{q/p^{\uparrow}}(x, k_\perp) = 2\, \mathcal{N}^S_q(x)\, h(k_\perp)\, f_{q/p}(x, k_\perp)\,, \quad (12)$$ where $$\mathcal{N}^S_q(x) = N^S_q\, x^{\alpha}(1-x)^{\beta}\, \frac{(\alpha+\beta)^{\alpha+\beta}}{\alpha^{\alpha}\beta^{\beta}}\,, \quad (13)$$ with $|N^S_q| \le 1$, and $$h(k_\perp) = \sqrt{2e}\, \frac{k_\perp}{M_1}\, e^{-k_\perp^2/M_1^2}\,. \quad (14)$$ Similarly, the quark transversity distribution, $\Delta_T q(x, k_\perp)$, and the Collins fragmentation function, $\Delta^N D_{h/q^{\uparrow}}(z, p_\perp)$, have been parametrised in an analogous factorised form, where $\Delta q(x)$ is the usual collinear quark helicity distribution and the corresponding normalisation factors are bounded by one. All details concerning the motivations for such a choice, the values of the parameters and their derivation can be found in Refs. [14][15][16][17]. We do not repeat them here, but in the caption of each figure we give the corresponding references which fix all necessary values. We present our results on $A_N$ for the process $p\,p^{\uparrow} \to \pi\,X$ at the expected AFTER@LHC energy ($\sqrt{s} = 115$ GeV) in Figs. 1-3. Following Refs. [1,2], our results are given for two possible choices of the SIDIS TMDs, and are shown as a function of $p_T$ at two fixed $x_F$ values (Fig. 1), as a function of $x_F$ at two fixed rapidity $y$ values (Fig. 2) and as a function of rapidity at one fixed $p_T$ value (Fig. 3). $x_F$ is the usual Feynman variable, defined as $x_F = 2p_L/\sqrt{s}$, where $p_L = (p_h)_z$ is the $z$-component of the final hadron momentum. Notice that, in our chosen reference frame, forward production with respect to the polarised proton corresponds to negative values of $x_F$. The uncertainty bands reflect the uncertainty in the determination of the TMDs and are computed according to the procedure explained in the Appendix of Ref. [15]. More information can be found in the figure captions. The analogous results for single direct photon production are shown in Figs. 4-6, and those for single jet production in Figs. 7-9. In these cases, obviously, there is no fragmentation process and only the Sivers effect contributes to $A_N$, with $D_{h/c}(z, p_\perp)$ simply replaced by $\delta(z-1)\,\delta^2(\boldsymbol{p}_\perp)$ in Eq. (7) (see Ref. [2] for further details). III. $A_N$ FOR DRELL-YAN PROCESSES Drell-Yan (D-Y) processes are expected to play a crucial role in our understanding of the origin, at the partonic level, of TSSAs. For such processes, as for SIDIS and contrary to single hadron production, TMD factorisation has been proven to hold, so that there is a general consensus that the Sivers effect should be visible via TSSAs in D-Y. Moreover, the widely accepted interpretation of the QCD origin of TSSAs as final or initial state interactions of the scattering partons [18] leads to the conclusion that the Sivers function has opposite signs in SIDIS and D-Y processes [19]; this remains to be verified experimentally. Predictions for the Sivers $A_N$ in D-Y at different possible experiments were given in Ref. [20], which we follow here. FIG. 1. Our theoretical estimates for $A_N$ vs. $p_T$ at $\sqrt{s} = 115$ GeV, $x_F = -0.2$ (upper plots) and $x_F = -0.4$ (lower plots) for inclusive $\pi^{\pm}$ and $\pi^0$ production in $p\,p^{\uparrow} \to \pi\,X$ processes, computed according to Eqs. (6)-(8) of the text. The contributions from the Sivers and the Collins effects are added together. The computation is performed adopting the Sivers and Collins functions of Refs. [14,16] (SIDIS 1 - KRE, left panels), and of Refs. [15,17] (SIDIS 2 - DSS, right panels). The overall statistical uncertainty band, also shown, is the envelope of the two independent statistical uncertainty bands obtained following the procedure described in Appendix A of Ref. [15].
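Before turning to Drell-Yan processes, the factorised forms of Eqs. (10) and (12)-(14) can be made concrete with a small numerical sketch. The parameter values (the Gaussian width, M1, N, alpha, beta) and the toy collinear density below are illustrative assumptions only, not the fitted values of Refs. [14-17].

import numpy as np

# Illustrative parameter values (assumptions, not the fitted ones)
KT2_MEAN = 0.25   # <k_perp^2> in GeV^2
M1 = 0.60         # Sivers mass parameter in GeV
N_S, ALPHA, BETA = 0.4, 0.7, 3.0

def f_unpol(x, kt, collinear_pdf):
    # Unpolarised TMD, Eq. (10): collinear density times a Gaussian in k_perp.
    return collinear_pdf(x) * np.exp(-kt**2 / KT2_MEAN) / (np.pi * KT2_MEAN)

def sivers(x, kt, collinear_pdf):
    # Sivers function, Eq. (12): Delta^N f = 2 N(x) h(k_perp) f(x, k_perp).
    n_x = N_S * x**ALPHA * (1 - x)**BETA * \
          (ALPHA + BETA)**(ALPHA + BETA) / (ALPHA**ALPHA * BETA**BETA)
    h_kt = np.sqrt(2 * np.e) * (kt / M1) * np.exp(-kt**2 / M1**2)
    return 2 * n_x * h_kt * f_unpol(x, kt, collinear_pdf)

toy_pdf = lambda x: x**-0.5 * (1 - x)**3     # toy collinear density
print(sivers(0.1, 0.3, toy_pdf))             # sample evaluation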
In Ref. [20] predictions were given for the $p^{\uparrow} p \to \ell^+ \ell^- X$ D-Y process in the $p^{\uparrow}-p$ c.m. frame, in which one observes the four-momentum $q$ of the final $\ell^+\ell^-$ pair. Notice that $q^2 = M^2$ is the large scale in the process, while $q_T = |\boldsymbol{q}_T|$ is the small one. In order to collect data at all azimuthal angles, one defines the weighted spin asymmetry $A_N^{\sin(\phi_\gamma - \phi_S)}$, where $\phi_\gamma$ and $\phi_S$ are respectively the azimuthal angles of the $\ell^+\ell^-$ pair and of the proton transverse spin, and the Sivers contribution is defined in analogy with Eq. (2). For the unpolarised TMD and the Sivers function we adopt the same expressions as in Eqs. (10) and (12). FIG. 2. Our theoretical estimates for $A_N$ vs. $x_F$ at $\sqrt{s} = 115$ GeV, $y = -1.5$ (upper plots) and $y = -3.0$ (lower plots) for inclusive $\pi^{\pm}$ and $\pi^0$ production in $p\,p^{\uparrow} \to \pi\,X$ processes, computed according to Eqs. (6)-(8) of the text. The contributions from the Sivers and the Collins effects are added together. The computation is performed adopting the Sivers and Collins functions of Refs. [14,16] (SIDIS 1 - KRE, left panels), and of Refs. [15,17] (SIDIS 2 - DSS, right panels). The overall statistical uncertainty band, also shown, is the envelope of the two independent statistical uncertainty bands obtained following the procedure described in Appendix A of Ref. [15]. Notice that we consider here the $p\,p^{\uparrow} \to \ell^+ \ell^- X$ D-Y process in the $p-p^{\uparrow}$ c.m. frame. For such a process the TSSA is given in Ref. [20]. Our results for the Sivers asymmetry $A_N^{\sin(\phi_\gamma - \phi_S)}$ at AFTER@LHC, obtained following Ref. [20], Eq. (23), and using the SIDIS-extracted Sivers function reversed in sign, are shown in Fig. 10. Further details can be found in the captions of these figures. IV. COMMENTS AND CONCLUSIONS Some final comments and further details might help in understanding the importance of the measurements of the TSSAs at AFTER@LHC. FIG. 3. (caption fragment) Computed adopting the functions of Refs. [15,17] (SIDIS 2 - DSS, right panel). The overall statistical uncertainty band, also shown, is the envelope of the two independent statistical uncertainty bands obtained following the procedure described in Appendix A of Ref. [15]. • The values of $A_N$ found for pion production can be as large as 10% for $\pi^{\pm}$, while they are smaller for $\pi^0$. They result from the sum of the Sivers and the Collins effects. The relative importance of the two contributions varies according to the kinematical regions and the set of distributions and fragmentation functions adopted. As a general tendency, the contribution from the Sivers effect is larger than the Collins contribution with the SIDIS 1 - KRE set, while the opposite is true for the SIDIS 2 - DSS set. The values found here are in agreement, both in sign and qualitative magnitude, with the values found in Ref. [3] within the collinear twist-3 (CT-3) approach. • The results for single photon production are interesting; they isolate the Sivers effect and our predictions show that they can reach values of about 5%, with a reduced uncertainty band. We find positive values of $A_N$, as the relative weight of the quark charges leads to a dominance of the $u$ quark, and the Sivers function $\Delta^N f_{u/p^{\uparrow}}$ is positive [14,15]. Our results, obtained within the GPM, have a similar magnitude to those obtained in Refs. [3] and [21] within the CT-3 approach, but have the opposite sign.
Thus, a measurement of $A_N$ for single photon production, although difficult, would clearly discriminate between the two approaches. • The values of $A_N$ for single jet production, which might be interesting as they also receive no contribution from the Collins effect, turn out to be very small and compatible with zero, due to a strong cancellation between the $u$ and $d$ quark contributions. The same result is found in Ref. [3]. • The measurement of $A_N$ in D-Y processes at AFTER@LHC is a most interesting one. In such a case TMD factorisation is believed to hold and the Sivers asymmetry should show the expected sign change with respect to SIDIS processes [18,19]. Our computations, Fig. 10, predict a clear asymmetry, which can be as sizeable as 10%, with a definite sign, even within the uncertainty band. Both the results of Ref. [3] and the results of this paper obtain solid, non-negligible values for the TSSA $A_N$ measurable at the AFTER@LHC experiment. The two sets of results are based on different approaches, respectively the CT-3 and the GPM factorisation schemes. While the magnitude of $A_N$ is very similar in the two cases, the signs can be different; in particular, the TSSA for direct photon production, $p\,p^{\uparrow} \to \gamma\,X$, has opposite signs in the two schemes. In this paper we have also considered azimuthal asymmetries in polarised D-Y processes, related to the Sivers effect. As explained above, in this case, due to the presence of a large and a small scale, as in SIDIS, TMD factorisation is valid, with the expectation of an opposite sign of the Sivers function in SIDIS and D-Y processes. This prediction, too, can be checked at AFTER@LHC. FIGS. 4-6. (caption fragments) Our theoretical estimates of $A_N$ for inclusive photon production in $p\,p^{\uparrow} \to \gamma\,X$ processes, computed according to Eqs. (6) and (7) of the text. Only the Sivers effect contributes. The computation is performed adopting the Sivers functions of Ref. [14] (SIDIS 1, left panels) and of Ref. [15] (SIDIS 2, right panels). The overall statistical uncertainty band, also shown, is obtained following the procedure described in Appendix A of Ref. [15]. FIGS. 7-8. (caption fragments) Our theoretical estimates of $A_N$ at $x_F = -0.2$ (upper plots) and $x_F = -0.4$ (lower plots) for inclusive single jet production in $p\,p^{\uparrow} \to \mathrm{jet}\,X$ processes, computed according to Eqs. (6) and (7) of the text. Only the Sivers effect contributes. The computation is performed adopting the Sivers functions of Ref. [14] (SIDIS 1, left panels) and of Ref. [15] (SIDIS 2, right panels).
The overall statistical uncertainty band, also shown, is obtained following the procedure described in Appendix A of Ref. [15]. FIG. 9. Our theoretical estimates for $A_N$ vs. $y$ at $\sqrt{s} = 115$ GeV and $p_T = 3$ GeV, for inclusive single jet production in $p\,p^{\uparrow} \to \mathrm{jet}\,X$ processes, computed according to Eqs. (6)-(8) of the text. Only the Sivers effect contributes. The computation is performed adopting the Sivers functions of Ref. [14] (SIDIS 1, left panel) and of Ref. [15] (SIDIS 2, right panel). The overall statistical uncertainty band, also shown, is the envelope of the two independent statistical uncertainty bands obtained following the procedure described in Appendix A of Ref. [15]. FIG. 10. (caption fragment) Our estimates of the Sivers asymmetry in D-Y processes as expected at AFTER@LHC. The results are presented as a function of $M$ (upper plots), $x_F$ (middle plots) and the momentum fraction $x^{\uparrow}$ of the quark inside the polarised proton (lower plots). The other kinematical variables are either fixed or integrated, as indicated in each figure. They are computed according to Ref. [20] and Eq. (23), adopting the Sivers functions of Ref. [14] (SIDIS 1, left panels) and of Ref. [15] (SIDIS 2, right panels), reversed in sign. The overall statistical uncertainty band, also shown, is obtained following the procedure described in Appendix A of Ref. [15].
4,543
2015-04-15T00:00:00.000
[ "Physics" ]
Relevance of Detail in Basal Topography for Basal Slipperiness Inversions: A Case Study on Pine Island Glacier, Antarctica Given high-resolution satellite-derived surface elevation and velocity data, ice-sheet models generally estimate mechanical basal boundary conditions using surface-to-bed inversion methods. In this work, we address the sensitivity of results from inversion methods to the accuracy of the bed elevation data on Pine Island Glacier. We show that misfit between observations and model output is reduced when high-resolution bed topography is used in the inverse model. By looking at results with a range of detail included in the bed elevation, we consider the separation of basal drag due to the bed topography (form drag) and that due to inherent bed properties (skin drag). The mean value of inverted basal shear stress, i.e., skin drag, is reduced when more detailed topography is included in the model. This suggests that without a fully resolved bed a significant amount of the basal shear stress recovered from inversion methods may be due to the unresolved bed topography. However, the spatial structure of the retrieved fields is robust as the bed accuracy is varied; the fields are instead sensitive to the degree of regularization applied to the inversion. While the implications for the future temporal evolution of PIG are not quantified here directly, our work raises the possibility that skin drag may be overestimated in the current generation of numerical ice-sheet models of this area. These shortcomings could be overcome by inverting simultaneously for both bed topography and basal slipperiness. INTRODUCTION Pine Island Glacier (PIG) has been one of the fastest flowing and most rapidly retreating glaciers in Antarctica over the past few decades (e.g., Mouginot et al., 2014;Rignot et al., 2014;Smith et al., 2017). If retreat continues, PIG has the potential to contribute significantly to future global sea-level rise (e.g., Favier et al., 2014;Seroussi et al., 2014).
To make reliable predictions about its future requires an accurate description of the mechanical boundary at the base of the ice; how this boundary is treated introduces significant uncertainty into ice-sheet models (e.g., Ritz et al., 2015). Studies suggest that a detailed knowledge of the bed is particularly important in the vicinity of the grounding line (Schoof, 2007;Leguy, 2014), and for understanding propagation of thinning upstream of the grounding line (Wingham et al., 2009;Williams et al., 2012;Konrad et al., 2017). The potential effects of unresolved bed variations on predictive ice-sheet modeling can be significant (e.g., Durand et al., 2011;Sun et al., 2014;Nias et al., 2016). We investigate how a detailed knowledge of topography on PIG influences estimates of the basal resistance to ice flow. The resistance to ice flow at the bed can be separated into two key processes. Firstly, there is the resistance to basal sliding provided by inherent properties of the bed itself (e.g., Iverson and Zoet, 2015), which we term skin drag. This will evolve with time due to factors like meltwater flux over the bed and mobilization of the till. Crucially, it is distinct from resistance to ice flow due to bed topography, which is known as form drag. The relevant stresses arise from the ice deforming around bed obstacles. Form drag can therefore only be known accurately if the topography itself is known to a high resolution. A glacier flowing over a perfectly flat bed will not be subjected to any form drag but may experience some skin drag. Skin drag gives rise to finite (and non-zero) local shear stresses at the bed. A glacier flowing over an undulating bed will always experience some amount of form drag but possibly no skin drag, such as in the case of "perfect sliding" over a sinusoidal bed, which is a classical problem in glaciology (e.g., Nye, 1959;Budd, 1970). To characterize the basal boundary condition in ice-sheet models today, parameterized sliding laws are used. These attempt to describe processes that are thought to be occurring beneath the ice (e.g., Weertman, 1957;Lliboutry, 1968;Budd et al., 1979;Fowler, 2010). The most commonly used sliding law in large-scale models simply relates the basal ice velocity to the basal shear stress, with two tunable parameters: a stress exponent, m, and a "slipperiness" parameter, C (Weertman, 1957); a minimal sketch of such a law is given below. In many numerical models the stress exponent is taken as a constant (usually 3, depending on what processes are believed to be occurring at the bed), while the slipperiness parameter is allowed to vary spatially. In order to constrain the slipperiness, model optimization is carried out, where high resolution observations of the surface of the ice are used to infer values of the slipperiness at the boundary (e.g., MacAyeal, 1992, 1993;Sergienko et al., 2008;Morlighem et al., 2010;Petra et al., 2012). Spatial variations illustrate changes in slipperiness at the ice-bed boundary; these may be due to changes in basal conditions (e.g., more water at the boundary results in more slippery conditions) or due to inaccuracies in the data, with the model producing slipperiness perturbations in an attempt to better match the data. With regard to the latter, given the accuracy of modern satellite data, the main source of error is usually in the ice depth field, i.e., how well the bed elevation is known.
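As a concrete illustration of the sliding law just described, here is a minimal sketch in one common convention, in which the basal speed varies as a power of the basal shear stress; the convention and the sample values are assumptions for illustration, not necessarily the exact form used later in this paper (Equation 9).

import numpy as np

def weertman_basal_speed(tau_b, C, m=3.0):
    # Weertman-style sliding law in one common convention: u_b = C * tau_b**m.
    # tau_b: basal shear stress (Pa); C: slipperiness parameter; m: stress exponent.
    # A larger C means a more slippery bed at the same basal shear stress.
    return C * np.abs(tau_b) ** m * np.sign(tau_b)

# Example: doubling the slipperiness doubles the sliding speed at fixed stress.
print(weertman_basal_speed(5e4, C=1e-16), weertman_basal_speed(5e4, C=2e-16))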
While some studies invert for basal topography itself, keeping the basal slipperiness constant (e.g., Farinotti, 2009;Morlighem et al., 2011;Li, 2012;VanPelt, 2013), there are also a few studies that have developed methods for simultaneous inversion of both bed topography and basal slipperiness from surface measurements (Raymond and Gudmundsson, 2009;Raymond Pralong and Gudmundsson, 2011). However, the majority of studies only invert for basal slipperiness and keep the bed elevation fixed. This means that errors in the basal topography contribute to errors in the basal slipperiness estimation. It is therefore important to consider the potential robustness of results to any error in the ice depth data (e.g., Sergienko et al., 2008;Raymond Pralong and Gudmundsson, 2011;De Rydt et al., 2013), particularly given that in the BEDMAP2 dataset, only about 36% of grid cells (at 5 km resolution) actually contain bed elevation data (Fretwell et al., 2013). A lot of work has focused on the transfer of basal topography and slipperiness perturbations to the surface of the ice (e.g., Gudmundsson, 1997, 2003;Gudmundsson et al., 1998;Schoof, 2002;Raymond and Gudmundsson, 2005;Martin and Monnier, 2014). Such work quantifies the wavelengths and magnitude of variations at the bed that are directly observable at the surface of the ice. In this study we consider a related question: given high-resolution surface measurements, how important for estimations of basal conditions on PIG is it to know the bed elevation to high resolution? Using an advanced 3D Stokes model to carry out a surface-to-bed inversion, we derive the basal conditions that minimize the misfit between modeled and observed surface velocity fields over six sites where high-resolution bed and surface elevation data were collected as part of the iSTAR fieldwork (Bingham et al., 2017). We make comparisons with results that use both the smoother BEDMAP2 dataset and a completely flat bed, while keeping the surface elevation resolved to a high resolution in all cases. This allows us to consider the robustness of estimates of basal slipperiness and resulting basal stress to the resolution of the bed topography. If low resolution bed elevation fields are used, do estimates of bed conditions change significantly because form drag is not fully resolved in the model? This work extends Bingham et al. (2017), who made the broad statement that the variation seen in basal traction under PIG is related to unresolved topography; in this study we carry out the necessary modeling to quantify this. We compare local features, as well as absolute values of the derived fields, and consider whether there is a way to take how well the bed elevation is known into account when interpreting basal stress fields. THE MODEL We consider the isothermal nonlinear Stokes equations, ∇ · σ + ρg = 0 and ∇ · u = 0, where ρ is the ice density, g = (0, 0, −g) is the gravity vector, u = (u, v, w) is the ice velocity vector and σ the stress tensor. The stress tensor is given by σ = −pI + τ, where p is the pressure and τ the deviatoric stress tensor. The deformation of the ice is described by the constitutive relation τ = 2ηε̇, where ε̇ is the strain-rate tensor and η is the (highly nonlinear) viscosity of the ice, η = (1/2) A^(−1/n) ((1/2) Tr(ε̇²))^((1−n)/(2n)); here n is Glen's flow law exponent (commonly taken as 3; Glen, 1955), A is the rate coefficient and Tr is used to represent the trace of a tensor. This system of equations is solved over a domain Ω ⊂ R³.
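For concreteness, a minimal sketch of the effective-viscosity computation implied by the constitutive relation above; the function name, the sample rate coefficient and the regularizing floor on the strain rate are illustrative assumptions.

import numpy as np

def glen_viscosity(strain_rate, A=3.5e-25, n=3.0, eps_min=1e-12):
    # Effective viscosity from Glen's flow law.
    # strain_rate: 3x3 strain-rate tensor (1/s); A: rate coefficient (Pa^-n s^-1);
    # n: Glen exponent. A small floor eps_min regularizes the zero-strain limit.
    e2 = 0.5 * np.trace(strain_rate @ strain_rate)   # second invariant, (1/2) Tr(eps^2)
    e2 = max(e2, eps_min)
    return 0.5 * A ** (-1.0 / n) * e2 ** ((1.0 - n) / (2.0 * n))

eps = np.array([[1e-10, 0, 0], [0, -5e-11, 0], [0, 0, -5e-11]])  # sample strain rate
print(glen_viscosity(eps))   # Pa s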
At the top surface ∂Ω_S a stress-free boundary condition, σ · n̂ = 0, is applied, where n̂ is the unit normal. At the bottom surface ∂Ω_B we impose a Weertman-style sliding law in the tangential direction (Weertman, 1957) and a no-penetration condition in the normal direction, where τ_b is the basal shear stress, C the sliding coefficient (often referred to as the slipperiness), m the sliding exponent and T = I − n̂ ⊗ n̂ the tangential projection operator. To ensure positivity of the sliding coefficient, C, we replace it by the parametrization κ = ln(C) in Equation (9). Note that in the absence of skin drag, the bed-tangential component of the basal traction is, by definition, always equal to zero. Given the forward 3D Stokes model, we apply an inverse method to estimate the spatial distribution of the basal slipperiness, C (as defined through Equation 9), at the ice-bed interface. Given surface velocity data u_obs, the inverse problem involves minimizing the misfit between the velocity observations and the horizontal model output velocities at the surface, u_H = (u, v, 0)|∂Ω_S, to infer the slipperiness field that allows the best fit of the model to the observations. As in previous work (e.g., Petra et al., 2012;Kyrke-Smith et al., 2017), this is formulated as a non-linear, least-squares minimization of a cost functional (Equation 11) comprising a data-misfit term J_mis and two Tikhonov-style regularization terms, where κ = ln(C), and γ and β are parameters governing the relative size of the regularization terms and the misfit term. Without any regularization the problem is ill-posed. The first Tikhonov term, J_reg1, enforces smoothness of the control variable; this is the same approach as in many other studies (e.g., Petra et al., 2012;Goldberg and Heimbach, 2013;Morlighem et al., 2013). It defines a length scale over which we expect variations in κ to occur. This is important so as not to get variations in κ on length scales that are smaller than those which can be resolved given surface observations (e.g., Gudmundsson, 2003). The size of γ therefore governs the relative importance of the data misfit (from J_mis) and of imposing smoothness (from J_reg1). J_reg2 is only needed due to code implementation issues; κ has to be defined throughout the 3D domain and so the term acts to regularize κ toward zero away from the basal boundary. The coefficient of this term is several orders of magnitude smaller than that on J_reg1 (Table 1) and it therefore does not affect behavior at the boundary. A toy sketch of a cost functional of this form is given below. Details of the numerical solution using FEniCS (Logg et al., 2012;Farrell et al., 2013;Alnaes et al., 2015) are given in Kyrke-Smith et al. (2017). BACKGROUND AND "TOY PROBLEM" The ability to retrieve different aspects of basal properties through an inversion of surface data is dependent on several factors, of which, broadly speaking, surface data quality and the bed-to-surface transfer characteristics are the most important (e.g., Langdon and Raymond, 1978;Balise and Raymond, 1985;Gudmundsson, 1997, 2003;Raymond and Gudmundsson, 2005). Spatial variability over short wavelengths tends to have a relatively smaller impact on the surface than variations over longer wavelengths. The transfer does improve as the slip ratio of the ice increases (e.g., Gudmundsson, 2003;De Rydt et al., 2013), meaning transfer is strongest on ice streams. Nevertheless, for a given surface data quality, errors in the estimation of basal properties will depend on wavelength, with errors generally being larger for short wavelength features.
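The structure of the cost functional introduced in the previous section can be illustrated with a toy linear inverse problem. The sketch below minimizes a misfit-plus-smoothness objective of the same Tikhonov form under stated assumptions (a linear forward operator and a first-difference smoothness penalty); it is not the paper's FEniCS implementation.

import numpy as np

def tikhonov_inversion(G, u_obs, gamma):
    # Solve min_k ||G k - u_obs||^2 + (1/gamma) ||D k||^2 for a linear forward
    # operator G; D is a first-difference (smoothness) operator. A smaller
    # gamma weights smoothness more heavily, as in the L-curve scans below.
    n = G.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]    # first differences
    lhs = G.T @ G + (1.0 / gamma) * (D.T @ D)
    return np.linalg.solve(lhs, G.T @ u_obs)

# Toy setup: a smoothing forward operator and noisy "surface" observations.
rng = np.random.default_rng(1)
n = 50
G = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
k_true = np.sin(np.linspace(0, 3 * np.pi, n))
u_obs = G @ k_true + 0.05 * rng.normal(size=n)
k_est = tikhonov_inversion(G, u_obs, gamma=1e2)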
The ability to jointly invert for both bed topography and basal slipperiness without appreciable mixing effects also depends on data quality but can be done given sufficiently comprehensive and accurate surface measurements (Gudmundsson and Raymond, 2008;Raymond and Gudmundsson, 2009). Simultaneous inversion will inevitably lead to lower detail in the inverted slipperiness fields compared to inverting for slipperiness when bed heights are already known; to separate effects from unknown topography we do not present any simultaneous inversions in this study. We show results from a "toy" problem, illustrating the separation of effects due to basal topography and slipperiness. Ice flows down a 5 × 5 × 1 km³ uniformly inclined plane, with a Gaussian perturbation applied to the basal elevation (Figure 1A) and a constant slipperiness field (Figure 1B). Solving the forward 3D nonlinear Stokes (n = 3) problem with periodic boundary conditions in x and y gives a velocity field. The horizontal components of the surface velocities (Figure 1C) are then used as the surface observations, u_obs, to carry out a surface-to-bed Stokes inversion over the same uniformly inclined slab of ice, but without the bed perturbation resolved (i.e., we invert for the slipperiness field over the uniformly inclined bed illustrated in Figure 1D). Figure 1E shows the resulting slipperiness field recovered from the inversion and Figure 1F the corresponding surface velocity. A detailed discussion of the choice of parameters such as the regularization coefficient, γ, is provided in Kyrke-Smith et al. (2017). We notice that the model cannot match the surface observations well (i.e., Figures 1C,F show noticeable differences). This is due to the observations being produced from a setup with a perturbation in bed elevation that is not resolved in the domain in which the inversion is conducted. A varying slipperiness field is derived in an attempt to improve the fit to observations. While the slipperiness adjusts in an attempt to compensate for the incorrect representation of the bed, the clear structure of the velocity difference does suggest that something is more fundamentally wrong, i.e., in this case that the basal elevation is incorrect. Ideally the solution should be rejected and the problem identified. However, in reality there are so many irregularities that such solutions would usually be accepted for large-scale ice-sheet inverse models. This example provides a clear illustration of the need to consider the resolution of bed topography data under PIG, and its influence on results from surface-to-bed inversions. Do synthetic features arise in the derived slipperiness fields if the topography is under-resolved? The Data The data used for the inverse modeling consist of bed elevation, surface elevation and surface velocity measurements of PIG. Together with BEDMAP2 bed elevation data (Fretwell et al., 2013), we use newly-acquired high-resolution bed and surface measurements from the iSTAR fieldwork (Bingham et al., 2017). These data cover six 10 × 15 km² patches on PIG tributaries; scientists used DELORES (Deep-Looking Radio Echo Sounder) radar (King et al., 2016) over these sites, acquiring twenty-two 15 km radar profiles orthogonal to ice flow, with a 0.5 km spacing between profiles, to resolve the bed more fully than it ever had been before. Figure 2 shows the velocity map of PIG with the areas where the DELORES radar was used marked on.
The velocity fields, high-resolution bed data and high-resolution surface data are shown for each site. Many of the features seen in the topography fields are not evident in the BEDMAP2 data at all. In this study we choose to focus on iSTARt1 and iSTARt7. This is because of their interesting topographical features; on iSTARt1 there is a well-defined (drumlin-like) bump, the effect of which is reflected in the surface data. Over iSTARt7 the sharp transition in bed elevation is not seen in BEDMAP2; it is completely smoothed out. We want to investigate how much these stark differences in bed affect the recovered fields in a surface-to-bed inversion (Figures 3, 4). This will allow us to discuss the importance of the separation of form drag and skin drag. We also have full sets of results from all other sites, which can be found in Appendix 1 (Supplementary Material); we will comment briefly on these as well. Inverse Model Output With Different Bed Resolutions With the surface velocity and elevation fields in Figure 2, we carry out a surface-to-bed 3D Stokes inversion as detailed in Section 2. This involves minimizing the misfit between the velocity observations and horizontal model output surface velocities to infer the slipperiness field that allows the best fit of observations to data. We carry this out over three different versions of the domain at each site: 1. Bed elevation defined by DELORES high-resolution data (Bingham et al., 2017), 2. Bed elevation defined by BEDMAP2 data (Fretwell et al., 2013), 3. Bed elevation defined as flat. All use the same high-resolution surface elevation data, as this is easily acquired in comparison with the bed elevation data. This allows us to compare the basal stress fields that result from inverting the surface data over a domain defined with three different bed resolutions. The averaged basal shear stress over each site is also calculated. Considering first iSTARt1 (Figure 3), it is clear that the maxima and minima in slipperiness/basal stress are present in the same locations whether or not the topography is resolved to a high resolution. The bed is slippery to the side of the grid with the lowest basal elevations. The highest basal shear stress values (i.e., low slipperiness) are at the center of the grid, coinciding with the highest elevation point of the bump. The location of the maximum in basal stress is independent of the resolution of the bed topography defining the domain. However, its detail does change as the topography is varied. In particular, the maximum becomes more locally concentrated when the bump is less fully resolved (evident when looking at the difference fields in the fourth column, which highlight the areas where the form drag is not well resolved without the topography included). Over iSTARt7 (Figure 4) the results are once again consistent across all three basal topography options. The maximum in basal shear stress overlies the sharp transition in bed elevation, whether or not the domain is defined with this resolved in the topography. The spatial structure of the derived slipperiness and resulting basal shear stress is consistent across all three topography maps, but the values of the slipperiness and the resulting basal shear stress are not. Finally, as one further test, we also carry out a surface-to-bed inversion over a larger 100 km² area of PIG.
Results are shown in Figure 5 for the case where the domain is defined with bed elevation data from BEDMAP2 (first row) (Fretwell et al., 2013) and the case where the BEDMAP2 data is filtered to only include the long wavelength variation (second row). FIGURE 1 | Solutions from a 3D Stokes model set up over a 5 × 5 × 1 km³ inclined slab with periodic boundary conditions. Panels (A-C) show the bed perturbation, a constant slipperiness field and the resultant surface velocity from solving 3D Stokes over this setup. Panels (D-F) show the retrieved slipperiness field and resultant surface velocity when using an inverse method to try to match the surface observations but over a flat bed. FIGURE 2 | (caption fragment) The velocity map of PIG (Fahnestock et al., 2016), together with the bed and surface elevation fields (plotted relative to sea level). The thin black dashed box outlines the 10 × 15 km² area over which the DELORES elevation data were collected; the data are merged with BEDMAP2 elevations outside of this box (Fretwell et al., 2013). The high and low slipperiness/basal shear stress areas are found to be in consistent locations. The largest differences in values are found in regions where the most noticeable smoothing has occurred, e.g., around the margins of the ice stream. Furthermore, comparing the mean values of basal shear stress calculated over each site, we see a distinct pattern. The averaged basal shear stress is higher when topography is less fully resolved, i.e., τ_b^D < τ_b^B < τ_b^F, where the superscripts stand for DELORES, BEDMAP2, and FLAT respectively. This result is true across all six sites (see Figures 1-4 in Appendix 1, Supplementary Material) as well as over the larger domain shown in Figure 5. Mean basal shear stress is lower when the inversion is carried out over the more fully resolved version of the bed. Moreover, the relative change in values of the basal shear stress with the change in topographic detail is interesting as it reflects the relative importance of form drag and skin drag. When there is a significant relative change between cases (e.g., between τ_b^B and τ_b^F on iSTARt7 in Figure 4) this suggests that the obstacles unresolved in the smoother topography introduce particularly significant form drag into the problem. Slipperiness and Basal Stress Fields As described in the previous section, while the spatial patterning of the slipperiness and resultant basal shear stress fields is similar regardless of the resolution of bed topography used, the spatially averaged value of basal shear stress is persistently lower (i.e., higher slipperiness) when more topographic detail is included in the bed elevation field that defines the domain. This is true across all six of the sites where we have the high-resolution topography from DELORES radar measurements. We propose that this result is due to the form drag that the topography introduces into the problem. Without any topography, all resistance to the flow is through skin drag, which is due to inherent properties of the bed, e.g., the presence of water and the roughness of sediments. However, topographic variations at the basal boundary induce stresses in the ice, as the ice has to deform around the obstacles. These stresses contribute to the stress balance, providing some of the resistance to the driving stress. It therefore makes sense that the derived skin drag is consistently lower when more detailed bed variations are included in the domain; the form drag transmitted by the bed variations balances more of the driving stress.
Fit to Observations Another question we address is how well the surface-to-bed inversion allows the observations to be fitted, and whether this is affected by the resolution of the bed elevation data. To consider this question we look at a whole range of results from carrying out the 3D Stokes inversion at each site. The inverse method involves minimizing a cost function, which has a contribution both from the misfit (J_mis) and from the smoothness of the solution (J_reg); see Equation (11). We can alter the relative contribution of each; γ² is the ratio of the misfit term to the regularization term in this cost function. The L-curve is a plot of the size of each term at the point where the inversion has converged, for a whole range of values of γ. A relatively larger regularization term (smaller γ) means enforcing smoothness of the solution is prioritized over decreasing misfit. Misfit values are consequently larger when the regularization term is proportionally larger in the cost function. More details are included in Kyrke-Smith et al. (2017). (A toy sketch of this γ-scan is given below.) Figure 6 shows the L-curves for inversions over both iSTARt1 and iSTARt7 with each different bed topography option. Across all domains the misfit reduces further when using a domain with the bed topography resolved more completely; including more detail of the bed allows us to fit the data better. This is true for results from the inverse model with a whole range of values of the ratio of misfit to regularization in the cost function (the choice of the corresponding parameter, γ, and the robustness of the results to its variation are discussed in more detail in Kyrke-Smith et al., 2017). It is encouraging that we see this improved misfit at such a local scale; given a model that contains the appropriate physical formulations, we would expect a setup more similar to the one producing the observations to allow a better fit to the surface data. This result opens up questions about the importance of reducing misfit further for forward predictive modeling. However, it is not something we are able to address conclusively in this paper. Sun et al. (2014) did consider the sensitivity of a dynamic response to bedrock uncertainties and found that low-frequency noise was of more importance than high-frequency noise of the same amplitude. They used a low-aspect-ratio model rather than a full Stokes one. It is important that this question is addressed further, particularly in the context of partitioning form and skin drag. While it seems that correctly separating these effects does improve the misfit, it is also important because skin drag is something that can potentially evolve over time, while form drag is static due to it being topographically controlled. Resolving the two correctly will allow us to know what fraction of the basal resistance can potentially change over time. Comparison With Patterns Observed in Previous Work Finally, we consider how our results compare with Sergienko and Hindmarsh (2013), who saw riblike patterns of very high basal shear stress when carrying out a 3D Stokes surface-to-bed inversion near the grounding line of PIG, keeping the topography fixed using BEDMAP2 bed data (Fretwell et al., 2013). Figure 7 shows results from our model over an area encompassing that shown in Figure 4 of Sergienko and Hindmarsh (2013). There are four plots in Figure 7, each of which is the result of a surface-to-bed inversion with varying proportions of regularization to misfit in the cost function, Equation (11).
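Before comparing with previous work in detail, the L-curve construction described above can be sketched as a simple scan over γ, reusing the toy Tikhonov inversion sketched earlier; the grid of γ values is an illustrative assumption.

import numpy as np

def l_curve(G, u_obs, gammas):
    # For each gamma, run the toy Tikhonov inversion and record the misfit
    # term J_mis and the regularization term J_reg at convergence; plotting
    # J_reg against J_mis for all gammas traces out the L-curve.
    n = G.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]    # first-difference operator
    points = []
    for gamma in gammas:
        k = np.linalg.solve(G.T @ G + (1.0 / gamma) * (D.T @ D), G.T @ u_obs)
        j_mis = np.sum((G @ k - u_obs) ** 2)
        j_reg = np.sum((D @ k) ** 2)
        points.append((j_mis, j_reg))
    return points

# Example scan, analogous to varying gamma from 1e-5 to 1e-2 in the paper:
# points = l_curve(G, u_obs, gammas=np.logspace(-5, -2, 8))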
We observe similar features to Sergienko and Hindmarsh (2013), particularly when there is less regularization (i.e., larger γ and more emphasis on reducing the misfit), though the high-shear-stress features are slightly wider (independently of resolution). They are at similar angles and locations, though not identical. Increasing the amount of regularization in the cost function suppresses the high-frequency features (see how the plots change as γ decreases in Figure 7). Such an increase in smoothness is expected to be concurrent with the surface observations being less well matched. In this case the increase in velocity misfit as a result of enforcing more smoothness on the solution is not particularly significant: misfit increases by 0.3% as γ decreases from 10⁻² to 10⁻³, 0.4% as γ decreases from 10⁻³ to 10⁻⁴ and 1.4% as γ decreases from 10⁻⁴ to 10⁻⁵. The latter is the most significant, and occurs at the point where the high-frequency variations are almost completely suppressed. However, we suggest that the presence of the riblike features is not an absolute necessity for a good fit to the velocity data, and that more work needs to be done to establish their robustness. We also suggest that areas where ribs are identified from inverse modeling should be prioritized as locations for high-resolution bed elevation surveys; inverse methods can recover a mixture of unresolved form drag combined with skin friction effects (as seen in the "toy" problem illustrated in Figure 1E) and it would seem sensible to test whether any of the high-shear-stress patterns may arise from unresolved form drag. FIGURE 3 | The basal stress field from inversion of surface data over iSTARt1. Three results are shown, using three different resolutions of bed elevation data (left column). The mean basal shear stress for each is given. FIGURE 4 | The basal stress field from inversion of surface data over iSTARt7. Three results are shown, using three different resolutions of bed elevation data (left column). The mean basal shear stress for each is given. FIGURE 5 | The basal stress field from inversion of surface data over 100 km² of PIG. The result in the top row shows the fields from inversion with BEDMAP2 topography, and in the second row the results use a smoothed-out topography. The mean basal shear stress for each is given. FIGURE 6 | L-curve for results of the model over iSTARt1 and iSTARt7. The value of the regularization term, J_reg1, from Equation (11), is plotted against the value of the misfit term, J_mis. Values are plotted for results of the model with a range of values of the coefficient, γ, which governs the relative contribution of regularization and misfit in the cost function. Every point is labeled with the value of γ and the color corresponds to the bed elevation dataset used. It is clear that misfit is reduced further by using more fully resolved bed elevation measurements, i.e., DELORES measurements rather than BEDMAP2 or a completely smoothed bed. CONCLUSIONS Previous work has considered the transfer of basal perturbations to the surface and attempted to quantify the wavelengths that are of importance (e.g., Gudmundsson, 1997, 2003;Gudmundsson et al., 1998;Schoof, 2002). In this paper we considered a related question: given high resolution surface observations, how robust are predictions of basal slipperiness to the resolution of the bed elevation data on PIG?
We addressed this by looking at the effect of including small-scale bed features when carrying out 3D Stokes surface-to-bed inversions. While such variations are not necessarily on the length scales that transfer information to the surface, they may still introduce significant form drag into the problem. Results of the inverse study showed that the structure of spatial variations in the recovered slipperiness is not affected by how finely the topography is resolved (see Figures 3-5). This is not altogether surprising given that transfer studies suggest that both bed and slipperiness variations on small scales do not significantly affect surface velocities. There is therefore no reason for the spatial variability of slipperiness to change significantly when the bed is known to a higher resolution. However, we did find consistently lower mean values of basal shear stress over each site when incorporating the high-resolution bed elevation data into the domain. We suggested that this is because small bed obstacles are more efficient at causing form drag (e.g., Schoof, 2002); more of the driving stress is therefore resisted by the form drag resulting from ice deforming around the topography included in the model. The skin drag derived from the inversion is consequently noticeably lower when more detail is included in the bed. Furthermore, we obtained a better fit to surface velocities using the more accurate bed-topography data set (Section 5.2). This is expected if the model is correct for the system, and was found to be true across a range of regularization-to-misfit ratios. Finally, we also reproduced high-shear-stress, riblike features like those seen in Sergienko and Hindmarsh (2013). They were especially clear when only a small amount of regularization was applied in the inverse model (Figure 7), and could be suppressed by applying more regularization to the problem. Nevertheless, while the misfit was not affected too significantly, the L-curve approach did suggest that the correct amount of regularization is in the region where we recovered the high-basal-stress features. In conclusion, we suggest that values of basal shear stress derived from a surface-to-bed inverse approach need to be interpreted with caution, as they can include drag which is due to unresolved topography rather than inherent bed and sediment conditions. While we have not explicitly investigated the consequences of this for forward modeling, we believe it could be an important consideration for predictive models. Separation of form and skin drag would be particularly important when including the evolution of basal conditions in a model, as the time-independent effects of form drag should be resolved separately from the time-dependent skin drag. Without access to the highest-resolution bed elevation data everywhere, it would seem that inverting simultaneously for both bed topography and basal slipperiness (Raymond and Gudmundsson, 2009;Raymond Pralong and Gudmundsson, 2011) might be the most sensible approach to take to address the problem. AUTHOR CONTRIBUTIONS The work was carried out by TK-S at BAS, with regular help from GG in the form of discussion and advice. PF helped set up the numerical model that is used in the paper.
7,817.6
2018-04-11T00:00:00.000
[ "Geology", "Environmental Science" ]
The Ensemble Mars Atmosphere Reanalysis System (EMARS) Version 1.0 Abstract The Ensemble Mars Atmosphere Reanalysis System (EMARS) dataset version 1.0 contains hourly gridded atmospheric variables for the planet Mars, spanning Mars Year (MY) 24 through 33 (1999 through 2017). A reanalysis represents the best estimate of the state of the atmosphere by combining observations that are sparse in space and time with a dynamical model and weighting them by their uncertainties. EMARS uses the Local Ensemble Transform Kalman Filter (LETKF) for data assimilation with the GFDL/NASA Mars Global Climate Model (MGCM). Observations that are assimilated include the Thermal Emission Spectrometer (TES) and Mars Climate Sounder (MCS) temperature retrievals. The dataset includes gridded fields of temperature, wind, surface pressure, as well as dust, water ice, CO2 surface ice and other atmospheric quantities. Reanalyses are useful for both science and engineering studies, including investigations of transient eddies, the polar vortex, thermal tides and dust storms, and during spacecraft operations. | INTRODUCTION A Mars atmosphere reanalysis provides a comprehensive estimate of the state and temporal evolution of the atmosphere, combining available spacecraft observations to be consistent with the physics and dynamics of a global climate model. A reanalysis consists of an extended, retrospective sequence of analyses, created using data assimilation, of gridded atmospheric fields of variables such as temperature, wind, surface pressure, dust and water ice cloud opacities. Reanalyses have been used extensively for terrestrial applications (e.g. Kalnay et al., 1996) using contemporary models and assimilation systems with in situ and remotely sensed observations spanning decades, and include complete earth-system reanalyses (Bosilovich, Rixen, & Chaudhuri, 2013) as well as those assimilating only surface pressure data (Compo, Whitaker, & Sardeshmukh, 2006). The Mars Analysis Correction Data Assimilation (MACDA) reanalysis is the first to be created for an extraterrestrial atmosphere (Montabone et al., 2014). EMARS is the first ensemble reanalysis for Mars, spanning multiple years across the observational datasets of multiple spacecraft instruments. Data assimilation for Mars has been previously demonstrated by several studies (Houben, 1999;Lee et al., 2011;Lewis & Read, 1995;Lewis, Read, Conrath, Pearl, & Smith, 2007;Navarro, Forget, Millour, & Greybush, 2014;Navarro et al., 2017;Steele et al., 2014;Zhang et al., 2001). The foundations for EMARS were laid with the observing system simulation experiments (OSSEs) of Hoffman et al. (2010), which demonstrated that simulated Martian observations can constrain the atmospheric state via ensemble data assimilation. Greybush et al. (2012) demonstrated this with real spacecraft observations, tuning the data assimilation system for optimal performance. Further studies demonstrated the synergy between reanalysis development and science investigations. Zhao, Greybush, Wilson, Hoffman, and Kalnay (2015) studied thermal tides in a predecessor to EMARS and found that tuning the assimilation window was essential for avoiding spurious resonances. Waugh et al. (2016) compared polar vortices across reanalyses and motivated the inclusion of a topographic gravity wave drag parameterization to more faithfully reproduce the polar vortex. Greybush, Gillespie, and Wilson (2019) studied transient eddies in reanalyses and examined the robustness of convergence upon unique synoptic states.
Section 2 describes the spacecraft observations, Mars global climate model and data assimilation used in the preparation of the dataset. Section 3 describes the dataset and its formatting in detail. Section 4 discusses access to the dataset and its visualization. Section 5 describes current and potential uses for the dataset. Section 6 outlines expected future developments for the dataset. | Description of the EMARS observations The Ensemble Mars Atmosphere Reanalysis System uses atmospheric observations remotely sensed by two instruments on spacecraft orbiting Mars. The first instrument is the Thermal Emission Spectrometer (TES), which operated on the Mars Global Surveyor (MGS) from MY 24-27 (1999-2004). The TES nadir retrievals (Smith, Pearl, Conrath, & Christensen, 2001), available from the Planetary Data System (PDS), provide twice-daily (2 a.m. and 2 p.m. local time in the tropics) coverage of temperature, dust and water ice column opacity. Vertical coverage is from the surface to ~40 km in altitude on up to 21 vertical levels, but there are only 2-5 effective vertical degrees of freedom in the profiles, with decreasing resolution at higher altitudes, as estimated from averaging kernels as in Eluszkiewicz et al. (2008). The PDS retrievals have unrealistic jumps in temperature climatology, associated with the change in spectral resolution. Instrument data quality is further discussed in Pankine (2015). The second instrument is the Mars Climate Sounder (MCS), which has operated on the Mars Reconnaissance Orbiter (Zurek & Smrekar, 2007) since MY 28 (2006). Unfortunately, there is no temporal overlap between TES and MCS observations. However, the two datasets have been found to show good agreement at seasons with little interannual variability (Shirley et al., 2015). MCS provides limb retrievals of temperature, dust and water ice vertical profiles (Kleinböhl et al., 2009). Profile retrievals of temperature typically make use of co-located limb and nadir or off-nadir measurements in order to improve vertical coverage in the lower atmosphere, and reach over 80 km in altitude. The original along-track observation strategy provides twice-daily observations (3 a.m. and 3 p.m. local time in the tropics). Starting in 2010, cross-track observations (Kleinböhl, Wilson, Kass, Schofield, & McCleese, 2013) were added, providing coverage at six local times of day. From 2010 to 2014, intervals of multi-track (along- plus cross-track) observations alternated with intervals of along-track-only sampling; after 2014, multi-track sampling was used continuously. In order to avoid changes in reanalysis climatology due to changes in observing patterns, EMARS v1.0 only assimilates along-track observations. MCS retrievals are provided on 105 vertical pressure levels, although the vertical weighting functions indicate an effective vertical resolution of 5 km. MCS version 5 retrievals (Kleinböhl, Friedson, & Schofield, 2017), which account for 2D geometry to provide superior capabilities in regions of sharp temperature gradients, are used. MCS retrievals, due to their limb geometry, have reduced sensitivity to the lowest 5-10 km of the atmosphere, which affects the ability to resolve lower atmosphere transient eddies compared to TES (see Section 5.2).
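The effective number of vertical degrees of freedom quoted above can be estimated, following standard retrieval theory (and the approach of Eluszkiewicz et al., 2008), as the trace of the retrieval's averaging-kernel matrix. A minimal sketch, assuming the kernel is available as a square matrix on the retrieval's pressure levels (the matrix built here is purely illustrative, not actual TES data):

```python
import numpy as np

def effective_dof(averaging_kernel: np.ndarray) -> float:
    """Degrees of freedom for signal = trace of the averaging-kernel matrix.

    Each diagonal element measures how much of the retrieved value at a
    level comes from the true state at that level rather than the prior.
    """
    A = np.asarray(averaging_kernel)
    assert A.shape[0] == A.shape[1], "averaging kernel must be square"
    return float(np.trace(A))

# Illustrative 21-level kernel with broad, overlapping weighting functions,
# mimicking a nadir sounder whose information content is much less than 21.
levels = np.arange(21)
A = np.exp(-0.5 * ((levels[:, None] - levels[None, :]) / 2.0) ** 2)
A *= 0.5 / A.sum(axis=1, keepdims=True)  # crude sensitivity scaling for the example
print(f"effective vertical DOF ~ {effective_dof(A):.1f}")  # only a few, not 21
```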
| Description of the EMARS Model The Ensemble Mars Atmosphere Reanalysis System uses the Geophysical Fluid Dynamics Laboratory (GFDL) Mars Global Climate Model (MGCM) for the numerical weather prediction component of the reanalysis. This model, using a dynamical core originally developed for the atmosphere of Earth, has been adapted to operate with Mars atmosphere physics (Wilson & Hamilton, 1996) and to work in a data assimilation framework (Greybush et al., 2012; Hoffman et al., 2010). The GFDL MGCM has been used to examine tides and planetary waves (Hinson & Wilson, 2002; Hinson, Wilson, Smith, & Conrath, 2003; Wilson & Hamilton, 1996), the water cycle (Richardson, Wilson, & Rodin, 2002), the dust cycle (Basu, Richardson, & Wilson, 2004; Basu, Wilson, Richardson, & Ingersoll, 2006), and the influence of topography and cloud radiative effects (Hinson & Wilson, 2004; Kleinböhl et al., 2013; Wilson, 2011; Wilson, Lewis, Montabone, & Smith, 2008; Wilson, Neumann, & Smith, 2007). The dynamical core is finite-volume (Lin, 2004); EMARS uses the latitude/longitude geometry, with grid spacing of 6 degrees longitude by 5 degrees latitude (60 × 36). The model contains 28 vertical levels, with 13 of these levels in the lowest scale height (~10 km) of the atmosphere. The vertical coordinate is a hybrid sigma-pressure coordinate, with terrain-following sigma levels near the surface transitioning to pressure levels above 2 Pa. The vertical grid spacing increases substantially with height. Model physics were adapted to the Martian atmosphere. The representation of dust is controlled by three radiatively active tracers, with particle radii of 0.3, 1.2 and 2.5 microns, that undergo advection and sedimentation. Radiatively active water ice clouds are employed in the MGCM. The MGCM has an active, multi-phase CO2 cycle. When temperatures are projected to fall below the (pressure-dependent) CO2 critical temperature in the atmosphere, the gaseous CO2 mass needed to generate the appropriate latent heating is removed from the atmosphere and placed on the surface as CO2 snow. There is no explicit CO2 cloud microphysics. A parameterization for sub-grid-scale topographic gravity wave drag is employed, as in Waugh et al. (2016). | Description of the EMARS data assimilation system The Ensemble Mars Atmosphere Reanalysis System uses an ensemble-based data assimilation system, the Local Ensemble Transform Kalman Filter (LETKF; Hunt, Kostelich, & Szunyogh, 2007), developed at the University of Maryland and coded by Takemasa Miyoshi (https://github.com/takemasa-miyoshi/letkf). Data assimilation systems combine a background, or first guess, with observations to produce an analysis, which represents the best estimate of the state of the atmosphere. The update to the background (analysis increment) depends on the differences between the background and observations (observation increments); calculation of the observation increment is described in Section 2.3.2. The relative weighting of background and observation errors also determines the magnitude of the analysis increment; the details of this are described in Section 2.3.3. The spatial pattern of the analysis increment and the impact of one variable on another are described in Section 2.3.4. The advantage of an Ensemble Kalman Filter (EnKF) is that the background error covariance is sampled from a dynamical ensemble of simulations, and is therefore flow dependent (Kalnay, Li, Miyoshi, Yang, & Ballabrera, 2007). Further details on the equations used for data assimilation can be found in Greybush et al. (2012). | Observation preprocessing Temperature profile observations are prepared for assimilation as follows.
Temporally, observations are collected from the 1-hr interval centred on each hour, which is the time of the analysis. In order to match the scales resolved by the observations to those resolved by the model, and to reduce errors of representativeness and random instrumental errors, the raw TES and MCS observations are first preprocessed to create 'superobservations' (e.g. Alpert & Kumar, 2007). In the horizontal, observations are binned to the nearest model grid point, and the superobservation consists of the mean observation value, latitude and longitude in each bin. The horizontal resolution of the observations, particularly TES, exceeds that of the model in the along-track direction, whereas the superobservations have a resolution similar to the model's. In the vertical, TES observations have only ~2-5 degrees of freedom and MCS observations ~20. Therefore, the raw temperature profiles of 21 (TES) and 105 (MCS) vertical levels are averaged to reflect this (as in Montabone et al., 2014), effectively performing a vertical superobservation. Observation errors include instrument measurement error, forward model error and errors of representativeness. As estimates of the observation uncertainties were not provided with the TES observations in the PDS, EMARS v1.0 assigns an observation error of 3.0 K to the superobservations. MCS does include uncertainty estimates with its retrievals; these are used by EMARS. | Observation operator The observation operator (forward operator) maps the model background to simulated observations in 'observation space' (at the same locations and variable types as the observations). The observation increments are then the actual observations minus the simulated observations. In this version of EMARS, only temperature observations are directly assimilated, and therefore the observation operator maps from model temperatures to retrieved temperatures. Model temperature fields are horizontally interpolated to observation locations. Model vertical profiles are then interpolated to the same pressure levels as the observations and averaged vertically in the same manner. As a rough quality control check, superobservations with increments that are more than seven times the observation error are rejected. This condition can be triggered by either large observation errors or large model errors, and prevents unrealistically large updates to the assimilation system.
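A minimal sketch of the superobservation and quality-control steps described above, assuming observations arrive as (lat, lon, value, error) records already matched to an analysis hour (the function and variable names are illustrative, not the EMARS source code):

```python
import numpy as np

def make_superobs(lats, lons, values, grid_lats, grid_lons):
    """Bin observations to the nearest model grid point and average each bin."""
    iy = np.abs(grid_lats[:, None] - lats[None, :]).argmin(axis=0)
    ix = np.abs(grid_lons[:, None] - lons[None, :]).argmin(axis=0)
    bins = {}
    for j, i, lat, lon, v in zip(iy, ix, lats, lons, values):
        bins.setdefault((j, i), []).append((lat, lon, v))
    # Superobservation = mean value, mean latitude and mean longitude per bin.
    return {k: tuple(np.mean(np.array(rows), axis=0)) for k, rows in bins.items()}

def passes_qc(obs_value, simulated_value, obs_error, threshold=7.0):
    """Reject superobservations whose increment exceeds `threshold` times the
    observation error, guarding against unrealistically large analysis updates."""
    return abs(obs_value - simulated_value) <= threshold * obs_error
```

In EMARS itself, the TES superobservation error is fixed at 3.0 K, while the MCS errors come from the retrieval uncertainty estimates, as noted above.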
| Inflation and ensemble design Background error variances are calculated from the ensemble; ideally, the ensemble spread should accurately represent uncertainty in the background field. Ensemble members should capture growing unstable modes of the atmosphere (Greybush, Kalnay, Hoffman, & Wilson, 2013); however, some parts of the Martian atmosphere are principally forced by aerosol heating, and the principal uncertain quantity is the aerosol distribution. Therefore, the magnitude of the dust opacities and water ice cloud radiative properties is varied among the 16 ensemble members. Dust opacity increases uniformly from 0.7 to 1.3 times the amount specified by the tracers across the 16 members; water ice cloud radiative properties are multiplied by a scaling factor that alternates among 0.1, 0.3 and 0.5. Finally, the background ensemble spread, which is typically underestimated because the data assimilation system does not account for model error, is inflated. Spatially varying adaptive inflation (Miyoshi, 2011) is used to modify the ensemble spread to enforce the spread/skill relationship outlined in Desroziers, Berre, Chapnik, and Poli (2005), which requires that the ensemble variance plus the observation error variance match the variance of the observation increments. The tuning parameter for the background spread standard deviation, which controls how quickly the adaptive inflation values change in time, is set at 0.04. | Localization The impact pattern of an observation upon the analysis is shaped by the structure of the background error covariance. In an EnKF, this is sampled from ensemble perturbations. Due to the limited ensemble size, these patterns are subject to sampling error. Localization assumes that correlations between distant points are due to sampling error, and smoothly truncates the patterns as a function of distance. Here, R-localization is employed (Greybush, Kalnay, Miyoshi, Ide, & Hunt, 2011), with a half-length of 600 km in the horizontal and 0.4 log P in the vertical. In the LETKF, temperature observations update temperature, wind and surface pressure state variables. | Dust Horizontal dust distributions are derived from the Mars Climate Database version 5 gridded dust scenarios (Montabone et al., 2015), which are kriged composites of multiple spacecraft dust sources, mainly TES column opacities and MCS profiles that have been extrapolated to the surface to derive an estimated column opacity. As in Kahre, Wilson, Haberle, and Hollingsworth (2009), the model equations for the lowest model levels (the boundary layer) include a source/sink term for dust that relaxes the model column opacities towards the observed column opacities. Otherwise, the three dust tracers are advected by the model winds, and the vertical profile is driven by advection and sedimentation. | Special considerations Special consideration must be given to Mars atmospheric phenomena such as thermal tides and CO2 condensation during assimilation. Zhao et al. (2015) found that a 6-hr assimilation window caused a spurious enhancement of the thermal tides; this was corrected by using a 1-hr assimilation window instead. Analysis increments of surface pressure are scaled globally to conserve atmospheric mass (Greybush et al., 2012). As TES observations can fall below the CO2 critical temperature by several degrees (Colaprete, Barnes, Haberle, & Montmessin, 2008), which would lead to excess CO2 deposition, observations below the critical temperature are adjusted to match the critical temperature (Greybush et al., 2019). Wave-0 and wave-1 bandpass filters are applied to the mass (poleward-most latitude) and wind (two poleward-most latitudes) fields, respectively, for geographically consistent increments near the polar singularity. A low-pass filter is applied to the wind fields near the poles (third and fourth latitude circles), and a Shapiro low-pass filter is applied to the analysis increments throughout to remove spurious high-frequency noise. | Timekeeping on Mars With a Martian sol (day) approximately equal to 24 earth hours and 40 earth minutes, and a Martian year approximately equal to 668.6 sols, a different timekeeping system is required for Mars. For EMARS purposes, hours are Martian hours, which are 24 equal divisions of the Martian sol. The convention of Clancy et al. (2000) is to label Martian years (MY) consecutively since 1955.
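A minimal sketch of this sol-based clock, assuming the commonly used mean sol length of 88,775.244 SI seconds (≈ 24 h 39.6 min, consistent with the "24 hours and 40 minutes" above); the function name and epoch are illustrative:

```python
SOL_SECONDS = 88_775.244  # mean Martian solar day in SI seconds (assumed value)

def seconds_to_sol_and_hour(elapsed_seconds: float) -> tuple[int, float]:
    """Convert elapsed seconds since some sol-0 epoch into (sol number,
    Martian hour), where one Martian hour is 1/24 of a sol."""
    sols = elapsed_seconds / SOL_SECONDS
    sol_number = int(sols)
    mars_hour = (sols - sol_number) * 24.0
    return sol_number, mars_hour

# One Earth day into the record is slightly less than one sol:
print(seconds_to_sol_and_hour(86_400.0))  # (0, ~23.36 Mars hours)
```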
Solar longitude (Ls), or areocentric longitude, values of 0°, 90°, 180° and 270° mark the Northern Hemisphere vernal equinox, summer solstice, autumnal equinox and winter solstice, respectively, and provide a convenient seasonal index. The Martian perihelion occurs at Ls 250.66°. The Mars24 tool (Allison, 1997; Allison & McEwen, 2000) is used to convert MY and Ls directly to earth calendar dates. As the reanalysis temporal resolution is hourly, a calendar of Martian sols and hours is a practical time labelling. For timekeeping, EMARS follows the conventions of Montabone et al. (2015), which propose a system of leap sols with years of lengths 669, 668, 669, 668 and 669 sols repeating successively. To address the Martian analemma, observations are assimilated using the local time attribute provided by the instrument teams. EMARS times should be interpreted such that hour 12 corresponds to solar noon at longitude 0. Finally, 'MGCM sols' are labelled continuously since the MY 22 perihelion, which predates all TES and MCS observations. A table of Mars time conversions is provided along with EMARS. | File naming and formats EMARS version 1.0 spans MY 24 Ls 103 to MY 27 Ls 102 for TES (the complete TES period of record) and MY 28 Ls 112 (the start of the MCS period of record) to MY 33 Ls 105 for MCS. This represents approximately 3 TES years and 5 MCS years. EMARS was produced in separate 'streams' of approximately 1 Mars year in length; this approach has also been used for some earth reanalyses (Poli et al., 2016). Changes in stream occur at Ls 105, with the switch occurring at the start of the sol that contains Ls 105. Users can expect slight discontinuities at this point, although it was selected to be at a time in the Martian year with reduced variability. The dataset is provided in NetCDF format. NetCDF is a self-describing file format (metadata are stored along with the actual data) common in the atmospheric science community for model output, and tools for reading and writing it are readily available online. Each reanalysis file type is divided into 12 segments per year, corresponding to 30 degrees of Ls. This corresponds to file sizes between 1 and 5 GB, a compromise between too many small files and unwieldy individual files. The total size of the complete EMARS dataset is estimated to be on the order of 2 TB. With EMARS, we provide three types of files (Table 1): 'analysis' files, 'background' files and 'control' files. Analysis files represent the model restart files that have been directly updated by the data assimilation system. They therefore should be used for any direct reanalysis-observation comparison studies, as the model state variables will be closest to the observations. However, the format is not as convenient, with variables stored on a 'D-grid' (Arakawa, 1977) and only the 'T' (temperature), 'U' (zonal wind), 'V' (meridional wind), 'ps' (surface pressure) and 'Surface_geopotential' fields provided. Background files are created from 1-hr MGCM forecasts, using the analysis files as initial conditions. These are more convenient, as many additional diagnosed variable types are included and all are provided on a common grid. Control files are from a freely running MGCM (without data assimilation) using the same horizontal dust distribution and model settings as EMARS; their format is otherwise identical to background files. For analysis files, the ensemble mean and ensemble spread (standard deviation) are provided.
For background files, the ensemble mean and a representative ensemble member are provided. For control files, a representative ensemble member is provided. Files are named following the pattern emars_v1.0_[file type]_[member type]_MY##_Ls###-###.nc, where the file type can be 'anal' (analysis), 'back' (background) or 'cntl' (control). Member type can be 'mean' (the ensemble mean), 'sprd' (the ensemble standard deviation) or 'memb' (a representative ensemble member; here, member 008, which has the median amounts of dust and water ice cloud forcing). Two sample filenames are emars_v1.0_anal_mean_MY25_Ls060-120.nc and emars_v1.0_back_memb_MY26_Ls300-360.nc. | Description of variable types The dimensions of the dataset in x, y, z and time are described in Table 2. The corresponding static variables for spatial and temporal extent are described in Table 3. For the horizontal coordinate system, values for latitude and longitude for the standard 'A' grid, on which all variable types are co-located, along with the 'D' grid variables used only for winds in analysis files, are included. The hybrid vertical coordinate is uniquely defined by a surface pressure field together with a_k and b_k coefficients that describe the pressure and sigma (terrain-following) portions, respectively. The pressure at each interface between two vertical levels is given as p_i,k = a_k + b_k * ps, and the pressure at the centre of the corresponding layer, p_k, follows from the bounding interface pressures (e.g. as their mean, p_k = (p_i,k + p_i,k+1)/2). Sample pressures at levels and interfaces are also provided, given a reference surface pressure. Note that in EMARS, levels are numbered from 1 to 28 going from the top of the atmosphere to near the surface. Height (above the MOLA zero elevation datum) is provided at level interfaces; it can be calculated at level centres using the hydrostatic relationship. A variety of time variables are included to facilitate conversion between earth times, solar longitude and the Mars calendars employed for EMARS and MACDA. Table 4 describes the reanalysis variables found in 'analysis' files, whereas Table 5 describes the variables found in 'background' and 'control' files. Variables describe the thermal field, wind field, aerosol fields (both column and profile information) and surface fields. Finally, we have added a file 'emars_v1.0_obscount', which gives the number of daytime and night-time temperature superobservations available for assimilation at each hour and which (like Figure 1) is helpful for determining when the reanalysis is constrained by observations.
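For users working with the files directly, a minimal sketch of reconstructing the 3D pressure field from the hybrid-coordinate coefficients described above; the array names ('ak', 'bk', 'ps') and the example coefficient values are assumptions for illustration and should be checked against Tables 2-3 of the actual files:

```python
import numpy as np

def hybrid_pressures(ak, bk, ps):
    """Interface and layer-centre pressures for a hybrid sigma-pressure grid.

    ak, bk : 1D arrays of length nlev+1 (interface coefficients; Pa and unitless)
    ps     : 2D surface pressure array (Pa), shape (nlat, nlon)
    """
    ak = np.asarray(ak)[:, None, None]
    bk = np.asarray(bk)[:, None, None]
    p_iface = ak + bk * ps[None, :, :]             # (nlev+1, nlat, nlon)
    p_centre = 0.5 * (p_iface[:-1] + p_iface[1:])  # simple mean of interfaces
    return p_iface, p_centre

# Illustrative coefficients: pure pressure aloft (bk=0), pure sigma at the surface.
ak = np.array([2.0, 50.0, 200.0, 0.0])
bk = np.array([0.0, 0.0, 0.3, 1.0])
ps = np.full((36, 60), 610.0)  # ~6.1 hPa, a typical Martian surface pressure
p_i, p_c = hybrid_pressures(ak, bk, ps)
print(p_i[:, 0, 0], p_c[:, 0, 0])
```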
| DATASET ACCESS AND VISUALIZATION The EMARS dataset is archived and available for download via the Penn State Data Commons, which is a publicly accessible, centrally managed, long-lived resource available to Penn State University investigators (http://www.datacommons.psu.edu/). The Data Commons has the capability to generate a Digital Object Identifier (DOI) for datasets, as well as to create sufficient, searchable documentation (metadata) for all hosted data. A landing page for the data can be found at ftp://ftp.pasda.psu.edu/pub/commons/meteorology/greybush/emars-1p0/a_landing_page.html, and the data can be accessed via ftp://ftp.pasda.psu.edu/pub/commons/meteorology/greybush/emars-1p0/data. The Ensemble Mars Atmosphere Reanalysis System can be visualized in multiple ways. An EMARS plotter allows the visualization of seasonal average (over 30° Ls) statistics for key variables such as zonal mean temperature, winds and column dust opacity. Figure 2 shows a sample interface and image generated by the plotter. Synoptic states of EMARS, that is, the state of the atmosphere at a specific instant in time, can be visualized as well. The EMARS plotter can be accessed at http://www.meteo.psu.edu/~sjg213/emars_plotter/. Figure 3 shows a snapshot of an animation depicting transient eddies in EMARS; the video for the animation is available as supplemental material for the paper, and the methods for calculating the transient eddy fields are found in Greybush et al. (2019). | DATASET USE AND REUSE Martian reanalyses, such as EMARS, have a variety of uses for the scientific community. A feature-based evaluation of a reanalysis assesses the appropriateness of the dataset for the study of specific aspects of the Martian atmosphere. This complements a forecast-based evaluation using tools favoured in the data assimilation community, such as examining short-term forecasts minus observations and ensemble spread. Such feature-based studies also provide the opportunity to advance our understanding of these phenomena, as well as to encourage improvements to modelling, observations and data assimilation procedures so as to better represent them in reanalyses. The following paragraphs examine several such aspects, describe actual uses of the dataset in the study of each feature and provide recommendations for potential future use. Greybush et al. (2012) computed zonal mean temperature biases and root mean square errors between short-term forecasts and (independent in time) TES observations. Biases were found to be small, and RMSEs were generally less than 5 K. The largest differences are in the vicinity of the sharp temperature gradients of the polar vortex. For MCS observations, the largest systematic temperature differences are above 40 km in altitude. In agreement with GCM simulations, the global Hadley circulation in EMARS has ascending air in equatorial regions (spring/fall) or the extratropics (summer hemisphere), and descending air aloft causing polar warming above the vortex; there are some differences in the exact orientation of this warming (McCleese et al., 2017). Waugh et al. (2016) examined polar vortices in both MACDA and EMARS, and revealed steep PV gradients near the poleward edge of the westerly jets and an annular structure in potential vorticity around the winter poles. Figure 2 and the EMARS plotter provide views of the zonal mean state of EMARS at various seasons and years. Differences between TES-era and MCS-era eddies may be due to instrument differences rather than interannual variability. TES eddies generally show a smaller ensemble spread around unique synoptic states than MCS eddies. Zhao et al. (2015) showed that data assimilation can potentially have a detrimental effect on the representation of thermal tides, with a six-hour assimilation window causing a spurious amplification of the diurnal tide. The use of a 1-hr assimilation window greatly improved tidal features in EMARS, with these features generally comparable to those of the control simulation. Navarro et al. (2017) pointed out that the global nature and forcing of the tides make them difficult for the assimilation to correct. The effective use of observations from multiple local times, as well as of the vertical distribution of aerosol heating, to improve the representation of tides in reanalyses is still a work in progress. | Dust cycle The horizontal dust distributions are constrained by the Montabone et al. (2015) gridded products, and a detailed evaluation of these distributions is contained within that work. Of note are systematic differences between TES and MCS opacities near the polar cap edges.
While the vertical dust distributions in EMARS are subject to advection and sedimentation, lifting mechanisms for detached dust layers (Heavens et al., 2014) are not yet part of the MGCM, and this version of EMARS does not explicitly consider MCS vertical profiles of dust. | Water cycle Zhao et al. (2015) demonstrated the improvement to reanalyses from the inclusion of radiatively active water ice clouds. This version of EMARS also includes an improved MGCM control simulation, with a water cycle spun up to more closely resemble the TES water vapour record (Smith, 2002). However, this version of EMARS does not explicitly assimilate water ice cloud opacities from TES and MCS. Therefore, the clouds in EMARS may be subject to GCM biases, such as thick cloud layers over the winter poles (McCleese et al., 2017). Assimilation of clouds may be challenging (Navarro et al., 2017) due to the role of tides in cloud formation (Benson et al., 2010; Lee et al., 2009) and errors in model representation of cloud physics. | CO2 cycle While EMARS does not explicitly assimilate the locations of seasonal CO2 ice caps, informal comparisons with MOC observations and the Titus (2005) database show reasonable agreement. Precise tuning of a CO2 cycle to match lander surface pressure records, including local-scale effects that are not well represented in a relatively coarse global model, can be challenging. While EMARS does not have explicit CO2 microphysics, CO2 latent heating reveals the locations of likely CO2 clouds over the winter poles. | Modelling studies and predictability Mars reanalyses have potential uses for modelling studies, such as providing boundary conditions for high-resolution regional simulations, trace gas estimation and transport, and facilitating the development of improved model physics to more closely match observations. The EMARS system, with additional development, could be used in a near-real-time setting for Mars numerical weather prediction. However, NWP for Mars is still in its early stages, and predictability is limited due to baroclinic/barotropic error growth (Greybush et al., 2013; Newman, Read, & Lewis, 2006) and forcing errors, including suboptimal representation of aerosol heating (Zhao et al., 2015). | Engineering studies and spacecraft operations Mars reanalyses are also useful for engineering studies for future Mars robotic and human exploration missions. An improved characterization of atmospheric conditions and of the spatiotemporal variability that affects the orbital trajectories, aerobraking, aerocapture, descent and landing of spacecraft can allow for a smaller landing ellipse and open additional landing sites for consideration. Similarly, improved characterization of dust opacity should benefit studies of solar power availability and surface operations aboard landers and rovers. | FUTURE VERSIONS EMARS is a continually improving product, and innovations currently being developed by the EMARS group are expected to be included in future EMARS versions. These may include: • Improvements to the MGCM dynamics, including the switch to a cubed-sphere geometry and increased horizontal and vertical resolution. • Improvements to the MGCM physics, including CO2 ice cloud microphysics. • Improvements to the LETKF data assimilation scheme, including the use of a hybrid variational-ensemble technique. • Improvements to TES assimilation using interactive retrievals (Hoffman, 2010).
• Improvements to the vertical distribution of aerosols via assimilation of MCS dust and ice profiles (e.g. Navarro et al., 2017).
6,568.8
2019-08-23T00:00:00.000
[ "Environmental Science", "Physics" ]
Synthesis and Characterization of Photo-Responsive Thermotropic Liquid Crystals Based on Azobenzene A series of new thermotropic liquid crystals (LCs) containing azobenzene units was synthesized. The structures of the compounds were characterized by means of NMR and FTIR spectroscopy. Their mesomorphic behaviors were investigated via differential scanning calorimetry (DSC) and polarizing optical microscopy (POM). Based on the POM and DSC measurements, the optical properties of the Cm-Azo-ester compounds were tested using UV-vis spectroscopy. The azobenzene side chain displayed a strong ability to influence the formation of thermotropic LCs. Introduction Liquid-crystalline materials have been applied in optical information displays and storage devices, as the molecular alignment of the liquid-crystalline phase can be modulated by external stimuli such as heat, electric fields, and light [1][2][3]. Azobenzene and its derivatives are photoisomerizable materials that undergo a reversible transformation between trans and cis isomers in the presence of light. Liquid crystals (LCs) containing an azobenzene moiety, either of low molar mass or polymeric in nature, have attracted tremendous attention as a result of their light-induced, photo-switchable, and elastic properties [4][5][6]. The introduction of reversible trans-cis photoisomerization of azobenzene can exert large effects on the properties of LCs [7][8][9][10]. In recent times, the study of LCs incorporating an azobenzene moiety has been developing [11,12]. For example, Tomczyk synthesized a new class of star-shaped liquid-crystalline compounds functionalized with photochromic azobenzene and mesogenic groups, and investigated their light-induced anisotropy [13]. Zupančič studied the phenomenon of laser beam propagation in azobenzene liquid crystals (azo LCs) in a waveguiding configuration [14]. Yamamoto investigated the viscoelastic and photoresponsive properties of composite gels through the cis-trans photoisomerization of an azobenzene compound doped into host LCs [15]. Although a variety of azobenzene compounds are now used in the formulation of thermochromic LCs, studies of the influence of the side chain of the azobenzene moiety, which ensures the stable formation of LCs, remain rare [16,17]. Ahmed reported the mesophase behavior of 1:1 mixtures of 4-n-alkoxy phenylazo benzoic acids bearing terminal alkoxy groups of different chain lengths. As we know, the properties of an LC depend on the chemical structure of the azobenzene and other segments [18]. With those objectives in mind, we designed and synthesized a new series of azobenzene LC compounds containing a flexible chain of CmH2m (where m = 4-12). In addition, a sequence of very interesting phase changes was observed when the compounds were assigned different terminal groups, such as alkyloxy, ester, and hydroxyl. The phase behaviors of these compounds were characterized by differential scanning calorimetry (DSC) and polarizing optical microscopy (POM). The compounds of these series exhibited different phases with changing temperature. This investigation revealed that the kind of chain at the side of the azobenzene moiety plays a more important role in the formation of LC phases than has previously been understood.
Chemicals and Characterization All chemicals were purchased from Sigma-Aldrich (New York, NY, USA) and Merck (Berlin, Germany), and used without further purification. All the compounds were characterized by 1H and 13C NMR spectroscopy on a 400 MHz Bruker Ultrashield spectrometer (Bruker, Germany) in CDCl3. Accurate mass spectra were obtained on a Thermo Scientific Q Exactive FTMS employing an ASAP probe (Thermo Fisher Scientific, Waltham, MA, USA). Photoisomerization was measured on a Cary 50-Bio UV-Vis spectrophotometer (Varian, Santa Clara, CA, USA) against a background of solvent in a quartz cuvette. Trans-to-cis isomerization was induced by an EXFO Acticure 4000 light source via a liquid light-guide working at 365 nm wavelength. The UV light intensity was about 3.8 mW/cm2. Cis-to-trans isomerization was induced by visible light at 12.1 mW/cm2. UV-vis data were collected every 30 s. POM was conducted using a Nikon Eclipse 80i (Nikon, Tokyo, Japan). Phase transition temperatures and enthalpies of the compounds were investigated using a TA-DSC Q100 instrument (TA Instruments, New Castle, DE, USA) under N2 atmosphere with heating and cooling rates of 10 °C min−1. Synthesis of Target Compounds Compound 1: 8.9 g of ethyl p-aminobenzoate was dissolved in 80 mL of a 12% solution of hydrochloric acid. The solution was cooled to 0 °C in an ice bath. Then 4.5 g of sodium nitrite in 30 mL of distilled water was added within 30 min, and 6.2 g of phenol was added. After 1 h, 200 mL of a saturated solution of sodium carbonate was added. The resulting mixture was cooled, and the precipitate was filtered off on a Büchner funnel. The product was isolated through recrystallization from ethanol. The powder obtained had an intense red-orange color and was dried in a vacuum. Compound 2a: Compound 1 (9.2 g), anhydrous K2CO3 (9.4 g) and KI (0.18 g) were dissolved in 120 mL of acetone under argon atmosphere. After 10 min of stirring, 1-bromobutane (5.6 g) was added dropwise via syringe while the solution was refluxing. The mixture was stirred overnight under reflux. After cooling down to room temperature, 200 mL of water was added. The product was extracted with dichloromethane. The organic layer was dried with MgSO4 and filtered, and then the solvent was evaporated. The product was recrystallized from ethanol and obtained as an orange powder. Yield: 86%. Mp: 87 °C. Compounds 2b-d were synthesized according to the same method used for compound 2a. 1H NMR data are shown in Figure 1. Compounds 3a-c were synthesized in accordance with the literature and our previous works. Data are not shown here [17][18][19][20][21].
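As a quick sanity check on the diazotization step for Compound 1, the quantities given (8.9 g ethyl p-aminobenzoate, 4.5 g NaNO2, 6.2 g phenol) correspond to roughly equimolar amounts, with a modest excess of nitrite and phenol; a small sketch using standard molar masses (the molar-mass values are textbook constants, not data from the paper):

```python
# Molar masses in g/mol (standard values)
MW = {
    "ethyl p-aminobenzoate": 165.19,
    "sodium nitrite": 69.00,
    "phenol": 94.11,
}
masses = {"ethyl p-aminobenzoate": 8.9, "sodium nitrite": 4.5, "phenol": 6.2}

for name, m in masses.items():
    mmol = 1000.0 * m / MW[name]
    print(f"{name}: {m} g -> {mmol:.1f} mmol")
# ~53.9 mmol amine vs ~65.2 mmol nitrite and ~65.9 mmol phenol,
# i.e. roughly a 1.2x excess of the coupling partners over the amine.
```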
Synthesis and Characterization The synthetic route to compounds 2a-d, bearing an azobenzene moiety and different side chains, is outlined in Scheme 1. The chemical structures of these compounds were characterized using a combination of 1H NMR and FTIR spectroscopy. Protons in the structures of compounds 2a-d are denoted by the letters "a-i" as shown in Figure 1. For compound 2a, with its C4 alkyl chain, the chemical shifts around δ = 6.90-8.30 ppm correspond to the protons of the azobenzene moiety, while the peaks at δ = 3.90-4.10 and δ = 5.35-5.60 correspond to methine protons (Figure 1a). As the alkyl chain of compounds 2a-d increased from C4 to C6, an obvious increase of the CH2 proton signal around 1.3 ppm was observed. As the alkyl chain further increased to C8 and C12 for compounds 2c-d, a corresponding increase of the signal was observed, which proved the compounds had been synthesized successfully. The chemical structures of compounds 2a-d were confirmed by the FTIR results. As shown in Figure 2, the absorption at 1730 cm−1 is attributed to the carbonyl group of the compounds. The absorption at 1600 cm−1 corresponds to the benzene skeleton vibration of the azobenzene. The FTIR bands at 2930 and 2850 cm−1 represent the anti-symmetrical and symmetrical stretching vibrations of the CH2 groups, respectively. An increase in the absorption intensity of the CH2 bands was observed, which illustrates the difference between the compounds. The presence of the above bands gives strong evidence of successful synthesis. Liquid Crystal Properties of the Synthesized Cm-Azo-Ester The structures of the compounds used in this paper are shown in Scheme 1. A POM with a heating stage and DSC were used to measure the melting temperature (Tm) and isotropization temperature (Ti) of the compounds. DSC thermograms for compounds 2a-d upon heating and cooling cycles are shown in Figure 3. Compounds 2a-d exhibited similar phase transition behaviors. For example, in the DSC measurements of compound 2c (Figure 3c), an exothermic peak appeared at 103 °C when the compound was cooled from the molten state, confirming a phase transition from the isotropic phase to the nematic phase. Another exothermic peak at 64 °C was assigned to crystallization. A further peak was observed at 80 °C, attributed to the nematic-smectic phase transition. Compound 2c was selected for the phase transition study by POM.
Figure 4 shows representative POM micrographs of compound 2c at different temperatures. Birefringent texture for the crystalline phase was observed when the temperature was below 88 °C (Figure 4a). When the temperature was increased to 90 °C, the POM texture of an enantiotropic smectic phase formed (Figure 4b). Upon further heating, the phase transformed into another smectic phase with a typical fan-shaped texture; it should be noted that this transition was not observed by DSC measurement (Figure 4c). After increasing the temperature to 102 °C, the transition from a smectic to a nematic phase was observed (Figure 4d). The isotropic phase formed at 106 °C, indicated by the disappearance of brightness under polarized light (Figure 4e). When cooling down from the melted isotropic liquid at high temperature, birefringence emerged again due to the formation of a nematic phase (Figure 4f). Upon further cooling, the phase transformed into a smectic phase (Figure 4g), and finally a solid crystal phase (spherulite texture) was regained at 64 °C (Figure 4h). According to the results of the POM and DSC, the phase transition temperatures of compounds 2a-d are listed in Table 1. The results show that all the compounds 2a-d have stable smectic-nematic phases over a broad temperature range, which is important for their potential application in the field of optoelectronics. The correlation between the phase transition temperature and side chain length is summarized in Figure 5. One notable feature was that, as the side chain length changed from C4H9 to C12H25, the nematic phase temperature range initially increased from C6H13 to C8H17; after that, a descending trend was observed as the spacer changed from C8H17 to C12H25. By contrast, the smectic phase temperature range was little influenced by the side chain length from C4H9 to C12H25. In order to study the effect of molecular structure on the formation of LCs, we synthesized some compounds which have different functional groups on the two sides of the azobenzene (Scheme 2). It has been reported that molecular shape and intermolecular forces are important to the formation of liquid-crystalline phases [22]. Ester and ether groups on both sides of the azobenzene can provide appropriate molecular interactions, such as hydrophobic interactions, π-π interactions, and van der Waals forces, which promote bonding between the donor and acceptor of the rigid parts and hence increase the effective length of the mesogenic groups. The increased effective length of the rigid mesogenic groups can lead to the easy formation of liquid-crystalline phases. By contrast, when the type of groups on the sides of the azobenzene was changed, clear structure-property relationships were observed: DSC and POM results showed that no liquid-crystalline phases were detected in compounds 3a-c. This clearly demonstrates that the covalently incorporated groups on the sides of the azobenzene play an important role in stabilizing the LC phase. Scheme 2. Molecular structure of compounds 3a-c and compound 2c.
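Phase-transition temperatures like those collected in Table 1 are read off as the extrema of the DSC heat-flow curve; a minimal sketch of automating this with scipy (the synthetic cooling curve below is illustrative, not the measured data):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic cooling thermogram: exotherms near 103 C (isotropic -> nematic)
# and 64 C (crystallization), standing in for a measured heat-flow trace.
T = np.linspace(120, 40, 801)  # temperature axis, degrees C
heat_flow = (np.exp(-((T - 103) / 1.5) ** 2)
             + 2.0 * np.exp(-((T - 64) / 2.0) ** 2)
             + 0.02 * np.random.default_rng(0).standard_normal(T.size))

# Prominence threshold separates real transition peaks from baseline noise.
peaks, _ = find_peaks(heat_flow, prominence=0.5)
print("transition temperatures (C):", np.round(T[peaks], 1))
```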
UV-Visible Spectra Earlier work reported that LCs incorporating an azobenzene moiety exhibit trans-cis photoisomerization behavior upon UV-visible (UV-Vis) light irradiation [23,24]. Figure 6 shows the UV-induced trans-cis isomerization of compound 2c. Its reversible photoresponsive properties were well demonstrated by in situ POM characterization. By irradiating the compound with UV light, it can be transformed from the anisotropic state to the isotropic state owing to the isomerization of the azobenzene. Switching to visible light induced the expected return to the anisotropic phase. UV-Vis spectra were employed to evaluate the isomerization of compound 2c in solution during the reversible changes under alternating UV and visible light irradiation. For example, the UV-Vis spectra of compound 2c in CH2Cl2 solution are shown in Figure 7 (the concentration was 0.002 wt %). The data were obtained every 30 s under UV light (365 nm, 3.8 mW/cm2) or visible light (12.1 mW/cm2). The compound exhibited an absorption maximum at 360 nm and a weak shoulder at 440 nm, which are related to the π-π* and n-π* transitions of the azobenzene trans and cis configurations, respectively [25]. Discussion A series of new LCs based on azobenzene was synthesized and their properties were studied through DSC and POM. The results show that the alkyloxy chains (m = 4-12) in the homologous series lead to different phase changes at different temperatures. Compounds 2a-d have clear smectic-nematic phases within a broad temperature range, making them potentially applicable to the field of optoelectronics. Three other compounds, which contain different substituent groups on the two sides of the azobenzene, have also been studied. DSC and POM studies revealed that these compounds do not exhibit any liquid crystallinity. In addition to our findings, the results obtained here will act as feedback to assist us with ongoing research. We are continuing our synthesis work to find out whether we can synthesize other examples of LCs based on azobenzene. Furthermore, we are developing methods to more efficiently predict materials with desirable applications. Figure 5. A plot showing the transition temperature as a function of spacer length on cooling.
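The spectra collected every 30 s under UV irradiation in the UV-Visible Spectra section can be reduced to a photoisomerization rate by fitting the decay of the trans π-π* band (~360 nm) to first-order kinetics; a minimal sketch, with synthetic absorbance values standing in for the measured series:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, a_inf, a0, k):
    """A(t) = A_inf + (A_0 - A_inf) * exp(-k t) for trans -> cis conversion."""
    return a_inf + (a0 - a_inf) * np.exp(-k * t)

t = np.arange(0, 300, 30.0)  # one spectrum every 30 s
a360 = (first_order(t, 0.15, 1.0, 0.02)
        + 0.005 * np.random.default_rng(1).standard_normal(t.size))

popt, _ = curve_fit(first_order, t, a360, p0=(0.1, 1.0, 0.01))
print(f"photostationary absorbance ~ {popt[0]:.2f}, rate k ~ {popt[2]:.3f} 1/s")
```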
2,948.4
2018-03-26T00:00:00.000
[ "Chemistry", "Materials Science" ]
High-Performance Color-Converted Full-Color Micro-LED Arrays: Color-converted micro-LED displays, consisting of mono-blue-colored micro-LED arrays and color-conversion materials, have been used to achieve full color while relieving the need for the transfer and epitaxial growth of three different-colored micro LEDs. An efficient technique is suggested to deposit the color-conversion layers on the blue micro LEDs by using a mixture of photo-curable acrylic and nano-organic color-conversion materials through the conventional lithography technique. We found that the color-conversion efficiency of the red dye was slightly lower than that of the green dye, which can be attributed to the lower absorbance of blue light by the red dye. With deposition beyond a thickness of 7 µm, the leakage of blue light was significantly decreased. These results show it is possible to use a mixture of photocurable acrylic and green/red dye as the color-conversion layer for full-color micro LEDs if a sufficient thickness of the color-conversion layer can be deposited. This study attempts to provide a solution for fabricating full-color micro-LED displays. Introduction The micro-LED display is a technology that has expanded rapidly in recent years because of its outstanding features such as low power consumption, quick response, long lifetime, and wide color gamut [1][2][3][4]. However, the fabrication of full-color micro-LED arrays remains a challenge. The most commonly used routes to fabricate these devices involve epitaxial growth of red, green, and blue (RGB) active materials on different substrates followed by chip fabrication, wafer dicing, and pick-and-place robotic manipulation into individually packaged components for interconnection by bulk wire bonding. However, dicing becomes more difficult as the chip size decreases beyond 100 µm × 100 µm, and integrating the chips on the same thin-film transistor-based glass substrates through mass transfer, which requires precise alignment for each pixel, becomes a challenge [5][6][7]. Moreover, the light-emission efficiencies and degradation rates of RGB micro LEDs are different; as a result, a complicated driving circuit may be needed to maintain the color-rendering index during operation. A simpler method to achieve full color without mass transfer of individual R, G, B chips is to utilize color filters to achieve RGB subpixels while generating white light by deposition of a yellow color converter on blue micro LEDs [5,6]. In this device configuration, the color downconversion layer does not need to be pixilated; thereby it becomes more feasible for manufacturing. However, the color filters would absorb two-thirds of the outgoing light. In addition, color crosstalk would occur, owing to scattering in the color-conversion layer. Another approach to avoid the mass-transfer process is the fabrication of full-color micro LEDs based on blue or UV micro LEDs and blue, green, and red color-conversion materials. For such color-conversion materials, nanophosphors and quantum dots (QDs) can be used [8][9][10][11].
For example, Han et al. [9] deposited RGB QDs on top of a UV LED array using the aerosol jet printing method to form individual subpixels. This configuration is advantageous in that it does not require a color filter and offers the possibility of achieving high efficiency and a wide color gamut. However, a thick layer of QDs with high optical density is required for high color-downconversion efficiency. In addition, longer processing times and special equipment are required. So, there is a need for a method that can deposit a large amount of color-conversion material on blue pixels in a single process with precise alignment for each pixel. Photolithography is a useful method to achieve this purpose. However, there has been no report on the fabrication of a micro-sized color-conversion layer based on photolithography. In this paper, we propose an efficient technique to fabricate full-color micro-LED pixels using a mixture of photocurable acrylic and nano-organic color-conversion materials via the conventional lithography technique. We can precisely control the position, size, and thickness of the color-conversion layers to fabricate full-color micro LEDs. This study attempts to provide a solution for fabricating full-color micro-LED displays. Figure 1 illustrates the device structure of color-converted full-color micro LEDs based on mono-blue-color micro LEDs. It consists of bottom blue micro-LED arrays and a layer of two color (green and red)-conversion materials. The total array size is 100 × 100 pixels in a chip area of 10 × 10 mm2. The chip size of the blue micro LED is 60 × 100 µm2 and the pitch length of each pixel is 300 µm. A GaN-based LED structure (operating at a dominant emission peak of 450 nm) was grown on a sapphire substrate using metal-organic chemical vapor deposition. The LEDs consisted of a 2 µm Si-doped n-GaN layer, a multiple quantum well (MQW) active layer consisting of five periods of undoped InGaN wells and undoped GaN barriers, and a 0.15 µm Mg-doped p-GaN layer. To fabricate the LEDs, the p-GaN layer was etched using an inductively coupled plasma (ICP) with Cl2 and BCl3 source gases until the n-GaN layer was exposed for the n-type contact. Further, the LEDs were fabricated using a 200 nm-thick indium tin oxide (ITO) layer as a transparent current-spreading layer. Layers of Cr and Au with thicknesses of 30 nm and 100 nm, respectively, were deposited by e-beam evaporation as the n- and p-pad electrodes. An SiO2 passivation layer was coated on the wafer for array electrode fabrication. The SiO2 layer on the n- and p-pads was then selectively etched by buffered oxide etchant (BOE) until the n- and p-pad electrodes were exposed for the array electrodes. Layers of Cr and Au with thicknesses of 100 nm and 500 nm, respectively, were deposited by e-beam evaporation as the n- and p-array electrodes. To avoid the color-crosstalk effect, a black matrix was deposited between the blue LEDs by using simple lithography techniques. Experimental Section To fabricate conformal color-conversion layers, green and red perylene bisimide dyes were mixed with an organic insulator material, a solution of acrylic resin, and a photoactive compound (PAC) of positive-tone diazonaphthoquinone (DNQ). Perylene bisimide is known to have strong absorption in the visible region, high fluorescence quantum yields (QY) in diluted solutions, and high photochemical stability [12][13][14][15].
N,N′-bis(4-bromo-2,6-diisopropylphenyl)-1,6,7,12-tetra[4-bromophenoxy]perylene-3,4,9,10-tetracarboxylic diimide and N,N′-bis(4-bromo-2,6-diisopropylphenyl)-1,6,7,12-tetra[4-iodophenoxy]perylene-3,4,9,10-tetracarboxylic diimide were used as the color-conversion layers for green and red emissions, respectively [12][13][14][15]. With the conventional lithography process, red and green color-conversion layers (CCLs) were successfully deposited on the blue LEDs. Results and Discussion Figure 2 details the transmittance of a 10 µm-thick photo-curable acrylic film, where the thickness depends on the spin-coating rate of the photo-curable acrylic, together with the UV-visible absorption and photoluminescence (PL) spectra of the green and red dyes. Figure 2a,b indicates that the photo-curable acrylic had high transmittance in the visible range and that the thickness could be increased by decreasing the spin-coating rate. For the CCLs, we initially attempted to mix conventional phosphors such as β-SiAlON:Eu2+ and CaAlSiN3:Eu2+ for green and red emissions. However, owing to agglomeration of the phosphor in the acrylic, selective patterning on the blue LEDs was not possible, as shown in Supplementary Figure S1a. This result was caused by the strong absorption of UV light by the aggregated phosphor particles. Therefore, understanding the dispersion of the color-conversion materials was important.
We determined that nano-sized organic materials were suitable for mixing with the photo-curable acrylic, as shown in Supplementary Figure S1. With patterned CCLs, it is possible to fabricate full-color arrays, as shown in Supplementary Figure S2. Before mixing the green and red nano-sized dyes into the photocurable acrylic, we measured the transmittance of the photocurable acrylic. The transmittance was almost 100% in the visible range, indicating that the photocurable acrylic would not absorb the green and red light produced by color conversion. The thickness of the acrylic could be increased by decreasing the spin-coating rate. To confirm the possibility of color conversion of the green and red dyes by blue light, we measured their absorption and PL spectra using 365 nm excitation. The results show that blue light could be absorbed by the green and red dyes, which emitted peaks at 550 nm and 620 nm, respectively, as shown in Figure 2c,d. These results indicate that the green and red dyes can be used as the color-conversion layer for full-color micro LEDs based on color-converted blue LEDs. To identify the optimum conditions for the CCL, we attempted to determine the optimum mixing ratio between the photocurable acrylic and the dye, and the optimum thickness.
To find the optimum mixing ratio between acrylic and dye, the conversion efficiency of blue to red or green light was measured, as shown in Figure 3a,b. On increasing the mixing ratio to 0.6 wt %, the conversion efficiency improved by 47% and 35% for the green and red dyes, respectively. However, when the mixing ratio was increased further, the conversion efficiency decreased significantly, owing to aggregation of the dye. The aggregation of the dye can be attributed to excimer formation and concentration-induced quenching [16]. To find the optimum thickness for color conversion, we measured the PL spectra of the green and red dyes as a function of the thickness of the mixture of photocurable acrylic and green/red dye. Figure 3c,d shows the PL spectra of the green and red dyes together with the emission of the blue LED. These spectra show that the blue leakage decreased while the green or red emission increased with increasing thickness of the color-conversion layer. The full width at half maximum (FWHM) of the green and red dye emissions was 78 nm and 52 nm, respectively. Figure 3e,f shows the normalized PL intensity of blue light and of green or red light for the green and red dyes, respectively. When the thickness of the CCL was 3 µm, there was leakage of blue light, as shown in Figure 3c,d. On increasing the thickness of the color-conversion layer, the leakage of blue light decreased significantly and the green or red emission increased, as shown in Figure 3e,f. We found that the color-conversion efficiency of the red dye was slightly lower than that of the green dye. This result can be attributed to the lower absorbance of blue light by the red dye compared to that of the green dye. With deposition beyond a thickness of 7 µm, the leakage of blue light was significantly decreased. These results show it is possible to use a mixture of photocurable acrylic and green/red dye as the color-conversion layer for full-color micro LEDs if a sufficient thickness of the color-conversion layer can be deposited. Figure 3. PL intensity and conversion efficiency of (a) green and (b) red dyes depending on the mixing ratio between acrylic and dye; PL spectra of (c) green and (d) red dyes depending on the thickness of the color-conversion layer; and normalized PL intensity of blue and green or red light for (e) green dyes and (f) red dyes. Figure 4 presents the optical microscope image of the RGB array and its emission image, and the dependence of the electroluminescence (EL) spectra on the input current, as well as the color coordinate variations. As shown in Figure 4a, the red and green dyes were selectively deposited only on the blue LEDs. Figure 4b shows the emission image of the full-color micro-LED arrays with an input current of 1 mA for each LED. The EL spectra of the full-color micro LEDs are presented in Figure 4c. We found that the blue emission peak was slightly red-shifted, which can result from Joule heating. In addition, with the broadening of the spectra of the green and red dyes, there was overlapping of peaks. It is also possible to induce unwanted light emission by absorption of surrounding light. Although this method has many advantages, the efficiency of color conversion needs improvement. A visible light filter can be used to avoid absorption of the surrounding light, and approaches such as aggregation-induced emission are needed to improve the efficiency of the color-conversion materials [17,18].
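The blue-leakage behaviour in Figure 3 can be quantified by splitting a measured spectrum at a cutoff between the blue peak (450 nm) and the converted emission (550/620 nm) and integrating each part; a minimal sketch with an illustrative synthetic spectrum (the 500 nm cutoff is an assumption for the example, not a value from the paper):

```python
import numpy as np

def blue_leakage_fraction(wavelength_nm, intensity, cutoff_nm=500.0):
    """Fraction of total emitted power below the cutoff (i.e. unconverted blue)."""
    blue_mask = wavelength_nm < cutoff_nm
    blue = np.trapz(intensity[blue_mask], wavelength_nm[blue_mask])
    total = np.trapz(intensity, wavelength_nm)
    return blue / total

# Synthetic spectrum: residual blue peak at 450 nm plus green emission at 550 nm.
wl = np.linspace(400, 700, 601)
spec = 0.2 * np.exp(-((wl - 450) / 10) ** 2) + 1.0 * np.exp(-((wl - 550) / 30) ** 2)
print(f"blue leakage ~ {blue_leakage_fraction(wl, spec):.1%}")
```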
Conclusions We demonstrated full-color LEDs using a conventional lithography technique based on a solution of acrylic resin and PAC with green and red perylene bisimide dyes. With monolithic blue LEDs, it is possible to fabricate full-color RGB micro-LED arrays. The conventional lithography technique makes it simple to apply this approach to mass production in a controlled manner. This study provides a route toward fabricating full-color micro-LED displays. Although this method has many advantages, the overall intensity and color-conversion efficiency are low, and the full width at half maximum of the red and green emissions is too broad for use in a full-color display. However, we believe that these problems can be solved by improving the efficiency of the micro blue LED through optimization of internal quantum efficiency, extraction efficiency, and current spreading, and by a decrease in heating. In addition, the color-conversion efficiency can be improved using aggregation-induced emission approaches and the synthesis of high-efficiency fluorescent materials with a narrow emission spectrum. Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Figure S1: Optical microscope images after lithography with (a) photocurable acrylic and β-SiAlON:Eu2+ and (b) photocurable acrylic and green dye. Conflicts of Interest: The authors declare no conflict of interest.
Transcriptional responses in the hepatopancreas of Eriocheir sinensis exposed to deltamethrin Deltamethrin is an important pesticide widely used against ectoparasites. Deltamethrin contamination has resulted in a threat to the healthy breeding of the Chinese mitten crab, Eriocheir sinensis. In this study, we investigated transcriptional responses in the hepatopancreas of E. sinensis exposed to deltamethrin. We obtained 99,087,448, 89,086,478, and 100,117,958 raw sequence reads from the control 1, control 2, and control 3 groups, and 92,094,972, 92,883,894, and 92,500,828 raw sequence reads from the test 1, test 2, and test 3 groups, respectively. After filtering and quality checking of the raw sequence reads, our analysis yielded 79,228,354, 72,336,470, 81,859,826, 77,649,400, 77,194,276, and 75,697,016 clean reads with a mean length of 150 bp from the control and test groups. After deltamethrin treatment, a total of 160 and 167 genes were significantly upregulated and downregulated, respectively. The gene ontology terms "biological process," "cellular component," and "molecular function" were enriched with respect to cell killing, cellular process, other organism part, cell part, binding, and catalytic activity. Pathway analysis using the Kyoto Encyclopedia of Genes and Genomes showed that the metabolic pathways were significantly enriched. We found that the CYP450 enzyme system, carboxylesterase, glutathione-S-transferase, and material (including carbohydrate, lipid, protein, and other substances) metabolism played important roles in the metabolism of deltamethrin in the hepatopancreas of E. sinensis. This study revealed differentially expressed genes related to insecticide metabolism and detoxification in E. sinensis for the first time and will help in understanding the toxicity and molecular metabolic mechanisms of deltamethrin in E. sinensis. Introduction Pyrethroids are synthetic pesticides, first derived from the extract of the flowers of Chrysanthemum cinerariaefolium, and they are widely used for agricultural and residential control of pest insects because of their selectivity and low toxicity against non-target organisms such as mammals and birds [1][2][3][4]. However, pyrethroids are highly toxic to fishes: the LD50 for fishes is 10 to 1000 times lower than the corresponding values for mammals and birds [5][6][7][8]; this means that pyrethroid contamination can make aquatic animals the targets of pyrethroid intoxication. Pyrethroids are divided into two subgroups based on their chemical structure: type I and type II [9]. Deltamethrin, [(S)-cyano-(3-phenoxyphenyl)-methyl] (1R,3R)-3-(2,2-dibromoethenyl)-2,2-dimethyl-cyclopropane-1-carboxylate, is a type II pyrethroid. Because of its short half-life in the environment and in organisms, deltamethrin has become one of the most commonly used pesticides [10][11][12]. Many studies on the ecotoxicology of deltamethrin in fishes have been conducted [13][14][15][16][17][18][19], and the World Health Organization [20] determined that the lethal concentration of deltamethrin for fishes exposed for 96 h is 0.4-2 μg·L−1. Crustaceans are more sensitive to pyrethroids than fishes [21], and comparative tests have shown that deltamethrin is most toxic to crustaceans [22]. The 96-h LC50 value of deltamethrin for the pink shrimp (Penaeus duorarum) was 0.35 μg·L−1 [23], and Smith and Stratton [24] concluded that shrimps and lobsters are susceptible to all pyrethroids.
The Chinese mitten crab, Eriocheir sinensis, is an important crustacean species in China, and its culture in facilities started in the early 1980s [25]. Because of an increasing demand for E. sinensis in the food market, commercial production is rapidly expanding, with an annual output worth approximately US$4 billion in Jiangsu Province, China [26]. The wide use of deltamethrin has posed a threat to crab breeding, as pyrethroids may be used for pond cleaning [27]. Biodegradation of pyrethroids in crustaceans has been reported [28]; however, there is currently no information on the effects of deltamethrin on E. sinensis, and the exact mode of action of deltamethrin in E. sinensis is unknown. Since alterations in gene expression after external stimuli are rapid, transcriptional responses to pharmaceutical drugs may help us to understand how an organism responds to a particular drug [29]. High-throughput RNA sequencing (RNA-Seq) has become a conventional and highly effective technology for analyzing gene expression and identifying novel transcripts and differentially expressed genes [30]. RNA-Seq has been widely used to study various invertebrates, such as E. sinensis [31,32], Litopenaeus vannamei [33], and Crassostrea gigas [34]. The hepatopancreas is not only a digestive gland but also an immune organ, and it plays an important role in the innate immune system and in the metabolism of xenobiotics. It is the primary site for synthesizing and excreting immune molecules, such as lectins or lectin-related proteins [35], antibacterial peptides [36], and beta-1,3-glucan-binding protein [37]. Transcriptome analysis of L. vannamei and Litopenaeus setiferus by expressed sequence tag analysis and novel gene discovery has revealed that the hepatopancreas plays a vital role in nonspecific immunity, and the cDNA library for the hepatopancreas is more representative than the library for hemocytes [35]. Li et al. [38] used RNA-Seq to construct a non-normalized cDNA library for the hepatopancreas of E. sinensis and to identify immune-associated genes after bacterial infection. Huang et al. [39] assembled and annotated a comprehensive de novo transcriptome of the E. sinensis hepatopancreas and characterized differentially expressed genes (DEGs) enriched at different molting stages. These studies were performed without a reference genome. However, Song et al. [40] performed, for the first time, genome sequencing, assembly, and annotation for E. sinensis. All the above-mentioned studies provide valuable information for studying important processes in E. sinensis and the molecular mechanisms underlying the adaptive changes made by E. sinensis after stimulation with xenobiotics. The present study was conducted to analyze the transcriptome of the hepatopancreas of E. sinensis exposed to deltamethrin by using Illumina sequencing and bioinformatic analysis with the reference genome. The objective of this study was to annotate functional genes by using transcriptome analysis, analyze the short-term molecular effects of deltamethrin on E. sinensis, and evaluate the transcriptomic response of E. sinensis after exposure to deltamethrin. Our study helps in understanding the biological functions of the hepatopancreas of E. sinensis and provides a reference for further research on E. sinensis. Characterization of immune molecules and analysis of insecticide metabolism are crucial for healthy crab breeding and for establishing guidelines that help farmers avoid inappropriate use of insecticides.
Statement of ethics This study was strictly performed in accordance with the guidelines on the care and use of animals for scientific purposes established by the Institutional Animal Care and Use Committee of Shanghai Ocean University, Shanghai, China. Maintenance and treatment of E. sinensis In October 2016, healthy mature male crabs (average weight, 114.04 ± 9.14 g) were collected from the crab breeding base (32.864256110˚N, 119.866714447˚E) of Tao Huadao Agricultural Development Co., Ltd. in Anfeng Town, Xinghua City, northern Jiangsu Province, China. The crabs were transported to the laboratory in polystyrene boxes filled with cultivation water, which was aerated during transportation with an aeration pump. Upon arrival, the crabs were cultured in experimental tanks (capacity, 90 L) with 20 L of natural water under controlled temperature (23-25˚C), pH (7.25 ± 0.25), and dissolved oxygen (6.85 ± 0.15 mg·L−1) for 2 weeks. During the acclimation period, the crabs were fed a commercial crab feed (37% crude protein) twice a day and were starved 1 day before the experiment. After 2 weeks of acclimation, 60 healthy crabs were separated into 6 groups (3 test and 3 control groups); the crabs in the test groups were given a 40 min bath in 3 ppb of deltamethrin (95% purity, Sigma; dissolved in acetone to prepare a stock solution of 25 mg·L−1), and the crabs in the control groups were exposed to the same concentration of acetone as that used for the test groups. Each experiment was performed in triplicate. After 40 min, the crabs were dissected on ice, and the hepatopancreas was collected in liquid nitrogen and stored at -80˚C for RNA extraction. RNA isolation, cDNA library construction, and sequencing Total RNA was isolated from the hepatopancreas with TRIzol reagent (Invitrogen, USA), according to the manufacturer's instructions. DNA contaminants were removed using RNase-free DNase I (TaKaRa Biotechnology, Dalian, China), and the total RNA was eluted in 100 μL of RNase-free MilliQ H2O. The RNA was then stored at -80˚C until further processing. The RNA quality was checked with a NanoDrop ND-1000 UV-vis spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA), and RNA integrity was determined using the RNA 6000 Nano LabChip kit (Agilent Technologies) and an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA). We isolated poly(A) mRNA with oligo-dT beads and Oligotex mRNA kits (Qiagen). The mRNA was treated with fragmentation buffer, and the cleaved RNA fragments were used as templates to synthesize the first-strand cDNA with reverse transcriptase and random hexamer primers. The second-strand cDNA was synthesized with RNase H and DNA polymerase I. These double-stranded cDNA fragments were end-repaired with T4 DNA polymerase, Klenow fragment, and T4 polynucleotide kinase, followed by the addition of a single "A" base with Klenow exo− (3′-5′ exo minus) polymerase. Then, the fragments were ligated to an adapter or index adapter by using T4 quick DNA ligase. The adaptor-ligated fragments were size-selected using agarose gel electrophoresis. The desired cDNA fragments were excised from the gel, and PCR was performed to amplify the fragments. After validation using the Agilent 2100 Bioanalyzer and an ABI StepOnePlus Real-Time PCR System (ABI, California, USA), the cDNA library was constructed on a flow cell and finally sequenced in high-throughput mode on an Illumina HiSeq 2500 unit (Illumina, San Diego, USA).
Illumina sequencing, data processing, and quality control The low-quality reads were filtered out and 3′ adapter sequences were removed using Trim Galore. The content and quality of the remaining clean reads were evaluated using FastQC software (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/). Then, we performed a comparative analysis against the reference genome of E. sinensis. For each sample belonging to the test and control groups, sequence alignment with the reference genome sequences was conducted using TopHat [41]. Identification of DEGs To evaluate the expression levels of the transcripts in the different groups, we used RSEM software with default parameter settings [42] to estimate the expression level (relative abundance) of a specific transcript in fragments per kilobase of transcript per million fragments mapped (FPKM) [43]. The expression level of each transcript was transformed as log2(FPKM + 1). We used DESeq software to screen the DEGs and calculate the fold change of each transcript [44]. A two-fold change was used as the expression threshold, and p-values < 0.05 were considered statistically significant. GO functional annotation and enrichment analysis for DEGs To analyze the potential functions of the DEGs, we first annotated the DEGs against the UniProt database (http://www.uniprot.org/). Then, we analyzed the functional annotation by using gene ontology terms (GO; http://www.geneontology.org) with Blast2GO (https://www.blast2go.com/) [45]. All the DEGs were mapped to GO terms in the GO database, and the number of genes for every term was calculated. The hypergeometric test was used to detect the enriched GO terms of DEGs when compared with the transcriptome background. The formula used was the standard hypergeometric test statistic, P = 1 − Σ_{i=0}^{m−1} [C(M, i) · C(N − M, n − i) / C(N, n)], where C(·, ·) denotes the binomial coefficient, N represents the number of genes with GO annotation, n is the number of DEGs in N, M is the number of genes annotated to a specific GO term, and m is the number of DEGs in M. The calculated p-value was subjected to Bonferroni correction. A corrected p-value of 0.05 was set as the threshold of statistical significance, and GO terms were considered significantly enriched in the DEGs when the corrected p-value was <0.05. Pathway analysis of DEGs Pathways of the DEGs were annotated using blastall (http://nebc.nox.ac.uk/bioinformatics/docs/blastall.html) against the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. We identified the enriched DEG pathways with the same formula as that used for the GO analysis, where N represents the number of genes with KEGG annotation, n represents the number of DEGs in N, M represents the number of genes annotated to specific pathways, and m is the number of DEGs in M. Quantitative reverse-transcription PCR verification We used quantitative reverse-transcription (qRT)-PCR to verify the expression levels of DEGs identified using the RNA-Seq analysis. The primers were designed with Primer 5 software, and β-actin of E. sinensis was used as the internal control to normalize the expression levels. All the experiments were performed in triplicate. Each reaction was performed in a 25 μL volume comprising 2 μL of cDNA, 12.5 μL of SYBR Premix Ex Taq (TaKaRa), 9.5 μL of RNase-free H2O, and 0.5 μL each of the forward and reverse primers (10 μmol·L−1). The thermal cycling program was as follows: 95˚C for 30 s, followed by 40 cycles of 95˚C for 5 s, 60˚C for 30 s, and 72˚C for 30 s. A melting curve analysis was conducted at the end of the qRT-PCR to confirm PCR specificity.
The expression levels were analyzed with the 2^−ΔΔCT method. Results Illumina sequencing and quality assessment Analysis of DEGs To investigate the DEGs of E. sinensis exposed to deltamethrin, the Cuffdiff program was used to generate the gene expression profiles (Figs 1 and 2). This program identified 160 significantly upregulated DEGs and 167 significantly downregulated DEGs. These results indicate that deltamethrin affected gene expression in E. sinensis. KEGG pathway analysis of DEGs The DEGs were mapped to the KEGG database to further investigate their biological functions and the important pathways against the whole-transcriptome background; 327 unigenes were mapped to 262 pathways. Some genes were found in multiple pathways, and some genes were restricted to a single pathway. The 20 most-enriched KEGG pathways of the DEGs were noted; metabolic pathways (31 genes) were the most significantly enriched, followed by glycine, serine, and threonine metabolism (8 genes); ribosome biogenesis in eukaryotes (6 genes); chemical carcinogenesis (5 genes); pyruvate metabolism (4 genes); drug metabolism-cytochrome P450 (3 genes); metabolism of xenobiotics by cytochrome P450 (3 genes); and glycerolipid metabolism (3 genes) (Fig 4). DEGs involved in metabolic pathways Metabolic pathways, biosynthesis of amino acids, and biosynthesis of secondary metabolites are all metabolism-related biological pathways. In our study, the lipid metabolism-related pathways included glycerophospholipid metabolism (1 DEG), glycerolipid metabolism (3 DEGs), steroid biosynthesis (2 DEGs), fatty acid biosynthesis (1 DEG), fatty acid elongation (2 DEGs), biosynthesis of unsaturated fatty acids (2 DEGs), synthesis and degradation of ketone bodies (1 DEG), fatty acid degradation (2 DEGs), fat digestion and absorption (1 DEG), and steroid hormone biosynthesis (2 DEGs). The DEGs involved in the lipid metabolism-related pathways were downregulated, except for VN_GLEAN_10002420|K11262 (fatty acid biosynthesis) and VN_GLEAN_10004144|K13368 (steroid hormone biosynthesis). The DEGs involved in glutathione metabolism (3 DEGs) were all downregulated. Besides the effect of deltamethrin on lipid and glutathione metabolism, we found that deltamethrin treatment affected carbohydrate metabolism, including glycolysis/gluconeogenesis (3 DEGs), the pentose phosphate pathway (1 DEG), amino sugar and nucleotide sugar metabolism (3 DEGs), and fructose and mannose metabolism (2 DEGs). We should point out that the DEGs of the pentose phosphate pathway were upregulated, which may indicate that the metabolism of deltamethrin is an energy-consuming process. We also found that deltamethrin had an effect on protein metabolism: the DEGs involved in ubiquitin-mediated proteolysis (2 DEGs) were downregulated, while the DEGs involved in protein digestion and absorption (2 DEGs) were upregulated. The drug metabolic pathways comprised metabolism of xenobiotics by cytochrome P450 (3 DEGs) and drug metabolism-other enzymes (2 DEGs). Interestingly, all of these DEGs were downregulated. These findings indicated that deltamethrin treatment affected not only material (carbohydrate, lipid, protein, and other substances) metabolism but also the detoxification mechanism of the hepatopancreas. DEGs involved in signal transduction We found that many DEGs were associated with signal transduction pathways.
The DEGs involved in the calcium signaling pathway (1 DEG), axon guidance (3 DEGs), and the MAPK signaling pathway were upregulated. Deltamethrin is neurotoxic, and it affects the nervous system by modifying the Na+ channels to remain open, resulting in prolonged Na+ transport along the membranes of nerve cells. The DEGs of axon guidance were all upregulated and may be associated with the toxic mechanism of deltamethrin. The MAPK signaling pathway can be found in all eukaryotes, and mitogen-activated protein kinase signaling cascades include extracellular signal-regulated kinase, MAPK kinase, and MAPK kinase kinase. Most of the available information on the MAPK signaling pathway is focused on vertebrates, in which this pathway is significant for the stress response, inflammation, cell development, reproduction, and differentiation. The MAPK signaling pathway transduces extracellular signals to the cytoplasm and nucleus by substrate phosphorylation, and the physiological process is controlled by a conserved kinase cascade [46]. However, there is limited information on the MAPK signaling pathway in aquatic invertebrates. To date, Lin et al. [47] found that the MAPK signaling pathway influenced Trichomonas vaginalis-induced proinflammatory cytokines in the shrimp Penaeus monodon; Feld et al. [48] found that the MAPK signaling pathway participated in neural plasticity in the crab Chasmagnathus; Zhu et al. [49] reported that the MAPK signaling pathway was involved in enrofloxacin metabolism in E. sinensis; and Li et al. [38] found that the MAPK signaling pathway played an important role in microbe-challenged E. sinensis. In our study, we found that the MAPK signaling pathway was associated with deltamethrin metabolism, and the expression levels of the genes involved in the pathway were upregulated. These results show that the MAPK signaling pathway is involved in xenobiotic metabolism; however, the mechanism of action needs to be studied further. Differential expression verification of DEGs qRT-PCR was used to verify the gene expression profiles, and 15 genes were suggested to be related to the deltamethrin treatment after the GO and KEGG analyses. The primer sequences for all the examined genes are listed in Table 3. The qRT-PCR verification results for the 15 genes were consistent with the results of RNA-Seq, except for 1 gene (VN_GLEAN_10005348; Fig 5). These results indicate that the RNA-Seq results are generally reliable, although further experiments are required to verify the results of this study. Discussion Deltamethrin (3 ppb) is often applied as a 30-40 min bath [10,50]. We designed this study to evaluate the effects of deltamethrin on E. sinensis, which is a very economically important crab species in China. Transcriptome sequencing has become a powerful technique for studying the mechanisms underlying changes in the biological characteristics of an organism. When compared with traditional methods of analyzing the hepatopancreas of E. sinensis, our study produced more sequencing reads [51,52] and was useful in yielding genomic resources and molecular information on E. sinensis exposed to deltamethrin. Detoxification mechanism of the hepatopancreas Deltamethrin is a type II pyrethroid used against ectoparasites. Biodegradation of pyrethroids is principally catalyzed by P450 enzymes, carboxylesterases, and glutathione-S-transferase (GST) [10].
Biodegradation of deltamethrin occurs through hydrolysis of the central ester via carboxylesterases [9] and through oxidation by cytochrome P450 enzymes [53]; however, the metabolites differ among species [54]. Wheelock et al. [55] found that carboxylesterase activity was important for removing pyrethroid toxicity and could be used as an indicator for pyrethroids during toxicity identification. Biodegradation of pyrethroids through oxidation, with increased levels of reactive oxygen species (ROS), has been reported in crustaceans and fishes [15,28]. Hence, elimination of ROS can alleviate oxidative stress and subsequently decrease toxicological damage [56]. Transcriptomic expression of the genes related to the antioxidant system, including non-enzymatic and enzymatic antioxidants, plays an important role in protecting organisms from damage. It is important to note that GST belongs to the group of transferases responsible for the biodegradation of pesticides, and the activity of this enzyme is associated with the metabolism of insecticides [57]. These transferases catalyze the conjugation of electrophilic compounds to the thiol group of reduced glutathione, producing more soluble products for excretion and, hence, directly decreasing the levels of this antioxidant [58]. We found that the expression levels of the CYP450 and glutathione metabolism genes were downregulated, which means that CYP450 and glutathione are relevant to deltamethrin metabolism. Our results were consistent with those of previous studies: Banka et al. [59] found that CYP450 enzymes were inhibited by deltamethrin in vivo and in vitro, and Abdel-Daim et al. [60] revealed that deltamethrin could induce lipid peroxidation and glutathione reduction. However, Erdogan et al. [61] found a significant increase in CYP450 in the muscle of rainbow trout exposed to deltamethrin, and Olsvik et al. [62] reported that the levels of the non-enzymatic antioxidant glutathione were increased in all the tissues exposed to deltamethrin. The reasons for these discrepancies are unclear, but differences in exposure duration, deltamethrin concentration, and the species and tissues studied may explain the findings. Candidate genes involved in energy and material metabolism In our study, we found that the gene involved in the pentose phosphate pathway was upregulated, which indicated that deltamethrin disrupted carbohydrate metabolism. Abdel-Daim et al. [60] showed that serum glucose and cholesterol levels were significantly increased because of deltamethrin toxicity in the Nile tilapia, and El-Sayed & Saad [63] found that a reduction in hepatic glycogen levels accompanied by an increase in plasma glucose levels was a common reaction against toxic insult followed by metabolic stress in fishes. The hyperglycemic effect after pyrethroid treatment in the Nile tilapia suggests effects of deltamethrin on the glycolytic and glycogenesis pathways [64]. Our study also confirmed that deltamethrin interfered with carbohydrate metabolism. Lipids are stored in droplets, which consist of a triglyceride core surrounded by a layer of phospholipids and embedded proteins, and they represent a major component of the fat body and the main source of metabolic fuel [65]. Phospholipids are a major component of biomembranes, and most pesticide preparations are characterized by high lipophilicity. The cell membrane is the principal site where pesticides act, and Cengiz et al.
[66] found that deltamethrin increased the peroxidation of structurally important unsaturated fatty acids within the phospholipid structure of the membrane, resulting in cell membrane damage. In the enrichment analysis in our study, "glycerophospholipid metabolism" and "glycerolipid metabolism" were the important pathways related to lipid metabolism in E. sinensis, and the genes involved in these two pathways and in the biosynthesis of unsaturated fatty acids were all downregulated. We found that deltamethrin not only disrupted lipid storage and mobilization but also influenced protein metabolism. The genes involved in protein digestion and absorption were upregulated, and the genes involved in ubiquitin-mediated proteolysis were downregulated. Thus, deltamethrin disrupted carbohydrate, lipid, and protein metabolism, which would induce abnormalities in metabolism and nutrient absorption. Effects of deltamethrin on signal transduction The main mechanism of deltamethrin as a pesticide is believed to result from its binding to a specific receptor site on voltage-dependent Na+ channels, leading to prolonged Na+ transport along the membranes of nerve cells and neural hyperexcitation [5]. In this study, we found that the genes of the Na+- and Cl−-dependent GABA transporter were upregulated. In addition, the genes involved in the glutamatergic synapse, calcium signaling pathway, and cholinergic synapse were all upregulated. Previous studies have revealed that many target sites other than the Na+ channel may be linked to the toxicity of deltamethrin. The voltage-dependent Cl− channel has been proposed as a target of pyrethroids [67]. Voltage-sensitive Cl− channels are found in many tissues, and their function is to control membrane excitability. Na+ and Cl− conductance have reciprocal effects on membrane excitability [68]. A previous study has shown that pyrethroids can act on GABA-gated Cl− channels [69], and this effect probably contributes to the seizures that accompany severe type II poisoning. Several other studies have suggested a role of the GABA-A receptor-ionophore complex in type II pyrethroid toxicity [70,71]. Glutamate is an important excitatory neurotransmitter; L-glutamate activates several subtypes of receptors, leading to an increase in the intracellular calcium-ion concentration. Many responses to glutamate in the central nervous system can be directly attributed to these changes in membrane polarization and calcium ions. Radcliffe & Dani [72] reported that strong, brief stimulation of nicotinic acetylcholine receptors enhanced hippocampal glutamatergic synaptic transmission on two independent time scales and altered the relationship between consecutively evoked synaptic currents. In this study, we also found that deltamethrin could increase glutamatergic synaptic transmission. Acetylcholinesterase (AChE) activity has also been used as a biomarker for pyrethroids [73]; AChE plays a dominant role in cholinergic neurotransmission, hydrolyzing the neurotransmitter acetylcholine at the cholinergic synapses [74]. Tu et al. [28] reported inhibition of AChE in the black tiger shrimp exposed to deltamethrin; we likewise found that the genes involved in the cholinergic synapse were upregulated. Taken together, these findings show that deltamethrin affected nerve impulse transmission. Pyrethroids primarily act on the central nervous system. Their exact mechanism of action remains unclear because pyrethroids interact not only with receptors but also with a wide range of ion channels.
Nevertheless, it is clear that pyrethroids are likely to interact reversibly with ion channels, depending on their phosphorylation state, and that the Na+ channels are a major target. Conclusions Because of its rapid metabolism and low toxicity in non-target animals and humans, deltamethrin has been widely used in many countries against pests [18,56]. However, because of its properties, deltamethrin can cause enormous damage to aquatic animals. Alterations in the expression levels of CYP450, carboxylesterase, and GST could disturb the metabolism of deltamethrin, which may cause imbalances in detoxification processes and modifications in the synthesis and degradation of endogenous molecules with key biological activities. In the present study, deltamethrin suppressed the gene expression levels of the antioxidant enzymes and glutathione. Deltamethrin is neurotoxic, and it also disrupted the metabolism of carbohydrates, lipids, and proteins in E. sinensis, affecting nutrient absorption and metabolism and making E. sinensis vulnerable to pathogen infections and environmental changes. In this study, we have elucidated the genes that were regulated when E. sinensis was exposed to deltamethrin; however, further studies are required to reveal the specific mechanism underlying deltamethrin detoxification, especially with respect to post-transcriptional processes. Data archiving The sequencing reads are available in the NCBI SRA database (SRP105235). Supporting information S1
Multi-Sensor Platform for Indoor Mobile Mapping: System Calibration and Using a Total Station for Indoor Applications This paper addresses the calibration of mobile mapping systems and the feasibility of using a total station as a sensor for indoor mobile mapping systems. For this purpose, the measuring system of HafenCity University in Hamburg is presented and discussed. In the second part, the calibration of the entire system is described with regard to the interaction of the laser scanners and the other parts of the system. Finally, a preliminary analysis of the use of a total station in conjunction with the measurement system is presented. The difficulty of time synchronization is also discussed. In multiple tests, a comparison was made against a reference solution based on GNSS. Additionally, the suitability of the total station was considered for indoor applications. Introduction In recent years, mobile mapping systems have become a field of research in geodesy. A mobile LiDAR survey system makes it possible to acquire comprehensive environmental data in a time-saving manner using laser scanners mounted on moving platforms (car, plane, ship). All available mobile mapping systems consist of multiple sensors and can be classified according to two functions: the acquisition of environment data and the acquisition of trajectories. For the acquisition of the environment, preferably one or more laser scanners are used. These scanners are mounted either perpendicular to the main direction of movement or, with two scanners, slightly tilted. This latter version is called a butterfly arrangement because of the orientation of the profile planes. For trajectory acquisition, a system can be equipped with three sensors. A GNSS antenna is deployed to provide the absolute position over time. The inertial sensors measure the spatial orientation and the highly dynamic motion of the system at a very high data rate. The third widely used sensor is the odometer. An odometer allows precise determination of the traveled distance. Together with the inertial sensor, the odometer is used to compensate for short-term GNSS measurement gaps. Recent studies on MLS systems and their accuracy can be found in El-Sheimy [1], Kaartinen et al. [2], Brenner [3], Haala et al. [4], Hassan and El-Sheimy [5], or Puente et al. [6]. An almost complete overview of the state of the art of mobile mapping systems can be found in Petrie [7] or Graham [8]. Research and development on multisensor mobile mapping systems has been performed since the early 1990s. The development of these systems has greatly accelerated in recent years, in part because of technologies such as Google and its Street View program. One of the latest application trends is to deploy such systems in indoor environments. If mobile mapping systems are used in buildings, GNSS measurements are no longer available. As a result, no absolute position can be provided directly by the system itself. This drawback has a serious impact on the inertial measurement unit (IMU), because pure inertial navigation solutions tend to drift. The magnitude and speed of the drift depend on the quality of the measurement system. This circumstance has led researchers and developers to consider other aiding sensors and measurement techniques that can substitute for GNSS receivers. At HafenCity University, Hamburg, a modular test platform is being developed that allows for the study of various new aiding sensors.
This platform serves not only as a means to explore new sensors for indoor applications, but also to ensure that the system is suitable for outdoor use, which is the current state of the art. In this paper, the first results using a total station are reported. In addition, the calibration procedure, which takes place outdoors, is described. Measurement System To enable the fusion of proven and new sensors, a measurement platform is currently under development that can integrate a variety of sensors. Due to its modular design, the measuring system can be adapted to face individual challenges. As a research priority, our focus is on the use of a car outdoors and a small trolley indoors. The core sensor in any scenario is always a high-quality IMU (the IMAR RQH1003 is used in this research), which enables the data recording of various other sensors (odometers and GNSS modules) and the synchronization of all sensors using a clock pulse (pulse-per-second, PPS). The IMU is topologically the central point of the system, and its body coordinate system can be used to define the platform coordinate system, in which the X-axis points in the moving direction of the vehicle, the Y-axis points to the left and perpendicular to the X-axis, and the Z-axis is perpendicular to both the X and Y axes to form a right-handed coordinate frame. The positions and boresight angles of the other sensors can be defined with respect to the IMU body frame. Table 1 shows the deployable hardware modules. The systems may differ according to the working environment or application. Here, the system differs for the car (outdoors) and the trolley (indoors). Outdoor The current configuration for outdoor applications includes an IMU sensor and a GNSS receiver, as is usual for MMS, an approved off-road odometer, and a laser scanner. The odometer's position is determined via a calibration process using a total station. The data recording and synchronization of the individual sensor modules are automatically controlled by the IMU. The GNSS module is linked to the IMU through the lever arm of the antennas, which can be determined with respect to the IMU center using a total station. The GNSS data are stored by the IMU. In addition, the PPS from the GNSS is used to stabilize the timing system. The PPS can be passed on to the other sensors by the IMU. The laser scanners from Zoller + Fröhlich that are commonly used here allow for two different types of temporal synchronization. It is important to know that the scanner saves its data line by line: one rotation of the scanner's head is a line, and each measured point corresponds to a pixel in this line. In one variant of the synchronization, the PPS is passed to the scanner. The scanner then marks the point in the relevant line with a flag at the time the PPS arrives. This procedure is relatively complicated, because each line must be searched for the appropriate flags. The second option is much easier. The scanner itself generates a pulse for each complete rotation of the scanner's head. This pulse is transferred to the IMU, where it is chronologically referenced and stored. Thus, each line can be assigned a start time tag and an end time tag. The position and orientation of a laser scanner in the vehicle coordinate system is discussed in Section 3. This calibration is always performed with the outdoor configuration, because it allows for long-distance measurements. The plan also includes a camera module for various applications, such as the detection of lane markers.
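Since each scan line carries only a start and an end time tag, a per-point timestamp must be interpolated before points can be matched to the trajectory. The following is a minimal sketch of that interpolation, assuming a constant mirror rotation rate within a line; the function and array names and the use of NumPy are our own illustration, not part of the system's software.

```python
import numpy as np

def interpolate_point_times(t_start, t_end, n_points):
    """Assign a timestamp to every pixel of one scan line.

    Assumes the scanner head rotates at a constant rate within the line,
    so pixel i of n_points is measured at a linearly interpolated time
    between the IMU-referenced start and end tags of that line.
    """
    # The end tag marks the start of the next line, so it is excluded.
    return t_start + (t_end - t_start) * np.arange(n_points) / n_points

# Example: a line tagged 12.000 s .. 12.020 s containing 10,000 pixels
times = interpolate_point_times(12.000, 12.020, 10_000)
```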
Indoors The indoor configuration, Figure 1b, contains the main IMU module, a Wachendorff odometer, and a laser scanner. Because the GNSS module cannot be used indoors, several other modules can be integrated to combat the drift of the IMU. The integration of the additional aiding sensors is currently being planned or has already been implemented and evaluated for suitability. One choice is to use a stereo camera as an aiding sensor. A horizontally measuring laser scanner by SICK using the simultaneous localization and mapping (SLAM) method can also be used. Because only one high-end laser scanner is available, it is mounted perpendicular to the direction of travel. The possibilities of the system should be explored and demonstrated in this project. For very economical use, two scanners in the butterfly arrangement should be used. The use of SLAM appears promising, not only because of the many developments in the field of robotics, but also because of the combination of two laser scanners, one low-cost and one high-end. These sensors are used to determine the position and to capture the environment. This configuration option is still at the conceptual stage. The use of a total station is currently being investigated as another alternative. In this system, a 360° prism is mounted on the platform with its position predetermined in the platform coordinate system. The prism is followed by a total station in tracking mode during the movement of the platform. A total station, as one of the most precise geodetic instruments, can be set up at various known positions with known orientations, so that it always has an optimal view of the platform. The total station can then function in the same way as a GNSS receiver. The first results of this investigation are shown in Sections 4 and 6. Calibration This section discusses the calibration of the whole system and the interaction of environment and trajectory acquisition. Because the platform coordinate system is identical to the IMU body coordinate system, the positions of most modules can be determined in this coordinate system using their lever arms to the IMU. However, three boresight angles are additionally needed for a laser scanner, as a 3D object, with respect to the IMU body. The calibration determines the position and attitude of the four-dimensional coordinate system of the laser scanner (x, y, z, t) (shown in blue in Figure 2) with respect to the four-dimensional IMU platform system (shown in red in Figure 2). The position of the anchor point of the blue system relative to the reference system of the IMU can be described by a lever arm with Δx, Δy, and Δz. The orientation, i.e., the rotation of the axes, is given by the angles φ, θ, and ψ. Thus, φ is the rotation around the X-axis of the IMU, θ around the Y-axis, and ψ around the Z-axis. The fourth dimension is time. For this purpose, the time systems of the laser scanner and the IMU must be synchronized. Figure 2. Schematic diagram of the measuring system with the two main components, the laser scanner and the inertial measurement unit (IMU). The coordinate system of the IMU is drawn in red and defines the body coordinate system of the platform. The location of the blue laser scanner coordinate system must be known in the red system.
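To make the role of the six calibration parameters concrete, the sketch below maps a point measured in the scanner frame into the platform (IMU body) frame. The composition order of the elementary rotations is an assumption made for this illustration; the document defines the individual rotations about the IMU axes but the exact sequence used by the system is not reproduced here.

```python
import numpy as np

def rot_x(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def scanner_to_platform(p_scanner, lever_arm, phi, theta, psi):
    """Transform a point from the scanner frame into the IMU body frame.

    lever_arm = (dx, dy, dz) is the translation between the frames;
    phi, theta, psi are the boresight angles about the IMU X, Y, and Z
    axes. The composition C_z @ C_y @ C_x is an assumption of this sketch.
    """
    C = rot_z(psi) @ rot_y(theta) @ rot_x(phi)
    return C @ np.asarray(p_scanner) + np.asarray(lever_arm)

# Example: a point 50 m to the side of the scanner, small boresight angles
p = scanner_to_platform([0.0, 50.0, 0.0], [0.5, 0.0, 1.2],
                        phi=0.001, theta=0.0, psi=0.002)
```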
Estimation of the Accuracy To decide with what accuracy a calibration should be performed, certain considerations are necessary. If it is assumed that the trajectory is error-free, then only the accuracy of the laser scanner and of its lever arm and boresight angles is decisive. The laser scanner has an angular accuracy of a few ten-thousandths of a degree and a distance measurement accuracy of a few millimeters (Table 1). These are the criteria for the required accuracy of a calibration. Of course, the trajectory is not error-free. Because the accuracy of the trajectory is approximately two to three centimeters [9], the calibration accuracy needs to be far below the trajectory accuracy to avoid significantly influencing the final result. Theoretically, it should be below the accuracy of the scanner, but this is considered uneconomical, due to the time required and the unknown stability of the calibration. Geometrical considerations and variance propagation demonstrate that determining the translations Δx, Δy, and Δz to within one to two millimeters is sufficient. The angles φ, θ, and ψ should be determined to approximately 0.005°, which corresponds to an accuracy of 4 mm at a typical measurement distance of 50 m in an urban area. For the temporal reference, an accuracy of 0.1 ms is desired, which corresponds to a 2 mm deviation at a speed of 72 km/h [10]. Various methods can be used to satisfy these accuracy requirements. These methods are presented in the following sections. Internal Position and Orientation Determination For internal position and orientation determination, the scanner itself can be used as a measuring system. A sufficient number of targets must be attached to the platform and spatially distributed to maximize the quality of the system calibration. Their positions in the platform coordinate system are determined using a total station or a photogrammetric method. The scanner can detect the targets with a simple 360° scan. A conformal transformation between the two sets of coordinates of the targets delivers the parameters, i.e., the lever arm and boresight angles of the laser scanner. The targets can be spheres or black-and-white targets. The scanner has to be rigidly connected to the platform to avoid any possible rotation. External Position and Orientation Determination The external position and orientation determination includes all of the procedures in which the lever arm and the boresight angles are determined by an additional measuring system. Various instruments can be used for this purpose, such as total stations, photogrammetry, fringe projection systems, or laser trackers. For the platform at the HCU, a fringe projection system is used, because the platform's size is only 1 m by 0.4 m. Whether these instruments are suitable for the position and orientation determination in terms of accuracy will be discussed through two examples. In the first example, it is assumed that the lever arm of the laser scanner (Δx, Δy, or Δz) was determined with an accuracy of 1 mm and that the vehicle moves on a road that has a slope of 4° in the traveling direction. Figure 3 shows that the correct lever arm position, T_LS0, leads to an erroneous point, T′_LS0, due to the error H. With a slope of α, the points shift toward T_LS and T′_LS.
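Equations (1) and (2), referred to just below, are not reproduced in this extracted text. One plausible small-angle form consistent with the geometry of Figure 3 — an assumption on our part, not the authors' original notation — is:

$$ C = H \sin\alpha, \qquad S = H\,(1 - \cos\alpha). $$

With H = 1 mm and α = 4°, this gives C ≈ 0.07 mm and S ≈ 0.002 mm, which supports the statement below that the resulting error can be ignored.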
Figure 3. Impact of a specifically biased lever arm; when the lever arm is incorrectly determined by the amount H and the system is rotated by the angle α, the wrong lever arm results in errors C and S, causing the actual measured point P to be mapped to the wrong point P′. This relation can also be described by Equations (1) and (2). The resulting errors in S and C, respectively, can be ignored. The second example illustrates the error effect of the boresight angles. It is assumed that the scanner has a measurement deviation of 4 mm at a distance of 50 m. This corresponds to an angular error of 0.005°. As clarified in Section 3.1, this is the expected accuracy for the calibration. To determine the installation angles on the platform, only the body of the scanner can be used. The outside dimensions of the Zoller + Fröhlich laser scanner are approximately 286 mm. An angle change of 0.005° over such a short baseline corresponds to a deviation of 24 µm. This deviation is difficult to determine, even using a laser tracker. Furthermore, it should be noted that only the position of the laser scanner's housing on the platform can be determined, not the required orientation of the mirror inside the laser scanner body. The external determination of the lever arm, therefore, should be considered; determination of the angles, however, is difficult using this method. Field Procedure The field procedure is similar to the calibration method for multibeam sonar systems [11,12]. With the multi-sensor system, static objects (buildings) are scanned. Faulty installation angles produce typical errors in the point cloud. In the following, possible errors in the roll, pitch, and yaw angles are discussed. A false roll angle generates a point cloud in which a facade that should be vertical appears tilted. If the measuring system passes the facade twice from opposite directions, the first facade tilts toward the vehicle and the second facade tilts away from the vehicle. The angle between the two planes corresponds to twice the roll angle error and can thus be determined. For an error-free point cloud, the measurement results have to be averaged (Figure 4a). The situation is similar for an error in the pitch angle, which causes facade edges to appear tilted. The facade edge tilts in the traveling direction or in the opposite direction. To determine this error, a straight line is fitted to the edge in each pass. The angle between the two straight lines corresponds to twice the pitch angle error (Figure 4b). The residual error in the azimuth can be determined by passing and surveying a circular object from two sides. As a result, an offset from the actual position of the object is determined. From the distance between the two trajectories and the distance between the two determined circular objects, the angle correction is given (Equation (3)) (Figure 5b).
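As an illustration of the roll-angle step described above, the sketch below fits a plane to each of the two facade point clouds and takes half the angle between their normals as the roll-angle error. The plane fit by SVD and the array names are our own illustration, not the authors' implementation.

```python
import numpy as np

def plane_normal(points):
    """Fit a plane to an (N, 3) point cloud and return its unit normal.

    Uses the singular vector of the centered cloud with the smallest
    singular value, a standard total-least-squares plane fit.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]  # normal of the best-fit plane

def roll_error(facade_pass_1, facade_pass_2):
    """Half the angle between the two facade planes, in radians.

    The two clouds are the same facade scanned from opposite driving
    directions; per the field procedure, the full angle between the
    planes equals twice the roll boresight error.
    """
    n1 = plane_normal(facade_pass_1)
    n2 = plane_normal(facade_pass_2)
    # abs() removes the sign ambiguity of the fitted normals.
    angle = np.arccos(np.clip(abs(n1 @ n2), -1.0, 1.0))
    return angle / 2.0
```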
To determine the residual time error of the synchronization of the laser scanner with respect to the system, the same trajectory is traversed twice at different, but constant, speeds. This eliminates the error effect due to the installation angles (Equation (4)), because they affect the traveled distance in the same way (Figure 5a). In summary, a combination of the external determination of the position and orientation with the field procedure constitutes the most useful method. The calibration provides the lever arm, while the field procedure provides the angles and the time. A disadvantage of the field procedure is that it is not independent of the determination of the trajectory. In addition, only significant latencies can be determined through the field procedure. Another method to observe the latency between the laser scanner and the IMU was described in [13]. For this purpose, the measurement system has to be rotated uniformly around its axis in both directions. This results in a shift for a distant sphere between the measurements from the right and from the left side. The latency can be calculated from the difference between the two sphere centers. Whether this type of latency calibration comes into question is still under investigation. Total Station To process the data with the outdoor configuration, a commercial software product, Inertial Explorer by Novatel, is used. However, this software is unsuitable for determining the position and orientation of the platform indoors. Furthermore, no software utility is available that can combine all of the already planned and implemented sensors. Accordingly, an in-house software utility was developed with a Kalman filter as its core. Its main features will be discussed in Section 5. One of the first modules in this software was for the total station. The total station used was the Leica TPS1201+ with a 360° prism as the target and an adapter to mount the prism on the platform. For the convenient use of a Kalman filter in kinematic measurement applications, a uniform time base is required. Normally, this time system is defined by the GNSS PPS signal. Because the total station cannot be connected to the IMU and has no trigger capability, an alternative solution must be found for the indoor system. Obviously, the IMU time system allows all connected systems to be referenced in a timely manner. The IMU time system can be synchronized with GPS time outdoors. If this synchronization cannot be realized, the system time will drift slowly. This error has no further effects, because all sensors are synchronized via the IMU time. However, the total station has a time system that differs from that of the system platform. The time system of the total station was considered in [14]. To synchronize the sensors with little effort, without an additional laptop or a special measurement configuration, the following linear relation is proposed:
k_IMU = m · k_TPS + b, where k_IMU and k_TPS are the same point in time expressed in the two time systems, m is a scale factor generated by the different time drifts of the systems, and b is the offset between the two time systems. First, it is assumed that both systems have only an offset and a stable scale of m = 1. To determine the offset b, two velocity profiles are needed. The first can be the velocity profile of the odometer. The second can be calculated from the coordinates based on the measurements of the total station. The time offset between the two time systems can then be determined using cross-correlation. However, residual errors will always remain. At the same time, it is not known which time stamp is associated with the measurements taken by the total station. In practice, the angle reading takes place immediately, while the distance measurement requires a few milliseconds. This circumstance leads to a temporally incorrect position in kinematic applications of the total station: distance and angle measurements cannot be tagged with the same instant of time. The manufacturers have addressed this difficulty; Leica calls its method "Syncrotrack". However, it remains undetermined in this research which time stamp is ultimately stored for the total station measurements. Sensor Fusion To integrate all the measurement data into an optimal trajectory, the extended Kalman filter (EKF) [15] was chosen. The Kalman filter algorithm consists of a set of equations realized in two steps: prediction (time update) and correction (measurement update) [16]. Prediction The predicted state vector is given by x_{k+1,k} = f(x_{k,k}), in which the state vector consists of x = (φ, ω, p, v, a)ᵀ, where φ is the attitude angle vector, ω is the 3D angular velocity vector, p is the 3D position vector, v is the 3D velocity vector, and a is the 3D acceleration vector. The matrix Q in Equation (6) models the process noise, and f(x_{k,k}) is a nonlinear function that models the motion. After linearization by Taylor expansion, Equation (7) becomes the covariance propagation P_{k+1,k} = A P_{k,k} Aᵀ + W Q Wᵀ, in which A is the system transition matrix, W is the Jacobian matrix of partial derivatives of f with respect to w, and w is the process noise. To compute the position, rotation matrices (C_x, C_y, C_z) with time-dependent angles are used to transform the measured velocities and accelerations from the platform coordinate system into a superordinate reference system; the indices x, y, and z indicate the axis on which each rotation is centered, and φ, θ, and ψ are the rotation angles. In compliance with the rotation sequence, the total rotation matrix is given by the product of these elementary rotations, from which the transition matrix for the Kalman filter is built. Correction The measurement equation can generally be described as z_k = h(x_k, v_k). Linearizing h(x_{k,k}, v_{k,k}) yields the matrix H, in which H is the Jacobian matrix of partial derivatives of h with respect to x, V is the Jacobian matrix of partial derivatives of h with respect to v, v is the measurement noise, and z is the measurement.
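Before the sensor-specific design matrices are introduced, the predict–correct cycle just described can be summarized in a minimal sketch. The state layout follows the 15-element vector defined above, but the function names and the way the model callables are passed in are simplifications of ours, not the authors' implementation.

```python
import numpy as np

STATE_DIM = 15  # attitude (3), angular rate (3), position (3), velocity (3), acceleration (3)

def ekf_predict(x, P, f, A, Q, W):
    """EKF time update: propagate state and covariance."""
    x_pred = f(x)                        # nonlinear motion model
    P_pred = A @ P @ A.T + W @ Q @ W.T   # linearized covariance propagation
    return x_pred, P_pred

def ekf_correct(x_pred, P_pred, z, h, H, R):
    """EKF measurement update with measurement z and Jacobian H."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (z - h(x_pred))         # corrected state
    P = (np.eye(STATE_DIM) - K @ H) @ P_pred # corrected covariance
    return x, P
```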
For the available measurement data from the IMU (angular velocities and accelerations) and the velocity from the odometer, the corresponding design matrix H_IMU covers three angular velocities, one velocity in the direction of travel, and three accelerations; H_IMU is therefore a 7 × 15 matrix. Because the measurements from the IMU and the total station are available at different times, a second design matrix, H_TPS (3 × 15), is used for the coordinates determined by the total station. The final solution from the Kalman filter is x_{k,k} = x_{k,k-1} + K_{k,k}(z_k − h(x_{k,k-1})), where K_{k,k} is the Kalman gain matrix. For a more detailed description, refer to [16]. Smoothing To improve the filtering results, a Rauch-Tung-Striebel smoother [17] is applied. This fixed-interval two-pass implementation provides the fastest fixed-interval smoother. The first (forward) pass uses a Kalman filter but saves the intermediate results x_{k,k}, x_{k+1,k}, P_{k,k}, and P_{k+1,k} at each measurement time k. The second pass runs backward in time, starting from the time of the last measurement and computing the smoothed state estimate from the intermediate results stored during the forward pass. The smoothed state vector is obtained recursively [18] as x_{k|n} = x_{k,k} + C_k (x_{k+1|n} − x_{k+1,k}), with the smoother gain C_k = P_{k,k} Aᵀ P_{k+1,k}^{-1}. Results and Discussion The first results of the use of a total station as a sensor module are presented here. Two scenarios were tested, one indoors and one outdoors. Multiple experiments were performed indoors under real application conditions. To obtain an independent review, a second scenario was investigated outdoors with the outdoor configuration. Outdoor Test The independent review of the total station solution requires measurements under excellent GNSS conditions. The position of the total station was predetermined using known points in a WGS84/UTM system via a free-station approach, which is necessary to link the existing coordinates of the trajectory to this system and also shows the advantage of using the total station module. The commercial software product Novatel Waypoint Inertial Explorer was used to integrate the GNSS data with the IMU and odometer data to deliver the best estimate of the trajectory. Figure 6a shows the top view of the trajectory. During the test drive, the platform was tracked by the total station. The actual trajectory was created with the Kalman filter (Section 5) with the aid of the coordinates derived from the total station measurements, using the same IMU and odometer data. The timing offset between the total station and the GNSS system was determined using a correlation of the velocity profiles (as in Section 4). Figure 6b shows the correlation and the velocity profiles of both the odometer (blue) and the total station (red) after the correction of the time offset. The displacement was 57.188 s at a correlation of 98%.
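A minimal sketch of the cross-correlation step used to recover the time offset between the odometer and total station velocity profiles follows; the sampling rate and array names are illustrative assumptions, and both profiles are assumed to have been resampled to a common rate beforehand.

```python
import numpy as np

def estimate_time_offset(v_odo, v_tps, rate_hz):
    """Estimate the offset b between two velocity profiles.

    Both profiles must be sampled at the same rate (rate_hz). Returns
    (offset_s, peak_correlation); a positive offset means the total
    station profile lags the odometer profile.
    """
    x = v_odo - v_odo.mean()
    y = v_tps - v_tps.mean()
    corr = np.correlate(x, y, mode="full")
    lag = np.argmax(corr) - (len(y) - 1)    # lag in samples
    peak = corr.max() / (np.linalg.norm(x) * np.linalg.norm(y))
    return -lag / rate_hz, peak

# Example: synthetic chirp-like profiles shifted by 2.0 s, sampled at 10 Hz
t = np.arange(0.0, 600.0, 0.1)
offset, rho = estimate_time_offset(np.sin(0.001 * t**2),
                                   np.sin(0.001 * (t - 2.0)**2),
                                   rate_hz=10.0)  # offset ~ +2.0 s
```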
Figure 7 shows a comparison of the position coordinates, i.e., the difference between the nominal (GNSS) and actual (total station) positions. Clearly, a large variation existed at the beginning, during the stationary period, which must have resulted from the GNSS measurement. As a result, the review by GNSS must be assessed carefully. If the variation during standstill is taken as the measurement accuracy (2 cm (95%)), only a few significant deviations can be detected. Conversely, this means that the total station can achieve the same accuracy. Moreover, the total station improves the kinematic measurements with the IMU. However, the review of the results is difficult, as the GNSS type used was insufficient. Other possibilities, such as the use of a laser tracker, need to be analyzed. The authors of [19] demonstrated a trajectory verification using a Leica AT901 laser tracker and described how the non-existent 360-degree reflector can be replaced by observation from above. The recent development of the AT901 as a long-range system (80 m radius) would make it possible to use a much more accurate measurement system as a reference. It must also be examined whether the use of GeoCom commands via a serial interface can enhance the time allocation. The detailed analysis of the smoothed trajectory suggests that the estimated offset is not quite constant. Again, the activation of the GeoCom interface might be helpful.

Indoor Test
To investigate the usefulness of the measurement system indoors, several tests were performed within an HCU building. The indoor area differed in some aspects from the outdoor area. The space was limited, so the total station was operated at close range. Furthermore, there was not always a clear view of the target because of objects such as railings and plants. One advantage was the level floor, resulting in fewer vibrations than on the ground outdoors. Figure 8 presents two different trajectories. In Figure 8a, a round trip was made through the hallway, so the observation by the total station was interrupted by a large obstruction. Figure 8b shows the second trajectory, in which the measurement platform was moved such that the observation was interrupted as little as possible. First, trajectory 1 in Figure 8a should be examined more closely. The storage times of the total station were used as time stamps, with an interval of one second. After the completion of the measurements, the time offset of the systems was determined by cross-correlation as 7.645 s at a correlation of 96%. These values are variable, as they depend on the activation times of the systems. Therefore, the time offset has to be redetermined for each measurement campaign. Because GNSS is not available as an absolute time system in an indoor environment, the time is referenced only to the clock of the IMU. After the time systems were aligned to the same time base, every second point of the total station was used as a control point. These control points are not taken into consideration in the Kalman filter. Thus, the estimated positions from the filter were compared with the control points to analyze the achievable accuracy, which is summarized in Table 2. Figure 9 shows the coordinate differences in the X and Y directions with all aiding data in Case 3.
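The accuracy analysis against the control points summarized in Table 2 boils down to simple statistics of the coordinate differences; a minimal sketch is given below (array layout and function name are our assumptions), and the systematic-versus-random interpretation of these numbers is discussed next:

```python
import numpy as np

def control_point_stats(p_filter, p_control):
    """Compare filtered positions with total-station control points.
    p_filter, p_control: (N, 2) arrays of X/Y coordinates at common epochs.
    A mean offset larger than the standard deviation hints at a systematic
    error (e.g., drift); the reverse suggests mostly random residuals."""
    d = p_filter - p_control
    return d.mean(axis=0), d.std(axis=0), np.sqrt((d**2).mean(axis=0))  # mean, std, RMS
```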
The control points are correlated with the measurement system over time. It can be concluded that an incorrect time alignment is reflected in the positional errors. Various supporting data were tested for the filter. In addition to the IMU, the odometer and the total station were added sequentially, but all other parameters remained unchanged, including the initialization of the filter, which was stationary because of the total station. The deviations shown in Table 2 have to be discussed for Cases 2 and 3. It is clear that the mean value for Case 2 is higher than the standard deviation. This result indicates a systematic error in the measurements, which can be explained by the drift of the system: the deviation of the evenly driven trajectory grew during the course of the measurements.

The mean value of Case 3 is smaller than the standard deviation. It seems that the systematic error components decreased through the addition of the total station. The residual errors were characterized as random errors, and this point remains the subject of further investigation. To avoid being dependent on an uncertain time stamp for the data storage, a small Java application was created to send measurement commands via the GeoCom interface to the total station. The command %R1Q,2167 provides a time-tagged measurement, which is used to reference the exact time of the measurements. Control via GeoCom has the advantage that each measurement can be provided with a code that allows conclusions to be drawn about the measurement's quality. Figure 10 shows trajectory 2 once more, with all measurements color-coded. Code zero indicates that the measurement was "OK". Code 1284 indicates that the total station could not satisfy the usual accuracy [20]. This seems to be a problem especially at close range. A possible explanation could be that the prism fills the field of view of the sensor chip used for positioning, which would complicate the determination of the median point. A further problem seems to be visual interruptions: even a short time after recovery of the line of sight, the accuracy could not be ensured. Figure 11 (bottom) shows the velocity profiles of trajectory 2, with the odometer in blue and the total station in green. The trajectory seems to conform well to the measurements from the IMU and odometer. However, if the solution is reviewed in more detail, the time offset can clearly be determined through the correlation calculation (Figure 11). The calculated velocities from the total station data were subject to large scatter. Consequently, the velocity profiles of the total station and odometer do not match.
The correlation coefficient of the blue curve is correspondingly small, namely 51% (Figure 11). Interestingly, both the points labeled by the total station with code 1284 and the points classified as "OK" suffered from this problem. To filter out the incorrect measurements, the deviation from the odometer velocity was calculated for each total-station velocity, using the first correlation calculation for the time alignment. All measurements with a deviation higher than 50% were removed. With the remaining 78% of the measurements, the correlation increased to 89% (red curve). Then, the trajectory was estimated from these data by Kalman filtering. Only every second point determined by the total station was used for control purposes. Figure 12 and Table 3 show the coordinate differences between the filter solution and the control points. In Figure 12, one can clearly observe certain abnormal behaviors beyond white noise, which have to be discussed. The moving direction of the platform during the test was mainly in the X-direction (east). After it crossed the starting point, the platform moved in the Y-direction (north). A strong correlation between the direction of motion and the magnitude of the coordinate differences could be observed. The coordinate differences are clearly significant along the track direction of the trajectory, but are reasonably small along the cross-track direction. The cause of this phenomenon is still under investigation. On the one hand, it may result from an incorrect time alignment, because a timing error has a much greater impact on the trajectory estimation along the track direction than along the cross-track direction. On the other hand, the distance measurements from the total station can be problematic; owing to the station's location, this problem also mainly affects the along-track direction. The deviations shown in Table 3 are slightly higher than those of Case 3 when all measurements are considered. However, with the GeoCom interface, a significantly higher data rate could be produced. In Case 3, the measuring frequency was 1 Hz, whereas the data rate in Case 4 was, on average, 8 Hz. At a higher data rate, the plausibility of the measurements can be checked: at a maximum speed of 1 m/s, movements of more than 10 cm within 100 ms are implausible. If these measurements are not removed, a deviation of 1.5 cm could occur. Finally, a laser scanning scene recorded indoors by the system is shown in Figure 13. The estimated trajectories from the in-house developed Kalman filter were combined with the scanned data. Then, the transformation of all the scanned points was performed using a specifically developed program. The results were satisfactory. The plane objects showed deviations within 10 cm, but if the point cloud is considered in greater detail, errors are clearly visible: where the aiding from the total station module drops out, discontinuities ("cracks") can appear in the point cloud.
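The velocity-deviation filter described above (discard total-station velocities deviating by more than 50% from the time-aligned odometer velocity) can be sketched as follows; the threshold is taken from the text, while the interpolation to a common time base and all names are illustrative assumptions:

```python
import numpy as np

def filter_tps_velocities(t_tps, v_tps, t_odo, v_odo, max_rel_dev=0.5):
    """Remove total-station velocity samples whose relative deviation from
    the (time-aligned) odometer velocity exceeds max_rel_dev.
    Returns a boolean mask of the samples that are kept."""
    v_ref = np.interp(t_tps, t_odo, v_odo)  # odometer velocity at TPS epochs
    rel_dev = np.abs(v_tps - v_ref) / np.maximum(np.abs(v_ref), 1e-6)
    return rel_dev <= max_rel_dev
```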
Conclusions
In this paper, the question was whether it is possible to use a mobile mapping system in the interior of buildings. The biggest problem here is the absence of GNSS, for which a suitable replacement is sought; this is necessary to prevent the drift of the system. It was shown that a total station can provide this replacement. Difficulties arise as a result of the time synchronization of the two systems. Cross-correlation is a useful tool for estimating the offset of the different time systems; however, its full potential has not yet been exhausted. With the approach presented here, a trajectory accuracy comparable to that of GNSS can nevertheless be achieved. The tests have shown that a verification with GNSS is not possible indoors, so the next step is to find a suitable verification method for the measuring system. A solution may be found with the laser tracker, as it offers both high temporal and spatial resolution. Control via GeoCom has not yet yielded the desired improvement; however, the results are very promising once obvious measurement errors are excluded. For this, an approach for automatic outlier detection should be developed. For the measurement platform, the preliminary results already represent a first success. In summary, it can be ascertained that a quick survey of indoor areas is possible using an indoor mobile mapping system: the loss of GNSS can be compensated for by using a total station. In order to expand the system, it will be necessary to develop new modules. After the upcoming improvement of the "total station" module, a stereo camera system will be integrated into the complete system as the next module. In a further step, opportunities for simultaneous localization and mapping by means of one or several laser scanners will be investigated.

Figure 1. The mobile mapping system; the left shows the system mounted on a van outdoors; the right shows the indoor set-up, where the GNSS antenna can be replaced with a prism. (a) Outdoor configuration; (b) indoor configuration.
Figure 4. Configuration for the calibration of pitch and roll angles; for both angles, when the vehicle passes along an object, the difference between the two passes can be used to estimate the roll and pitch angles. (a) Roll; (b) pitch.
Figure 5. Configuration for the calibration of the time offset and the yaw angle offset. The vehicle is driven past an object at two different speeds to observe the timing error, which is then derived from the geometric errors. To determine the azimuth error, a round object should be passed from two sides. The offset between the two objects is then used to estimate the azimuth error. (a) Time; (b) yaw.
Figure 6. (a) The traveled trajectory with the positions of the receiver and the total station. Due to the spatial conditions, the trajectory is slightly elongated. (b) The corresponding velocity profiles mapped by the odometer and the total station (bottom right) and the cross-correlation of the two velocities (top right).
Figure 7. Differences between the Kalman filter solution with the IMU and total station and the solution with GPS and IMU. The variations in the east and north are presented in red and blue, respectively. The temporal resolution is one second. The cyan area represents the GPS variance (95%) calculated by the Novatel Inertial Explorer.
Figure 8. The two measured indoor trajectories. On the left, a round trip through the HCU building, with the total station's line of sight interrupted by a visual obstruction. The right image shows a trip without any interruption of the total station observation. (a) Trajectory 1; (b) trajectory 2.
Figure 9. Differences between the control points of the total station without the GeoCom connection and the solution from the Kalman filter for trajectory Case 3.
Figure 10. Trajectory 2: all measured points are highlighted by the codes returned from the total station (0 = OK; 1284 = accuracy cannot be guaranteed). As expected, code 1284 appears at close range and at signal breaks.
Figure 11. (Top) The correlation with outliers in blue and without outliers in red; (Bottom) the velocity profiles of trajectory 2, with the odometer in blue and the total station in green.
Figure 12. Coordinate differences between the total station control points and the Kalman filter solution for trajectory 2. The variations in X and Y are presented in red and blue, respectively. It is striking that the differences are greater in the track direction.
Figure 13. Images (a-c) show a point cloud measured with the IMU, scanner and total station. (a) Overview of trajectory 1, as in Figure 8a; (b,c) show that a generally good point cloud could be created; (c) shows some irregularities.
Table 1. Modules of the measurement system.
Table 3. Quadratic deviation of control points; Case 4.
Modeling of wastewater treatment processes with HydroSludge
The pressure on Water Resource Recovery Facility (WRRF) operators to treat wastewater efficiently is greater than ever because of the water crisis, produced by climate change effects and more restrictive regulations. Technicians and researchers need to evaluate WRRF performance to ensure maximum efficiency. For this purpose, numerical techniques, such as CFD, have been widely applied in the wastewater sector to model biological reactors and secondary settling tanks with high spatial and temporal accuracy. However, limitations such as complexity and the learning curve prevent extending CFD usage among wastewater modeling experts. This paper presents HydroSludge, a framework that provides a series of tools that simplify the implementation of the processes and workflows in a WRRF. This work leverages HydroSludge to preprocess existing data, aid the meshing process, and perform CFD simulations. Its intuitive interface proves itself an effective tool to increase the efficiency of wastewater treatment.

| INTRODUCTION
In recent years, initiatives related to the protection of natural resources such as water have undergone significant growth worldwide. When it comes to water, the quality requirements for the effluent discharged by Water Resource Recovery Facilities (WRRF) are becoming more and more restrictive. It is necessary to maintain the pace of improvement and efficiency of the different processes in WRRFs. Notably, biological treatment is one of the most critical tasks in a WRRF (Grady et al. 2011). Biological treatment involves the biological reactors and the Secondary Settling Tanks (SST). For this reason, it can represent around 50% of the whole plant's total energy consumption, depending on the configuration and the aeration system used (Ovezea, 2009). Mathematical modeling and numerical simulation have become essential for most of the physical and biochemical processes in WRRFs. These techniques are adopted as tools of great potential during the design, operation, optimization, and process control of a WRRF (Makinia, 2010). The main reason for using models and simulations is to enhance efficiency and save costs on resources by exploring the dynamic behavior of the process units under different conditions. Software platforms such as Biowin, GPS-X, WEST, Simba, or SUMO implement different numerical methods and are well accepted in the water sector because they are relatively simple to manage and provide global results quickly. However, their hydrodynamic modeling approach does not account for hydraulic phenomena in real performance: the geometry of the tanks, internal elements, gradients of the state variables, and the spatial distribution inside the tanks. Furthermore, they cannot detect mixing problems such as short-circuiting, dead volumes, or stratification/inhomogeneity, which commonly occur in real tanks and play a vital role in pollutant removal efficiency (Samstag et al. 2016). Computational Fluid Dynamics (CFD) represents the most sophisticated simulation mechanism, able to reproduce the fluid behavior in detail and to analyze specific hydraulic troubleshooting in 3D. Concretely, many of the most essential characteristics in wastewater treatment modeling can be included in a CFD model, for instance, multi-phase flow, biokinetics, and submodels such as aeration, sedimentation, or non-Newtonian fluids.
Despite the notable increase in scientific production related to CFD modeling since its adoption in wastewater treatment (Glover et al. 2006), the main bottlenecks to the widespread use of CFD in this field are the learning curve and the cost of commercial software licenses. To address these issues, we have designed, developed, and evaluated a software named HydroSludge. Notice that users should still be aware of which fluid model they need for their simulation. HydroSludge stems from the necessity of providing a specific simulation software for WRRF management. For this purpose, HydroSludge is presented as an easy-to-use framework for the water sector. With HydroSludge, the authors aim to extend the use of CFD technologies among water treatment professionals by reducing the operation's complexity. HydroSludge provides holistic tools for the CFD workflow (data pre-processing, geometry, meshing, solver, and post-processing). Besides, it is designed to leverage parallelism and provide higher performance by exploiting all the underlying resources available on the host computer. The rest of the paper is structured as follows: Section 2 introduces and describes the methods implemented in HydroSludge. Section 3 provides software details and outlines the modules that compose HydroSludge. Section 4 presents a series of use cases to test and evaluate the platform. Finally, Section 5 contains the conclusions of this research and development.

| METHODS
In this section, the most relevant methods provided by HydroSludge are introduced and detailed. The section is split into two parts. On the one hand, the tools for filtering and checking the input data are described in Section 2.1. On the other hand, the means for building a mesh and performing simulations are detailed in Section 2.2.

| Preliminary study
To provide appropriate input to a CFD simulation, the collected data must follow quality standards. In this regard, HydroSludge facilitates the filtering and checking operations for the input data.

| Filtering
This operation is composed of three independent stages (a fitting sketch for the first one follows the list):
• Settling: The drift flux model (Brennan, 2001) is one of the most popular among SST-CFD studies in the literature. It is commonly used to calculate the two-phase mixture of the mixed liquor. For this reason, HydroSludge provides a tool to calculate the activated sludge solids settling velocity from experimental batch settling data. In order to reproduce the settling velocity for clarifiers and fit the constant terms automatically, HydroSludge includes the model of Vesilind (1968).
• Inflow: a tool to obtain a smooth transient inflow from monitored influent flow data, applying filters such as outlier removal and moving averages.
• Rheology: selection of the rheological model of the mixed liquor, whose configuration is transferred to the CFD set-up.
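The Vesilind model referenced above expresses the hindered settling velocity as an exponential decay with the solids concentration, v_s = v_0 exp(−k X). A minimal sketch of fitting its two parameters to batch settling data might look as follows; the data values, units, and function names are illustrative assumptions, and HydroSludge performs an equivalent fit internally:

```python
import numpy as np
from scipy.optimize import curve_fit

def vesilind(X, v0, k):
    """Vesilind (1968) hindered settling velocity: v_s = v0 * exp(-k * X)."""
    return v0 * np.exp(-k * X)

# Hypothetical batch settling test results: sludge concentration (kg/m^3)
# and the measured settling velocity of the sludge interface (m/h).
X_data = np.array([2.0, 3.5, 5.0, 7.0, 9.0])
v_data = np.array([4.1, 2.3, 1.3, 0.55, 0.25])

(v0, k), _ = curve_fit(vesilind, X_data, v_data, p0=(5.0, 0.4))
print(f"fitted v0 = {v0:.2f} m/h, k = {k:.3f} m^3/kg")
```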
| Checking
• Design: HydroSludge follows design recommendations from the literature (Dapelo & Bridgeman, 2018) to provide the user a straightforward mechanism to check the dimensions of the SST and its components, such as the scraper or the inlet well.
• State Point and Ten Layers Models: One-dimensional models are still useful techniques for a preliminary evaluation (Ekama et al. 1997). Concretely, users can check the SST performance according to 1D models. Accordingly, if the SST working conditions fail these models, the user may revise the conditions before proceeding to the CFD model development. HydroSludge implements the two most widespread models to reproduce SST performance. On the one hand, the State Point Model allows the global performance of the operation to be calculated through a mass balance: the mass of solids entering and exiting the tank, considering the external recycling rate (Ekama et al. 1997). From this, the optimum design return activated sludge recycle rate, at which the SST is in a critically loaded condition corresponding to a stable steady-state sludge blanket level, and the control strategy are most accurately determined. On the other hand, the Ten Layers Model (Kynch, 1952) shows the distribution of the total suspended solids inside the SST. It allows the sludge blanket height and the total suspended solids distribution to be calculated under specific settling velocity conditions.

| Simulation
This section details the differentiating features of HydroSludge for assisting the user in developing a WRRF CFD model with minimum effort.

| Submersible mixers
Submersible mixers, flow boosters, or impellers are a type of equipment commonly found in WRRFs. They enhance mixing, prevent the formation of dead volumes, or simply direct the flow in a specific direction. HydroSludge permits the inclusion of impellers in the CFD simulations by using the so-called equivalent momentum source. In this approach, the complex impeller geometry is substituted by a cylindrical region that applies a momentum source mimicking the hydrodynamic effects of the impeller at a reduced computational cost. Note that including the full impeller geometry would result in a huge increase in the number of nodes needed to resolve the complex geometry of the blades and their corresponding boundary layers. Furthermore, the rotation of the blades requires highly demanding computational techniques such as deformable meshing or the frozen rotor approach. First, the volumetric momentum source term, S_m, is introduced into the RANS equations via the swak4Foam library, with M_p being the volumetric momentum modulus and û_{C,i} the director vector components in Cartesian coordinates (i = x, y, z) for the impeller axis. The modulus of the volumetric momentum source depends on the actual diameter of the blades, D_b, the fluid density, ρ, and the volume of the source region, V_M. The propelled flow rate, q, is calculated from the thrust force, F_o, at the design rotational speed, ω_o, of the propeller; these two parameters are usually included in the technical specifications supplied by propeller manufacturers. In some applications, the actual rotational speed, ω, differs from the design one, so the effective flow rate must be modified accordingly. Second, the application of this term is limited to a cylindrical region, the so-called source region, which surrounds the actual propeller. The axis of this cylindrical region is coincident with the actual propeller axis, its diameter equals the diameter of the blade, and its center is located at the rotation center of the blades. The length of the cylinder can be set at will, but it usually spans about one fifth of the blade diameter to permit an adequate meshing of this region. Figure 1 illustrates the basic geometry of the cylindrical propeller subvolume. The center of the cylinder is given by the coordinates vector r⃗_C and its axis orientation by the unit vector û_C. The radius and axial length of the cylinder are represented by R and L, respectively. To check whether a given point P, with coordinates r⃗_P = (x_P, y_P, z_P), is included within the momentum source subvolume, one must verify the following two conditions: (a) the distance between the point and the cylinder axis, d_{Pu}, must be smaller than or equal to the radius; this distance can be calculated as d_{Pu} = |(r⃗_P − r⃗_C) × û_C|. (b) Point C and the vector û_C define a plane, π_C. The distance between this plane and the point P, d_{Pπ}, must be smaller than or equal to L/2, with d_{Pπ} = |(r⃗_P − r⃗_C) · û_C|.
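The two membership conditions above translate directly into code; a minimal sketch (function and variable names are ours) might read:

```python
import numpy as np

def in_source_region(p, c, u_c, R, L):
    """Check whether point p lies inside the cylindrical momentum source
    region with center c, unit axis vector u_c, radius R and length L."""
    d = np.asarray(p, float) - np.asarray(c, float)
    d_axis = np.linalg.norm(np.cross(d, u_c))  # distance to the cylinder axis
    d_plane = abs(np.dot(d, u_c))              # distance to the mid-plane pi_C
    return d_axis <= R and d_plane <= L / 2.0

# Example: a point on the axis, a quarter length away from the center.
print(in_source_region(p=(0.0, 0.0, 0.25), c=(0, 0, 0),
                       u_c=(0.0, 0.0, 1.0), R=0.3, L=1.0))  # True
```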
| Surface selection and meshing
HydroSludge provides a user interface to select and determine surfaces and boundaries from CAD geometries. The surfaces and boundaries are sets of faces that users define directly on the geometry. These objects are propagated to the meshing tool and ultimately to the CFD simulation. These surfaces are automatically defined in the meshing process to proceed to their fitting and refinement. This geometry-to-mesh tool is one of the cornerstones of the HydroSludge effort to facilitate the work for its users.
Figure 1. Coordinate system of the momentum source volumetric region.
Once the surfaces and boundaries are named and configured, HydroSludge divides the meshing into four stages:
• Blocking: cube-shaped hexahedral mesh generation around the imported geometry. Hexahedral cells can be adjusted in size to achieve more or less mesh refinement. A finer mesh implies more accurate results but also a higher computation time in both the meshing and simulation processes.
• Sculpting: refinement of cells, vertices, and edges. This operation, employing an iterative process, divides the cells into hexahedrons. At this point, a first hexahedral approximation of the geometry is ready.
• Snapping: projection of cell vertices onto the surface of the geometry to remove areas with sharp edges; in other words, smoothing pronounced edges in the geometry.
• Layering: addition of extra layers to refine and improve the quality of the hexahedral mesh, especially on the outer faces of the geometry. HydroSludge adds a user-defined number of layers with an expansion ratio on the declared surfaces.
Finally, the OpenFOAM checkMesh tool is also included in HydroSludge, allowing users to assess the quality of the meshes. This tool returns the total number of cells, points, and hexahedrons of a mesh, as well as the quality of the mesh in terms of aspect ratio and skewness metrics.

| Solvers
pimpleFoam — large time-step transient solver for incompressible turbulent single-phase flow. This solver is based on the PIMPLE algorithm, a combination of the pressure-implicit split operator (PISO), used in transient problems, and the semi-implicit method for pressure-linked equations (SIMPLE), used in steady-state simulations. These iterative algorithms solve equations for velocity and pressure.
driftFluxFoam — solver for the relative motion between two incompressible phases. This solver is mainly used in SSTs, which have a continuous phase and a solid dispersed phase. Transient analyses can be performed, introducing a dynamic influent flow to calculate the evolution of the Sludge Blanket Height (SBH) and the Sludge Concentration Distribution (SCD) within the SST.
ASM1Foam — biochemical behavior model applied after obtaining the hydrodynamic results. The ASM1 model (Henze et al. 1999) is included in HydroSludge through an extension of the OpenFOAM solver scalarTransportFoam. In this solver, the state variables are introduced as k scalars with the turbulent transport equation (written here in its standard form)

∂φ_k/∂t + ∇·(U φ_k) − ∇·[(D_T + ν_t/Sch) ∇φ_k] = S_k,

where φ_k is the concentration of the k-th scalar, U the fluid velocity, D_T the molecular diffusion coefficient divided by the fluid density, ν_t the kinematic viscosity, Sch the Schmidt number, and S_k the source term for the k-th state variable of the biological model.

| SOFTWARE FEATURES
This section thoroughly depicts the software design, library dependencies, and implementation details used to tackle the previously presented methods.
The section ends with a description of the usage modes provided by HydroSludge.

| Implementation
The HydroSludge core is written in C++ and is empowered by the following widely used open-source libraries with highly active communities:
• Qt widget toolkit for the GUI (Graphical User Interface).
• OpenCascade Technology (OCCT) software development platform for 3D computer-aided design (CAD).
• Visualization Toolkit (VTK) software system for 3D computer graphics and visualization.
Furthermore, HydroSludge includes a virtualized subsystem that leverages Docker for executing OpenFOAM operations. Docker uses operating-system-level (OS-level) virtualization through packages named containers. Within these containers, a user can deploy customized instances of an OS with the necessary dependencies. A container runs with complete independence and in isolation on the host OS. In this regard, the HydroSludge installation binaries include a Docker image with the software packages required to perform all the operations offered to users by the GUI. Concretely, the container released with HydroSludge is based on Ubuntu 18.04 LTS (https://ubuntu.com/containers) and satisfies the dependencies for running the following software, also packed in the image:
• OpenFOAM v19.06 (https://openfoam.com)
• swak4Foam (https://openfoamwiki.net/index.php/Contrib/swak4Foam)
• MPICH 3.3.2 (https://www.mpich.org)
Summarizing, Figure 2 lists the libraries leveraged by HydroSludge and depicts the virtualized environment for running the simulations. The HydroSludge GUI modular design is represented in Figure 3, where five modules can be appreciated; they are described in the following.
Figure 2. Overview of third-party software integration in HydroSludge.
Figure 3. HydroSludge modules and operations.

| Data analysis
This module implements two methods for processing the input data described in Section 2.1. The first method, "outliers-removal," scans the input data selecting small subsets of n elements. The algorithm continues with the calculation of the mean and standard deviation for the subset data. Then, every element in the subset is compared to the subset mean and replaced by it if the element lies outside the 1.96 standard deviation range. The second method, "moving-average," scans every element in the dataset and replaces it with the mean value of its n closest neighbors. Leveraging VTK versatility, users are capable of preparing the data and of configuring and visualizing a series of charts included for this purpose. In this regard, HydroSludge makes use of the VTK filtering methods for reshaping and fitting the inflow. Likewise, it also provides adjustment mechanisms for calculating the rheological properties necessary for the subsequent simulation.

| CAD
HydroSludge explores an imported STEP file with the shape of the object to mesh. Leveraging OCCT, each face is identified and labeled at its center of mass. In this regard, the user is enabled to select faces and group them to determine areas. These areas are crucial for defining boundary conditions in further steps. Furthermore, at this point, areas are designated as patch or wall depending on their role in the simulation. Surfaces are automatically meshed following the user's guidelines for areas and specifications. The link between the geometry and the mesh is the creation of a series of .stl files corresponding to the surfaces and boundaries defined by the user.
In this process, the initial mesh is built from the associated shapes of the geometry, based on their correctly triangulated parts. This operation relies on the OCCT method RetrieveMesh. The .stl files provide OpenFOAM the flexibility of applying mesh refinement methods to a specific boundary. This underlying operation is crucial for a satisfactory and productive user experience. For this reason, HydroSludge acts as a black box where users are only responsible for determining the level of refinement of each surface via a series of input fields in the GUI.

| Mesh
CFD simulations rely on meshes. OpenFOAM provides tools for defining and refining a mesh, which is a sequential process that HydroSludge handles. For this purpose, a critical aspect is that OpenFOAM needs to load the geometry in a supported format, such as STL, which describes an unstructured triangulated surface by unit normals and vertices in a 3D Cartesian system. For this reason, the HydroSludge meshing process internally includes the tailoring of STL files ready for OpenFOAM. As previously introduced (see Section 2.2), OpenFOAM meshing involves a series of iterative steps, so HydroSludge provides an interface divided into four stages: (i) blocking, (ii) sculpting, (iii) snapping, and (iv) layering. In the first stage, HydroSludge leverages the OpenFOAM command blockMesh in order to perform the blocking. This command reads its configuration from the dictionary file blockMeshDict. In this operation, OpenFOAM registers the surfaces' names and types previously defined by the user from the geometry faces. Once the initial mesh has been generated with blockMesh, OpenFOAM provides the command snappyHexMesh to perform the remaining stages. snappyHexMesh reads the execution parameters from the snappyHexMeshDict file which, depending on its configuration, can process the sculpting, the snapping, or the layering. However, the user remains agnostic to the OpenFOAM technical details. The sequential nature of the meshing process allows the user to refine each stage or go back to a previous stage until reaching a satisfactory result. Finally, users can check the mesh quality by clicking a button that executes the OpenFOAM command checkMesh. During the whole process, HydroSludge leverages the VTK library to visualize the results in a viewer. HydroSludge also provides an Advanced Control option to further fine-tune the meshing parameters. This feature enables direct editing of the OpenFOAM configuration files. Advanced users with extensive modeling experience may make the most of this option to increase the quality of the resulting mesh.

| CFD
Once the mesh is ready, the CFD module assists the user in configuring the solver parameters (importing them from the data analysis stage, if needed). Furthermore, the boundaries defined on the geometry are presented according to the selected solver's characteristics. Depending on the solver or equipment included in the case, HydroSludge handles the necessary files internally (i.e., fvOptions, fvSchemes, fvSolution, and transportProperties). Finally, the remaining simulation control parameters are also configured. HydroSludge leverages OpenFOAM MPI multiprocess capabilities to accelerate the simulation in multicore environments. Concretely, HydroSludge decomposes the domain into a given number of processes (up to the number of CPUs assigned to Docker) and defines a processor topology by editing the OpenFOAM file decomposeParDict and running the command decomposePar.
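As a rough illustration of the domain decomposition that HydroSludge automates, the snippet below writes a minimal system/decomposeParDict for four subdomains and invokes the OpenFOAM utilities from Python. The case path, the 2 × 2 × 1 topology, and the use of subprocess are our assumptions; inside HydroSludge this happens in the Docker container, hidden from the user:

```python
import subprocess
from pathlib import Path

case = Path("myCase")  # hypothetical OpenFOAM case directory

# Minimal decomposeParDict: split the domain into 4 subdomains (2 x 2 x 1).
(case / "system" / "decomposeParDict").write_text("""\
FoamFile { version 2.0; format ascii; class dictionary; object decomposeParDict; }
numberOfSubdomains 4;
method          simple;
simpleCoeffs
{
    n       (2 2 1);
    delta   0.001;
}
""")

subprocess.run(["decomposePar", "-case", str(case)], check=True)
subprocess.run(["mpirun", "-np", "4", "pimpleFoam", "-case", str(case), "-parallel"],
               check=True)
```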
Notice that parallel execution is also available for the meshing process.

| Post-processing
A series of predefined operations are included for the post-processing stage. HydroSludge provides tools for defining planes, creating isovolumes, contouring surfaces, and plotting probe lines through the object. This set of operations is based on VTK methods; moreover, they have been tailored to the most common WRRF operator necessities and present a reduced configuration complexity. Every single OpenFOAM utility (whether launched with MPI or not) is executed inside the container via the command docker run from the Windows PowerShell. HydroSludge enables real-time visualization of the simulation results using the VTK library. In this regard, HydroSludge reads the timesteps on-the-fly and provides a series of simplified tools for the scientific visualization of structures. Notice that these features are also compatible with parallel execution. In this case, HydroSludge internally performs the reconstruction operation for each newly solved timestep with the OpenFOAM command reconstructPar, completely transparently to the user.

| Usability
HydroSludge design principles arose from the need to provide useful and efficient management, from the facility technician to the manager of the WRRF. In this regard, it is not mandatory to make comprehensive use of all the features provided by HydroSludge. Instead, HydroSludge allows flexible usage by providing different entry and output points. Figure 3, apart from the modules previously described (blue boxes), also depicts the HydroSludge input points (white files), the data dependencies among modules (blue arrows), and intermediate or final results (gray arrows pointing to "Results"). Although HydroSludge is conceived as a single tool, its modular design allows it to be used as a toolbox of connected components. For this purpose, HydroSludge also includes independent procedures that lead to different types of results. Figure 3 depicts with primary-colored squares the three operations that can be performed independently.
1. Users are capable of conducting a preliminary study through the Data analysis module (yellow square). Left to right, Figure 3 shows this first module, which can obtain results independently; it expects a series of .csv files with data on the flow and the settling performance. In particular, this module provides tools for exploring the appropriateness of the characteristics of the SST physical design, the transient inflow data, the settling models, the rheology of the target case, and, finally, how the influent and external recycling flow rates affect the SST performance. These methods are thoroughly described in Section 2.1.
2. HydroSludge can be used as a standalone tool for generating a mesh from CAD geometries. The red square in Figure 3 highlights the modules involved in the meshing operation. By importing a STEP geometry in the CAD module, users are provided with the necessary instruments to create and refine a mesh in the MESH module.
3. From a holistic point of view, HydroSludge comprises a complete CFD simulation. The blue square in Figure 3 groups the necessary modules for configuring and running a simulation (CFD module) and its visualization (Post-processing module). Furthermore, optionally, the Data analysis module can also take part in the simulation process.

| RESULTS
In this section, two study cases are presented and developed. The study of these cases serves as a proof of concept and of the correct operation of HydroSludge.
| Study of a secondary settling tank
The full-scale SST is 18.5 m in diameter and 5 m in height. In more detail, Figure 4 depicts the exact dimensions of the studied SST. The figure also indicates the location of the inlet, which receives the influent flow of activated sludge, and the two outlet flows, one for clarified water and the other for the external recycling.

| Data analysis
The study of this case starts with a preliminary study leveraging the tools included in the data analysis module (see Section 2.1).

Filtering
With the Settling tool, users calibrate the settling model parameters. For this purpose, first, experimental data from the batch settling test are introduced by importing a .csv file or manually. Next, a linear equation is fitted to the linear part of the experimental settling curve for each activated sludge concentration. When all linear curves have been fitted to the experimental results, HydroSludge calculates the settling model parameters (see Figure 5a). This procedure is not mandatory, so, in this case, the settling model is introduced manually. The Inflow tool allows users to obtain a smooth transient flow from monitored transient influent flows. First, a .csv file with the evolution of the flow through time is imported. Users are provided with a series of tools to apply filters, such as moving averages or outlier removal (see Figure 5b). The filtered transient flow can be used in the CFD simulation afterward and exported. Finally, the Rheology tool saves the rheological model selected for this case and transfers the configuration to the CFD module, if needed.

Checking
With the Design tool, users check whether the dimensions of the SST agree with design requirements from the literature. HydroSludge indicates with a check mark or a warning sign whether each dimension is acceptable (see Figure 6a). More information can be obtained by clicking on the information button, which opens a dialog with recommended design information and its reference. All in all, the filters and the previous check can be used by the State Point Model and the Ten Layers Model to test whether the activated sludge from the biological reactor settles in the SST. This tool is presented with three areas (see Figure 6b). The first area, found at the top, is a hydraulic diagram of the secondary treatment of a WRRF with a bioreactor and an SST. The second area, at the bottom, corresponds to the 1D models that represent the SST performance. The models are configured using the data introduced, such as the diameter and area of the SST, the influent and recirculated flows (Qi and Qr, respectively), the number of settling tanks, the overflow velocity (VA), and the activated sludge concentration (TSS). According to those parameters, HydroSludge calculates the underflow and overflow lines and the Ten Layers Model. The State Point of the WRRF is the underflow-overflow intersection. In this regard, the user can modify and explore the limits of the State Point depending on the given parameters. Thus, the boundary conditions can be checked before proceeding with the CFD model.

| Boundaries definition
HydroSludge assists the complete operation from the geometry to the mesh and its refinements, as previously stated. Thus, the first step in the construction of the CFD model in HydroSludge is to import a geometry with the CAD module. For this purpose, by means of a .step file, users will have the geometry faces at their disposal. Following this, users can declare the mesh surfaces and their roles in the simulation.
For instance, the SST presented below illustrates the power of this meshing tool. Table 1 depicts an example of how faces are grouped into surfaces, and these, in turn, are labeled with a name and identified as patch or wall according to their boundary type (see Figure 7 and Table 1). Non-grouped faces are added to the Default surface and automatically identified as wall. Thereupon, the mesh is ready to be generated in the Mesh module. At this stage, users have access to the Base, Sculpt, Snap, and Layers meshing tools to refine the areas or surfaces of most interest to them. When the mesh is ready, it is time to configure the simulation parameters to start the calculations of the CFD model (see Section 2.2.3). This configuration is done in the CFD module.

| Evaluation
The evaluation of the developed model in this study case is performed using the driftFluxFoam solver. The simulations start from a pre-initialized state that defines the sludge layer inside the SST. Accordingly, the user specifies the height and concentration of the sludge at the initial time of the simulation. The last step before launching the simulation is to provide the control configuration parameters, such as the number of time steps, the writing interval, or the time steepness, which are defined at this stage. Furthermore, the parallel execution of the case can be enabled. For this purpose, the desired number of CPUs involved in the simulation, as well as the process topology, have to be introduced during the control configuration. The results obtained by HydroSludge are comparable to those calculated with a commercial CFD platform such as ANSYS (https://www.ansys.com/products/fluids). To illustrate this, Figures 8 and 9 depict the sludge concentration at a vertical plane of this study case obtained with HydroSludge and ANSYS 19.2. While Figure 8 illustrates the initial timesteps of the simulation, Figure 9 shows the results obtained when a steadier state is reached. Both simulations show a similar sludge blanket height and similar influent flow behavior. Additionally, both simulations showed a similar sludge concentration distribution, with the presence of settled solid particles at the bottom of the clarifier being evacuated through the recirculation outlet by gravity.
Figure 8. Sludge concentration inside the clarifier at the initial timesteps of the simulation.

| Study of an anoxic reactor
The second study case is a full-scale anoxic reactor from a Modified Ludzack-Ettinger biological reactor, which was extensively analyzed in (Climent et al. 2018). In particular, this case considers two influent flows, the influent wastewater flow and the internal recirculation flow; the outflow; and two submersible mixers.

| Equipment
HydroSludge provides a tool for easily including equipment, such as mixers, into the CFD model (see Section 2.2.1), so two submersible mixers are inserted into the reactor. The new equipment can be configured with the following options: identification; center point; vertical inclination; azimuth inclination; radius size; and full width. Figure 10a showcases both created mixers inserted into the structure before meshing. Once the surfaces, boundaries, and equipment are defined, it is time to proceed to the meshing procedure and the refinement of the resulting mesh, if needed. Figure 10b contains the representation of the mesh with refinement at the submersible mixers. The OpenFOAM solver pimpleFoam is selected for the simulation of the hydrodynamics taking place inside the anoxic reactor.
The configuration mechanism is similar to that of the previous study case, but each solver has its own intrinsic parameters, which HydroSludge is responsible for presenting to the user. Concretely, this solver only needs the configuration of the rheological model and the flow rate (or velocity) for each inlet or outlet boundary. pimpleFoam sets water as the default Newtonian model. Since this case includes submersible mixers, their simulation features have to be defined. Specifically, HydroSludge expects the manufacturer coefficient and the power for each impeller. Usually, manufacturer coefficients are provided from clean water tests; hence, they may be modified to match the CFD results with real data measured in process water, as is done in Climent et al. (2018, 2019). Finally, apart from the configuration of the simulation control, the execution can be parallelized (see Section 4.1.3). At the end of this step, the model will have the hydrodynamic solution, which can be leveraged to apply transport equations for models such as the biological one described in the next section.

| Biological model
To apply the biological model implemented in the solver ASM1Foam (see Section 2.2.3), a time-step of the previous hydrodynamic solution has to be chosen. In this regard, the transport equation of the biological model will be applied to the selected time, and all the data related to the mesh or submersible mixers are automatically shifted to this stage. If HydroSludge does not detect hydrodynamic results, ASM1Foam cannot be calculated. Next, the kinetic and stoichiometric parameters of the ASM1 model must be defined. By default, HydroSludge loads the values established in Henze et al. (1999). Additionally, the concentrations of all ASM1 state variables are set for every inflow boundary, as well as the initial conditions of the reactor. After initialization, the case is ready to be simulated.

| Post-processing
As soon as the CFD simulation yields results, users can explore them interactively. For instance, Figure 11a shows three planes colored by the "Sno" variable for timestep 1,160, which corresponds to second 193.4 of the transient simulation. Another example of the tools that HydroSludge provides is a probe line, which is deployed inside the domain to study the behavior of a given variable. Figure 11 depicts the "Snh" distribution along a line in the Z dimension. Furthermore, the viewer and graph can also show the evolution of the variable in time thanks to the media commands for playing the sequence. Lastly, in order to validate against experimental data collected at the WRRF, Figure 12 depicts the velocity profiles at three different locations obtained with HydroSludge and ANSYS 19.2. The results obtained by HydroSludge showed a similar adjustment to the data measured at the WRRF.
Figure 11. Post-processing result examples.
Figure 12. Velocity profile results at three different locations, (a) location A, (b) location B, and (c) location C, inside the anoxic reactor; in black the measured data at the WRRF, in blue the results obtained by CFD (ANSYS), and in red the results obtained by HydroSludge.

| CONCLUSIONS
This paper describes, implements, and evaluates a set of tools for the specific case of the study and analysis of wastewater treatment processes. The toolbox methods are included in a software platform, named HydroSludge, to facilitate the evaluation and simulation of secondary clarifiers and conventional activated sludge processes in a WRRF.
Future work will place HydroSludge in the cloud as a web application instead of a desktop application, as new research developments indicate (Costa-Majó et al. 2021). HydroSludge is also open to being extended with other types of mixers, such as aerators, vertical shaft mixers, or floating surface mixers. Likewise, other ASM-like models can be incorporated in further versions. HydroSludge is a complementary WRRF modeling tool to Biowin, GPS-X, and so forth; users may continue using plant-wide modeling to reproduce the entire operation of the facility. HydroSludge is based on one of the most widespread and leading CFD simulation packages, OpenFOAM. However, HydroSludge prevents the user from getting involved in the complex process of configuring an extensive list of files, bringing a more wizard-like and intuitive usage. This feature makes HydroSludge an ideal tool for wastewater treatment industry users with little knowledge of CFD. Furthermore, HydroSludge includes the nomenclature and procedure names used in the water sector.

ACKNOWLEDGEMENT
This work was supported by project RTC-2016-4560-7 from MINECO. Researcher S. Iserte was supported by the postdoctoral fellowship APOSTD/2020/026 from GVA and ESF. The authors want to thank the anonymous reviewers, whose suggestions significantly improved the quality of this manuscript.
Optimizing the thermoelectric performance of zigzag and chiral carbon nanotubes
Using nonequilibrium molecular dynamics simulations and the nonequilibrium Green's function method, we investigate the thermoelectric properties of a series of zigzag and chiral carbon nanotubes, which exhibit interesting diameter and chirality dependence. Our calculated results indicate that these carbon nanotubes could have higher ZT values at appropriate carrier concentrations and operating temperatures. Moreover, their thermoelectric performance can be significantly enhanced via isotope substitution, isoelectronic impurities, and hydrogen adsorption. It is thus reasonable to expect that carbon nanotubes may be promising candidates for high-performance thermoelectric materials.

Introduction
As it can directly convert waste heat into electric power, a thermoelectric material is expected to be one of the promising candidates to meet the challenge of the energy crisis. The performance of a thermoelectric material is quantified by the dimensionless figure of merit

ZT = S²σT / (κ_e + κ_p),

where S is the Seebeck coefficient, σ is the electrical conductivity, T is the absolute temperature, and κ_e and κ_p are the electron- and phonon-derived thermal conductivities, respectively. An ideal thermoelectric material requires glass-like thermal transport and crystal-like electronic properties [1], i.e., one should try to improve the ZT value by increasing the power factor (S²σ) and/or decreasing the thermal conductivity (κ = κ_e + κ_p) at an appropriate temperature. Such a task is usually very difficult, since those transport coefficients are strongly correlated according to the Wiedemann-Franz law [2]. Low-dimensional or nanostructure approaches [3,4], however, offer new ways to effectively manipulate electron and phonon transport and can thus significantly improve the ZT value. As an interesting quasi-one-dimensional nanostructure with many unusual properties, carbon nanotubes (CNTs) have attracted a lot of attention from the science community since their discovery [5]. However, few people believe that CNTs could be promising thermoelectric materials. This is probably due to the fact that although CNTs can have much higher electrical conductivities, their thermal conductivities are also found to be very high [6-11]. As a result, the ZT values of CNTs predicted in previous works [10,12] are rather small (approximately 0.0047). Prasher et al. [13] found that the so-called 'CNT bed' structure could reduce the thermal conductivity of CNTs. However, the random network of the samples may weaken the electronic transport, and the room-temperature ZT value was estimated to be 0.2. Jiang et al. [14] investigated the thermoelectric properties of single-walled CNTs using a nonequilibrium Green's function (NEGF) approach. They found that CNTs exhibit very favorable electronic transport properties, but the maximum ZT value is only 0.2 at 300 K. A possible reason is the neglect of the nonlinear effect [15] in the phonon transport, so the corresponding thermal conductivity was overestimated. If the thermal conductivity can be significantly reduced without much change to their electronic transport, CNTs may have very favorable thermoelectric properties. In this work, we use a combination of nonequilibrium molecular dynamics simulations and the NEGF method to study the thermoelectric properties of a series of CNTs with different diameters and chiralities.
They are the zigzag (7,0), (8,0), (10,0), (11,0), (13,0), (14,0) and the chiral (4,2), (5,1), (6,2), (6,4), (8,4), (10,5) tubes, all of which are semiconductors in their pristine form. By cooperatively manipulating the electronic and phonon transport, we shall see that these CNTs can be optimized to exhibit much higher ZT values by isotope substitution, isoelectronic impurities, and hydrogen adsorption. It is thus reasonable to expect that CNTs may be promising candidates for high-performance thermoelectric materials.

Computational details
The phonon transport is studied using nonequilibrium molecular dynamics (NEMD) simulations as implemented in the LAMMPS software package (Sandia National Laboratories, Livermore, CA, USA) [16]. The Tersoff potential [17] is adopted to solve the Newtonian equations of motion according to the Müller-Plathe algorithm [18] with a fixed time step of 0.5 fs. We carry out a 300-ps constant-temperature simulation and a 200-ps constant-energy simulation to make sure that the system has reached a steady state. The nanotubes are then divided into 40 equal segments with periodic boundary conditions, and the first and twenty-first segments are defined as the hot and cold regions, respectively. The coldest atom in the hot region and the hottest one in the cold region swap their kinetic energies every few hundred time steps; a temperature gradient then develops and a heat flux is maintained via atomic interactions in neighboring segments [19,20]. The electronic transport is calculated using the NEGF method as implemented in the Atomistix ToolKit code (QuantumWise A/S, Copenhagen, Denmark) [21,22]. The nanotube is modeled by a central part connected to left and right semi-infinite leads. We use the Troullier-Martins nonlocal pseudopotentials [23] to describe the electron-ion interactions. The exchange-correlation energy is in the PW-91 form [24], and the cutoff energy is set to 150 Ry. We use a double-ζ basis set plus polarization for the carbon atoms, and the Brillouin zone is sampled with 1 × 1 × 100 Monkhorst-Pack meshes. The mixing rate of the electronic Hamiltonian is set to 0.1, and the convergence criterion for the total energy is 4 × 10⁻⁵ eV.

Results and discussions
We begin with the phonon transport of these CNTs using the NEMD simulations, where the phonon-induced thermal conductivity (κ_p) is calculated according to Fourier's law, κ_p = J/(A∇T). Here, J is the heat flux from the hot to the cold region, A is the cross-sectional area of the system, and ∇T is the temperature gradient. To test the reliability of our computational method, in Figure 1 we plot the NEMD-calculated thermal conductivity of the (4,2) tube as a function of temperature. For comparison, the result using the more accurate Callaway-Holland model [25,26] is also shown. We see that the NEMD result agrees well with that of the Callaway-Holland model when the temperature is higher than 150 K. As molecular dynamics simulation is much faster than other approaches and can handle nonlinearity when dealing with heat transport, we will use it throughout this work as long as the temperature is not very low. For low-dimensional systems, one should pay special attention to the size effect when discussing the thermal conductivity. Both experimental measurements [27,28] and molecular dynamics simulations [29,30] indicate that the κ_p of CNTs depends on their length, which is different from the behavior of bulk materials.
Table 1 summarizes the NEMD-calculated room-temperature κ_p of the zigzag and chiral series; the corresponding tube diameter (d) and chiral angle (θ) are also given. It should be noted that we have carried out a quantum correction [31] to the thermal conductivity, and the tube length is assumed to be 1 μm for all the CNTs considered. As can be seen from the table, the room-temperature κ_p values of the CNTs are indeed very high, ranging from several hundred to more than 1,000 W/m·K. If we focus on the zigzag CNTs, we find that the thermal conductivity decreases as the tube diameter is increased. This is also the case for the chiral CNTs with the same chiral angle (e.g., the (4,2), (8,4), and (10,5) tubes). The reason is that larger-diameter CNTs have a smaller average group velocity and a higher probability of Umklapp processes [25,32]. On the other hand, if we focus on those CNTs with roughly similar diameters (e.g., (7,0) vs. (6,2), (11,0) vs. (8,4), (13,0) vs. (10,5)), it is interesting to find that the thermal conductivity of the chiral tube is always lower than that of the zigzag one. As these CNTs have a similar average group velocity, we believe that the more frequent phonon Umklapp scattering in the chiral tubes makes a significant contribution to the reduced thermal conductivity. We now move to the discussion of electronic transport using the NEGF approach. Figure 2 shows the calculated electronic transmission function (T(E)) for the above-mentioned zigzag and chiral series. Within the rigid-band picture, E > 0 corresponds to n-type doping, while E < 0 corresponds to p-type doping. Here, we focus on ballistic electron transport and ignore the weak electron-phonon scattering. We see that all the investigated CNTs exhibit quantized transmission which can essentially be derived from their energy band structures. The vanishing transmission function around the Fermi level is consistent with the fact that all of them are semiconductors. It is interesting to find that those CNTs with a larger diameter have a symmetrically distributed transmission function near the Fermi level. However, this is not the case for smaller-diameter CNTs such as (7,0), (8,0), and (4,2), where we see that the number of first conduction channels is two for p-type doping and one for n-type doping. By integrating [33] the calculated T(E), one can easily obtain the Seebeck coefficient (S), the electrical conductance (G), and the electronic thermal conductance (λ_e) within the linear-response limit. Here, we choose the zigzag (10,0) and chiral (6,4) as two typical examples and plot in Figure 3 the corresponding transport coefficients at 300 K as a function of the chemical potential (μ). Note that the chemical potential indicates the doping level or carrier concentration of the system; n-type doping corresponds to μ > 0, while p-type corresponds to μ < 0. As can be seen in Figure 3a,b, both G and λ_e of these two CNTs vanish around the Fermi level (μ = 0) since this area corresponds to the band gap of the systems. When the chemical potential moves to the edge of the first conduction channels, there is a sharp increase of G and λ_e. For both the (10,0) and (6,4) tubes, the S shown in Figure 3c is rather symmetric about the Fermi level, which can be attributed to the symmetrically distributed first conduction channels (see Figure 2). The absolute value of the Seebeck coefficient reaches its maximum at μ ≈ ±k_BT and then decreases until it vanishes near the band edge.
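The integration of T(E) mentioned above can be sketched with the standard Landauer moments L_n; this is a generic textbook implementation under our own unit conventions (energies in eV) and sign choices, not the scheme of reference [33].

```python
import numpy as np

KB   = 8.617333e-5    # Boltzmann constant [eV/K]
G0   = 7.748092e-5    # conductance quantum 2e^2/h [S]
H_EV = 4.135668e-15   # Planck constant [eV s]
J_EV = 1.602177e-19   # Joules per eV

def transport_coefficients(E, TE, mu, T):
    """Landauer linear-response coefficients from a transmission function.
    E: energy grid [eV]; TE: transmission T(E); mu: chemical potential [eV]; T [K].
    Returns Seebeck S [V/K], conductance G [S], thermal conductance lambda_e [W/K]."""
    w = 0.25 / (KB * T * np.cosh(0.5 * (E - mu) / (KB * T)) ** 2)  # -df/dE [1/eV]
    L0 = np.trapz(TE * w, E)                    # dimensionless
    L1 = np.trapz(TE * (E - mu) * w, E)         # [eV]
    L2 = np.trapz(TE * (E - mu) ** 2 * w, E)    # [eV^2]
    G = G0 * L0                                 # electrical conductance
    S = -L1 / (T * L0)                          # e = 1 in eV/V units
    lam_e = (2.0 / H_EV) * (L2 - L1 ** 2 / L0) / T * J_EV
    return S, G, lam_e
```

ZT then follows as S²GT/(λ_e + λ_p) once λ_p is taken from the NEMD side, which is the conductance form of the figure of merit used in the next paragraph.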
It should be mentioned that we have used the term 'conductivity' for the phonon transport but 'conductance' for the electronic transport. To avoid an arbitrary definition of the cross-sectional area in low-dimensional systems, the ZT value is evaluated as ZT = S²GT/(λ_e + λ_p), where the phonon-derived thermal conductance (λ_p) replaces the original thermal conductivity (κ_p). Figure 3d shows the chemical-potential-dependent ZT value at 300 K for the (10,0) and (6,4) tubes. We see that both of them exhibit two peak values around the Fermi level, which suggests that by appropriate p-type and n-type doping, one can significantly enhance the thermoelectric performance of CNTs. For the (10,0) tube, the maximum ZT value is found to be 0.9, appearing at μ = ±0.40 eV. In the case of the (6,4) tube, the ZT value can be optimized to 1.1 at μ = ±0.44 eV. The same doping level for p-type and n-type doping in the (10,0) or (6,4) tubes is very beneficial for their applications in real thermoelectric devices. So far we have dealt only with room temperature, where the ZT values are still not comparable to those of the best commercial materials. Moreover, a thermoelectric material may need to operate at different temperatures for different applications. We thus perform additional transport calculations where the temperature ranges from 250 to 1,000 K. Figure 4 plots the calculated ZT values as a function of temperature for the above-mentioned zigzag and chiral series. At each temperature, two ZT values are shown, corresponding to the optimized p-type and n-type doping in each tube. Except for the small (4,2) tube with a maximum ZT value at 300 K, we see in Figure 4 that the thermoelectric performance of the other CNTs can be significantly enhanced at relatively higher temperatures. The maximum ZT values achieved are 3.5 for the zigzag (10,0) at 800 K and 4.5 for the chiral (6,4) at 900 K. These values are very competitive with those of the materials used in conventional refrigerators or generators. It is interesting to note that among the investigated CNTs, both the (10,0) and (6,4) tubes have an intermediate diameter (0.7 to 0.8 nm), and those with larger or smaller diameters have relatively less favorable thermoelectric performance. On the other hand, we see that almost all the zigzag tubes exhibit a peak ZT value at an intermediate temperature (700 to 800 K). In contrast, the peak for the chiral series moves roughly from 300 to 900 K as the tube diameter is increased. Our calculated results thus provide a simple map by which one can efficiently find the best CNT for thermoelectric applications at different operating temperatures. To further improve the thermoelectric performance of these CNTs, we have considered isotope substitution, which is believed to reduce the phonon-derived thermal conductance without changing the electronic transport properties [34][35][36]. Here, we choose the (10,0) tube as an example since it has the highest ZT value among the zigzag series, and zigzag tubes are usually easier to fabricate or to select experimentally than chiral ones. In our calculations, the ¹²C atoms in the (10,0) tube are randomly substituted by ¹³C atoms at different concentrations. The corresponding lattice thermal conductance as well as the ZT value at 800 K is shown in Figure 5, normalized to the pristine values.
Figure 4. Optimized ZT values as a function of temperature for a series of (a) zigzag and (b) chiral tubes; the results for p-type and n-type doping are both shown. Figure 5. Calculated lattice thermal conductance (red) and optimized ZT value (blue) at 800 K for the (10,0) tube, where the ¹²C atoms are substituted by ¹³C atoms at different concentrations; the results are given with respect to those of the pure ¹²C tube.
Due to the mass difference between ¹²C and ¹³C, we see that the calculated thermal conductance of the (10,0) tube decreases with increasing concentration of ¹³C atoms. Of course, if half or more of the ¹²C atoms are substituted, the trend reverses. The thermal conductance can be well fitted by a double exponential function of the ¹³C concentration. For a light isotope substitution (¹²C₀.₉₅¹³C₀.₀₅), the thermal conductance is already reduced by about 9%, and the ZT value increases to 3.7 from the pristine value of 3.5. If half of the ¹²C atoms are replaced (¹²C₀.₅¹³C₀.₅), the corresponding thermal conductance reaches its minimum and the ZT value can be as high as 4.2, which suggests appealing thermoelectric applications. Introducing isoelectronic impurities is another effective way to localize phonons and reduce the lattice thermal conductance through impurity scattering [37]. Here, we choose Si as an example and consider a very low concentration where one C atom in a (10,0) supercell containing three primitive cells is replaced by a Si atom. The resulting product has a nominal formula of C₁₁₉Si and is schematically shown in Figure 6a. As the mass difference between C and Si is even larger, we find that the phonon-derived thermal conductance of C₁₁₉Si is significantly reduced, by 45% to 60%, compared with that of the pristine (10,0) tube in the temperature range from 300 to 900 K. On the other hand, since C and Si atoms have the same valence electron configuration, one may expect that Si doping will not change the electronic transport properties much. Indeed, our calculations find only a small weakening of the power factor (S²G). As a result, we see in Figure 6c that there is an overall increase of the ZT value in the temperature range of 300 to 700 K. The Si-doped product has a maximum ZT = 4.0 at T = 600 K, compared with the pristine value of 3.5 at T = 800 K. It is worth mentioning that in a wide temperature range (450 to 800 K), the ZT values of the Si-doped product are all higher than 3.0, which is very beneficial for thermoelectric applications. A similar improvement of the thermoelectric performance can be achieved by hydrogen adsorption on the (10,0) tube. As shown in Figure 6b, two hydrogen atoms are chemisorbed on top of a C-C bond along the tube axis, and the product has a nominal formula of C₄₀H₂. Our calculated results indicate that such hydrogen adsorption deforms the (10,0) tube and reduces both the phonon- and electron-derived thermal conductances while leaving S²G less affected. For example, the calculated λ_p at 600 K is 0.072 nW/K, much lower than that of the pristine (10,0) tube (0.21 nW/K). The calculated λ_e also decreases from 0.089 to 0.062 nW/K. At the same time, the S²G of the chemisorbed product (9.47 × 10⁻¹³ W/K²) is slightly lower than that of the pristine (10,0) tube (1.28 × 10⁻¹² W/K²). As a result, the calculated ZT value at 600 K increases significantly from 2.6 to 4.2, which is even higher than the highest value of the pristine (10,0) tube. The chemisorption of hydrogen also increases the ZT value at other temperatures, as indicated in Figure 6c. It is interesting to note that the temperature-dependent behavior almost coincides with that from Si doping, especially in the temperature region from 400 to 700 K.
Summary In summary, our theoretical calculations indicate that by appropriate n-type and p-type doping, one can obtain much higher ZT values for both the zigzag and chiral CNTs, and tubes with an intermediate diameter (0.7 to 0.8 nm) seem to have better thermoelectric properties than the others. Taking the zigzag (10,0) as an example, we show that the phonon-derived thermal conductance can be effectively reduced by isotope substitution, isoelectronic impurities, and hydrogen adsorption, while the electronic transport is less affected. As a result, the ZT value can be further enhanced and is very competitive with those of the best commercial materials. To experimentally realize this goal, one needs to fabricate CNTs with specific diameter and chirality, and the tube length should be at least 1 μm. This may be challenging but is feasible, considering that the (10,0) tube has been successfully produced by many means, such as direct laser vaporization [38], the electric arc technique [39], and chemical vapor deposition [40], and can be selected from mixed or disordered samples using a DNA-based separation process [41].
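Returning to the isotope-substitution data above, the double-exponential fit can be sketched as follows; the functional form follows the text, but the parameterization, initial guess, and data points are illustrative placeholders, not the values of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(x, a1, t1, a2, t2):
    """Relative thermal conductance vs 13C fraction x as a sum of two exponentials."""
    return a1 * np.exp(-x / t1) + a2 * np.exp(-x / t2)

x_13C = np.array([0.00, 0.05, 0.10, 0.20, 0.30, 0.50])   # 13C concentration
k_rel = np.array([1.00, 0.91, 0.85, 0.78, 0.74, 0.70])   # lambda/lambda_pristine (illustrative)
popt, _ = curve_fit(double_exp, x_13C, k_rel, p0=(0.5, 0.05, 0.5, 1.0))
print("fitted parameters:", popt)
```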
4,347
2012-02-11T00:00:00.000
[ "Materials Science" ]
A Robust CoS-PVNet Pose Estimation Network in Complex Scenarios: Object 6D pose estimation, as a key technology in applications such as augmented reality (AR), virtual reality (VR), robotics, and autonomous driving, requires the prediction of the 3D position and 3D pose of objects robustly from complex scene images. However, complex environmental factors such as occlusion, noise, weak texture, and lighting changes may affect the accuracy and robustness of object 6D pose estimation. We propose a robust CoS-PVNet (complex scenarios pixel-wise voting network) pose estimation network for complex scenes. By adding a pixel-weight layer based on the PVNet network, more accurate pixel point vectors are selected, and dilated convolution and adaptive weighting strategies are used to capture local and global contextual information of the input feature map. At the same time, the perspective-n-point localization algorithm is used to accurately locate 2D key points to solve the 6D pose of objects, and then the transformation relationship matrix of the 6D pose projection is solved. The research results indicate that on the LineMod and Occlusion LineMod datasets, CoS-PVNet has high accuracy and can achieve stable and robust 6D pose estimation even in complex scenes. Introduction Object 6D pose estimation, as an important task in the field of computer vision, has many applications in fields such as augmented reality (AR), virtual reality (VR), robotics, and autonomous driving. As shown in Figure 1, by estimating the 6D pose of an object in the camera coordinate system, namely the 3D position and 3D pose, virtual and real objects can be combined in the real environment to enhance people's perception of the real world [1]. In addition, in industrial manufacturing, robots can perform precise part positioning and assembly operations through pose estimation. In autonomous driving navigation, cars need to understand their location in the environment in order to plan the optimal path. However, due to the influence of complex conditions such as background clutter and target occlusion in the real environment [2], 6D object pose estimation can be inaccurate and lack robustness. Therefore, accurately and robustly estimating the 6D pose of the target object from complex scenes is crucial for improving the performance of AR, VR, robotics, and autonomous driving [3].
The 6D pose estimation of target objects aims to detect targets and estimate their orientation and translation relative to a standard frame [4]. The main challenge of traditional 6D pose estimation is to establish a correspondence between the input image and available 3D models and then use the perspective-n-point (PnP) algorithm to calculate the pose parameters. However, the quality of the correspondence is sensitive to factors such as lighting changes, weak textures, and cluttered backgrounds [5], making it difficult for traditional methods to handle textureless objects and exhibiting poor robustness to severe occlusion and background changes [6]. In recent years, deep learning-based methods have shown strong capabilities in handling 6D pose estimation; they can generally be divided into two categories: end-to-end methods based on direct regression and two-stage methods based on object class priors. In end-to-end methods, a neural network is trained to directly regress the 6D pose from the input image; although this type of method is highly efficient, it is not as accurate as traditional geometry-based PnP algorithms [7]. In two-stage methods, a CNN is first used to regress an intermediate representation, establishing 2D-3D correspondences, and then the PnP algorithm is executed based on this correspondence. However, this type of method usually uses regression and multiple representations to estimate the pose, requiring accurate acquisition of key point information of the target object. It can be seen that the existing mainstream 6D object pose estimation methods model this problem as a regression task, requiring special designs to deal with multiple-solution problems when dealing with symmetric and partially visible objects. We propose a deep learning-based CoS-PVNet (complex scenarios pixel-wise voting network) for 6D object pose estimation in complex scenes, which achieves accurate and robust 6D pose estimation of the target object. CoS-PVNet provides support for achieving stable and robust 6D pose estimation in virtual-real fusion interactive applications. The main work of this article is as follows: (1) For complex scenes such as cluttered environments and severe occlusion, a CoS-PVNet object pose estimation network framework is proposed, which can enhance the key point feature processing ability of RGB images, accurately filter and predict pixel vectors, and effectively improve the accuracy and robustness of 6D pose estimation in complex scenes. (2) Inaccurate vector field prediction affects the quality of the generated key point hypotheses. By adding a pixel-weight self-learning module between the encoder and decoder of PVNet to predict pixel confidence, the network can adapt to more complex image features and changes through learnability, prevent the loss of key feature information, and make the semantic segmentation results more accurate. (3) To improve the quality of key point feature extraction in complex scenes, a pixel-weight layer is added to PVNet to filter out more accurate pixel vectors, and a global attention mechanism is proposed to enhance the extraction of useful key point features while adding contextual information, improving the performance of CoS-PVNet in extracting features of weakly textured scenes. Related Work A deep learning network model is used to calculate the 6D pose [R, T] of the target object from the image, given an RGB/RGB-D image containing the target object and a 3D model of the target object [8], as shown in Formula (1).
[R, T] = F(I, Model; θ), (1) where F is the deep learning model, I is the input image, Model is the 3D model of the object, and θ is the model parameter. Deep learning-based 6D object pose estimation typically uses object detection networks or semantic segmentation networks as feature extraction networks to annotate target regions from images and encode pose semantic features [9]. However, unlike pixel-level classification based on semantic segmentation, object detection has a faster inference speed and is more in line with the real-time requirements of AR, VR, robotics, and autonomous driving. Therefore, early 6D pose estimation often used object detection networks as feature extraction networks [10]. Algorithms such as SSD-6D [11], YOLO-6D [12], and CDPN [13] first calculate the 2D bounding box of the object based on the SSD [11], YOLO V2 [14], and Faster R-CNN [15] target detection networks, respectively, and then send the 2D bounding box area into the pose calculation branch to estimate the 6D pose of the target object. The inference speeds of YOLO-6D and CDPN are 20 ms and 33 ms, much faster than the pose estimation models based on semantic segmentation during the same period. However, because the 2D bounding boxes output by the object detection network contain some background or occluded areas, the features input to the pose calculation module inevitably contain interference features, thereby reducing the accuracy of 6D pose estimation and the robustness of the model to occlusion. Semantic segmentation is a pixel-level object detection method that accurately segments objects along their contours, eliminating occlusion and irrelevant background regions. Therefore, it is more suitable as a feature extraction network for complex scenes [16]. For example, the average accuracy of PoseCNN [17] in occluded scenes in 2017 was 24.9%, much higher than the 6.42% of YOLO-6D [12] in 2018, but the frame rate of PoseCNN was 10 FPS, only 1/5 that of YOLO-6D. The semantic segmentation architecture of PoseCNN is similar to FCN [18], with a VGG [19] encoder gradually encoding semantic features of different dimensions. However, the output high-resolution semantic features severely lack detailed information, and transposed convolution is inefficient. U-Net [20] is a classic semantic segmentation network that adopts a symmetric encoding and decoding structure and fuses detailed features with deep semantic features through skip connections to improve the network's understanding of images. Inspired by the U-Net network, PVNet [21] uses residual blocks and bilinear interpolation to reconstruct a lightweight U-Net as a feature extraction network with an inference speed of 40 ms. However, inaccurate vector field predictions in complex scenes affect the quality of the generated key point hypotheses, and PVNet can suffer from difficult and insufficient feature extraction for target objects in complex scenes, affecting the accuracy and robustness of 6D pose estimation.
In summary, semantic segmentation-based pose estimation methods are more suitable for 6D object pose estimation in complex scenes, and pose estimation built on feature extraction networks of different architectures belongs to multitask learning [22], which not only annotates the target object from the input image but also calculates the 6D pose of the target object [23]. Therefore, appropriate multitask self-learning weights help to explore the correlations between the object detection, semantic segmentation, and pose estimation tasks and to extract sufficient semantic features to distinguish target objects from occluding objects, reducing the impact of occluded areas, improving the capability of semantic feature expression and the accurate estimation of the 6D pose, and thus improving the performance of the 6D object pose estimation network. Overall Framework Structure of CoS-PVNet In response to the difficulty of accurately estimating the 6D pose of objects in complex scenes [24], this paper proposes a two-stage CoS-PVNet pose estimation network based on a single RGB image, built on PVNet pose estimation. By integrating key point localization into a deep learning architecture, a CNN is used to establish the 2D-3D correspondence of the target object and accurately locate 2D key points; the global attention mechanism and voting mechanism are then used together with the PnP algorithm to solve the 6D pose information of objects, accurately estimating the 6D pose of the target object without any pose refinement. The overall framework structure of CoS-PVNet is shown in Figure 2. Given a single RGB image containing the target object, a weight self-learning module is added between the skip connections of PVNet, and three tasks are performed: predicting semantic labels, predicting pixel-direction unit vectors, and predicting pixel weights. Then, a new global attention mechanism (GAM) is proposed to enhance the extraction of useful features and increase contextual information. Furthermore, the ASPP-DF-PVNet algorithm [25] is used to optimize RANSAC voting for locating 2D key points, filtering out biased votes and further refining the voting results to obtain more accurate 2D key points. Finally, the PnP algorithm is used to solve the 6D pose of the target, and a homogeneous coordinate transformation matrix composed of the translation and rotation transformations of the target object coordinate system relative to the camera input coordinates is solved, achieving the CoS-PVNet coordinate system transformation. CoS-PVNet Weight Self-Learning Module Structure The weight self-learning module structure consists of a series of residual units. As shown in Figure 2, the overall backbone of the network is a pretrained ResNet-18 [26], followed by a weight self-learning module and several convolutional and upsampling layers, described as a weight self-learning structure. In the network structure, a weight self-learning module is added to the skip connections; through this module, larger weights are assigned to key information to prevent its loss, thereby making the semantic segmentation results more accurate. The weight self-learning module adds conv5-conv10x to the network structure of ResNet-18. An image of size 3 × H × W is taken as input and downsampled until the feature map reaches H/8 × W/8, and the convolutions in the last two blocks of ResNet-18 are then replaced with dilated convolutions of rate = 2 and rate = 4.
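A minimal PyTorch sketch of the encoder modification just described, keeping the feature map at 1/8 resolution by converting the stride-2 convolutions of ResNet-18's last two stages into dilated convolutions of rates 2 and 4; the exact layers modified in CoS-PVNet are our assumption, and this is only the standard dilation trick, not the authors' code.

```python
import torch.nn as nn
from torchvision.models import resnet18

def dilate_stage(stage, rate):
    """Remove downsampling in a ResNet stage and dilate its 3x3 convolutions."""
    for m in stage.modules():
        if isinstance(m, nn.Conv2d):
            if m.stride == (2, 2):
                m.stride = (1, 1)            # stop further downsampling
            if m.kernel_size == (3, 3):
                m.dilation = (rate, rate)    # enlarge the receptive field
                m.padding = (rate, rate)     # keep the spatial size unchanged

backbone = resnet18(weights=None)            # pretrained weights omitted in the sketch
dilate_stage(backbone.layer3, rate=2)
dilate_stage(backbone.layer4, rate=4)
```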
Subsequently, the feature maps output by the encoder are fed into the weight self-learning module to extract dense features. Finally, the resulting feature maps from all branches are concatenated and fed to another 1 × 1 convolution to obtain the desired spatial dimension. In the weight self-learning module, the number of output channels is set to 256. After obtaining the feature map processed by the weight self-learning module, upsampling is performed until the size reaches H × W. Assuming there are C object classes and each object has K key points, 1 × 1 convolutions are applied on the feature map to output the tensors for the key point vector fields, the semantic labels of the segmented image, and the pixel weights; given an input RGB image, the semantic labels and the predicted unit vector v_k(p) of each pixel toward the K key points are output at the same size. v_k(p) represents the direction in which pixel p votes toward a key point x_k, and it is calculated as the offset from the current pixel to the k-th key point divided by its 2-norm: v_k(p) = (x_k − p) / ||x_k − p||₂. The pixel weights output by CoS-PVNet represent the confidence score obtained by each pixel, which is used to filter out outliers and keep inlier pixels for voting before calculating the two-dimensional positions of the key points. The pixel weight e(p) estimates the cosine between the predicted vector and the target vector: e(p) = cos(v_k(p), v̂_k(p)). The larger the pixel-weight value, the closer the predicted vector is to the true value. In the later key point calculation, the pixels allowed to vote are selected according to the predicted pixel-weight values to ensure the accuracy of pose estimation. The total loss function is L = λ_vec·L_vec + λ_sem·L_sem + λ_e·L_e, where L_vec is the vector field prediction loss function, L_sem is the semantic segmentation loss function, and L_e is the weight prediction loss function; λ_vec, λ_sem, and λ_e are the corresponding coefficients. The loss function for vector field prediction is defined, as in PVNet, as a smooth-ℓ1 loss over the differences between the predicted and ground-truth unit vectors.
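A minimal numpy sketch of the two per-pixel quantities just defined, the voting vector v_k(p) and the cosine-based weight e(p); the function names are ours, and the ground-truth vector is passed in explicitly for illustration.

```python
import numpy as np

def unit_vector(p, x_k):
    """v_k(p): unit vector from pixel p toward keypoint x_k (2D coordinates)."""
    d = np.asarray(x_k, dtype=float) - np.asarray(p, dtype=float)
    return d / np.linalg.norm(d)

def pixel_weight(v_pred, v_gt):
    """e(p): cosine between the predicted and ground-truth voting vectors;
    values near 1 mark reliable pixels that are kept for RANSAC voting."""
    v_pred, v_gt = np.asarray(v_pred, float), np.asarray(v_gt, float)
    return float(np.dot(v_pred, v_gt) /
                 (np.linalg.norm(v_pred) * np.linalg.norm(v_gt)))

v = unit_vector(p=(120, 80), x_k=(150, 40))
print(v, pixel_weight(v, unit_vector((121, 80), (150, 40))))
```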
CoS-PVNet Global Attention Mechanism To cope with feature extraction in complex scenes, in scenes with few features, or in scenes without distinctive features, a global attention mechanism is proposed in the CoS-PVNet algorithm to enhance the extraction of useful features and add contextual information for more effective processing of input feature maps. As shown in Figure 3, this mechanism adopts dilated convolution and adaptive weighting strategies to capture local and global contextual information of the input feature map. Firstly, a dilated convolutional layer is applied to the input feature map X to capture its local contextual information; it outputs a feature map D, which contains the spatial information of the original feature map and the contextual information captured through dilated convolution. Next, global average pooling is performed on feature map D to extract global contextual information, transforming D into a feature vector G that represents global information. To achieve an adaptive attention mechanism, the global information feature vector G is passed to a shared fully connected layer MLP, which outputs a weight matrix W with the same dimension as the input feature map X. Subsequently, the weight matrix W is used to perform weighted fusion on the input feature map X. The weighted fusion operation can be expressed as A = W ⊗ X, where ⊗ represents element-wise multiplication. In this way, a weighted feature map A containing adaptive attention information is obtained. Then, an element-wise addition is performed on the weighted feature map A and X, and the final feature map Z is generated through the ReLU activation function. For a given input X, GAM is thus expressed as D = DilatedConv(X), G = AvgPool(D), W = MLP(G), Z = ReLU(W ⊗ X + X), where D is the feature map obtained by dilated convolution, G is the feature vector generated by average pooling, W is the output weight matrix of the fully connected layer, and Z is the feature map generated by the ReLU activation function after the addition. Therefore, by integrating local and global contextual information, this adaptive attention mechanism can more effectively extract information from input feature maps, and the adaptive weighting strategy enables the model to automatically adjust attention weights based on the input, thereby improving the performance of CoS-PVNet. In addition, since PVNet uses the cosine similarity between two vectors to determine votes, the method is more reliable when a key point hypothesis is consistent with more predicted directions [25]. However, when pixels are far from the key point hypothesis, the small angle between two direction vectors may cause significant voting bias, and when two hypotheses are close, this leads to inaccurate voting. Therefore, this paper uses the ASPP-DF-PVNet algorithm to optimize RANSAC voting for locating 2D key points, in order to obtain more accurate 2D key points and provide support for subsequent accurate target object pose estimation. CoS-PVNet Target Object Pose Estimation After determining the 2D positions of the target object's key points, CoS-PVNet achieves pose estimation through the PnP algorithm. By calculating the mean μ_k and covariance matrix Σ_k of the estimated key point hypotheses and minimizing the Mahalanobis distance, the 6D pose (R, t) is calculated as (R, t) = argmin_{R,t} Σ_k (μ_k − π(R X_k + t))ᵀ Σ_k⁻¹ (μ_k − π(R X_k + t)), (12) where X_k represents the 3D coordinates of the key points, μ_k represents the estimated 2D mapping of the 3D coordinates X_k, and π is the perspective mapping function. The rotation and translation parameters R and t are initialized using the EPnP (efficient perspective-n-point) algorithm. Due to the uncertainty of the features, the Levenberg-Marquardt nonlinear least-squares algorithm is used to minimize the reprojection error and solve Formula (12). Therefore, based on the voting results, PnP can accurately locate and utilize the 2D key points, allowing the distance-filtering voting scheme to further improve the performance of pose estimation. In addition, in the subsequent experiments of this article, to explore the impact of the number of key points on pose estimation, results with different numbers of key points are compared, and K = 8 is adopted as a balance between efficiency and accuracy.
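Looking back at the GAM block described earlier in this section, a minimal PyTorch sketch is given below; the single 3 × 3 dilated convolution, the two-layer MLP with sigmoid output, and the per-channel broadcast of W are our assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GAM(nn.Module):
    """Sketch of the global attention mechanism: dilated conv -> global average
    pooling -> shared MLP -> weighted fusion -> residual addition + ReLU."""
    def __init__(self, channels, dilation=2, reduction=4):
        super().__init__()
        self.dilated = nn.Conv2d(channels, channels, 3,
                                 padding=dilation, dilation=dilation)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                    # x: (B, C, H, W)
        d = self.dilated(x)                  # local context, D
        g = d.mean(dim=(2, 3))               # global average pooling, G: (B, C)
        w = self.mlp(g)[:, :, None, None]    # adaptive weights, W
        a = w * x                            # weighted fusion, A = W (x) X
        return self.relu(a + x)              # Z = ReLU(A + X)
```

Here W is broadcast per channel; a weight map matching the full spatial dimension of X, as the text describes, would replace the MLP with a convolutional head.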
CoS-PVNet Coordinate System Conversion Relationship The 6D pose estimation refers to estimating the 3D position and 3D pose of an object in the camera coordinate system. The coordinate system of the original object itself can be regarded as the world coordinate system; that is, the goal is to obtain a homogeneous transformation matrix composed of translation and rotation transformations from the world coordinate system of the original object to the camera coordinate system. As shown in Figure 4, CoS-PVNet registration mainly establishes the transformation relationships between the world coordinate system, camera coordinate system, image coordinate system, and pixel coordinate system; the mapping into the image coordinate system involves K (the camera intrinsic reference). Rotation and translation transformations occur during the camera shooting process. According to the principle of pinhole imaging, the target image is inverted, and the transformation is represented by a homography matrix. The mapping relationship between the homography matrix and the rotation-translation matrix (R, t) is used to calculate and solve the various parameters. From the relationships between the coordinate systems, it can be inferred that the relationship between a point (u, v) in the pixel coordinate system of CoS-PVNet and a point (X_w, Y_w, Z_w) in the world coordinate system is: Z_c [u, v, 1]ᵀ = K [R | t] [X_w, Y_w, Z_w, 1]ᵀ, (14) where K represents the internal reference (intrinsic) matrix of the camera, which can be obtained by calibrating the camera. When the camera actually shoots, the camera pose can be solved according to Formula (14). Therefore, after completing the spatial projection coordinate transformation based on CoS-PVNet, the camera pose parameters can be solved, achieving stable and robust system applications in AR, VR, robotics, and autonomous driving. The following coordinate systems are involved in image processing: O-XYZ: the camera coordinate system, with the optical center as its origin; o-xy: the image coordinate system, with the optical center projected to the midpoint of the image; uv: the pixel coordinate system, with its origin in the upper-left corner of the image; P: a point in the world coordinate system, i.e., a real point in the scene; p: the imaging point of P, with coordinates (x, y) in the image coordinate system and (u, v) in the pixel coordinate system; f: the camera focal length, equal to the distance between o and O.
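Formula (14) can be illustrated with a small sketch of the world-to-pixel mapping; the intrinsics below are LineMod-like example values only, and the function name is ours.

```python
import numpy as np

def project(P_w, K, R, t):
    """Map a 3D world point to pixel coordinates via Formula (14)."""
    P_c = R @ np.asarray(P_w, dtype=float) + t   # world -> camera coordinates
    uvw = K @ P_c                                # Z_c * [u, v, 1]^T
    return uvw[:2] / uvw[2]                      # (u, v) in pixels

K = np.array([[572.4,   0.0, 325.3],             # fx,  0, cx  (example intrinsics)
              [  0.0, 573.6, 242.0],             #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])      # object 0.5 m in front of the camera
print(project([0.01, -0.02, 0.0], K, R, t))      # -> approx. [336.7, 219.1]
```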
CoS-PVNet Pose Estimation and Application Process The pose estimation and application process of CoS-PVNet is shown in Figure 5. By extracting feature information from the input RGB image and using a weight self-learning module, the model can automatically adjust and optimize weights during the training process, improving its flexibility and adaptability. Then, CoS-PVNet predicts the key point positions of each target object on the feature map and combines the global attention mechanism to enhance the extraction of useful features and increase contextual information, in order to extract information more effectively from the input feature map. Subsequently, CoS-PVNet generates a voting vector for each detected key point, using the voting results to estimate the 6D pose of the target object. Finally, the CoS-PVNet coordinate system transformation relationship is solved to further realize applications in AR, VR, robotics, and autonomous driving. The specific steps for CoS-PVNet pose estimation and application are as follows. Step 1. Use the camera to input an RGB image containing the target object. Step 2. Feed the input image into a pretrained ResNet-18 convolutional neural network to accurately extract feature information such as the shape, texture, and color of objects in the input image; for different RGB image data, the CoS-PVNet weight self-learning module can balance the focus of the model by adjusting the weights of different categories. Step 3. During the training process, CoS-PVNet updates its weights through the weight self-learning module and the backpropagation algorithm to minimize the loss function and generate accurate key point feature maps. Step 4. CoS-PVNet predicts the key point positions of each target object on the feature map, usually the corners, centers, or other prominent feature points of the target object. Step 5. A set of key points is defined on the 3D model of an object with fixed coordinates (X, Y, Z) in 3D space. When the object is placed in a certain posture in the real world and captured as an image, these 3D key points are projected onto the image plane to form 2D key points; this involves the internal parameters (such as focal length and principal point) and external parameters (such as rotation matrix and translation vector) of the camera, which map points in 3D space to the 2D image plane. Step 6. Before predicting on the feature map, the global attention mechanism is used to enhance the extraction of useful features and increase contextual information, extracting input feature map information more effectively and better establishing the 2D-3D correspondence of the target object. Step 7. CoS-PVNet generates a voting vector for each detected key point, uses a Gaussian kernel function to balance the importance of different votes, and aggregates all voting vectors in image space into a voting density map or voting cloud, which reflects the 3D position and 3D pose of the target object in the image. Step 8. In CoS-PVNet, PnP is used to calculate the 3D position of the object from the centroid position of the votes, and the relative position relationships between key points are used to estimate the rotation of the object. Step 9. CoS-PVNet estimates the pose parameter matrix of the camera, including the rotation matrix and translation vector (or quaternion), based on a set of known 3D points and their projections in the image, and applies it to AR, VR, robotics, and autonomous driving. Experimental Environment Configuration CoS-PVNet provides accurate initial pose estimation based on RGB images, aiming to accurately locate and estimate the 3D orientation and 3D translation of objects. This article conducts an experimental analysis of PVNet, CoS-PVNet, and recent 6D pose estimation algorithms and uses ablation experiments to analyze the performance of each module of CoS-PVNet. The configuration of the experimental environment in this article is shown in Table 1. This article conducts experiments on two benchmark datasets, LineMod [27] and Occlusion LineMod [28], which are widely used in 6D pose estimation experiments, to evaluate the performance of CoS-PVNet. The LineMod dataset exhibits significant clutter, diversity, multiview coverage, and true pose annotations but only slight occlusion. The Occlusion LineMod dataset introduces interference at different occlusion levels based on LineMod and is characterized by complex relationships between the target objects and the background. This provides more information for the performance evaluation of the 6D object pose estimation network.
(1) LineMod is a benchmark dataset used for 6D object pose estimation, as shown in Figure 6, consisting of 15 objects, each with over 1,200 images, for a total of 15,783 images. It not only annotates the central object in each RGB image but also provides the 3D CAD model and the camera intrinsics for each object. The complicating factors of LineMod include background clutter, textureless objects, and lighting changes. (2) The Occlusion LineMod dataset, as a subset of LineMod, contains 1,214 images of 8 objects and provides additional pose annotations for non-central objects. Compared with LineMod, the images in the Occlusion LineMod dataset contain multiple objects under severe occlusion, making 6D pose estimation extremely challenging. To ensure fairness in the comparative experiments with PVNet and related algorithms, the same training/test split is used on the LineMod dataset (15% for training and 85% for testing), while the Occlusion LineMod dataset is only used for testing. In addition to the training images provided by LineMod, synthetic images are used to augment the training data. Moreover, this article adopts data augmentation techniques to prevent overfitting, including rotating images within a certain angle range (−30°, 30°), randomly blurring and cropping with a 50% probability, and randomly changing the original brightness and contrast of each image from 0.9 to 1.1 times. During training, an Adam optimizer with an initial learning rate of 0.001 is used, the batch size is set to 20, and the network is trained for 100 epochs. Evaluation Indicators The performance of CoS-PVNet can be evaluated using two metrics: the 2D projection metric and the average 3D distance of model points (ADD) metric [29], which measure pose errors in 2D and 3D space. The 2D projection metric measures the average distance between the projections of the 3D model points under the estimated pose and under the real pose, specifically: Proj2D = (1/m) Σ_{x∈M} ||π_K(R̃x + T̃) − π_K(Rx + T)||₂, (15) where M represents the set of 3D model points, m is the number of points, K is the intrinsic matrix of the camera defining the projection π_K, R̃ and T̃ are the estimated rotation and translation, and R and T are the real pose. When the average distance under the 2D projection metric is within 5 pixels, the estimated 6D pose is considered correct. Two common metrics, the ADD (average distance) metric and the ADD-S metric, are used to evaluate the estimated pose, represented uniformly as ADD(-S) in this paper. (1) ADD metric: transform the model points by the estimated and ground-truth poses and calculate the average distance between the two transformed point sets. When the distance is less than 10% of the model diameter, the estimated pose is considered correct, as shown in Formula (16): ADD = (1/m) Σ_{y∈W} ||(R̃y + T̃) − (Ry + T)||₂, (16) where W represents the set of sampling points of the target 3D model, y represents a point in W, and m represents the total number of sampling points. (2) ADD-S metric: for symmetric objects, the ADD-S metric is used, where the average distance is calculated based on the distance to the nearest point: ADD-S = (1/m) Σ_{y₁∈W} min_{y₂∈W} ||(Ry₁ + T) − (R̃y₂ + T̃)||₂. (17) The target is evaluated using ADD-S accuracy and the AUC (area under curve), where the AUC is the area under the accuracy-threshold curve, obtained by varying the distance threshold in the evaluation.
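The two 3D metrics can be sketched directly from Formulas (16) and (17); this is a generic implementation with our own function names, using a KD-tree for the nearest-point search in ADD-S.

```python
import numpy as np
from scipy.spatial import cKDTree

def add_metric(W, R_e, t_e, R_g, t_g):
    """ADD: mean distance between model points under estimated and true poses."""
    P_e = W @ R_e.T + t_e
    P_g = W @ R_g.T + t_g
    return float(np.mean(np.linalg.norm(P_e - P_g, axis=1)))

def add_s_metric(W, R_e, t_e, R_g, t_g):
    """ADD-S: mean nearest-point distance, for symmetric objects."""
    P_e = W @ R_e.T + t_e
    P_g = W @ R_g.T + t_g
    d, _ = cKDTree(P_e).query(P_g)   # nearest estimated point per true point
    return float(np.mean(d))
```

A pose is then counted as correct when the returned distance is below 10% of the model diameter, as stated above.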
LineMod Dataset Experimental Results The visualization results of CoS-PVNet pose estimation on the LineMod dataset are shown in Figure 7. The green 3D border represents the true pose, and the blue border represents the estimated pose. It can be seen from the figure that CoS-PVNet has high accuracy, with the estimated target object almost overlapping the estimated bounding box. Occlusion LineMod Dataset Experimental Results The Occlusion LineMod dataset is only used as a test set, and the previously trained model is used for experimental testing. The pose estimation results on the Occlusion LineMod dataset are shown in Figure 8. The green 3D border again represents the true pose, and the blue border represents the estimated pose. Compared with the baseline method PVNet, it can be seen that CoS-PVNet produces accurate results even under severe occlusion. However, the last column also shows that CoS-PVNet cannot provide sufficient information for 6D pose estimation when the target area is too small. Testing the renderings on the LineMod and Occlusion LineMod datasets thus shows that CoS-PVNet has a good overlap effect, indicating that CoS-PVNet retains high accuracy in complex backgrounds. However, when the object target is too small, it cannot accurately estimate its 6D pose, which is related to overfitting caused by the weight self-learning module in CoS-PVNet; correspondingly, this article uses data augmentation to mitigate this. Comparison Experiment of 2D Projection Metrics CoS-PVNet is compared with relevant RGB image-based pose estimation methods. On the LineMod and Occlusion LineMod datasets, CoS-PVNet is compared quantitatively with BB8 [30], YOLO-6D [12], and PVNet [21] on the 2D projection metric. The experimental results of the 2D projection metric comparisons are shown in Table 2. As shown in Table 2, BB8 and YOLO-6D use the eight corners of a 3D bounding box plus the object center as key points and directly regress their coordinates, while PVNet and CoS-PVNet apply a voting strategy to locate eight surface key points and one object center from the predicted vector field. When using the same loss as PVNet, CoS-PVNet achieves better performance on most objects, exceeding the average accuracy of BB8 by 10.08% on the LineMod dataset and, in particular, improving the accuracy for the target objects can and cat by more than 15%. This indicates that CoS-PVNet is also more accurate for small-scale object pose estimation. In the case of target occlusion, CoS-PVNet improves on the 2D projection metric over the target object categories by 37.46% compared to YOLO-6D. CoS-PVNet also shows better performance than PVNet, with an improvement of 1.32% in the 2D projection metric evaluation. Therefore, together with the evaluation results on the LineMod dataset mentioned above, CoS-PVNet performs better than PVNet in the complex occlusion scenes of the Occlusion LineMod dataset, which supports the correctness of the CoS-PVNet pose estimation proposed in this paper.
Comparative Experiment of CoS-PVNet Algorithm ADD(-S) (1) LineMod Dataset ADD(-S) Comparative Experiment Experiments are conducted on the LineMod dataset, comparing CoS-PVNet with algorithms such as YOLO-6D [12], PoseCNN [17], DenseFusion [31], DualStream [32], and PVNet [21]. Two symmetrical objects, egg-box and glue, are evaluated using the ADD-S metric, while the other objects are evaluated using the ADD metric. The comparative experimental results are shown in Table 3. As shown in Table 3, CoS-PVNet improves on the average values of the YOLO-6D, PoseCNN, DenseFusion, DualStream, and PVNet algorithms by 39.5%, 6.8%, 1.1%, 0.6%, and 9.1%, respectively. For four types of objects, ape, cat, duck, and hole puncher, the accuracy improvement of pose estimation is relatively small. The reason is that CoS-PVNet has certain advantages in extracting features of large-scale target objects, and a pose is considered correct when the ADD metric is less than 10% of the maximum diameter of the target; the maximum diameters of these four targets are small, resulting in relatively small improvements. CoS-PVNet performs better in estimating the poses of the other target objects, indicating that CoS-PVNet fully extracts the features of the target object and can effectively improve the accuracy of 6D pose estimation for objects in complex scenes. (2) Comparison Experiment on the Occlusion LineMod Dataset ADD(-S) Experiments are conducted on the Occlusion LineMod dataset to compare CoS-PVNet with HybridPose [33], SSPE [34], RePOSE [35], SegDriven [36], PoseCNN [17], and PVNet [21]. Using the same indicators as on the LineMod dataset, the comparative experimental results are shown in Table 4. As shown in Table 4, CoS-PVNet outperforms HybridPose, SSPE, SegDriven, PoseCNN, and PVNet on the average values, with improvements of 1.7%, 5.9%, 22.2%, 24.3%, and 8.4%, respectively. However, CoS-PVNet is 2.4% lower than RePOSE on the average value, mainly in three categories: can, cat, and duck. This indicates that RePOSE can quickly and accurately refine the pose by minimizing the feature-metric error between the input and rendered image representations. However, when small targets are severely occluded, or the extracted features are insufficient to recognize the target object well, the performance of CoS-PVNet is better.
CoS-PVNet Ablation Experiment To verify the effectiveness of each module, ablation experiments are conducted to analyze each module of CoS-PVNet. Table 5 shows the results of gradually adding the CoS-PVNet algorithm modules separately for comparison. Given the significant improvement in some categories on the LineMod dataset, the ablation experiments have a certain representativeness; therefore, this paper uses the LineMod dataset for accuracy and speed testing. As shown in Table 5, the accuracy and speed of pose estimation for the different modules of CoS-PVNet are reported. If the predicted translation and rotation errors with respect to the actual pose are less than 5 cm and 5°, respectively, the predicted object pose is considered correct. If only the CoS-PVNet weight self-learning module is added, the accuracy and speed of solving the pose with the PnP algorithm are 46.3% and 14 FPS. If the CoS-PVNet global attention mechanism is then added to infer the pose, the accuracy is improved by 17.3% and the speed by 7 FPS. Therefore, if the weight self-learning module is added on its own, the PnP algorithm is prone to underfitting or overfitting, resulting in lower accuracy of CoS-PVNet pose estimation; by directly utilizing the local and global context information of the feature map through the global attention mechanism, CoS-PVNet further improves the robustness of its pose estimation. Discussion Object 6D pose estimation is a core technology for applications such as AR, VR, robotics, and autonomous driving. However, complex scene factors, such as background clutter, target occlusion, and weak texture features, can easily lead to inaccurate 6D pose estimation. This article proposes a robust CoS-PVNet pose estimation network for complex scenes. Firstly, by adding a pixel-weight self-learning layer on the basis of the PVNet network structure, pixel-weight values are predicted to select the pixels used for voting. Then, stable and robust useful features are extracted using the global attention mechanism over the local and global contextual information of the input feature map. Finally, the PnP algorithm is used to solve the 6D pose, which improves the accuracy and robustness of 6D object pose estimation in complex scenes.
6D object pose estimation is an important research topic in the field of computer vision, which determines the 3D position and orientation of an object in the camera-centered coordinate system. In the field of AR, virtual elements can be superimposed on objects to maintain their relative pose as they move. With the maturity of technologies such as SLAM, robots can now localize well in 3D space, but 6D pose estimation technology is still needed for object-grasping interaction. In the field of autonomous driving, a 6D pose estimation assistance mode can achieve dynamic 360° panoramic driving. In this paper, by adding pixel-weight layers on the basis of the PVNet network, more accurate pixel point vectors are selected, the pose of the object is estimated based on the local and global contextual information of the feature map, and the coordinate system transformation matrix is then solved. The CoS-PVNet framework for virtual-real fusion interactive applications is shown in Figure 9. By using feature detection operators to extract key feature points and descriptors from real-world scene images and matching them with the corresponding natural feature templates constructed offline, CoS-PVNet is used to solve the pose of AR cameras and assembly objects through geometric visual transformation [37], and 3D virtual-real interaction technology is used to enable stable and robust virtual-real fusion interactive applications of AR, VR, robotics, and autonomous driving. In recent years, 6D pose estimation methods have made significant progress in fields such as AR registration, robot grasping, and autonomous driving navigation. However, the lack of higher-dimensional semantic modeling and understanding of specific complex interactive application scenarios has made it difficult to meet the accuracy and robustness requirements of 6D pose estimation in different application scenarios [38]. On the other hand, with the optimization of deep learning models and the development of new architectures, 6D pose estimation algorithms will be able to process object recognition and pose estimation in complex scenes more quickly. Although the CoS-PVNet pose estimation algorithm proposed in this article has achieved good results on the LineMod and Occlusion LineMod datasets, the dynamic uncertainty of 'human-machine-object' in AR, VR, robotics, and autonomous driving [39] means that pose estimation for severely occluded and truncated target objects remains a difficult and important problem in the field of 6D object pose estimation [40]. Therefore, there is still much room for improvement in the accuracy of 6D pose estimation in complex scenes. Future work will utilize the latest advances in target-region semantic segmentation models to accelerate the inference process and consider combining reinforcement learning to achieve active 6D object pose estimation [3]. This will also provide support for improving the system performance of AR, VR, robotics, and autonomous driving, effectively promoting the digital and intelligent transformation and upgrading of manufacturing, transportation, and other industries.
Conclusions We propose a robust CoS-PVNet pose estimation network for complex scenes to address the low accuracy of object 6D pose estimation. By adding pixel-weight self-learning layers on the basis of PVNet, more accurate pixel point vectors are selected, and a global attention mechanism is proposed to improve the performance of feature extraction by adding contextual information, thereby estimating the pose of the target objects and solving the CoS-PVNet coordinate system transformation matrix, providing support for the implementation of AR, VR, robotics, and autonomous driving. The performance of CoS-PVNet is evaluated on the LineMod and Occlusion LineMod datasets. The experimental results show that CoS-PVNet can accurately estimate the 6D pose of target objects and effectively estimate the 6D pose of occluded objects with higher accuracy and robustness. However, this study also has limitations in not fully integrating geometric, normal, and other multivariate features. The next step is to deeply integrate industry-specific application context feature information to adapt to more complex industry application scenarios. Figure 1. Relationship between the real world and virtual information in an AR system. R_t and T_1 represent the actual rotation and translation, while R_v and T_2 represent the predicted rotation and translation, respectively. Table 2. Comparison results of 2D projection metrics (unit: %). Bold represents the maximum value of each row; the same convention applies in the subsequent tables.
8,793.4
2024-05-27T00:00:00.000
[ "Computer Science", "Engineering" ]